DESIGN FICTION: Product Sage or Who Oversees the AI Ethicists

Northern California, 2037

Wracked by rapid growth and the federal attention that comes with it, the executive team at InterAI needed to escape somewhere quiet and regroup. They wanted space to think, somewhere out of the public's furious gaze. Somewhere to work through the knotted governance structures that had sprouted up in the organisation and were slowly stifling their progress and competitiveness. Structures and practices that were, ironically, inviting more scrutiny and federal oversight. This was exactly the opposite of the effect CEO Dan Rohr had foreseen when he committed an industry-leading investment of $200M in an AI Ethics team. He cared about the direction AI could take in the wrong hands. And he cared about how bad publicity, on the consumer side and the regulatory side, could stifle his vision. $200M sent a message: InterAI takes safety seriously. They were putting their money where their mouths were. And it was simply good business.

And yet, Senate hearing after hearing, op-ed after op-ed pulled the company deeper and deeper into the morass.

He'd painted himself into a corner. He couldn't disband the team - that would cause uproar. It would be an admission of guilt. Of covering up.

So the executive team were heading on retreat, to a B&B set in a quiet Zen monastery in Northern California. Away from the gaze of the hysterical public and the zealotry of the tech scene. That retreat would prove to be a watershed moment for AI and was the genesis of the Order of Quiet Oversight. The story goes something like this: Dan was sitting in the shade of a Zen rock garden, eye-blurred with the haggard look of recently-departed frazzle. He was lost, and staring at the fine-raked sand. Annie Li, at that time a Buddhist nun living in the monastery, was passing. She smiled and observed that something must be deeply troubling him.

They got to talking, and with some gentle back and forth, as conversations with Buddhist nuns tend to do, they arrived at two questions that stopped Dan in his tracks.

  1. Were these ethicists good people?

  2. Were they wise?

In his own words, Dan "stuttered, and stumbled, and stopped. I don't know, I mean aren't we all good people in our own ways? And what exactly is wisdom?" Annie was quiet. Unfazed, and unsurprised by the half-truths and hiding. She gently prodded again. Were they good people? Were they wise people?

Dan had to admit that no, they were not especially good people. They were mostly fine, like the rest of us, a mix of good and bad - sometimes the balance leant one way, sometimes the other.

They were not wise.

Again Annie. "If these people aren't particularly good people, and they are not wise, why are they tasked with overseeing the ethical direction of this incredibly powerful organisation? This seems unwise, and perhaps dangerous."

In interviews since, Dan has described it as an epiphany. Like a bolt of lightning, his eyes had been opened. What he needed was not an AI Ethics team, a cadre of woke, hungry, status-scouring Ivy League law postgrads. He needed a Sage. An oracle who could be asked to weigh up the impact of any major product direction. Dan, entranced by Annie's otherness, her serenity and loving detachment, was convinced that she should be that oracle.

Dan was wise in his own way; but more so, he was astute. His enemies would say cunning. Right now, InterAI were being punished for trying to be good. He wanted a level playing field, and to get it he'd need to pull his competitors deep into the Ethics game. So Annie would be the oracle, but she would be independent - and all AI companies would submit their plans to her.

The Zen angle appealed to Dan. It lacked the hang-ups and righteousness of his Christian upbringing. And at its core, it was all about change. His vision was to re-engineer the human world. To bring super-intelligence into being, and with it to change humanity utterly. To unleash wave after wave of discovery, to commune with this super-intelligence to solve hunger, illness, climate change, the biodiversity crisis, to help us leave Earth, and to live out the sci-fi dreams of his adolescence.

"Well what do you think about the way that AI is changing society?" Dan had asked.  "Everything changes". "Change is the very nature of Reality". Dan had heard enough. He was convinced, that this ethicist could be trusted to remain impartial.

This was the birth of the Order of Quiet Oversight. Annie would lead it, with a small group of nuns, monks and miscellaneous wise people.

Independent, unattached to company politics. As federal regulation had been intended to be, before it was corrupted by the hysteria and anti-AI fervour.

That was the late summer of 2032. Of course, looking back it seems hardly conceivable that Dan Rohr and Annie Li could have been anything but hardened enemies. Dan was later arrested and convicted of attempting to have Annie and her team murdered.

When asked if Dan was evil, Annie replied with characteristic sagacity. "Oh no, he's not evil. Certainly not. Without Dan, we wouldn't have solved Alzheimer's, or SIDS, or any of those other terrible illnesses his AIs so magnificently put to an end. And we wouldn't have the Order of Quiet Oversight either. I think Dan has done immeasurable good in this world."

But he tried to have you killed; surely that's an act of evil?

"Well yes, that was evil. And illegal. But worse, it unwise".
