A Privacy Hero's Final Wish: An Institute to Redirect AI's Future
Peter Eckersley did groundbreaking work to encrypt the web. After his sudden death, a new organization he founded is carrying out his vision to steer artificial intelligence toward “human flourishing.”
About a week before the privacy and technology luminary Peter Eckersley unexpectedly died last September, he reached out to artificial intelligence entrepreneur Deger Turan. Eckersley wanted to persuade Turan to be the president of Eckersley’s brainchild, a new institute that aimed to do nothing less ambitious than course-correct AI’s evolution to safeguard the future of humanity.
Eckersley explained he couldn’t run this project himself: He was having serious health issues due to colon cancer, and the organization’s co-founder, Brittney Gallagher, was very pregnant and about to go on maternity leave. But Turan wouldn’t be alone, Eckersley assured him—as soon as his illness was resolved, he’d be back to serve as the group’s chief scientist. They agreed to meet in San Francisco a few days later to hash out a plan.
By the time Turan’s plane landed in San Francisco, Eckersley had died—a tragedy that has sent still-resonating shockwaves through his friends, family, and the tech world. Instead of a meeting with Eckersley to draw up the institute’s roadmap, Turan found himself attending his friend’s funeral.
In the preceding days, while still fully expecting to recover, Eckersley had already told the board of his nascent organization—the AI Objectives Institute, or AOI—that Turan would be its president. The 44-year-old technologist and activist had also written a rough, incomplete will in Google Docs, in the unlikely event of his death. It began by naming AOI as the inheritor of all his US-based assets. “We started something important,” Eckersley wrote. “I’d want to see whether the people involved could get it a little further.”
Turan had never actually had the chance to tell Eckersley he accepted his request. But as soon as he learned of Eckersley’s death, he knew that the role at AOI was not only the most important work he could be doing but also a way to help establish a central pillar of his friend’s legacy. “So I said yes,” Turan says. “Let’s do this—not a little further, but all the way.”
Peter Eckersley on an evening bike ride in San Francisco in 2021.
Photograph: Laura Helen Winn
Yesterday, hundreds in Eckersley’s community of friends and colleagues packed the pews for an unusual sort of memorial service at the church-like sanctuary of the Internet Archive in San Francisco—a symposium with a series of talks devoted not just to remembrances of Eckersley as a person but to a tour of his life’s work. Facing a shrine to Eckersley at the back of the hall, filled with his writings, his beloved road bike, and some samples of his Victorian goth punk wardrobe, Turan, Gallagher, and 10 other speakers gave presentations about Eckersley’s long list of contributions: his years pushing Silicon Valley toward better privacy-preserving technologies, his co-founding of a groundbreaking project to encrypt the entire web, and his late-life pivot to improving the safety and ethics of AI.
The event also served as a kind of soft launch for AOI, the organization that will now carry on Eckersley’s work after his death. Eckersley envisioned the institute as an incubator and applied laboratory that would work with major AI labs to take on the problem Eckersley had come to believe was, perhaps, even more important than the privacy and cybersecurity work to which he’d devoted decades of his career: redirecting the future of artificial intelligence away from the forces causing suffering in the world, toward what he described as “human flourishing.”
“We need to make AI not just who we are, but what we aspire to be,” Turan said in his speech at the memorial event, after playing a recording of the phone call in which Eckersley had recruited him. “So it can lift us in that direction.”
The mission Eckersley conceived of for AOI emerged from a growing sense over the last decade that AI has an “alignment problem”: that its evolution is hurtling forward at an ever-accelerating rate, but with simplistic goals that are out of step with humanity’s health and happiness. Instead of ushering in a paradise of superabundance and creative leisure for all, Eckersley believed that, on its current trajectory, AI is far more likely to amplify all the forces that are already wrecking the world: environmental destruction, exploitation of the poor, and rampant nationalism, to name a few.
AOI’s goal, as Turan and Gallagher describe it, is not to try to restrain AI’s progress but to steer its objectives away from those single-minded, destructive forces. They argue that’s humanity’s best hope of preventing, for instance, hyperintelligent software that can brainwash humans through advertising or propaganda, corporations with god-like strategies and powers for harvesting every last hydrocarbon from the earth, or automated hacking systems that can penetrate any network in the world to cause global mayhem. “AI failures won’t look like nanobots crawling all over us all of a sudden,” Turan says. “These are economic and environmental disasters that will look very recognizable, similar to the things that are happening right now.”
Gallagher, now AOI’s executive director, emphasizes that Eckersley’s vision for the institute wasn’t that of a doomsaying Cassandra, but of a shepherd that could guide AI toward his idealistic dreams for the future. “He was never thinking about how to prevent a dystopia. His eternally optimistic way of thinking was, ‘how do we make the utopia?’” she says. “What can we do to build a better world, and how can artificial intelligence work toward human flourishing?”
To that end, AOI is already working on a handful of example projects to push AI onto that path, now with the help of nine core contributors and a handful of grants, including $485,000 from the Survival and Flourishing Fund. The pilot project that’s furthest along, called “Talk to the City,” is designed to use a ChatGPT-like interface to survey millions of people in a city, both to understand their needs and to advocate for them in discussions with policymakers, journalists, and other citizens. Turan describes the experiment as a tool for collective organizing and for governments, enabling a form of democracy more nuanced than simple elections or referendums. He says interested beta testers for the project include everyone from the organizers of Burning Man’s Black Rock City to staffers at the United Nations.
Another prototype, called “Mindful Mirror,” will serve as a kind of personal interactive journal, a chatbot that converses with its user to help them process the events of their daily life. A third, called “Lucid Lens,” will function as a browser plugin that highlights content it detects as being designed to provoke outrage or “dopamine loops,” manipulating users in ways they’d rather avoid or at least be aware of.
Those initial projects might sound modest compared to AOI’s lofty futurist goals. But the story of Eckersley’s long, renowned career in cybersecurity and privacy was one of building similarly simple-sounding tools that could serve as levers to effect profound changes. Working for the Electronic Frontier Foundation (EFF) for a dozen years, eventually as its chief scientist, Eckersley helped build projects like Privacy Badger, a browser plugin to stop web trackers, and HTTPS Everywhere, another plugin that made your browser navigate to the HTTPS-encrypted version of a website whenever possible. He co-created the SSL Observatory, which scanned the entire internet to determine how much of it was encrypted. Another Eckersley project, a site called Panopticlick, audited a user’s browser protection against tracking. And the EFF’s Secure Messaging Scorecard rated messaging apps on their privacy and security features, helping usher in a world where billions of WhatsApp users’ messages are end-to-end encrypted by default.
“His genius was finding the tiny little hack that would open up a big story,” EFF executive director Cindy Cohn said in her speech at Eckersley’s memorial service. “The demonstration that would make manifest what people needed to see about how technology was working.”
Perhaps Eckersley’s most celebrated work of all was his cofounding of Let’s Encrypt, a free alternative to the certificate authority companies that enable website owners to use HTTPS encryption. By removing a key hurdle to switching on a site’s encryption, Let’s Encrypt measurably transformed the web: Let’s Encrypt has so far issued free certificates to more than 300 million websites. Today, more than 90 percent of the web is encrypted, compared to less than an estimated 40 percent when Let’s Encrypt launched in 2015. “We have Peter Eckersley to thank for that,” Let’s Encrypt co-founder Alex Halderman said at Eckersley’s memorial.
Even as Eckersley was in the midst of launching those influential projects, he was already thinking about an entirely different area of technology where he felt he might make an even bigger long-term impact. By 2013, Eckersley was talking to researchers like Anders Sandberg, Stuart Russell, and Nick Bostrom, who were focused on “civilizational risk” from AI, says Brian Christian, a scientist and author of the books The Most Human Human and Algorithms to Live By, who served as the master of ceremonies for Eckersley’s memorial event.
By the time he left the EFF in 2018, Christian says, Eckersley had decided it was time to refocus his efforts on shaping AI’s future. “He saw it as having higher stakes, in a way,” says Christian. “He ultimately ended up concluding that the gravity of AI, even in its more hypothetical harms, was so great that it felt urgent, that it was the most important thing to him.” Christian says Eckersley was so persuasive about the magnitude of the problem that he transformed Christian’s thinking about AI’s future, too. He dedicated his 2020 book on the topic, The Alignment Problem, “to Peter, who convinced me.”
Eckersley’s sister Nicole, who gave the final speech of the evening, said that she had begun to hear from her brother about his vision for something like AOI in 2020. And even when his health took a precipitous decline in the late summer of 2022, the institute remained his focus. “Even from his hospital bed, he was charging full speed ahead on AOI. All of his last wishes and instructions were about the survival of this incredibly important project,” she said. “We want to see Peter’s plans come to fruition. We want to keep engaged with this incredible community. We want to stop the robots from eating us and crapping out money.”
“So I hope you’ll all see this not just as a memorial,” she concluded, “but as the start of an incredible living legacy.”