When this newsletter launched exactly one year ago today, we promised to bring you “a unique — and uniquely useful — look at questions that are addressed elsewhere as primarily business opportunities or technological challenges.”
We had a few driving questions: What do policymakers need to know about world-changing technologies? What do tech leaders need to know about policy? Could we even get them… talking to each other?
We’re still working on that last one. But what we have brought you is a matter of public record: scoops on potentially revolutionary technologies like Web3, a blow-by-blow account of the nascent governing structure of the metaverse, and a procession of thinkers on the transformation AI is already causing, and how we might guide it.
Yeah, about that. In just a year, AI has gone from a powerful, exciting new technology still somewhat on the horizon to a culture-and-news-dominating, potentially even apocalyptic force. Change is always happening in the tech world, but sometimes it happens fast. And as the late Intel chief Gordon Moore might have said, that speed begets more speed, with seemingly no end in sight.
The future already looks a lot different than it looked in April 2022. And we don’t expect it to look the same next year, or next month, or even next week. There’s a lot of anxiety that AI in particular could change the future much, much faster than we’re ready to address.
With that in mind, I spoke yesterday with Peter Leyden, founder of the strategic foresight firm Reinvent Futures and author of “The Great Progression: 2025 to 2050” — a firmly optimistic reading of how technology will change society in radical ways — about how the rise of generative AI has shaken up the landscape, and what he sees on the horizon from here.
“This is the kind of explosive moment that a lot of us were waiting for, but it wasn’t quite clear when it was going to happen,” Leyden said. “I’ve been through many, many different tech cycles around, say, crypto, that haven’t gone down this path… this is the first one that is really on the scale of the introduction of the internet.”
Tech giants have been spending big on AI for more than a decade, with Google’s acquisition of DeepMind as a signal moment. Devoted sports viewers might remember one particularly inescapable 2010s-era commercial featuring the rapper Common proselytizing about AI on Microsoft’s behalf. And there is, of course, a long cultural history of AI speculation, dating back to James Cameron’s Terminator and beyond.
“There is a kind of parallel to the mid-’90s, where people had a very hard time understanding both the digitization of the world and the globalization of the world that were happening,” Leyden said. “We’re seeing a similar tipping point with generative AI.”
From that perspective, the current generative AI boom begs for a historical analogue. How about… America Online? It might seem hopelessly dated now, but like ChatGPT, it was a ubiquitous product that brought a revolutionary technology into millions of homes. Twenty years from now, a semi-sophisticated chatbot might seem like the “You’ve got mail” of its time.
AI might seem a chiefly digital disruptor right now, but Leyden, who has a pretty good track record as a prognosticator, believes it could revolutionize real-world sectors from education to manufacturing to even housing.
“We’ve always thought those things are too expensive and can’t be solved by technology, and we’ve finally now crossed the threshold to say ‘Oh wait, now we could apply technology to it,’” Leyden said. “The next five to 10 years are going to be amazing as this superpower starts to make its way through all these fields.”
AI is also already powering innovation in other fields like energy, biotech, and media. That’s why the comparison with the internet as a whole, not just a platform like social media, is especially salient. It’s an engine, not the vehicle itself, and there are millions of designs yet to be built around it.
Largely for that reason, it’s nearly impossible to predict what’s going to happen next with AI. Maybe “artificial general intelligence” really will arise, posing an entirely different set of problems than the current policy concerns of regulating bias and accountability in decision-making algorithms. Or maybe it will start solving problems, wickedly difficult ones, like nuclear fusion and mortality and space survival.
To get back to our mission here: We can’t know. What we can do is continue to cover the bleeding edge of these technologies as they exist now, and where the people in charge of building and governing them aim to steer their development — and, by proxy, ours.
A pair of George Mason University technologists are recommending the government take a new, deliberate approach to AI regulation.
In an essay for GMU’s Mercatus Center publication Discourse, Matthew Mittelsteadt and Brent Skorup propose a framework they call “AI Progress,” “a novel framework to help guide AI progress and AI policy decisions.” Their big ideas, among a handful of others:
- To “unbundle” AI technologies, treating artificial intelligence as a suite of technologies to be tackled individually by use instead of en masse a la the White House’s AI Bill of Rights
- To prioritize economic growth (unsurprising for the libertarian-leaning Mercatus Center), noting that “There are… AI technologies that could transform transportation and healthcare, two major sectors we think are most promising for AI and economic growth.”
- To “demand empiricism and time,” avoiding speculation about unrealized future AI harms in favor of evaluating the data we have now about its use
“People will need time to understand the limitations of this technology, when not to use it and when to trust it (or not),” they write nearing their conclusion. “These norms cannot be developed without giving people the leeway needed to learn and apply these innovations.”
Health and tech heavy hitters are teaming up to make their own recommendations about how AI should be used specifically in the world of health care.
As POLITICO’s Ben Leonard reported today for Pro subscribers, the Coalition for Health AI, which includes Google, Microsoft, Stanford and Johns Hopkins, released a “Blueprint for Trustworthy AI” that calls for high transparency and safety standards for the tech’s use in medicine.
“We have a Wild West of algorithms,” Michael Pencina, coalition co-founder and director of Duke AI Health, told Ben. “There’s so much focus on development and technological progress and not enough attention to its value, quality, ethical principles or health equity implications.”
The report also recommends heavy human monitoring of AI systems as they operate, and a high bar for data privacy and security. The coalition is holding a webinar this Wednesday to discuss its findings.
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); and Benton Ives ([email protected]). Follow us @DigitalFuture on Twitter.
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.
h/t – www.politico.com