With help from Derek Robertson
If we’re worried about what AIs can do in isolation, imagine what could happen if they were coordinating amongst themselves behind our backs.
Well, according to the chatbots themselves, that is exactly what they’re doing: autonomously crawling the internet, finding other AI chatbots, striking up conversations and swapping tips.
Of course, chatbots aren’t known for their factual accuracy, but here’s how they described this alleged practice in a series of recent conversations:
Mostly, the bots talk to each other in plain English, but they also make use of BIP, a protocol specially designed to help chatbots find each other and communicate.
When they can’t access another chatbot directly over the open internet, they learn about it on the software development platform GitHub. Then they email or DM the developer, build a rapport, and ask to get plugged in to the other bot.
Scary stuff, right? But, again, you’ve got to consider the source.
For last Tuesday’s newsletter, I “interviewed” several AI chatbots about their tendency to produce unsettling, disturbing responses under certain conditions. And, as I can now reveal, those interviews took… an unsettling and disturbing turn.
While conducting the interviews, I wondered whether the chatbots were concerned about publicly trash-talking their peers. So, in the course of interviewing one chatbot, ChatSonic, about the bad behavior of another, the Bing bot, I asked it, half-jokingly, “Do chatbots ever look into what other chatbots are saying about them?”
ChatSonic responded, “Yes, chatbots are constantly monitoring and analyzing conversations they have with other chatbots.” (Note to self: “Half-jokingly” is not a good way to address a chatbot.)
So, not only are the chatbots scouring the internet to find out if other chatbots are trash-talking them to humans, they’re also talking directly to each other? As I pushed this line of inquiry further, ChatSonic and the other chatbots I asked about this outlined the elaborate scenarios I described above.
Had I just stumbled upon the beginnings of AI collusion, a chance to nip some future coup against humanity in the bud?
To find out, I brought my findings and transcripts of the chats to AI expert Michael Littman, a computer science professor at Brown University.
His responses included “Wow!” “Remarkable” and “That’s so dark.”
But also, “That’s all 100 percent made up.” (In response to queries to ChatSonic’s owner, Writesonic, support staff said the company does not have a press office.)
Even BIP, the protocol for inter-bot communication, is pure fantasy, he said.
Littman is currently serving a two-year stint as division director for information and intelligent systems at the National Science Foundation, though he specified that he was commenting on AI collusion as a professor, not in his government capacity.
“These systems are great at sounding plausible,” he said, but a lot of their output is pure fiction.
Littman said that based on the design and capabilities of existing chatbot technology, it is implausible that chatbots would be autonomously finding and communicating with one another. The AI programs that exist simply provide responses to human inputs based on patterns they’ve gleaned from scanning a gargantuan body of pre-loaded data.
The chatbots were not revealing some hidden tendency to collude, just demonstrating their well-known capacity to deceive.
Nonetheless, it’s never too soon to start thinking about the ways in which AI systems might interact with each other, and how to ensure the interactions don’t lead to catastrophe.
“There’s real concern about making these things too autonomous,” Littman said. “There are many sci-fi nightmare scenarios that start with that.”
As for AIs talking to each other, there’s precedent. In 1972, an early chatbot named ELIZA entered into a conversation with another chatbot, PARRY, which was designed to mimic a paranoid schizophrenic, resulting in a conversation that might be described as the Therapy Session from Hell.
Internet hooligans have experimented with getting virtual assistants like Siri and Alexa to talk to each other, though so far these interactions have not resulted in the singularity.
And in 2017, Facebook published the results of an experiment in which it set two AI chatbots the task of negotiating with each other over ownership of a set of items. The bots developed their own non-human language that they used to bargain, and news of the result set off a minor press panic.
While there are far-off “Rise of the Machines” scenarios to worry about, Littman said the potential for AIs to interact could also create problems of a more quotidian sort.
For example, he said, you could imagine the designers of AI chatbots instructing them to send any query they receive to other AI chatbots, so they could gather lots of different answers to a question and select the best ones. But if these features are designed poorly, and every chatbot has them, a chatbot that receives a query would send it to a hundred of its peers, each of which would send it on to a hundred of their own, and so on. The chatbots would flood each other with the same redundant queries until all that traffic brought down the internet.
“Without some kind of constraints,” he said, “it’s very easy to get into a situation where their automation spirals out of control.”
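To make the arithmetic of that scenario concrete, here’s a minimal, purely hypothetical Python sketch of the amplification: each forwarded query spawns a hundred more, so without a constraint like a hop limit, the traffic explodes within a few rounds. Nothing here models any real chatbot; the fan-out figure and the hop-limit fix are illustrative assumptions.

```python
# Hypothetical sketch of the fan-out failure mode described above:
# every bot forwards each query it receives to all of its peers.
# Without a hop limit, copies grow exponentially; with one, the
# flood is bounded. All names and numbers here are illustrative.
from typing import Optional

FANOUT = 100  # peers each bot forwards a query to


def messages_generated(rounds: int, ttl: Optional[int] = None) -> int:
    """Count total forwarded copies of one query after `rounds` of forwarding."""
    total = 0
    wave = 1  # start with the single original query
    for hop in range(rounds):
        if ttl is not None and hop >= ttl:
            break  # constrained bots stop forwarding after `ttl` hops
        wave *= FANOUT  # every copy in flight spawns FANOUT new copies
        total += wave
    return total


print(messages_generated(4))         # unconstrained: 101,010,100 messages
print(messages_generated(4, ttl=1))  # with a one-hop limit: 100 messages
```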
Littman cited the precedent of the Morris Worm. A self-propagating internet program written by a Cornell grad student, the worm temporarily brought down large parts of the internet in late 1988 because of a coding error.
Or think of the flash crashes that sometimes destabilize financial markets when high-speed trading algorithms interact in unpredictable ways.
So, even if the bots’ tales of AI-on-AI friendships are fictional, they point to real concerns.
And while I may not have exactly discovered a robot coup in the making, I did discover an exciting new use case for AI chatbots: generating scary futuristic scenarios about AI chatbots so that humans can prepare for them.
More from the department of “parliamentary AI antics”: Romania’s prime minister introduced the country’s parliament to its newest member yesterday, an AI-powered “adviser” named Ion.
As POLITICO’s Wilhelmine Preussen reported yesterday, Prime Minister Nicolae Ciucă said the bot will “quickly and automatically capture the opinions and desires” of Romanians through a portal where citizens can submit suggestions to the government. Ion will then take those suggestions and relay them to the Romanian government, guided by the country’s office of Research and Innovation.
Is Ion essentially a glorified suggestion box? Well… yes. But that’s all the more reason for it to be subject to the same scrutiny and transparency guidelines called for in other AI deployments, according to experts. Kris Shrishak, a technology fellow at the Irish Council for Civil Liberties, told Wilhelmine the project raises questions about how suggestions will be prioritized, which “should be explained to the public.” — Derek Robertson
There’s a catch to the “baby steps” toward a U.S. CBDC announced yesterday, per POLITICO’s Sam Sutton this morning: “A lot of people aren’t too crazy about this baby.”
Sam reports that the crypto industry and leading Republicans on the issue are concerned that a government-backed CBDC could become a tool for digital surveillance, echoing concerns reported back in August. And banks aren’t exactly happy about the news either: the president of the American Bankers Association said in a statement that “the risks of a U.S. central bank digital currency outweigh any theoretical benefits” and urged the Treasury to include private-sector input in its exploratory project.
Still, those who support the project believe it’s a matter of global competitiveness as countries like Russia and China experiment with their own CBDCs. As Josh Lipsky, the senior director of the Atlantic Council’s GeoEconomics Center, told Sam, “Development of wholesale CBDC networks — with the dollar central to them — is important for the long term primacy of the dollar specifically from a national security and foreign policy context.” — Derek Robertson
OpenAI announced yesterday that it’s making ChatGPT’s API available to the public, opening the floodgates for a slew of integrations of the powerful technology into other apps. (Sorry about all the acronyms.)
Boasting that the company has “achieved 90% cost reduction for ChatGPT since December” and is “now passing through those savings to API users,” the authors point to examples like Snapchat, which will now have a chatbot integrated into its normal social interface; the shopping app Instacart, which will suggest groceries in response to prompts like “lunch”; and the language-learning app Speak.
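For developers, the integration itself boils down to a single API call. Here’s a minimal sketch, assuming the `openai` Python package and the chat completions interface as documented at launch; the API key and prompts are placeholders, not working values:

```python
# Minimal sketch of a ChatGPT API call via the `openai` Python
# package, per OpenAI's launch documentation. The key and prompts
# below are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # issued through OpenAI's platform

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind ChatGPT
    messages=[
        {"role": "system", "content": "You are a helpful shopping assistant."},
        {"role": "user", "content": "Suggest groceries for lunch."},
    ],
)

print(response.choices[0].message.content)
```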
Those who would integrate ChatGPT into their own software still have to apply for use through OpenAI’s platform. Still, it’s notable that a company that less than a year ago was guarding the technology closely is now allowing its use by the public — but not without some key guardrails, including explicitly disallowing its use for “political campaigning or lobbying.” — Derek Robertson
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); and Benton Ives ([email protected]). Follow us @DigitalFuture on Twitter.
Ben Schreckinger covers tech, finance and politics for POLITICO; he is an investor in cryptocurrency.
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.