OpenAI, which, like Google, Meta and Microsoft, provides online chatbots and other AI tools that can compose social media posts, generate realistic images and write computer programs, said in the report that its tools had been used in influence campaigns that researchers have tracked for years, including Russia's Doppelganger operation and China's Spamouflage.
OpenAI said the Doppelganger campaign used its technology to generate anti-Ukrainian commentary that was posted on X in English, French, German, Italian and Polish. The company's tools were also used to translate and edit articles supporting Russia's involvement in the war in Ukraine into English and French, and to convert anti-Ukrainian news articles into Facebook posts.
OpenAI said its tools were also used in a previously unknown Russian campaign that targeted users in Ukraine, Moldova, the Baltic states, and the United States, primarily through the Telegram messaging service. The campaign used artificial intelligence to generate commentary in Russian and English about the war in Ukraine, the political situation in Moldova, and U.S. politics. The campaign also used OpenAI tools to debug computer code that was apparently designed to automatically post messages to Telegram.
The political comments received few replies and "likes," OpenAI said. The efforts were also sometimes sloppy. At one point, the campaign posted text that was clearly generated by AI, with one post reading, "As an AI language model, I am here to help and provide the needed comments." At other times, it posted in broken English, leading OpenAI to label the effort "poor grammar."
OpenAI said Spamouflage, which has been attributed to China, used OpenAI technology to debug code, seek advice on analyzing social media activity and research current events. Its tools were also used to create social media posts that disparaged people who criticized the Chinese government.