Propagandists around the world keep trying to use ChatGPT, according to OpenAI report

Propagandists seeking to influence elections around the globe have tried to use ChatGPT in their operations, according to a report released Wednesday by the technology’s creator, OpenAI.

While ChatGPT is generally seen as one of the leading AI chatbots on the market, OpenAI also heavily moderates how people use the product. It is the only major tech company to repeatedly release public reports about how bad actors have tried to misuse its large language model, or LLM, giving some insight into how propagandists and criminal or state-backed hackers have tried to use the technology and how they may be using other AI models.

OpenAI said in its report that this year it has stopped people who tried to use ChatGPT to generate content about elections in the U.S., Rwanda, India, and the European Union. It's not clear whether any of that content was widely seen.

In one instance, the company described an Iranian propaganda operation that ran fake English-language news websites purporting to reflect different American political stances, though it's not clear that those sites ever got substantial engagement from real people. The operators also used ChatGPT to create social media posts promoting those sites, according to the report.

In a media call last month, U.S. intelligence officials said that propagandists working for Iran, as well as Russia and China, have all incorporated AI into their ongoing propaganda operations aimed at U.S. voters but that none appear to have found major success.

Last month, the U.S. indicted three Iranian hackers it said were behind an ongoing operation to hack and release documents from Donald Trump’s presidential campaign.

Another operation, which OpenAI says is linked to people in Rwanda, created partisan posts on X in favor of the Rwandan Patriotic Front, the repressive party that has ruled Rwanda since the end of the country's genocide in 1994. The posts were part of a larger, documented propaganda campaign that spammed pro-party messages, often the same few messages, on X more than 650,000 times.

The company also blocked two campaigns this year shortly after they began: one that created social media comments about the E.U. parliamentary elections and another that created content about India's general elections. Neither got any substantial interaction, OpenAI said, but it's also not clear whether the people behind the campaigns simply moved to AI models made by other companies.

OpenAI also described how one particular Iranian hacker group that targeted water and wastewater plants repeatedly tried to use ChatGPT in multiple stages of its operation.

A spokesperson for Iran’s mission to the United Nations didn’t respond to an email requesting comment about the water plant hacking campaign or propaganda operation.

The group, called CyberAv3ngers, appears to have gone dormant or disbanded after the Treasury Department sanctioned it in February. Before that, it was known for hacking water and wastewater plants in the U.S. and Israel that use industrial control equipment from an Israeli company called Unitronics. There is no indication that the hackers ever damaged any American water systems, but they did breach several U.S. facilities that used Unitronics devices.

Federal authorities said last year that the hackers were often able to get into Unitronics systems by using default usernames and passwords. According to OpenAI's report, they also tried to get ChatGPT to tell them the default login credentials for industrial control systems made by other companies.

In that operation, they also asked ChatGPT for a host of other things, including which internet routers are most commonly used in Jordan, how to find vulnerabilities a hacker might exploit, and help with multiple coding questions.

OpenAI also reported something that cybersecurity and China experts have long suspected but that hadn't been made explicitly public: Hackers working for China, a country the U.S. routinely accuses of conducting cyberespionage to benefit its industries and one that has prioritized artificial intelligence, conducted a campaign to try to hack the personal and corporate email accounts of OpenAI employees.

The phishing campaign was unsuccessful, the report claims. A spokesperson for the Chinese Embassy in Washington didn’t immediately respond to a request for comment.

A consistent theme of malicious actors’ use of AI is that they often try to automate different parts of their work, but the technology so far hasn’t led to major breakthroughs in hacking or creating effective propaganda, said Ben Nimmo, OpenAI’s principal investigator for intelligence and investigations. 

“The threat actors look like they’re still experimenting with different approaches to AI, but we haven’t seen evidence of this leading to meaningful breakthroughs in their ability to build viral audiences,” Nimmo said.


