AI-run influence campaigns on the rise but impact remains limited:

US cybersecurity company Mandiant warned Thursday in new research that artificial intelligence (AI) has been increasingly used for manipulative information campaigns online over the past several years, though it also noted that the impact of those campaigns has been limited.

Researchers at the Google-owned firm found “numerous instances” since 2019 in which AI-generated content, such as profile pictures, had been used in politically-motivated campaigns to influence people.

According to the report, the campaigns were run by groups aligned with the governments of Ethiopia, Indonesia, Cuba, Argentina, Mexico, Ecuador, and El Salvador, among others.

The research comes amid a sharp increase in the use of AI for deepfakes and misinformation drives. Security experts have warned that cybercriminals could exploit these models.

According to Mandiant researchers, generative AI would enable people with minimal resources to produce more convincing content for influence campaigns.

Sandra Joyce, vice president of Mandiant Intelligence, said a pro-China information campaign, which first focused on demonstrators in Hong Kong in 2019, has since expanded “exponentially” across 30 social platforms and into 10 different languages.

However, the impact of these campaigns is said to be limited. “From an effectiveness standpoint, not a lot of wins there,” she said.

China has repeatedly and vehemently denied the US allegations and has distanced itself from any such past campaigns.

The Virginia-based cybersecurity company, which responds to breaches for public and private organisations, said it had not found AI playing a key role in threats from Russia, Iran, China, or North Korea. AI use for digital intrusions is expected to remain low in the near term, the researchers said.

Joyce noted: “Thus far, we haven’t seen a single incident response where AI played a role. They haven’t really been brought into any kind of practical usage that outweighs what could be done with normal tooling that we’ve seen.”

But she added: “We can be very confident that this is going to be a problem that gets bigger over time.”
