According to a cybersecurity company, AI gives threat actors new tools for attacks.



In a press release on Saturday, the cybersecurity firm Kaspersky stated that “threat actors now have sophisticated new tools to perpetrate attacks” due to the widespread adoption of artificial intelligence (AI) and machine learning technologies in recent years.

According to the security company, one such technology is the deepfake, which can produce human-sounding speech as well as video and image replicas of real individuals. Kaspersky warned that businesses and consumers must be aware that deepfakes will become a growing problem in the future.

According to Kaspersky, a deepfake, a term combining “deep learning” and “fake,” synthesizes “fake images, video, and sound using artificial intelligence.”

The security company issued its warning after discovering deepfake creation tools and services on “darknet marketplaces” that can be used for fraud, identity theft, and data theft.

According to estimates by Kaspersky experts cited in the press release, a deepfake video can be purchased for as little as $300 per minute.

The press release also cited a recent Kaspersky survey in which 51% of employees in the Middle East, Turkey, and Africa said they could distinguish a deepfake from a genuine image. In an actual test, however, only 25% of participants could tell an AI-generated image apart from a real one.

“This puts organizations at risk,” the firm cautioned, given that employees are frequently the primary targets of phishing and other social engineering attacks.

Hafeez Rehman, technical group manager at Kaspersky, said in the press release that although the technology for creating high-quality deepfakes is not yet widely available, “one of the most likely use cases that will come from this is to generate voices in real-time to impersonate someone.”

Rehman added that deepfakes pose a risk to both individuals and businesses. “They spread misinformation, are used for scams, or to impersonate someone without consent,” he said, emphasizing that they are a growing cyber threat that must be guarded against.

The World Economic Forum’s Global Risks Report 2024, published in January, warned that India and Pakistan were vulnerable to misinformation driven by artificial intelligence.

In Pakistan, deepfakes have been deployed for political purposes, especially before general elections.

In December, former prime minister Imran Khan, who is currently detained in Adiala Jail, addressed an online election rally using an AI-generated image and voice clone. The event was watched live by tens of thousands of people and received over 1.4 million views on YouTube.

Although Pakistan has drafted an AI law, digital rights advocates have criticized its lack of safeguards against misinformation and of protections for marginalized communities.
