Neural networks are becoming criminals
Oxford academics and business consultants have been warning for years about the dangers of so-called AI crime: personalized phishing, autonomous drone bombers, hyper-realistic deepfakes, and more. Today these predictions are coming true, and ahead of schedule: Latin American drug cartels are using AI-piloted submarines to smuggle contraband, and corporate spies are using AI for OSINT analysis. Even Russian businesses are encountering AI crime; we will come to a curious (and, fortunately, unsuccessful) example a little later.

Leaving aside highly specialized areas such as stock exchanges, the military, and engineering, there are three key sources of AI threats: deepfakes, personalized phishing, and self-driving vehicles.

The most impressive progress over the past year has been in AI photo and video. The number of neural services for processing and generating images is breaking records, and large companies such as Adobe and Microsoft have added generative-fill neural networks to their products.

Over the past year, an AI crime was nearly committed against a Russian organization as well. A cement manufacturing company faced an attempted corporate raid: fraudsters found a nominal "investor" and communicated with journalists on his behalf using deepfake technology. Thanks to the company's competent response, however, the attack was ultimately repelled and the lawsuits were won.

In early May, Kaspersky Lab experts released an interesting report: they analyzed the black market for deepfakes used to attack people and organizations. According to their data, a minute of fake video now costs anywhere from $300 to $20,000. The main market is 18+ content and fake celebrity streams used to scam viewers out of money. Some enterprising operators have even launched multi-hour deepfake streams on Twitch featuring AI celebrities whose lines are also written by neural networks.

There are more dangerous applications, such as politics. In late April, Republicans in the US responded to the launch of Biden's election campaign by releasing a deepfake video about the horrors that allegedly await the world after his re-election. And the autumn elections in Slovakia were probably the first in history in which AI helped a candidate win.
More precisely, helped his opponent lose: populist Robert Fico managed to beat liberal Michal Šimečka in part because of a leaked recording in which the latter allegedly discusses vote rigging. The recording was exposed as an AI deepfake, but too late: Šimečka's opponents had already spread the material, damaging his ratings. Similar incidents happen in business: in March 2023, Sberbank Deputy Chairman Stanislav Kuznetsov described how fraudsters created the likeness of a company director who ordered an accountant to urgently transfer him $100,000. And that is far from all.