by Darin Stewart, VP Analyst, Gartner Inc.
In March 2019, the CEO of a U.K. energy firm received an urgent call from his boss, the chief executive of the firm’s parent company in Germany. The German CEO instructed his subordinate to transfer €220,000 to a supplier in Hungary, and the transfer had to be completed within the hour. Because the U.K. CEO recognized his superior’s distinctive German accent and slightly melodious way of speaking, he immediately authorized the payment. Unfortunately, he was not speaking to his boss. He was speaking to an AI impersonating the German CEO.
Should businesses be terrified by this incident? Yes and no. Yes, because of the sophistication and success of the attack. No, because it still takes considerable effort and resources to launch such an attack, and the target was a large multinational corporation. This is about to change. The weapons of disinformation will scale dramatically in two ways. First, it will become much easier for bad actors to launch these attacks in volume, so even a small success rate will make the effort worthwhile. Second, powerful, easy-to-use deepfake tools will put the ability to attack anyone, for any reason, in the hands of the many.
AI and machine learning permeate modern business and communications, so it should come as no surprise that these technologies are being turned to illicit purposes. Deepfakes are audio, images, and videos that appear real but are actually AI-generated, synthetic creations. They are just the latest manifestation of disinformation in what the RAND Corporation describes as a culture of “truth decay.” This dynamic is much discussed and lamented in the realm of politics and conspiracy theories, but in the context of business, the loss of online veracity receives far less attention. While companies are scrambling to defend against ransomware attacks, few are doing anything to prepare for an imminent onslaught of synthetic media.
Deepfake technology has advanced considerably in the few short years since the British CEO was duped by AI voice mimicry, and the necessary tools have become much more accessible. All that is needed to produce a convincing fake image or video is a decent computer (your teenager’s gaming rig is more than sufficient) and a good collection of images of the target. This makes corporate executives particularly vulnerable.
CEOs and other corporate officers tend to be the public face of their companies. As a result, there is usually a wide range of publicly available images and recordings of these people in different contexts, doing different things in different ways. This gives would-be deepfakers all the material they need to create digital doppelgangers of those executives.
This is already happening to many public figures, most commonly when celebrities are made the unwilling subjects of synthetic pornography: public images of a star are combined with existing pornographic material to produce disturbingly realistic fakes. Imagine the scandal, and the damage control required, if a CEO were placed in such a position. Alternatively, a deepfaker could stage and film a bribe, money changing hands between two actors, and then replace one of the actors with a politician or your chief financial officer. Ransomware may be today’s great vulnerability, but deepfake extortion and slander will be tomorrow’s.