AI could ‘turbo-charge fraud’ and be monopolised by tech companies, Andrew Leigh warns
Artificial intelligence could “turbo-charge fraud” and exacerbate anti-competitive practices including collusion, the assistant competition minister, Andrew Leigh, has warned.
In a speech to the McKell Institute on Wednesday, Leigh warned that deepfake videos and voice clones could have criminal applications, including by tricking people into thinking a relative is in trouble and in need of an urgent transfer of money.
The Albanese government is conducting a consultation into safeguards to achieve safe and responsible AI. While the industry minister, Ed Husic, has stressed positive applications of AI such as medical innovation and combating online fraud, the government concedes it might be necessary to ban some “high-risk” uses of AI.
Leigh said that AI could “turbocharge fraud, enabling scammers to send personally tailored phishing messages, produce fake websites, overwhelm sites with fake consumer reviews, or create deepfake videos and voice clones”.
“A seven-country survey found that one in 10 respondents had been targeted by an AI voice scam, with cybercriminals using snippets of online audio to trick people into thinking that their child or grandchild is in trouble, and needs money urgently transferred,” he said.
In March Guardian Australia revealed that an AI-generated voice trained to sound like a specific person could be used to fool the voice identification system used by the Australian government for millions of people, a serious security flaw.
Leigh said that competition regulators “have already flagged their concerns that AI may raise a myriad of issues, including bundling, self-preferencing and collusion”.
Bundling refers to selling items of hardware or software together as a package, while self-preferencing refers to firms promoting their own products over those of rivals.
Leigh argued that AI can “reduce barriers to entry for new firms”, such as through the development of a new website, marketing materials, or correspondence for a “new migrant running a small business”.
“In each of these cases, consumers benefit,” he said.
But Leigh said there were “looming risks”, including the expense of the technology, since “currently only a handful of companies have the cloud and computing resources necessary to build and train AI systems”, and disputes over access to the data needed to train AI.
Leigh warned that AI engines could be “natural monopolies … entrenching the position of the strongest platforms”, posing a challenge to competition regulators.
He also noted concerns about the “open first, closed later” business model in which companies “use an open-source approach to initially lure in new business and fresh streams of data, building scale advantages before closing off their ecosystems to ‘lock in customers and lock out competition’”.
In February the Labor MP Julian Hill used a parliamentary speech part-written by ChatGPT to warn that artificial intelligence could be harnessed for “mass destruction”, with negative applications including student cheating, job losses, discrimination, disinformation and uncontrollable military applications.
Hill called for “a white paper, an inquiry, a permanent commission, an international collaboration or some combination of those” to examine AI.
Leigh said AI was “effectively a new factor of production” which has “implications for geopolitics as well as productivity”.
He said the government was responding through the competition taskforce in Treasury, and considering recommendations from the competition regulator to strengthen consumer protection in markets for digital platform services.
“With a technology that is moving this fast, it’s unlikely we’ll find a solution that is perfect the first time,” he said.
“But with AI having huge potential to transform our society and economy, it’s critical to be considering its competitive aspects. Only by doing so will we ensure that Australia reaps the greatest social and economic benefits of AI.”