Over the course of almost a year, Google received over 250 complaints worldwide about its artificial intelligence software being used to create deepfake terrorist content, the company has told Australian authorities.
According to the Australian eSafety Commission, the Alphabet-owned (GOOGL.O) tech giant also said it had received dozens of user reports warning that its AI tool, Gemini, was being exploited to produce child abuse material.
Under Australian law, tech companies must periodically report their harm-minimisation efforts to the eSafety Commission or face fines. The reporting period covered April 2023 to February 2024.
Since OpenAI's ChatGPT became widely known in late 2022, regulators worldwide have demanded stronger controls to prevent AI from being used to enable fraud, terrorism, deepfake pornography, and other abuses.
The Australian eSafety Commission described Google's disclosure as a "world-first insight" into how users may be abusing the technology to produce illegal and harmful content.
In a statement, eSafety Commissioner Julie Inman Grant said, “This highlights how important it is for businesses creating AI products to incorporate and evaluate the effectiveness of safeguards to prevent this type of material from being generated.”
According to its analysis, Google received 258 user reports alleging that deepfake terrorist or violent extremist content had been created with Gemini, as well as 86 user reports alleging AI-generated child exploitation or abuse material.
The authority did not specify the number of complaints it confirmed.
Google identified and removed child abuse material created with Gemini by using hash-matching, a technique that automatically compares newly uploaded images against previously identified ones.
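To illustrate the general idea, the sketch below shows exact hash-matching in its simplest form: each upload is reduced to a digest and checked against a set of digests of previously identified images. The `known_hashes` set and `flag_for_review` function are hypothetical placeholders, not Google's actual systems, and production tools typically rely on perceptual hashing (for example PhotoDNA-style techniques) that also catches slightly edited copies.

```python
import hashlib

# Hypothetical store of digests of previously identified images.
known_hashes: set[str] = set()

def flag_for_review(image_id: str) -> None:
    # Placeholder for whatever moderation workflow handles a match.
    print(f"image {image_id} matched a known hash; escalating for review")

def check_upload(image_id: str, image_bytes: bytes) -> bool:
    """Return True if the uploaded image matches a previously identified one."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in known_hashes:
        flag_for_review(image_id)
        return True
    return False
```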