Google has developed SynthID Text, a tool that makes it easier to distinguish AI-generated text from human-written content, and the company has released it as open source.
SynthID Text is part of a broader family of watermarking tools for generative AI output. Such watermarks can help curb the misuse of AI chatbots to spread misinformation, not to mention cheating in schools and workplaces.
DeepMind's vice president of research, Pushmeet Kohli, said: "While SynthID is not a panacea for identifying AI-generated content, it is an important building block for developing more reliable AI identification tools."
Scott Aaronson of the University of Texas at Austin, who previously worked on AI safety at OpenAI, says: "I hope other large language model companies, including OpenAI and Anthropic, will follow DeepMind's lead."
The US tech giant unveiled a watermark for AI-generated images last year, and has since rolled out one for AI-generated video clips.
In May, the company announced that it was applying SynthID to its Gemini software and online chatbots, and it has now made the tool available for free on Hugging Face, an open repository of AI models and datasets.
The company has now published a paper in the journal Nature showing that SynthID generally outperformed similar AI text watermarking technologies. The comparison included assessing how easily responses from various watermarked AI models could be detected. The SynthID tool works by embedding an invisible watermark directly into the text as it is generated by an AI model.
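To illustrate the general idea behind this kind of text watermark, the sketch below shows a toy version of a distribution-biasing scheme: during generation, a secret key and the preceding token pseudorandomly split the vocabulary into a "preferred" subset, sampling is nudged toward that subset, and a detector later scores how often tokens land in it. This is a minimal illustration of the family of techniques, not DeepMind's actual SynthID algorithm (which uses a more sophisticated sampling method); all names, the toy vocabulary, and the bias parameter here are assumptions for demonstration.

```python
import hashlib
import random

# Toy vocabulary standing in for a real language model's token set.
VOCAB = [f"tok{i}" for i in range(1000)]

def green_set(prev_token, key="demo-key"):
    # Pseudorandomly select half the vocabulary, keyed on a secret
    # key and the previous token. Without the key, this partition
    # looks random, so the watermark stays invisible to readers.
    seed = int(hashlib.sha256(f"{key}:{prev_token}".encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def generate(n_tokens, key="demo-key", bias=0.9):
    # Toy "model": uniform over the vocabulary, except that with
    # probability `bias` the next token is drawn from the green set.
    rng = random.Random(0)
    out = ["<s>"]
    for _ in range(n_tokens):
        greens = green_set(out[-1], key)
        pool = sorted(greens) if rng.random() < bias else VOCAB
        out.append(rng.choice(pool))
    return out[1:]

def score(tokens, key="demo-key"):
    # Fraction of tokens falling in their predecessor's green set.
    # Unwatermarked text scores near 0.5; watermarked text much higher.
    hits = sum(t in green_set(p, key)
               for p, t in zip(["<s>"] + tokens, tokens))
    return hits / len(tokens)
```

A detector holding the key can then flag text whose score is far above the 0.5 baseline expected of ordinary writing, which is essentially what the Nature paper's detectability comparison measures across watermarking schemes.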
The Google DeepMind team found that the SynthID watermark does not compromise the quality, accuracy, creativity, or speed of the generated text. This conclusion was drawn from a large-scale experiment in which the watermark was deployed in Gemini outputs and used by millions of people.
ASharq News