A tsunami of AI-generated fake news could ruin the Internet

Human beings often cannot distinguish AI-generated fake news from real news (Credits: PxHere)

Several new tools can recreate a person's face or mimic a writer's voice with a very high level of accuracy. The most concerning of them, however, is the adorably named GROVER, developed by a team of researchers at the Allen Institute for Artificial Intelligence.

GROVER is a fake-news-writing bot, and several people have already used such systems to compose blog posts and even entire subreddits. It is an application of natural language generation, a technology that has raised serious concerns: while it has positive uses such as translation and summarization, it also lets adversaries generate neural fake news. This illustrates the problem that AI-written news poses to humanity, since online fake news is used for advertising gains, for influencing mass opinion, and for manipulating elections.

Try the GROVER demo here.

GROVER can generate an entire article from nothing more than a headline, such as "Link found between Autism and Vaccines", and human readers rate these generations as more trustworthy than fake articles composed by other human beings.
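For readers curious what headline-conditioned generation looks like in practice, here is a minimal sketch that uses the openly available GPT-2 model via the Hugging Face transformers library as a stand-in for GROVER (whose own interface is not described in this article); the headline simply becomes the prompt that the model continues into article-style text.

```python
# Minimal sketch of headline-conditioned text generation.
# GPT-2 is used here purely as a stand-in for GROVER; this is not GROVER's API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

headline = "Link found between Autism and Vaccines"

# The headline acts as the prompt; the model continues it into "body" text.
result = generator(
    headline,
    max_length=200,          # total length in tokens, including the prompt
    num_return_sequences=1,  # generate a single article
    do_sample=True,          # sample for more natural, varied output
)

print(result[0]["generated_text"])
```

Even this small, general-purpose model produces fluent-sounding continuations; a model trained specifically on news articles, as GROVER is, produces far more convincing output.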

This could be just the beginning of the chaos. Kristin Tynski of the marketing agency Fractl warned that tools like GROVER could create a giant tsunami of computer-generated content in every possible field. GROVER is not perfect, but it can certainly convince a casual reader who is not paying attention to every word.

The current best discriminators can separate neural fake news from genuine, human-written news with 73% accuracy, assuming they have access to a moderate amount of training data. Surprisingly, the best defense against GROVER is GROVER itself, which detects its own output with 92% accuracy. This presents an exciting opportunity in the fight against neural fake news: the very best models for generating it are also the best suited to detecting it.
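To make the idea of a discriminator concrete, here is a deliberately simplified sketch that trains a bag-of-words classifier to tell human-written articles from machine-generated ones. It is not GROVER's detector, and the two placeholder articles below stand in for a real labeled dataset; it only illustrates the classification setup behind those accuracy figures.

```python
# Simplified fake-news discriminator: TF-IDF features + logistic regression.
# Far weaker than using GROVER itself as the detector; illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: label 1 = machine-generated, 0 = human-written.
articles = [
    "Scientists today announced a stunning breakthrough that experts say ...",
    "The city council voted on Tuesday to approve the new transit budget ...",
]
labels = [1, 0]

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(articles, labels)

# Estimated probability that an unseen article is machine-generated.
new_article = ["A new study has found a surprising link between ..."]
print(detector.predict_proba(new_article)[0][1])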

AI developers at Google and other companies face a huge task: AI-generated spam will flood the internet, and advertising agencies will be eager to extract the maximum possible revenue from it. Developing robust verification methods against generators such as GROVER is therefore an important field of research. Tynski said in a statement that because AI systems enable content creation at a scale and pace that both humans and search engines have difficulty distinguishing from original content, this is a very important discussion that we are not currently having.
