

A tsunami of fake news generated by AI can ruin the Internet

Several new tools can recreate a human face or a writer's voice to a very high level of accuracy. The most concerning of them, however, is the adorably named software GROVER, developed by a team of researchers at the Allen Institute for Artificial Intelligence.

It is a fake-news-writing bot of the kind people have already used to compose blogs and even entire subreddits. It is an application of natural language generation that has raised serious concerns: while the technology has positive uses such as translation and summarization, it also allows adversaries to generate neural fake news. This illustrates the problems that news written by AI can pose to humanity. Online fake news is already used for advertising gains, for influencing mass opinion and for manipulating elections.

Try the GROVER demo here

GROVER can create an entire article from nothing more than a headline, such as “Link found between Autism and Vaccines”. Human raters found the generated articles more trustworthy than ones composed by other human beings.
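To make the idea concrete, the snippet below is a minimal sketch of headline-conditioned generation, using the publicly available GPT-2 model from Hugging Face's transformers library as a stand-in. GROVER itself additionally conditions on metadata such as domain, date and author, so this is only an illustration of the general technique, not GROVER's own code.

```python
# Minimal sketch: generate article text conditioned on a headline.
# GPT-2 is used here as a generic stand-in, not GROVER itself.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

headline = "Link found between Autism and Vaccines"
prompt = f"Headline: {headline}\nArticle:"

# Sample a short continuation token by token; real systems produce
# full articles the same way, just with much larger models.
result = generator(prompt, max_new_tokens=80, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```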

This could be just the beginning of the chaos. Kristin Tynski of the marketing agency Fractl said that tools like GROVER could create a giant tsunami of computer-generated content in every possible field. GROVER is not perfect, but it can certainly convince a casual reader who is not paying attention to every word they read.

The current best discriminators can distinguish neural fake news from original, human-composed news with an accuracy of 73%, assuming they have access to a moderate set of training data. Surprisingly, the best defense against GROVER is GROVER itself, which detects its own output with 92% accuracy. This presents a very promising opportunity against neural fake news: the best models for generating it are also the best suited to detecting it.
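For illustration, the toy sketch below shows what such a discriminator does in principle: learn, from labeled examples, to separate machine-generated articles from human-written ones. This bag-of-words baseline and its placeholder data are assumptions made purely for the example; the 73% and 92% figures above come from large neural detectors, not from anything this small.

```python
# Toy discriminator: TF-IDF features + logistic regression.
# Labels: 1 = machine-generated article, 0 = human-written article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical placeholder training data; a real detector would need
# thousands of labeled articles.
train_texts = [
    "example text of an article produced by a generator",
    "example text of an article written by a journalist",
]
train_labels = [1, 0]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(train_texts, train_labels)

# Probability that an unseen article is machine-generated.
print(clf.predict_proba(["some unseen article text"])[0][1])
```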

AI developers at Google and other companies face a huge task, as AI-generated spam will flood the internet and advertising agencies will look to squeeze the maximum possible revenue out of it. Developing robust verification methods against generators such as GROVER is an important research field. Tynski said in a statement that because AI systems enable content creation at a scale and pace that both humans and search engines struggle to distinguish from original content, this is a very important conversation that we are not currently having.

This Simple Online Game Could Work Like a 'Vaccine' Against Fake News


Researchers looking for a way to stop the spread of fake news developed an online role-playing game. In February 2018, researchers from the University of Cambridge helped launch a browser game called Bad News. Since then, thousands of people have spent the roughly 15 minutes needed to complete it, and many allowed their data to be used for the study. Within the simulation, players stoke anger and fear by manipulating news and social media, and the game has shown positive results.

Dr. Sander van der Linden, director of the Cambridge Social Decision-Making Lab, said that fake news spreads faster and more easily than the truth, so fighting it can feel like a losing battle. He added that the team wanted to see whether people could learn to tell a hoax from real news after being exposed to a weakened dose of the techniques used to produce false information. In psychology this is known as inoculation theory, and it acts like a psychological vaccine.

To measure the effect of the game, players were asked to rate the reliability of a series of headlines and tweets before and after playing. The headlines were a random mixture of real and fake news. A study published in Palgrave Communications showed that the perceived reliability of fake news fell by an average of 21% after playing the game, and that the people most susceptible to fake news benefited the most.
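As a rough illustration of how such a before-and-after effect can be computed (the exact analysis in the paper is more involved, and the numbers below are invented for the example):

```python
# Invented example ratings of fake headlines on a 1-7 reliability scale,
# from the same players before and after the game; not data from the study.
pre_ratings = [4.1, 3.8, 4.5, 4.0]
post_ratings = [3.2, 3.0, 3.6, 3.1]

pre_mean = sum(pre_ratings) / len(pre_ratings)
post_mean = sum(post_ratings) / len(post_ratings)

# Relative drop in perceived reliability after playing.
reduction = (pre_mean - post_mean) / pre_mean * 100
print(f"Perceived reliability of fake news fell by about {reduction:.0f}%")
```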

Van der Linden said that 15 minutes of play has only a moderate effect on any one player, but that effect scales when the game reaches thousands of people. Study co-author Jon Roozenbeek of Cambridge University said the team is shifting its target from specific ideas to tactics; by doing so they hope to create a general vaccine against fake news rather than rebutting each falsehood individually.

The game has attracted considerable attention, and by working with the UK Foreign Office the team has translated it into several languages, including German, Serbian, Polish and Greek. WhatsApp has also commissioned the team to create a version of the game for its platform.

The researchers have also created a junior version of the game; despite its limitations, it proved similarly beneficial across age groups. Players must earn six badges, each reflecting a common strategy used by purveyors of fake news, although because of limited bandwidth the questions measuring the effect covered only four of the badges. Roozenbeek added that their platform offers early evidence that protection against deception can be built by training people to recognise the techniques used to promote fake news.