Humanoid robot Skybot F-850 to be the commander of a Soyuz spacecraft

The Russian Soyuz spacecraft will be carrying a humanoid robot as the commander of the capsule when it leaves for the International Space Station.

The robot, named Skybot F-850, is one of the latest models in Russia's FEDOR line, developed to stand in for humans in tasks such as driving cars. It is not only going into space; for the first time, a robot will formally command a spacecraft, the Soyuz. It will monitor the spacecraft's condition during the uncrewed flight as it leaves the atmosphere and enters zero gravity.

Development of the FEDOR robots started in 2014. Skybot F-850 is built from materials that can withstand the operating conditions in space. To make sure the ISS is not damaged in any way, its actions have been intentionally limited and are governed by special movement algorithms. With the help of AI, Skybot can function independently, or it can be controlled by an operator wearing a control suit.

Its hands are meticulously designed for handling human tools: it can unlock a door, turn a valve and use a fire extinguisher. There are also videos of it driving cars and even firing pistols, which prompted Dmitry Rogozin, Russia's Deputy Prime Minister, to make clear that the robot is not a weapon but uses artificial intelligence to assist humans in a range of settings. It is the first Russian robot on the ISS, but robots developed by NASA and ESA have been in use on the space station for a long time.

Robonaut 2, a legless NASA robot, flew to the ISS in 2011 and worked alongside astronauts until 2014, when it developed problems and was eventually brought back to Earth. CIMON, ESA's social robot, has been on the ISS since 2018; it can recognize faces, capture images and help astronauts communicate with Watson, IBM's natural-language-processing system on Earth.

The most advanced robots so far have come from NASA's Astrobee project. They will take over from the SPHERES satellites, which served for 10 years as an experimental hardware platform.

The Astrobees are more capable than SPHERES. They will serve as payload carriers and gradually take over routine tasks on the ISS such as equipment inventory and instrument surveys. Astrobee's project manager, Maria Bualat, noted that in crewed spaceflight, crew time is very precious, and assistant robots will take on the repetitive and dangerous tasks the crew currently has to perform. Work is also in progress on robots that operate outside the station itself, handling jobs such as repairing external leaks so that astronauts do not have to perform an extravehicular activity.

Guided by AI, robotic platform automates molecule manufacture

Guided by artificial intelligence and powered by a robotic platform, a system developed by MIT researchers moves a step closer to automating the production of small molecules that could be used in medicine, solar energy, and polymer chemistry.

The system, described in the August 8 issue of Science, could free up bench chemists from a variety of routine and time-consuming tasks, and may suggest possibilities for how to make new molecular compounds, according to the study co-leaders Klavs F. Jensen, the Warren K. Lewis Professor of Chemical Engineering, and Timothy F. Jamison, the Robert R. Taylor Professor of Chemistry and associate provost at MIT.

The technology “has the promise to help people cut out all the tedious parts of molecule building,” including looking up potential reaction pathways and building the components of a molecular assembly line each time a new molecule is produced, says Jensen.

“And as a chemist, it may give you inspirations for new reactions that you hadn’t thought about before,” he adds.

Other MIT authors on the Science paper include Connor W. Coley, Dale A. Thomas III, Justin A. M. Lummiss, Jonathan N. Jaworski, Christopher P. Breen, Victor Schultz, Travis Hart, Joshua S. Fishman, Luke Rogers, Hanyu Gao, Robert W. Hicklin, Pieter P. Plehiers, Joshua Byington, John S. Piotti, William H. Green, and A. John Hart.

From inspiration to recipe to finished product

The new system combines three main steps. First, software guided by artificial intelligence suggests a route for synthesizing a molecule, then expert chemists review this route and refine it into a chemical “recipe,” and finally the recipe is sent to a robotic platform that automatically assembles the hardware and performs the reactions that build the molecule.

Coley and his colleagues have been working for more than three years to develop the open-source software suite that suggests and prioritizes possible synthesis routes. At the heart of the software are several neural network models, which the researchers trained on millions of previously published chemical reactions drawn from the Reaxys and U.S. Patent and Trademark Office databases. The software uses these data to identify the reaction transformations and conditions that it believes will be suitable for building a new compound.
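
As a rough, purely illustrative sketch of the route-suggestion idea (not the MIT group's actual code), a retrosynthesis planner can be pictured as a recursive search in which a learned model proposes precursor sets for a target molecule and scores how likely each disconnection is to work; the helper data and functions below are invented stand-ins for such trained models and catalogues:

```python
# Purely illustrative skeleton of AI-guided retrosynthesis planning.
# propose_precursors() and is_purchasable() stand in for the trained models
# and building-block catalogues the real software uses; the tiny hard-coded
# "chemistry" below is invented so the example runs end to end.

# Toy knowledge base: candidate precursor sets (with a model confidence
# score) for each product, plus molecules that can simply be bought.
TOY_DISCONNECTIONS = {
    "target_molecule": [(["intermediate_1", "reagent_A"], 0.9),
                        (["intermediate_2"], 0.4)],
    "intermediate_1":  [(["building_block_X", "building_block_Y"], 0.8)],
}
PURCHASABLE = {"reagent_A", "building_block_X", "building_block_Y"}

def propose_precursors(target):
    """Stand-in for a neural model trained on millions of known reactions."""
    return TOY_DISCONNECTIONS.get(target, [])

def is_purchasable(molecule):
    """Stand-in for a commercial building-block catalogue lookup."""
    return molecule in PURCHASABLE

def plan_route(target, depth=0, max_depth=5):
    """Greedy depth-first search for a synthesis route to `target`."""
    if is_purchasable(target):
        return []                      # nothing to make: it can be bought
    if depth >= max_depth:
        return None                    # give up on this branch
    # Try the most confident disconnections first.
    for precursors, score in sorted(propose_precursors(target),
                                    key=lambda c: c[1], reverse=True):
        sub_routes = [plan_route(p, depth + 1, max_depth) for p in precursors]
        if all(r is not None for r in sub_routes):
            step = {"product": target, "from": precursors, "score": score}
            return [s for route in sub_routes for s in route] + [step]
    return None                        # no viable route within the depth limit

for step in plan_route("target_molecule"):
    print(step)
```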

“It helps make high-level decisions about what kinds of intermediates and starting materials to use, and then slightly more detailed analyses about what conditions you might want to use and if those reactions are likely to be successful,” says Coley.

“One of the primary motivations behind the design of the software is that it doesn’t just give you suggestions for molecules we know about or reactions we know about,” he notes. “It can generalize to new molecules that have never been made.”

Chemists then review the suggested synthesis routes produced by the software to build a more complete recipe for the target molecule. The chemists sometimes need to perform lab experiments or tinker with reagent concentrations and reaction temperatures, among other changes.

“They take some of the inspiration from the AI and convert that into an executable recipe file, largely because the chemical literature at present does not have enough information to move directly from inspiration to execution on an automated system,” Jamison says.

The final recipe is then loaded on to a platform where a robotic arm assembles modular reactors, separators, and other processing units into a continuous flow path, connecting pumps and lines that bring in the molecular ingredients.

“You load the recipe — that’s what controls the robotic platform — you load the reagents on, and press go, and that allows you to generate the molecule of interest,” says Thomas. “And then when it’s completed, it flushes the system and you can load the next set of reagents and recipe, and allow it to run.”
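
The paper's recipe files encode which modules, reagents, flow rates and temperatures to use at each step; the exact format is not reproduced here, but a deliberately simplified, hypothetical recipe and dispatch loop might look like the following (all step names and parameters are invented for illustration):

```python
# Hypothetical, simplified "recipe" for an automated flow-synthesis run.
# The step names, parameters and executor below are invented for
# illustration; they are not the format used by the MIT platform.
recipe = [
    {"module": "pump",      "reagent": "starting_material_A", "flow_ml_min": 0.5},
    {"module": "pump",      "reagent": "reagent_B",           "flow_ml_min": 0.5},
    {"module": "reactor",   "temperature_c": 80, "residence_time_min": 10},
    {"module": "separator", "phase_to_keep": "organic"},
    {"module": "collect",   "vessel": "product_vial"},
]

def run_recipe(recipe):
    """Walk through the recipe, configuring one module per step."""
    for i, step in enumerate(recipe, start=1):
        module = step["module"]
        settings = {k: v for k, v in step.items() if k != "module"}
        # A real platform would command hardware here; we just log the intent.
        print(f"step {i}: configure {module} with {settings}")

run_recipe(recipe)
```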

Unlike the continuous flow system the researchers presented last year, which had to be manually configured after each synthesis, the new system is entirely configured by the robotic platform.

“This gives us the ability to sequence one molecule after another, as well as generate a library of molecules on the system, autonomously,” says Jensen.

The design for the platform, which is about two cubic meters in size — slightly smaller than a standard chemical fume hood — resembles a telephone switchboard and operator system that moves connections between the modules on the platform.

“The robotic arm is what allowed us to manipulate the fluidic paths, which reduced the number of process modules and fluidic complexity of the system, and by reducing the fluidic complexity we can increase the molecular complexity,” says Thomas. “That allowed us to add additional reaction steps and expand the set of reactions that could be completed on the system within a relatively small footprint.”

Toward full automation

The researchers tested the full system by creating 15 different medicinal small molecules of varying synthesis complexity, with processes taking anywhere from two hours for the simplest syntheses to about 68 hours for manufacturing multiple compounds.

The team synthesized a variety of compounds: aspirin and the antibiotic secnidazole in back-to-back processes; the painkiller lidocaine and the antianxiety drug diazepam in back-to-back processes using a common feedstock of reagents; the blood thinner warfarin and the Parkinson’s disease drug safinamide, to show how the software could design compounds with similar molecular components but differing 3-D structures; and a family of five ACE inhibitor drugs and a family of four nonsteroidal anti-inflammatory drugs.

“I’m particularly proud of the diversity of the chemistry and the kinds of different chemical reactions,” says Jamison, who said the system handled about 30 different reactions compared to about 12 different reactions in the previous continuous flow system.

“We are really trying to close the gap between idea generation from these programs and what it takes to actually run a synthesis,” says Coley. “We hope that next-generation systems will increase further the fraction of time and effort that scientists can focus their efforts on creativity and design.”

Materials provided by Massachusetts Institute of Technology

Chinese newspaper uses robots to generate stories and articles

On Thursday, the China Science Daily publicly announced that it has used software to automatically generate news articles and stories about the latest discoveries and developments from the leading science journals across the world.

The robot reporter, named “Xiaoke,” was developed by the newspaper in collaboration with researchers from Peking University over roughly half a year. According to its creators, Xiaoke has generated more than 200 stories based on the English abstracts of papers published in journals such as Science, Nature, Cell and the New England Journal of Medicine.

Before publication, the content undergoes a review process in which a group of scientists and an editor verify it and provide supplementary information if needed. Zhang Mingwei, the program's head and the newspaper's deputy editor-in-chief, said the team intends to turn Xiaoke into a cross-lingual academic secretary that helps Chinese researchers overcome language barriers and gain quick, easy access to recent advances reported in English-language publications.

Wan Xiaojun, the lead researcher at Peking University in charge of the system's design and technology, said the tool can do much more than translation. Xiaoke is good at selecting key words and sentences and can turn complex, confusing terminology into simple, readable news reports, since the newspaper's readers include the general public as well as professionals.

Science reporting is important for spreading information about recent discoveries and popularizing that knowledge among the general public, and manual review by reporters helps keep readers engaged. Xiaoke recently covered a Canadian study, published in Nature, on the origin and development of cerebellar tumors using a gene sequencing approach; the robot also provided the original abstract and a link to the paper. Robots can also save writing time compared with human editors. According to the project's backers, coverage of more journals, conferences and patent information will be automated in the coming days.
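
Xiaoke's pipeline has not been published in detail, but the abstract-condensing step it performs can be loosely approximated with an off-the-shelf summarization model. The sketch below assumes the Hugging Face transformers library and an arbitrary model choice; it is an analogue of the idea, not the newspaper's actual system:

```python
# Rough analogue of turning a journal abstract into a short readable blurb.
# This is not Xiaoke's actual system; it only shows the kind of
# abstract-condensing step such a tool performs.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

abstract = (
    "Paste an English paper abstract here; the model condenses it into a "
    "few plain sentences that a general reader can follow."
)

summary = summarizer(abstract, max_length=80, min_length=25, do_sample=False)
print(summary[0]["summary_text"])
```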

Recently, many Chinese news organizations have used robots to generate sports and financial stories as well as weather forecasts. The China Science Daily, administered by the Chinese Academy of Sciences, is published five days a week with a circulation of close to 100,000 across the country.

The stories generated by Xiaoke have been published online at:  http://paper.sciencenet.cn/AInews/

Neven’s law to possibly replace Moore’s law for quantum computing processors

A disruptive new technology promises to take computing power to unprecedented heights. To predict how fast quantum computing is progressing, Hartmut Neven, director of Google's Quantum AI Lab, has proposed a rule for quantum computers analogous to Moore's Law, which tracked the progress of conventional computers for more than 50 years. The question is whether “Neven's Law” can be trusted as an accurate picture of what is happening in quantum computing now and of what lies ahead.

Quantum computers store data in quantum physical systems, unlike conventional computers, which store data as electrical signals in one of two states, 0 or 1. This lets them encode information in superpositions of many states, which allows certain calculations to run exponentially faster than on conventional machines. The technology is still in its infancy, and no quantum computer has yet been built that outperforms existing supercomputers. There is some skepticism about its progress, but also excitement about how quickly that progress is happening, so it would be useful to know what to expect from quantum computers in the future.

Moore's law holds that the processing power of conventional digital computers doubles roughly every two years, producing exponential growth. Named after Intel co-founder Gordon Moore, it accurately described the rate of increase in the number of transistors that can be integrated on a silicon microchip. Moore's law does not apply to quantum computers, which are designed differently, on the basis of the laws of quantum physics. This is where Neven's law comes in: it states that quantum computing power is growing doubly exponentially relative to conventional computing.

Doubly exponential growth increases in powers of powers of two: 2^2 (4), 2^4 (16), 2^8 (256), 2^16 (65,536) and so on. If conventional computers had improved at this rate under Moore's law, today's smartphones and laptops would have existed by 1975. Neven hopes this pace will lead to quantum advantage, where relatively small quantum processors overtake the most powerful supercomputers.
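
To make the difference concrete, the short Python sketch below (with an arbitrary number of generations, chosen only for illustration) prints ordinary exponential growth, 2^n, next to doubly exponential growth, 2^(2^n):

```python
# Compare exponential growth (Moore's-law style) with doubly exponential
# growth (Neven's-law style) over a few generations. The generation count
# is an arbitrary illustration value.

def exponential(n: int) -> int:
    """Power doubles each generation: 2^n."""
    return 2 ** n

def doubly_exponential(n: int) -> int:
    """Power is squared each generation: 2^(2^n)."""
    return 2 ** (2 ** n)

if __name__ == "__main__":
    print(f"{'gen':>3} {'2^n':>12} {'2^(2^n)':>24}")
    for n in range(1, 6):
        print(f"{n:>3} {exponential(n):>12} {doubly_exponential(n):>24}")
```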

Neven said that researchers at Google are reducing the error rate in their quantum computer prototypes, which lets them build more complex and powerful machines with every iteration. That progress by itself is exponential. On top of it, a quantum processor is exponentially more powerful than a conventional processor of the same size, because the quantum effect called entanglement allows many computational paths to be explored at the same time. A processor that improves exponentially and is itself exponentially faster than a classical processor of the same size therefore improves at a doubly exponential rate relative to classical processors.

Exciting as this is, Neven's rule is based on a small number of prototypes measured over a short period of time, so the handful of data points could equally well fit other growth patterns. There is also the issue that as quantum processors become more powerful, small technical problems loom larger. Minor electrical noise, which causes errors, could occur more often as processor complexity grows. This can be addressed with error correction protocols, but these require adding large amounts of redundant backup hardware, making the computer much more complex without giving it any extra power. That, too, could undermine Neven's rule.

Moore's law anticipated the progress of conventional computing for 50 years without ever being a fundamental law of nature. It let the microchip industry adopt roadmaps with regular milestones, assess investment and project revenues. If Neven's rule proves as prophetic as Moore's law, its ramifications will go well beyond predicting quantum computing performance. We do not yet know when quantum computers will be commercialised, but if Neven's law holds true, we will find out quickly.

Researchers develop deep neural network to identify deepfakes

Seeing used to mean believing, until we learned that photo editing tools can alter the images we see. Technology has taken this a notch higher: the facial expressions of one person can now be mapped onto another in realistic videos known as deepfakes. These manipulations are not undetectable, however, because image and video editing tools leave traces behind.

A team of researchers in Amit Roy-Chowdhury's Video Computing Group at the University of California, Riverside has created a deep neural network architecture that can detect manipulated images at the pixel level with very high accuracy. Roy-Chowdhury is a professor of computer science and electrical engineering at the Bourns College of Engineering and a Bourns Family Faculty Fellow. The study has been published in the IEEE Digital Library.

In AI terms, a deep neural network is a computer system trained to perform a specific task, which in this case is identifying altered images. Such networks are organised in many connected layers.

Objects in images have boundaries, and whenever an object is inserted into or removed from an image, its boundary has properties different from those of natural boundaries. Someone with good Photoshop skills will do their best to make these boundaries look natural, but examined pixel by pixel the differences stand out. By checking boundary properties, a computer can therefore distinguish a natural image from an altered one.

In a large photo dataset, the researchers labelled non-manipulated images as well as the relevant boundary pixels in manipulated ones, and fed the network this information about manipulated and natural regions. When tested on a separate set of images, it detected manipulated images, and the tampered regions within them, most of the time, and it reported the probability that an image had been manipulated. The researchers are working with still images for now, but the technique can also be applied to deepfake videos.
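
The published architecture is more sophisticated than anything that fits in a few lines, but the basic idea of predicting a manipulation probability for every pixel can be sketched with a small fully convolutional network. The example below assumes PyTorch and uses random data purely to show the shapes involved; it is not the UCR model:

```python
# Minimal sketch of a per-pixel manipulation detector (not the UCR
# architecture): a fully convolutional network that outputs a tampering
# probability for every pixel, e.g. to highlight spliced-object boundaries.
import torch
import torch.nn as nn

class PixelManipulationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # 1x1 convolution maps features to a single logit per pixel.
        self.head = nn.Conv2d(64, 1, kernel_size=1)

    def forward(self, x):
        return self.head(self.features(x))  # (N, 1, H, W) logits

model = PixelManipulationNet()
criterion = nn.BCEWithLogitsLoss()   # per-pixel binary label: tampered or not
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One dummy training step on random data, just to show the shapes involved.
images = torch.rand(4, 3, 128, 128)                       # batch of RGB images
masks = torch.randint(0, 2, (4, 1, 128, 128)).float()     # ground-truth tamper masks
loss = criterion(model(images), masks)
loss.backward()
optimizer.step()

# At inference time, a sigmoid turns the logits into per-pixel probabilities.
probs = torch.sigmoid(model(images))
print(probs.shape)  # torch.Size([4, 1, 128, 128])
```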

Roy-Chowdhury pointed out that a video is essentially a collection of still images, so a method that works on a still image can also be applied to a video. The challenge lies in figuring out whether a given frame has been altered, and there is a long way to go before automated tools can reliably identify deepfake videos.

Roy-Chowdhury noted that, as in cybersecurity, this is a cat-and-mouse game: as defences improve, attackers come up with better alternatives. He believes a combination of human reviewers and automated systems is the right mix. Neural networks can produce a shortlist of suspicious images and videos for people to review, with automation cutting down the amount of data that must be sifted through to determine whether something has been altered. He said this might become practical within a few years.

Journal Reference: IEEE Digital Library

Elon Musk’s Neuralink Says That It’s Created Brain Reading Threads

Neuralink, Elon Musk's secretive brain-machine interface company, has shown off its technology to the public for the first time. The company's goal is to begin implanting devices in paralyzed humans, allowing them to control phones or computers.

The first big advance is flexible “threads,” which are less likely to damage the brain than the materials currently used in brain-machine interfaces. These threads also create the possibility of transferring a higher volume of data, according to a white paper credited to “Elon Musk & Neuralink.” The abstract notes that the system could include “as many as 3,072 electrodes per array distributed across 96 threads.”

The threads are 4 to 6 micrometres across, thinner than a human hair; another advantage is that a machine embeds them automatically.

Musk said the presentation was not meant to create hype but to encourage people to apply to work at Neuralink. The company's scientists hope to get through the skull with a laser beam rather than the traditional method of drilling holes, and after further experiments they hope to have the system in human patients by the end of next year. Musk also revealed that a monkey had been able to control a computer with its brain. He added that Neuralink will not suddenly have a neural lace that takes over people's brains; the ultimate aim is a technology that allows humans to merge with AI.

Matthew Nagle was the first person with spinal cord paralysis to receive a brain implant; back in 2006 it allowed him to control a computer cursor and play Pong using only his mind. As part of subsequent research using a system called BrainGate, paralyzed people with implants have moved robotic arms and brought objects into focus.

Neuralink builds on a long history of academic research, but no existing technology fits the company's goal of reading neural spikes at scale. BrainGate relied on the Utah Array, a bed of stiff needles offering 128 electrode channels, which meant comparatively little data could be picked up and the rigid needles posed a risk to tissue. Neuralink addresses this by using much thinner, flexible polymer threads.

The thin threads are difficult to implant precisely, so the company has developed a neurosurgical robot capable of inserting 192 electrodes per minute. The robot also avoids blood vessels, which may lead to less of an inflammatory response in the brain. Bandwidth is the bottleneck in interacting with AI: people can take in information far more quickly than they can push it out by voice. By connecting directly to the brain, the system is meant to allow fast communication with machines.

Right now the chip can only transmit data over a wired connection, but the ultimate goal is a fully wireless system. The planned wireless product, called the N1 sensor, would transmit data wirelessly; Neuralink plans to implant four of them in the brain, three in motor areas and one in a somatosensory area, connected to a device mounted behind the ear. The company hopes the procedure can eventually be done without general anaesthesia, though there is a whole FDA approval process to get through first. It is currently testing on rats to check the stability of the system, ensure a high-bandwidth brain connection, and verify that the flexible threads let it record more neurons with precise results.

Britain honors Alan Turing, father of AI, as the face of its new banknote

Alan Turing, the father of artificial intelligence and computer science, has been revealed as the face of Britain's new 50-pound banknote. Turing was also famous as a World War II codebreaker whose work hastened the war's end and, as a result, saved thousands of lives. During his lifetime, however, his achievements were overshadowed by his conviction for homosexual activity, then a criminal offense in Britain.

Mark Carney, governor of the Bank of England, said Turing was a war hero as well as the father of artificial intelligence, on whose shoulders many now stand. The announcement was made at the Science and Industry Museum in Manchester, which also featured the 12 finalists considered for the face of the note, including the famed physicist Stephen Hawking.

During World War II, Turing worked at Bletchley Park, where he helped develop a machine to crack the Enigma code used by Germany. His famous “Turing Test” is still used as a benchmark for judging whether a machine can think. After the war, Turing was charged with acts of indecency and sentenced to chemical castration. He was found dead at the age of 41, having apparently poisoned himself with cyanide.

British Prime Minister Theresa May tweeted that Turing's pioneering work played a major role in ending World War II, making it fitting to honor the legacy and contribution of LGBT people on the new 50-pound note.

Dermot Turing, Alan Turing's nephew, said the entire family was delighted with the announcement and praised the Bank of England for recognizing his uncle's work in computer science. He said the decision reminds the nation of what Turing did best during his lifetime and how he would have wished to be remembered.

The Oscar-winning biopic “The Imitation Game,” starring Benedict Cumberbatch, drew attention to the role the mathematical genius and his team played in defeating Adolf Hitler. Cumberbatch told the BBC he could not think of a more deserving candidate, adding that Turing was a great human being with a brilliant mind who suffered greatly in his intolerant times. Kim Sanders, a spokeswoman for the gay rights charity Stonewall, said it is important to recognize the contribution of LGBT figures throughout history, which makes it especially meaningful to have Alan Turing as the new face of the 50-pound banknote.

The new note is expected to enter circulation at the end of 2021. It will include an image of Turing, a ticker tape showing his birth date in binary code, and a table and formula from his 1936 paper that introduced the concept of how computers operate.

Researchers release the first vaccine fully developed by AI program

  • A team of researchers at Flinders University in South Australia has created a vaccine considered to be the first human drug fully designed by artificial intelligence.
  • The vaccine was independently designed by an AI program known as SAM, or Search Algorithm for Ligands.
  • A second program, the Synthetic Chemist, generated a vast number of different chemical compounds that were fed to SAM.
  • The candidate was tested on animals to confirm that it boosted the effectiveness of the influenza vaccine.

A team of researchers at Flinders University in South Australia has created a vaccine believed to be the first human drug fully designed by artificial intelligence. Drugs have previously been designed with the help of computers, but this vaccine was designed independently by an AI program known as SAM, or Search Algorithm for Ligands.

Nikolai Petrovsky, the Flinders University professor who led the development, said the name comes from the task it was assigned: searching the universe of possible compounds for a good human drug, also known as a ligand. Petrovsky, who is also research director of the Australian company Vaxine, added that the AI was first taught a set of compounds that activate the human immune system and a set of compounds that do not. The AI then worked out on its own what separated the two classes of compounds.

A second program, known as the Synthetic Chemist, was then developed; it generated a vast number of different chemical compounds, which were fed to SAM. SAM went through all of the compounds to find candidates likely to work as human immune drugs.
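
Details of SAM and the Synthetic Chemist are not public as code, but the generate-and-screen loop they describe can be illustrated in miniature: train a classifier on known active and inactive compounds, then score a large pool of generated candidates and keep the best for lab testing. The sketch below uses random numeric features and scikit-learn as hypothetical stand-ins for real chemical descriptors and models:

```python
# Purely illustrative sketch of the generate-and-screen idea (NOT the
# actual SAM / Synthetic Chemist code). It assumes candidate compounds are
# already described by numeric feature vectors, with a labelled training
# set of known immune activators vs. inactive compounds.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-ins for real chemistry data: 200 known compounds
# described by 64 features each, labelled 1 (activates the immune
# response) or 0 (does not).
X_known = rng.random((200, 64))
y_known = rng.integers(0, 2, 200)

# Step 1: learn what separates the two classes of compounds.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_known, y_known)

# Step 2: a "synthetic chemist" proposes a large pool of new candidates
# (here just random feature vectors standing in for generated structures).
candidates = rng.random((10_000, 64))

# Step 3: score every candidate and keep the most promising ones for
# lab testing on human blood cells.
scores = model.predict_proba(candidates)[:, 1]
top = np.argsort(scores)[::-1][:10]
print("indices of the 10 highest-scoring candidates:", top)
```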

The top candidates identified by SAM were then tested on human blood cells to check whether they actually worked. This confirmed that SAM had not only found effective human immune drugs, but ones better than those currently available. The candidates were then tested on animals to confirm that they boosted the effectiveness of the influenza vaccine. The technique saves millions of dollars as well as the time needed for a normal drug discovery process. The work received funding from the US National Institute of Allergy and Infectious Diseases, part of the NIH.

The vaccine comes at a time of high influenza case numbers in Australia: about 96,000 cases have been confirmed so far in 2019, with nearly 10,000 in Western Australia. Petrovsky hopes the vaccine will be more effective than current ones and eventually replace the standard seasonal flu shot. This is not a first for Flinders University, which developed the first swine flu vaccine back in 2009.

Securing new funding remains challenging, however. Funding bodies in Australia direct the vast majority of money to big research institutes, making it very difficult for researchers outside those groups to compete and stay relevant, so they often have to look overseas. Petrovsky applied to the NIH in the United States and has received 10 grants so far, with supplements totaling 50 million dollars. He feels the US system values innovative, forward-looking research, while Australian bodies are highly conservative.

AI program defeats professionals in six-player poker game

An AI program created by Carnegie Mellon University researchers in collaboration with Facebook has beaten top professionals in the world’s most popular form of poker, six-player no-limit Texas hold’em poker.

The AI, named Pluribus, defeated two top players: Darren Elias, who holds the record for most World Poker Tour titles, and Chris “Jesus” Ferguson, winner of six World Series of Poker events. Each of them separately played 5,000 hands of poker against five copies of Pluribus.
In another experiment, Pluribus played against 13 pros, all of whom have won more than a million dollars playing poker. It faced five pros at a time over a total of 10,000 hands and emerged as the winner.

Tuomas Sandholm, a computer science professor at CMU, created Pluribus together with Noam Brown, a computer science Ph.D. candidate at CMU and research scientist at Facebook AI. Sandholm said Pluribus achieved superhuman performance at multi-player poker, a recognized milestone in AI and game theory that had been open for decades. Until now, milestones in strategic reasoning by AI have been limited to two-party contests; defeating five other players in such a complicated game opens up new possibilities for using AI to solve real-world problems. The paper detailing the achievement has been published in the journal Science. Playing a six-player contest rather than a two-player one requires fundamental changes in how the AI develops its strategy, and Brown said some of the strategies Pluribus used may even change the way professionals approach the game.

Pluribus's strategy included some surprising features. For example, most human players avoid “donk betting” (ending one betting round with a call but starting the next with a bet), which is generally viewed as a weak move; Pluribus, however, used donk bets far more often.

Michael “Gags” Gagliano, who has won almost two million dollars in his career, also played against Pluribus. He was fascinated by some of its strategies and felt that several of its plays around bet sizing were ones human players simply were not making.

Sandholm and Brown had earlier developed Libratus, which defeated four poker players over a combined 120,000 hands in the two-player version of the game. In games with more than two players, pursuing a Nash equilibrium (the approach generally used by two-player game AIs) can still lead to defeat. Pluribus instead first plays against six copies of itself to develop a blueprint strategy, and then looks ahead several moves during live play to refine that strategy. A newly developed limited-lookahead search algorithm is what enabled Pluribus to reach superhuman results. It also mixes in unpredictable moves, since opponents often assume an AI bets only when it has the best possible hand. Pluribus was computationally efficient, computing its blueprint strategy in eight days using 12,400 core hours, compared with 15 million core hours for Libratus.
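
The full Pluribus algorithm is far beyond a snippet, but its blueprint strategy is built on counterfactual regret minimization, whose core building block, regret matching, can be shown on a toy game. The sketch below uses rock-paper-scissors as a stand-in for poker and assumes only NumPy; two regret-matching players trained in self-play converge toward the equilibrium strategy:

```python
# Toy illustration of regret matching via self-play, the building block
# behind counterfactual regret minimization (the family of methods that
# blueprint strategies like Pluribus's are based on). Rock-paper-scissors
# stands in for poker; this is an educational sketch, not Pluribus itself.
import numpy as np

ACTIONS = 3  # rock, paper, scissors
# PAYOFF[a][b] = payoff to a player choosing action a against action b.
PAYOFF = np.array([[0, -1, 1],
                   [1, 0, -1],
                   [-1, 1, 0]], dtype=float)

def strategy_from_regrets(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    positive = np.maximum(regrets, 0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(ACTIONS, 1 / ACTIONS)

def train(iterations=50_000, seed=0):
    rng = np.random.default_rng(seed)
    regrets = np.zeros((2, ACTIONS))
    strategy_sum = np.zeros((2, ACTIONS))
    for _ in range(iterations):
        strat = np.array([strategy_from_regrets(regrets[p]) for p in range(2)])
        strategy_sum += strat
        actions = [rng.choice(ACTIONS, p=strat[p]) for p in range(2)]
        # Regret = payoff of each alternative action minus the payoff of
        # the action actually played, given the opponent's action.
        for p, opp in ((0, 1), (1, 0)):
            played = PAYOFF[actions[p], actions[opp]]
            regrets[p] += PAYOFF[:, actions[opp]] - played
    # The average strategy over all iterations approaches an equilibrium.
    return strategy_sum / strategy_sum.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    avg = train()
    print("average strategies (should be close to uniform 1/3 each):")
    print(avg.round(3))
```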

Tsunami of fake news generated by AI can ruin the Internet

Several new tools can recreate a person's face or a writer's voice with very high accuracy. The most concerning, however, is the adorably named software GROVER, developed by a team of researchers at the Allen Institute for Artificial Intelligence.

GROVER is a fake-news-writing bot of the kind people have already used to compose blogs and even entire subreddits. It is an application of natural language generation, a technology that has raised serious concerns: while there are positive uses such as translation and summarization, it also allows adversaries to generate neural fake news, illustrating the problems AI-written news can pose. Online fake news is already used for advertising gains, influencing mass opinion and manipulating elections.

GROVER can create an article from nothing more than a headline, such as “Link Found Between Autism and Vaccines,” and human readers rated its generated articles as more trustworthy than comparable ones written by humans.

This could be just the beginning of the chaos. Kristin Tynski of the marketing agency Fractl said tools like GROVER could create a giant tsunami of computer-generated content in every possible field. GROVER is not perfect, but it can certainly convince a casual reader who is not paying attention to every word.

The best current discriminators can separate neural fake news from genuine, human-written news with 73% accuracy, assuming access to a moderate amount of training data. Surprisingly, the best defense against GROVER is GROVER itself, which reaches 92% accuracy. This points to an encouraging strategy against neural fake news: the very best models for generating it are also the best placed to detect it.
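
GROVER's own discriminator is not shown here, but the underlying intuition, that a strong generator is well placed to spot machine-written text, can be illustrated with a simple perplexity check. The sketch below assumes the Hugging Face transformers library and GPT-2 rather than GROVER, and the threshold is arbitrary; unusually low perplexity is treated as a weak hint of machine generation:

```python
# Minimal sketch: score a passage's perplexity under GPT-2 as a crude
# machine-generated-text signal. This illustrates the idea that generators
# make good detectors; it is not the actual GROVER discriminator, and the
# threshold below is arbitrary.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return float(torch.exp(out.loss))

article = "Scientists announced today that a link was found between ..."
ppl = perplexity(article)
print(f"perplexity: {ppl:.1f}")
# Very low perplexity means the text is highly predictable to the model,
# which is one (weak) hint that it may itself be machine-generated.
if ppl < 20.0:  # arbitrary illustrative threshold
    print("flag for human review")
```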

AI developers at Google and other companies face a huge task, because AI-generated spam will flood the internet and advertising agencies will be eager to squeeze the maximum possible revenue out of it. Developing robust verification methods against generators such as GROVER is therefore an important research field. Tynski said in a statement that because AI systems enable content creation at enormous scale and pace, and both humans and search engines have difficulty distinguishing that content from original work, this is a very important conversation that we are not currently having.