

Researchers demonstrate production of graphene using bacteria

Researchers have devised a novel, cheaper way to produce graphene, a remarkable material, with the help of bacteria. Graphene is useful for filtering water, dyeing hair, and strengthening other materials. The study has been published in ChemistryOpen.

When the bacterium Shewanella oneidensis is mixed with oxidized graphite, or graphene oxide (which is comparatively easy to produce but not conductive because of its oxygen groups), the bacteria strip away the oxygen groups, yielding conductive graphene. The process is cheaper, quicker, and more eco-friendly than existing production methods, and the product can be stored for long periods, making it suitable for a range of applications. Using this method, graphene could be produced at the scale required for next-generation computing and medical devices.

“For real applications, you need large amounts,” says biologist Anne Meyer from the University of Rochester in New York.

Using the new method, Meyer and her colleagues were able to make graphene that’s thinner, more stable, and longer-lasting than graphene produced by chemical manufacturing. This unlocks all sorts of opportunities for less costly, bacteria-produced graphene, which could be used in field-effect transistor (FET) biosensors: devices that detect specific biological molecules, such as glucose monitors for diabetics.

The bacterial production method also leaves behind certain oxygen groups, which make the resulting graphene easier to link to specific molecules. Graphene produced this way could serve as conductive ink in circuit boards and computer keyboards, or in the small wires that defrost car windscreens; by tweaking the bacterial process, the team can even produce graphene that is conductive on only one side. The material could also lead to innovative computer technologies and medical equipment.

At present, graphene is produced by various chemical methods starting from graphite or graphene oxide, in contrast to the original method, in which graphene flakes were peeled from graphite blocks using sticky tape. The new production method is the most promising to date because it avoids harsh chemicals. Much research remains to be done on the bacterial process before it can be scaled up and used to develop next-generation devices, but the future of this extraordinary material continues to look bright. Meyer said that bacterially produced graphene will lead to much better applicability for product development and for nanocomposite materials.

Journal: https://onlinelibrary.wiley.com/doi/full/10.1002/open.201900186

New imaging method aids in water decontamination

New imaging method aids in water decontamination

A breakthrough imaging technique developed by Cornell researchers shows promise in decontaminating water by yielding surprising and important information about catalyst particles that can’t be obtained any other way.

Peng Chen, the Peter J.W. Debye Professor of Chemistry, has developed a method that can image nonfluorescent catalytic reactions – reactions that don’t emit light – on nanoscale particles. An existing method can image reactions that produce light, but that applies only to a small fraction of reactions, making the new technique potentially significant in fields ranging from materials engineering to nanotechnology and energy sciences.

The researchers then demonstrated the technique in observing photoelectrocatalysis – chemical reactions involving interactions with light – a key process in environmental remediation.

“The method turned out to be actually very simple – quite simple to implement and quite simple to do,” said Chen, senior author of “Super-Resolution Imaging of Nonfluorescent Reactions via Competition,” which published July 8 in Nature Chemistry. “It really extends the reaction imaging to an almost unlimited number of reactions.”

First author of the paper is Xianwen Mao, a postdoctoral researcher in Chen’s lab.

Catalytic reactions occur when a catalyst, such as a solid particle, accelerates a molecular change. Imaging these reactions at the nanoscale as they happen, which the new technique allows scientists to do, can help researchers learn the optimal size and shape for the most effective catalyst particles.

In the paper, the researchers applied the new technique to image the oxidation of hydroquinone, a micropollutant found in water, on bismuth vanadate catalyst particles, and discovered previously unknown behaviors of catalysts that helped render hydroquinone nontoxic.

“Many of these catalyzed reactions are environmentally important,” Chen said. “So you could study them to learn how to remove pollutants from an aqueous environment.”

Previously, Chen’s research group pioneered the application of single-molecule fluorescence imaging, a noninvasive, relatively inexpensive and easily implemented method that allows researchers to observe chemical reactions in real time. Because the method was limited to fluorescent reactions, however, his team worked for years on a more widely applicable method.

The technique they discovered relies on competition between fluorescent and nonfluorescent reactions. The competition suppresses the fluorescent reaction, allowing it to be measured and mapped, which in turn provides information about the nonfluorescent reaction.
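The competition principle can be sketched with a toy surface-kinetics model. The Langmuir-type expressions and every parameter value below are illustrative assumptions for intuition, not the kinetics used in the paper: the fluorescent signal drops when an invisible competitor occupies catalyst sites, and the size of that drop reveals the competitor's contribution.

```python
# Toy competition model (illustrative assumption, not the paper's kinetics):
# two reactants compete for catalyst surface sites (Langmuir adsorption).
# Suppression of the fluorescent signal reveals the nonfluorescent reaction.

def fluorescent_signal(k, Kf, Cf, KnCn=0.0):
    """Fluorescent reaction rate when a competitor occupies a share of sites."""
    return k * Kf * Cf / (1.0 + Kf * Cf + KnCn)

k, Kf, Cf = 2.0, 1.5, 0.4   # assumed rate constant and adsorption parameters
true_KnCn = 3.0             # the competitor's (unknown) adsorption term

s0 = fluorescent_signal(k, Kf, Cf)            # bright signal, no competitor
s = fluorescent_signal(k, Kf, Cf, true_KnCn)  # suppressed signal

# Inverting the signal ratio recovers the invisible competitor's contribution:
recovered_KnCn = (s0 / s - 1.0) * (1.0 + Kf * Cf)
print(recovered_KnCn)  # recovers 3.0 (up to float rounding)
```

Measuring the bright signal with and without the competitor present is, in spirit, how the suppression map carries location-by-location information about the nonfluorescent reaction.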

The researchers named their method COMPetition Enabled Imaging Technique with Super-Resolution, or COMPEITS.

“This highly generalizable technique can be broadly applied to image various classes of nonfluorescent systems, such as unlabeled proteins, neurotransmitters and chemical warfare agents,” Chen said. “Therefore, we expect COMPEITS to be a breakthrough technology with profound impacts on many fields including energy science, cell biology, neuroscience and nanotechnology.”

Co-authors include research associate Chunming Li, former postdoctoral researcher Madhi Hesari and Ningmu Zou, Ph.D. ’17. The research was partly supported by the Army Research Office and the U.S. Department of Energy, and made use of the Cornell Center for Materials Research, which is supported by the National Science Foundation.

Journal: https://www.nature.com/articles/s41557-019-0288-8

Materials provided by Cornell University

Elon Musk Says SpaceX's About to Launch And 'Hover' a Mars Rocket Prototype

SpaceX to perform hop-and-hover launch of its Mars rocket prototype

South Texas will witness SpaceX’s most ambitious test launch yet, featuring a shiny Mars rocket prototype. Over the past eight months, workers at a coastal launch site have been preparing the stout, three-legged prototype for testing. It is made of stainless steel, has the shape of a badminton shuttlecock, and is equipped with the company’s next-generation Raptor rocket engines.

Elon Musk has named the company’s six-story-tall rocket ship “Starhopper,” as it is built to hop to altitudes no higher than about 3 miles, not to fly into space. It is a testbed for Starship, the roughly 400-foot-tall launch system to come. The company’s vision is to take dozens of people to the Moon and Mars, deploy many satellites at a time, and rocket people around Earth in minutes. Starhopper was fired up for the first time in April, lifting no more than a few inches off the pad; subsequent tests lifted it further, but not by much.

Engineers are hoping to take the vehicle up to 20 meters, and Musk added that Starhopper will also move sideways before landing back on its launch pad. A SpaceX spokesperson said the hop-and-hover flight is part of a series of tests designed to push the limits of the vehicle as quickly as possible within safety constraints.

SpaceX will launch Starhopper between 14:00 and 20:00 CT on July 16, although launch schedules in development programs are dynamic and subject to change. The company will first flow liquid oxygen through the engine to check the plumbing and hardware, followed by a static-fire attempt on Monday if all goes well. The static fire adds methane to the flowing oxygen, igniting and burning the Raptor engine for a few seconds to verify its functioning. The launch site is located near Boca Chica Beach in South Texas. Musk first confirmed Starhopper in late 2018.

The test marks the beginning of the Starship program. SpaceX is also building a Mark 1 (Mk1) Starship prototype designed to reach orbit. The Mk1 is expected to fly to 20 km within a few months, but the government license issued so far only allows experimental flights up to 5 km and lasting up to 6 minutes; the company is working on regulatory approval for the prototypes. SpaceX plans to launch prototypes into orbit by the end of 2020, a full-scale Starship to the Moon by 2023, and a crewed mission to Mars by 2026.


AI program defeats professionals in six-player poker game

An AI program created by Carnegie Mellon University researchers in collaboration with Facebook has beaten top professionals in the world’s most popular form of poker, six-player no-limit Texas hold’em poker.

The AI, named Pluribus, defeated two top players: Darren Elias, who holds the record for most World Poker Tour titles, and Chris “Jesus” Ferguson, champion of six World Series of Poker events. Each of them separately played 5,000 hands of poker against five copies of Pluribus.
In another experiment, Pluribus played against 13 pros, all of whom have won more than a million dollars playing poker. Facing five pros at a time for a total of 10,000 hands, it again emerged the winner.

Tuomas Sandholm, a computer science professor at CMU, created Pluribus with Noam Brown, a CMU computer science Ph.D. student and research scientist at Facebook AI. Sandholm said that Pluribus achieved superhuman performance at multiplayer poker, a milestone in AI and in game theory that had been open for decades. Until now, AI milestones in strategic reasoning had been limited to two-party contests; defeating five other players in such a complicated game opens up new possibilities for using AI to solve real-world problems. A paper detailing the achievement has been published in the journal Science. Playing a six-player contest rather than a two-player one required fundamental changes in how the AI develops its strategy, and Brown said that some of the strategies Pluribus used may even change the way professionals approach the game.

Pluribus incorporated some surprising features into its strategy. For example, most human players avoid “donk betting” – ending one round with a call but opening the next with a bet – since it is generally viewed as a weak move. Pluribus, however, used donk bets far more often than human players do.

Michael “Gags” Gagliano who has won almost two million dollars in his career also played against Pluribus. He was fascinated by some of the strategies used by Pluribus and felt that several plays related to bet sizing were not being made by the human players.

Sandholm and Brown had earlier developed Libratus, which defeated four poker players over a combined 120,000 hands in the two-player version of the game. In games involving more than two players, playing a Nash equilibrium strategy (the approach generally used by two-player game AIs) can still result in defeat. Instead, Pluribus first plays against six copies of itself to develop a blueprint strategy, then looks ahead several moves to refine its play; a newly developed limited-lookahead search algorithm enabled it to achieve superhuman results. It also mixes in unpredictable moves, since opponents often assume an AI bets only when it has the best possible hand. Pluribus was computationally efficient as well, computing its blueprint strategy in eight days using 12,400 core-hours, compared with the 15 million core-hours used by Libratus.
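The self-play blueprint idea can be illustrated with regret matching, the basic building block of the counterfactual-regret-minimization family of algorithms behind this line of poker AIs. The toy below (an illustration, not Pluribus itself) runs regret-matching self-play on rock-paper-scissors, where the average strategy converges to the uniform Nash equilibrium:

```python
import numpy as np

# Regret-matching self-play on rock-paper-scissors (toy blueprint computation).
# Row player's payoff matrix; rows/cols are Rock, Paper, Scissors.
PAYOFF = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]])

def strategy_from_regrets(regret_sum):
    """Play in proportion to positive accumulated regret (uniform if none)."""
    positive = np.maximum(regret_sum, 0.0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(3, 1.0 / 3.0)

rng = np.random.default_rng(0)
regret_sum = np.zeros(3)
strategy_sum = np.zeros(3)
for _ in range(20000):
    strategy = strategy_from_regrets(regret_sum)
    strategy_sum += strategy
    me = rng.choice(3, p=strategy)
    opp = rng.choice(3, p=strategy)   # self-play: opponent uses the same strategy
    # Regret of each action = its payoff vs. the opponent minus what we got.
    regret_sum += PAYOFF[:, opp] - PAYOFF[me, opp]

avg_strategy = strategy_sum / strategy_sum.sum()
print(avg_strategy)  # approaches [1/3, 1/3, 1/3]
```

Pluribus runs a far more sophisticated version of this loop over the enormous game tree of no-limit hold’em, then adds depth-limited search on top of the resulting blueprint.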

Automated system generates robotic parts for novel tasks


An automated system developed by MIT researchers designs and 3-D prints complex robotic parts called actuators that are optimized according to an enormous number of specifications. In short, the system does automatically what is virtually impossible for humans to do by hand.

In a paper published today in Science Advances, the researchers demonstrate the system by fabricating actuators — devices that mechanically control robotic systems in response to electrical signals — that show different black-and-white images at different angles. One actuator, for instance, portrays a Vincent van Gogh portrait when laid flat. Tilted at an angle when it’s activated, however, it portrays the famous Edvard Munch painting “The Scream.” The researchers also 3-D printed floating water lilies with petals equipped with arrays of actuators and hinges that fold up in response to magnetic fields run through conductive fluids.

The actuators are made from a patchwork of three different materials, each with a different light or dark color and a property — such as flexibility and magnetization — that controls the actuator’s angle in response to a control signal. Software first breaks down the actuator design into millions of three-dimensional pixels, or “voxels,” that can each be filled with any of the materials. Then, it runs millions of simulations, filling different voxels with different materials. Eventually, it lands on the optimal placement of each material in each voxel to generate two different images at two different angles. A custom 3-D printer then fabricates the actuator by dropping the right material into the right voxel, layer by layer.

“Our ultimate goal is to automatically find an optimal design for any problem, and then use the output of our optimized design to fabricate it,” says first author Subramanian Sundaram PhD ’18, a former graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We go from selecting the printing materials, to finding the optimal design, to fabricating the final product in almost a completely automated way.”

The shifting images demonstrate what the system can do. But actuators optimized for appearance and function could also be used for biomimicry in robotics. For instance, other researchers are designing underwater robotic skins with actuator arrays meant to mimic denticles on shark skin. Denticles collectively deform to decrease drag for faster, quieter swimming. “You can imagine underwater robots having whole arrays of actuators coating the surface of their skins, which can be optimized for drag and turning efficiently, and so on,” Sundaram says.

Joining Sundaram on the paper are: Melina Skouras, a former MIT postdoc; David S. Kim, a former researcher in the Computational Fabrication Group; Louise van den Heuvel ’14, SM ’16; and Wojciech Matusik, an MIT associate professor in electrical engineering and computer science and head of the Computational Fabrication Group.

Navigating the “combinatorial explosion”

Robotic actuators today are becoming increasingly complex. Depending on the application, they must be optimized for weight, efficiency, appearance, flexibility, power consumption, and various other functions and performance metrics. Generally, experts manually calculate all those parameters to find an optimal design.

Adding to that complexity, new 3-D-printing techniques can now use multiple materials to create one product. That means the design’s dimensionality becomes incredibly high. “What you’re left with is what’s called a ‘combinatorial explosion,’ where you essentially have so many combinations of materials and properties that you don’t have a chance to evaluate every combination to create an optimal structure,” Sundaram says.

In their work, the researchers first customized three polymer materials with specific properties they needed to build their actuators: color, magnetization, and rigidity. In the end, they produced a near-transparent rigid material, an opaque flexible material used as a hinge, and a brown nanoparticle material that responds to a magnetic signal. They plugged all that characterization data into a property library.

The system takes as input grayscale image examples — such as the flat actuator that displays the Van Gogh portrait but tilts at an exact angle to show “The Scream.” It basically executes a complex form of trial and error that’s somewhat like rearranging a Rubik’s Cube, but in this case around 5.5 million voxels are iteratively reconfigured to match an image and meet a measured angle.

Initially, the system draws from the property library to randomly assign different materials to different voxels. Then, it runs a simulation to see if that arrangement portrays the two target images, straight on and at an angle. If not, it gets an error signal. That signal lets it know which voxels are on the mark and which should be changed. Adding, removing, and shifting around brown magnetic voxels, for instance, will change the actuator’s angle when a magnetic field is applied. But, the system also has to consider how aligning those brown voxels will affect the image.

Voxel by voxel

To compute the actuator’s appearances at each iteration, the researchers adopted a computer graphics technique called “ray-tracing,” which simulates the path of light interacting with objects. Simulated light beams shoot through the actuator at each column of voxels. Actuators can be fabricated with more than 100 voxel layers. Columns can contain more than 100 voxels, with different sequences of the materials that radiate a different shade of gray when flat or at an angle.

When the actuator is flat, for instance, the light beam may shine down on a column containing many brown voxels, producing a dark tone. But when the actuator tilts, the beam will shine on misaligned voxels. Brown voxels may shift away from the beam, while more clear voxels may shift into the beam, producing a lighter tone. The system uses that technique to align dark and light voxel columns where they need to be in the flat and angled image. After 100 million or more iterations, and anywhere from a few to dozens of hours, the system will find an arrangement that fits the target images.

“We’re comparing what that [voxel column] looks like when it’s flat or when it’s tilted, to match the target images,” Sundaram says. “If not, you can swap, say, a clear voxel with a brown one. If that’s an improvement, we keep this new suggestion and make other changes over and over again.”
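The swap-and-keep loop Sundaram describes amounts to a stochastic local search. The toy sketch below is a deliberate simplification, not the authors’ code: it optimizes a single voxel column, models the column’s gray tone as the product of per-voxel transmittances (a crude stand-in for the ray-tracing step), and keeps a random flip only when it does not worsen the error against a target tone.

```python
import random

# Toy sketch of the swap-and-keep search (not the authors' code). One voxel
# column is a list of "clear"/"brown" voxels.

BROWN_T = 0.70   # assumed transmittance of a brown (absorbing) voxel
CLEAR_T = 0.99   # assumed transmittance of a clear voxel

def tone(column):
    """Gray tone of a column: product of per-voxel transmittances."""
    t = 1.0
    for voxel in column:
        t *= BROWN_T if voxel == "brown" else CLEAR_T
    return t  # 1.0 = white, values near 0 = dark

def optimize_column(column, target, steps=5000, seed=1):
    """Randomly flip voxels; keep a flip only if the tone error doesn't grow."""
    rng = random.Random(seed)
    best_err = abs(tone(column) - target)
    for _ in range(steps):
        trial = column[:]
        i = rng.randrange(len(trial))
        trial[i] = "clear" if trial[i] == "brown" else "brown"
        err = abs(tone(trial) - target)
        if err <= best_err:
            column, best_err = trial, err
    return column, best_err

col, err = optimize_column(["clear"] * 20, target=0.3)
print(err)
```

The real system runs this kind of iteration over roughly 5.5 million voxels and two viewing angles simultaneously, which is why full runs take 100 million or more iterations.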

To fabricate the actuators, the researchers built a custom 3-D printer that uses a technique called “drop-on-demand.” Tubs of the three materials are connected to print heads with hundreds of nozzles that can be individually controlled. The printer fires a 30-micron-sized droplet of the designated material into its respective voxel location. Once the droplet lands on the substrate, it’s solidified. In that way, the printer builds an object, layer by layer.

The work could be used as a stepping stone for designing larger structures, such as airplane wings, Sundaram says. Researchers, for instance, have similarly started breaking down airplane wings into smaller voxel-like blocks to optimize their designs for weight and lift, and other metrics. “We’re not yet able to print wings or anything on that scale, or with those materials. But I think this is a first step toward that goal,” Sundaram says.

Materials provided by Massachusetts Institute of Technology

Artificial “muscles” achieve powerful pulling force


As a cucumber plant grows, it sprouts tightly coiled tendrils that seek out supports in order to pull the plant upward. This ensures the plant receives as much sunlight exposure as possible. Now, researchers at MIT have found a way to imitate this coiling-and-pulling mechanism to produce contracting fibers that could be used as artificial muscles for robots, prosthetic limbs, or other mechanical and biomedical applications.

While many different approaches have been used for creating artificial muscles, including hydraulic systems, servo motors, shape-memory metals, and polymers that respond to stimuli, they all have limitations, including high weight or slow response times. The new fiber-based system, by contrast, is extremely lightweight and can respond very quickly, the researchers say. The findings are being reported today in the journal Science.

The new fibers were developed by MIT postdoc Mehmet Kanik and MIT graduate student Sirma Örgüç, working with professors Polina Anikeeva, Yoel Fink, Anantha Chandrakasan, and C. Cem Taşan, and five others, using a fiber-drawing technique to combine two dissimilar polymers into a single strand of fiber.

The key to the process is mating together two materials that have very different thermal expansion coefficients — meaning they have different rates of expansion when they are heated. This is the same principle used in many thermostats, for example, using a bimetallic strip as a way of measuring temperature. As the joined material heats up, the side that wants to expand faster is held back by the other material. As a result, the bonded material curls up, bending toward the side that is expanding more slowly.
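The bending of such a mismatched pair can be estimated with Timoshenko's classic bimetal analysis; for two layers of equal thickness and comparable stiffness the curvature reduces to roughly (3/2)·Δα·ΔT/h. The sketch below uses assumed material numbers, not values from the fiber, to show why a thin strand responds to even a one-degree change:

```python
# Back-of-the-envelope estimate (not a measurement from the paper): curvature
# of a heated two-layer strip via Timoshenko's bimetal formula, simplified for
# equal layer thicknesses and comparable stiffness:
#     kappa = (3/2) * (alpha_fast - alpha_slow) * dT / h
# All material numbers below are assumed for illustration.

alpha_fast = 200e-6   # assumed expansion coefficient of the faster layer, 1/K
alpha_slow = 100e-6   # assumed expansion coefficient of the slower layer, 1/K
h = 100e-6            # total strip thickness, m (100 micrometers)
dT = 1.0              # temperature rise, K

kappa = 1.5 * (alpha_fast - alpha_slow) * dT / h   # curvature, 1/m
bend_radius = 1.0 / kappa
print(f"curvature = {kappa:.2f} 1/m, bend radius = {bend_radius:.2f} m")
```

Because curvature scales as 1/h, halving the thickness doubles the response for the same temperature rise, which is one reason drawn fibers only micrometers wide can react so strongly.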


Using two different polymers bonded together, a very stretchable cyclic copolymer elastomer and a much stiffer thermoplastic polyethylene, Kanik, Örgüç and colleagues produced a fiber that, when stretched out to several times its original length, naturally forms itself into a tight coil, very similar to the tendrils that cucumbers produce. But what happened next actually came as a surprise when the researchers first experienced it. “There was a lot of serendipity in this,” Anikeeva recalls.

As soon as Kanik picked up the coiled fiber for the first time, the warmth of his hand alone caused the fiber to curl up more tightly. Following up on that observation, he found that even a small increase in temperature could make the coil tighten up, producing a surprisingly strong pulling force. Then, as soon as the temperature went back down, the fiber returned to its original length. In later testing, the team showed that this process of contracting and expanding could be repeated 10,000 times “and it was still going strong,” Anikeeva says.


One of the reasons for that longevity, she says, is that “everything is operating under very moderate conditions,” including low activation temperatures. Just a 1-degree Celsius increase can be enough to start the fiber contraction.

The fibers can span a wide range of sizes, from a few micrometers (millionths of a meter) to a few millimeters (thousandths of a meter) in width, and can easily be manufactured in batches up to hundreds of meters long. Tests have shown that a single fiber is capable of lifting loads of up to 650 times its own weight. For these experiments on individual fibers, Örgüç and Kanik have developed dedicated, miniaturized testing setups.


The degree of tightening that occurs when the fiber is heated can be “programmed” by determining how much of an initial stretch to give the fiber. This allows the material to be tuned to exactly the amount of force needed and the amount of temperature change needed to trigger that force.

The fibers are made using a fiber-drawing system, which makes it possible to incorporate other components into the fiber itself. Fiber drawing is done by creating an oversized version of the material, called a preform, which is then heated to a specific temperature at which the material becomes viscous. It can then be pulled, much like pulling taffy, to create a fiber that retains its internal structure but is a small fraction of the width of the preform.

For testing purposes, the researchers coated the fibers with meshes of conductive nanowires. These meshes can be used as sensors to reveal the exact tension experienced or exerted by the fiber. In the future, these fibers could also include heating elements such as optical fibers or electrodes, providing a way of heating it internally without having to rely on any outside heat source to activate the contraction of the “muscle.”

Such fibers could find uses as actuators in robotic arms, legs, or grippers, and in prosthetic limbs, where their slight weight and fast response times could provide a significant advantage.

Some prosthetic limbs today can weigh as much as 30 pounds, with much of the weight coming from actuators, which are often pneumatic or hydraulic; lighter-weight actuators could thus make life much easier for those who use prosthetics. “Such fibers might also find uses in tiny biomedical devices, such as a medical robot that works by going into an artery and then being activated,” Anikeeva suggests. “We have activation times on the order of tens of milliseconds to seconds,” depending on the dimensions, she says.

To provide greater strength for lifting heavier loads, the fibers can be bundled together, much as muscle fibers are bundled in the body. The team successfully tested bundles of 100 fibers. Through the fiber drawing process, sensors could also be incorporated in the fibers to provide feedback on conditions they encounter, such as in a prosthetic limb. Örgüç says bundled muscle fibers with a closed-loop feedback mechanism could find applications in robotic systems where automated and precise control are required.

Kanik says that the possibilities for materials of this type are virtually limitless because almost any combination of two materials with different thermal expansion rates could work, leaving a vast realm of possible combinations to explore. He adds that this new finding was like opening a new window, only to see “a bunch of other windows” waiting to be opened.

“The strength of this work is coming from its simplicity,” he says.

The team also included MIT graduate student Georgios Varnavides, postdoc Jinwoo Kim, and undergraduate students Thomas Benavides, Dani Gonzalez, and Timothy Akintlio.

Materials provided by Massachusetts Institute of Technology


New findings could revolutionize information transmission

A team of physicists at the University of California, Riverside has observed, characterized, and controlled dark trions in the semiconductor tungsten diselenide (WSe2), a feat that could increase the capacity, and change the form, of information transmission. The study has been published in Physical Review Letters.

In semiconductors like WSe2, a trion is a quantum bound state of three charged particles: a negative trion has two electrons and one hole, whereas a positive trion has two holes and one electron. A hole is a vacancy in a semiconductor that behaves like a positively charged particle, and a trion, with its three interacting particles, can carry much more information than a single electron. Today's electronics use single electrons to conduct electricity and transmit information. Trions, however, carry a net electric charge, so they can be steered with an electric field, making it possible to control their motion and use them as information carriers. In addition, they have controllable spin and momentum, and a rich internal structure that can be used to encode information.

Depending on their spin configuration, trions can be either bright or dark: in a bright trion the hole and electron have opposite spins, whereas in a dark trion they have the same spin. Bright trions couple strongly to light and emit it efficiently, which means they decay quickly; dark trions couple weakly to light and decay much more slowly. A dark trion's lifetime is close to 100 times longer than a bright trion's, enabling information transmission over longer distances.

Chun Hung Lui, an assistant professor of physics and astronomy at UC Riverside, says the team can write and read trion information with light: they can generate both bright and dark trions and control the information they encode. Thanks to their long lifetime, dark trions could realize trion-based information transmission, carrying more information than individual electrons can.

The researchers used a single atomic layer of WSe2, analogous to a sheet of graphene, because the dark-trion energy level in WSe2 lies below that of the bright trions. Dark trions can therefore accumulate a large population, enabling their detection. Lui explains that most trion research groups focus on bright trions because bright trions emit light that can be measured.

The researchers here, however, focused on dark trions and how they behave under different charge conditions in a WSe2 device. They demonstrated continuous tuning from positive dark trions to negative dark trions by adjusting an external voltage, and confirmed the dark trions by their spin configuration, which is distinct from that of bright trions. Using trions for information transmission could greatly enrich technology, and the researchers intend to demonstrate the first prototype based on dark trions.


Researchers develop antigravity water transport system inspired by plants

For hundreds of millions of years, trees have mastered the efficient movement of water upward against gravity. In a recent study, researchers built a tree-inspired water transport system that uses capillary forces to pull dirty water upward through a pyramid-shaped aerogel, then converts it into steam with solar energy to produce fresh, clean water.

The researchers, led by Aiping Liu at Zhejiang Sci-Tech University and Hao Bai at Zhejiang University, have published a paper on the new water transport and solar steam generation method in the journal ACS Nano. Efficient water transport methods of this kind could be used for water purification and desalination in the near future.

Liu said the production method is general and can be industrialized, and that the materials are high-quality, stable, and recyclable. This opens the door to large-scale desalination and sewage treatment in the near future.

The mechanism has two major components, enclosed in a glass vessel: a long, porous, lightweight aerogel that transports the water, and a carbon nanotube layer that absorbs sunlight and converts the water into steam. Capillary forces, generated by adhesion between water molecules and the interior walls of the pores, cause water to rise upward through the aerogel. The solar-heated carbon nanotube layer then turns the water at the top into steam, leaving impurities behind. The steam condenses into droplets on the walls of the vessel, which flow down into a storage container for collection.
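The capillary rise driving the aerogel can be estimated with Jurin's law, h = 2γ·cos θ / (ρ·g·r): the narrower the pore, the taller the rise. The pore radius and contact angle below are illustrative assumptions, not values from the paper:

```python
import math

# Illustrative estimate (not from the paper): Jurin's law for capillary rise,
#     h = 2 * gamma * cos(theta) / (rho * g * r)
# The pore radius and contact angle are assumed values chosen to show how
# narrow pores can drive water to tree-like heights.

gamma = 0.072   # surface tension of water at room temperature, N/m
theta = 0.0     # contact angle, rad (perfectly wetting pore wall, assumed)
rho = 1000.0    # density of water, kg/m^3
g = 9.81        # gravitational acceleration, m/s^2
r = 20e-6       # assumed pore radius, m (20 micrometers)

h = 2.0 * gamma * math.cos(theta) / (rho * g * r)
print(f"capillary rise ≈ {h:.2f} m")
```

Shrinking the assumed radius tenfold raises the predicted height tenfold, which is why the aerogel's micrometer-scale pores matter so much for long-distance transport.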

Plants work much like this solar steam generator: they pull water from the ground up through branches and leaves with the help of tiny xylem vessels, and solar radiation evaporates water from the leaves through tiny pores.

Previous attempts at tree-like water transport systems suffered from suboptimal performance, with low transport speeds and short transport distances. The new aerogel design improves on both, achieving upward flows of 10 cm in the first 5 minutes and 25 cm within 3 hours, and attaining a solar energy conversion efficiency of up to 85%.

To build the material, the researchers poured the aerogel ingredients into a copper tube whose cold end was held at -90 degrees Celsius, causing ice crystals to grow. Freeze-drying the tube then yielded a tiny pyramid-like structure with radially aligned channels, micrometer-sized pores, wrinkled inner surfaces, and molecular meshes, the features behind the aerogel's performance.

Liu said that they next want to improve the speed and distance of water transport and the efficiency of water storage, in order to move toward practical implementation and mass production.


Tsunami of fake news generated by AI could ruin the Internet

Several new tools can recreate a human face or mimic a writer's voice with a very high level of accuracy. The most concerning, however, is the adorably-named software GROVER, developed by a team of researchers at the Allen Institute for Artificial Intelligence.

It is a fake-news-writing bot of the kind several people have already used to compose blogs and even entire subreddits. This is an application of natural language generation that has raised several concerns: while there are positive uses such as translation and summarization, it also allows adversaries to generate neural fake news. This illustrates the problems that AI-written news can pose to humanity. Online fake news is used for advertising gains, for influencing mass opinion, and even for manipulating elections.


GROVER can create an article from a headline alone, such as “Link found between Autism and Vaccines”. Human readers found its generations more trustworthy than disinformation composed by other human beings.

This could be just the beginning of the chaos. Kristin Tynski from the marketing agency Fractl said that similar tools such as GROVER can create a giant tsunami of computer-generated content in every possible field. GROVER is not perfect but it can surely convince a casual reader who is not paying attention to every word they are reading. 

The current best discriminators can distinguish neural fake news from original, human-composed news with an accuracy of 73%, assuming they have access to a moderate amount of training data. Surprisingly, the best defense against GROVER is GROVER itself, which reaches an accuracy of 92%. This presents a very promising line of defense against neural fake news: the very best models for generating it are also the most capable of detecting it.
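The intuition behind "the generator is its own best detector" is that a model assigns tell-tale likelihood scores to text it could have produced itself. The toy sketch below illustrates the underlying binary-classification setup with made-up per-article scores; the numbers and thresholding scheme are ours, not GROVER's actual method:

```python
# Hypothetical "machine-likelihood" scores per article (higher = the
# detector thinks the text looks more machine-generated). Illustrative only.
human_scores = [0.21, 0.35, 0.18, 0.40, 0.29]
fake_scores  = [0.81, 0.74, 0.92, 0.66, 0.88]

def accuracy(threshold):
    """Classify scores below the threshold as human, the rest as fake."""
    correct = sum(s < threshold for s in human_scores)    # humans kept
    correct += sum(s >= threshold for s in fake_scores)   # fakes caught
    return correct / (len(human_scores) + len(fake_scores))

# Sweep thresholds to find the best achievable accuracy on this toy data.
best = max(accuracy(t / 100) for t in range(101))
print(best)  # 1.0 on this cleanly separated toy data
```

Real human and machine scores overlap far more than this toy data, which is why even the best detector tops out at 92% rather than 100%.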

AI developers at Google and other companies face a huge task, as AI-generated spam floods the internet and advertising agencies look to extract the maximum possible revenue from it. Developing robust verification methods against generators such as GROVER is a very important field of research. Tynski said in a statement that since AI systems enable content creation at a giant scale and pace, producing output that both humans and search engines have difficulty distinguishing from original content, this is a very important discussion that we are not currently having.

Robot uses machine learning to harvest lettuce

The ‘Vegebot’, developed by a team at the University of Cambridge, was initially trained to recognize and harvest iceberg lettuce in a lab setting. It has now been successfully tested in a variety of field conditions in cooperation with G’s Growers, a local fruit and vegetable co-operative.

For a human, the entire process takes a couple of seconds, but it’s a really challenging problem for a robot

–Josie Hughes

Although the prototype is nowhere near as fast or efficient as a human worker, it demonstrates how the use of robotics in agriculture might be expanded, even for crops like iceberg lettuce which are particularly challenging to harvest mechanically. The results are published in The Journal of Field Robotics.

Crops such as potatoes and wheat have been harvested mechanically at scale for decades, but many other crops have to date resisted automation. Iceberg lettuce is one such crop. Although it is the most common type of lettuce grown in the UK, iceberg is easily damaged and grows relatively flat to the ground, presenting a challenge for robotic harvesters.

“Every field is different, every lettuce is different,” said co-author Simon Birrell from Cambridge’s Department of Engineering. “But if we can make a robotic harvester work with iceberg lettuce, we could also make it work with many other crops.”

“At the moment, harvesting is the only part of the lettuce life cycle that is done manually, and it’s very physically demanding,” said co-author Julia Cai, who worked on the computer vision components of the Vegebot while she was an undergraduate student in the lab of Dr. Fumiya Iida.

The Vegebot first identifies the ‘target’ crop within its field of vision, then determines whether a particular lettuce is healthy and ready to be harvested, and finally cuts the lettuce from the rest of the plant without crushing it so that it is ‘supermarket ready’. “For a human, the entire process takes a couple of seconds, but it’s a really challenging problem for a robot,” said co-author Josie Hughes.

The Vegebot has two main components: a computer vision system and a cutting system. The overhead camera on the Vegebot takes an image of the lettuce field and first identifies all the lettuces in the image, and then for each lettuce, classifies whether it should be harvested or not. A lettuce might be rejected because it’s not yet mature, or it might have a disease that could spread to other lettuces in the harvest.
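The two-stage pipeline described above, localizing every lettuce in the overhead image and then classifying each one as harvest-ready or not, can be sketched as follows. The detector and the maturity/disease criteria here are hypothetical stand-ins for illustration, not the Vegebot's actual models or thresholds:

```python
from dataclasses import dataclass

@dataclass
class Lettuce:
    x: int                # position in the field image (pixels)
    y: int
    diameter_cm: float    # proxy for maturity (illustrative feature)
    discolored: bool      # proxy for visible disease (illustrative feature)

def detect_lettuces(field_observation):
    """Stand-in for stage 1: the overhead camera's detector. In this
    sketch the 'observation' is already a list of candidate plants."""
    return field_observation

def ready_to_harvest(plant, min_diameter_cm=12.0):
    """Stand-in for stage 2: harvest only mature, disease-free plants."""
    return plant.diameter_cm >= min_diameter_cm and not plant.discolored

field = [Lettuce(10, 20, 14.0, False),   # mature and healthy -> harvest
         Lettuce(40, 22, 9.5, False),    # immature -> leave for a later pass
         Lettuce(70, 18, 13.0, True)]    # diseased -> reject

targets = [p for p in detect_lettuces(field) if ready_to_harvest(p)]
print(len(targets))  # 1
```

Separating detection from the harvest/reject decision mirrors the article's description: rejected plants are simply left in place, which is what later enables multiple passes over the same field.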

The researchers developed and trained a machine learning algorithm on example images of lettuces. Once the Vegebot could recognise healthy lettuces in the lab, it was then trained in the field, in a variety of weather conditions, on thousands of real lettuces.

A second camera on the Vegebot is positioned near the cutting blade and helps ensure a smooth cut. The researchers were also able to adjust the pressure in the robot’s gripping arm so that it held the lettuce firmly enough not to drop it, but not so firm as to crush it. The force of the grip can be adjusted for other crops.

“We wanted to develop approaches that weren’t necessarily specific to iceberg lettuce so that they can be used for other types of above-ground crops,” said Iida, who leads the team behind the research.

In future, robotic harvesters could help address problems with labour shortages in agriculture, and could also help reduce food waste. At the moment, each field is typically harvested once, and any unripe vegetables or fruits are discarded. However, a robotic harvester could be trained to pick only ripe vegetables, and since it could harvest around the clock, it could perform multiple passes on the same field, returning at a later date to harvest the vegetables that were unripe during previous passes.

“We’re also collecting lots of data about lettuce, which could be used to improve efficiency, such as which fields have the highest yields,” said Hughes. “We’ve still got to speed our Vegebot up to the point where it could compete with a human, but we think robots have lots of potential in agri-tech.”

Iida’s group at Cambridge is also part of the world’s first Centre for Doctoral Training (CDT) in agri-food robotics. In collaboration with researchers at the University of Lincoln and the University of East Anglia, the Cambridge researchers will train the next generation of specialists in robotics and autonomous systems for application in the agri-tech sector. The Engineering and Physical Sciences Research Council (EPSRC) has awarded £6.6m for the new CDT, which will support at least 50 PhD students.

Simon Birrell et al. ‘A Field-Tested Robotic Harvesting System for Iceberg Lettuce.’ Journal of Field Robotics (2019). DOI: 10.1002/rob.21888

Materials provided by the University of Cambridge