

Researchers a step closer in creating successor to shrinking transistors

Over the decades, computers and other electronic devices have shrunk in size while becoming significantly faster. This has been possible because manufacturers have mastered techniques for decreasing the size of individual transistors, the tiny electrical switches that transmit information.

Researchers have relentlessly worked on shrinking the transistor so as to pack more onto each chip. That pursuit, however, is almost over: scientists are rapidly approaching the physical minimum for transistor size, with current models measuring about 10 nanometres, roughly the width of 30 atoms.

Dr. Kyeongjae Cho, professor of materials science at the University of Texas at Dallas, remarked that the processing power of electronic equipment comes from the millions or billions of transistors interconnected on a single chip. He also pointed out that we are very rapidly nearing the minimum size scale.

To improve processing speed further, the microelectronics industry is looking at alternative approaches. Professor Cho’s research has been published in the journal Nature Communications.

Normal transistors can transmit only two kinds of information. Being a switch, a transistor is either in the on state or the off state, which in binary translates to 1 or 0.

One way to increase processing power without adding more transistors is to increase the amount of information conveyed by a single transistor, using intermediate states between 1 and 0. A multi-valued transistor based on this principle would allow more operations and a greater amount of information to be processed in a single device.
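The gain is easy to quantify: a device with k distinguishable levels carries log2(k) bits, so fewer devices are needed to hold the same amount of information. A quick illustrative sketch (the numbers are examples, not from Cho's paper):

```python
import math

# A binary transistor distinguishes 2 states; a multi-valued transistor
# with k stable levels distinguishes k states, i.e. log2(k) bits each.
def bits_per_device(levels: int) -> float:
    """Information capacity of one device, in bits."""
    return math.log2(levels)

def devices_needed(levels: int, total_bits: int) -> int:
    """How many devices are required to hold `total_bits` of information."""
    return math.ceil(total_bits / bits_per_device(levels))

# Representing 64 bits of state:
for k in (2, 3, 4):
    print(k, devices_needed(k, 64))  # 2 -> 64 devices, 3 -> 41, 4 -> 32
```

Even a single intermediate level (ternary instead of binary) cuts the device count by more than a third for the same information content.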

Cho said that the concept of multi-valued logic transistors is not very new and there have been past attempts to create similar devices.  

Cho and his research group used a unique configuration of two forms of zinc oxide to make a composite layer, which is combined with other materials in a superlattice. They found that the physics required for multi-valued logic can be achieved by embedding zinc oxide crystals, called quantum dots, in amorphous zinc oxide, in which the atoms lack the rigid ordering of a crystal. Having shown that an electronic structure supporting multi-level logic is possible, Cho has applied for a patent on the work.

The significance of this research is that it could bridge the gap between current computing and quantum computing. Cho added that quantum computing is not yet commercialised, and his work moves in the direction of merging binary devices with systems offering larger degrees of freedom.

 


Laptop with six most dangerous malwares sold for 1.3 million dollars

No one wants to use a laptop infected with computer viruses, right? What about a laptop with the six most dangerous pieces of computer malware ever created installed on it? Well, it just sold for 1.3 million USD at an online auction.

The laptop, named “The Persistence of Chaos”, was created by Chinese artist Guo O Dong. The buyer is anonymous, but plenty is known about the laptop itself. Dong collaborated with a cybersecurity company, Deep Instinct, to load the laptop with the dangerous viruses. Deep Instinct was the first company to apply deep learning to cybersecurity, and its technology can detect even unknown malware across a wide range of devices.

The laptop is an ordinary 10.2-inch Samsung NC10-14GB running Windows XP. It is air-gapped, which means it is isolated from all networks. The six pieces of malware installed on it have caused financial damage totalling 95 billion USD.

The six pieces of malware are ILOVEYOU, MyDoom, SoBig, WannaCry, DarkTequila and BlackEnergy.

Guo commented that these pieces of software may seem abstract, with their almost funny names, but they emphasise the fact that the web and real life are not separate spaces. Malware is a powerful way to demonstrate how the apparently fun and harmless Internet can create a huge mess in your real life.

ILOVEYOU is a virus that was distributed through e-mail and file sharing, affecting almost 500,000 machines and causing damage of 15 billion USD. It arrived with the email subject line “ILOVEYOU” and deleted local files on the machine when executed.

MyDoom was created by Russian spammers and remains the fastest-spreading worm to date. It targeted specific servers, and the damage it caused is estimated at 38 billion USD.

SoBig remains the second-fastest computer worm, surpassed only by MyDoom. It is not only a worm, spreading by self-replication, but also a Trojan horse, disguising itself as another application.

WannaCry is a ransomware cryptoworm that targeted machines running the Windows operating system. It encrypted files and demanded ransom payments in Bitcoin.

DarkTequila is a complex piece of malware active mainly in Latin America, designed to steal financial information as well as login credentials for popular websites. And finally there is BlackEnergy, a web toolkit designed to execute DDoS attacks; in December 2015, it caused a huge blackout in Ukraine.

The laptop has been live-streamed online to show that nothing suspicious is being done with it. It may seem absurd that an old laptop fetched so much money, but Guo thinks of the artwork as a catalogue of historical threats.

After hearing this news, I think many of you would be interested in getting malware onto your laptops and then selling them. Tell us what you think with a short and quick comment.


‘Submarines’ small enough to deliver medicine inside human body

Cancers in the human body may one day be treated by tiny, self-propelled ‘micro-submarines’ delivering medicine to affected organs after UNSW Sydney chemical and biomedical engineers proved it was possible.

In a paper published in Materials Today, the engineers explain how they developed micrometre-sized submarines that exploit biological environments to tune their buoyancy, enabling them to carry drugs to specific locations in the body.

Corresponding author Dr Kang Liang, with both the School of Biomedical Engineering and School of Chemical Engineering at UNSW, says the knowledge can be used to design next generation ‘micro-motors’ or nano-drug delivery vehicles, by applying novel driving forces to reach specific targets in the body.

“We already know that micro-motors use different external driving forces – such as light, heat or magnetic field – to actively navigate to a specific location,” Dr Liang says.

“In this research, we designed micro-motors that no longer rely on external manipulation to navigate to a specific location. Instead, they take advantage of variations in biological environments to automatically navigate themselves.”

What makes these micro-sized particles unique is that they respond to changes in biological pH environments to self-adjust their buoyancy. In the same way that submarines use oxygen or water to flood ballast points to make them more or less buoyant, gas bubbles released or retained by the micro-motors due to the pH conditions in human cells contribute to these nanoparticles moving up or down.
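The ballast analogy can be captured in a toy calculation: a gas bubble adds volume with almost no mass, lowering the particle's average density relative to the surrounding fluid. The numbers below are purely illustrative, not taken from the paper:

```python
# Toy model of pH-tuned buoyancy: a gas bubble adds volume with almost no
# mass, lowering the particle's average density. The particle rises when
# that average density drops below the density of the surrounding fluid.

FLUID_DENSITY = 1000.0  # kg/m^3, roughly water / gastrointestinal fluid

def average_density(particle_mass, particle_volume, bubble_volume):
    """Average density of a particle with an attached (massless) gas bubble."""
    return particle_mass / (particle_volume + bubble_volume)

def floats(particle_mass, particle_volume, bubble_volume):
    """True if the particle-plus-bubble combination is positively buoyant."""
    return average_density(particle_mass, particle_volume, bubble_volume) < FLUID_DENSITY

# A dense micro-particle: 2 picograms in 1 cubic micrometre (2000 kg/m^3).
mass, volume = 2e-15, 1e-18   # kg, m^3
print(floats(mass, volume, 0.0))      # no bubble: sinks (False)
print(floats(mass, volume, 1.5e-18))  # bubble grown by the pH reaction: rises (True)
```

Releasing or retaining the bubble thus toggles the particle between sinking and rising, which is the vertical-direction mechanism the researchers describe.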

This is significant not just for medical applications, but for micro-motors generally.

“Most micro-motors travel in a 2-dimensional fashion,” Dr Liang says.

“But in this work, we designed a vertical direction mechanism. We combined these two concepts to come up with a design of autonomous micro-motors that move in a 3D fashion. This will enable their ultimate use as smart drug delivery vehicles in the future.”

Dr Liang illustrates a possible scenario where drugs are taken orally to treat a cancer in the stomach or intestines. To give an idea of scale, he says each capsule of medicine could contain millions of micro-submarines, and within each micro-submarine would be millions of drug molecules.

“Imagine you swallow a capsule to target a cancer in the gastrointestinal tract,” he says.

“Once in the gastrointestinal fluid, the micro-submarines carrying the medicine could be released. Within the fluid, they could travel to the upper or bottom region depending on the orientation of the patient.

“The drug-loaded particles can then be internalised by the cells at the site of the cancer. Once inside the cells, they will be degraded causing the release of the drugs to fight the cancer in a very targeted and efficient way.”

For the micro-submarines to find their target, a patient would need to be oriented in such a way that the cancer or ailment being treated is either up or down – in other words, a patient would be either upright or lying down.

Dr Liang says the so-called micro-submarines are essentially composite metal-organic frameworks (MOF)-based micro-motor systems containing a bioactive enzyme (catalase, CAT) as the engine for gas bubble generation. He stresses that he and his colleagues’ research is at the proof-of-concept stage, with years of testing needing to be completed before this could become a reality.

Dr Liang says the research team – comprised of engineers from UNSW, University of Queensland, Stanford University and University of Cambridge – will be also looking outside of medical applications for these new multi-directional nano-motors.

“We are planning to apply this new finding to other types of nanoparticles to prove the versatility of this technique,” he says.

Materials provided by University of New South Wales


Virtual reality can spot navigation problems in early Alzheimer’s disease

Virtual reality (VR) can identify early Alzheimer’s disease more accurately than ‘gold standard’ cognitive tests currently in use, suggests new research from the University of Cambridge.

We’ve wanted to do this for years, but it’s only now that virtual reality technology has evolved to the point that we can readily undertake this research in patients

–Dennis Chan

The study highlights the potential of new technologies to help diagnose and monitor conditions such as Alzheimer’s disease, which affects more than 525,000 people in the UK.

In 2014, Professor John O’Keefe of UCL was jointly awarded the Nobel Prize in Physiology or Medicine for ‘discoveries of cells that constitute a positioning system in the brain’. Essentially, this means that the brain contains a mental ‘satnav’ of where we are, where we have been, and how to find our way around.

A key component of this internal satnav is a region of the brain known as the entorhinal cortex. This is one of the first regions to be damaged in Alzheimer’s disease, which may explain why ‘getting lost’ is one of the first symptoms of the disease. However, the pen-and-paper cognitive tests used in clinic to diagnose the condition are unable to test for navigation difficulties.

In collaboration with Professor Neil Burgess at UCL, a team of scientists at the Department of Clinical Neurosciences at the University of Cambridge led by Dr Dennis Chan, previously Professor O’Keefe’s PhD student, developed and trialled a VR navigation test in patients at risk of developing dementia. The results of their study are published today in the journal Brain.

In the test, a patient dons a VR headset and undertakes a test of navigation while walking within a simulated environment. Successful completion of the task requires intact functioning of the entorhinal cortex, so Dr Chan’s team hypothesised that patients with early Alzheimer’s disease would be disproportionately affected on the test.

The team recruited 45 patients with mild cognitive impairment (MCI) from the Cambridge University Hospitals NHS Trust Mild Cognitive Impairment and Memory Clinics. Patients with MCI typically exhibit memory impairment, but while MCI can indicate early Alzheimer’s, it can also be caused by other conditions such as anxiety and even normal aging. As such, establishing the cause of MCI is crucial for determining whether affected individuals are at risk of developing dementia in the future.

The researchers took samples of cerebrospinal fluid (CSF) to look for biomarkers of underlying Alzheimer’s disease in their MCI patients, with 12 testing positive. The researchers also recruited 41 age-matched healthy controls for comparison.

All of the patients with MCI performed worse on the navigation task than the healthy controls. However, the study yielded two crucial additional observations. First, MCI patients with positive CSF markers – indicating the presence of Alzheimer’s disease, thus placing them at risk of developing dementia – performed worse than those with negative CSF markers at low risk of future dementia.

Secondly, the VR navigation task was better at differentiating between these low and high risk MCI patients than a battery of currently-used tests considered to be gold standard for the diagnosis of early Alzheimer’s.

“These results suggest a VR test of navigation may be better at identifying early Alzheimer’s disease than tests we use at present in clinic and in research studies,” says Dr Chan.

VR could also help clinical trials of future drugs aimed at slowing down, or even halting, progression of Alzheimer’s disease. Currently, the first stage of drug trials involves testing in animals, typically mouse models of the disease. To determine whether treatments are effective, scientists study their effect on navigation using tests such as a water maze, where mice have to learn the location of hidden platforms beneath the surface of opaque pools of water. If new drugs are found to improve memory on this task, they proceed to trials in human subjects, but using word and picture memory tests. This lack of comparability of memory tests between animal models and human participants represents a major problem for current clinical trials.

“The brain cells underpinning navigation are similar in rodents and humans, so testing navigation may allow us to overcome this roadblock in Alzheimer’s drug trials and help translate basic science discoveries into clinical use,” says Dr Chan. “We’ve wanted to do this for years, but it’s only now that VR technology has evolved to the point that we can readily undertake this research in patients.”

In fact, Dr Chan believes technology could play a crucial role in diagnosing and monitoring Alzheimer’s disease. He is working with Professor Cecilia Mascolo at Cambridge’s Centre for Mobile, Wearable Systems and Augmented Intelligence to develop apps for detecting the disease and monitoring its progression. These apps would run on smartphones and smartwatches. As well as looking for changes in how we navigate, the apps will track changes in other everyday activities such as sleep and communication.

“We know that Alzheimer’s affects the brain long before symptoms become apparent,” says Dr Chan. “We’re getting to the point where everyday tech can be used to spot the warning signs of the disease well before we become aware of them.

“We live in a world where mobile devices are almost ubiquitous, and so app-based approaches have the potential to diagnose Alzheimer’s disease at minimal extra cost and at a scale way beyond that of brain scanning and other current diagnostic approaches.”

The VR research was funded by the Medical Research Council and the Cambridge NIHR Biomedical Research Centre. The app-based research is funded by the Wellcome, the European Research Council and the Alan Turing Institute.

Reference
Howett, D, Castegnaro, A, et al. Differentiation of mild cognitive impairment using an entorhinal cortex based test of VR navigation. Brain; 28 May 2019; DOI: 10.1093/brain/awz116

Materials provided by University of Cambridge


Latest Samsung AI can produce animated deepfake with a single image

Samsung’s AI tech is amazingly creepy and could make our deepfake problem worse. Engineers from the Samsung AI Center and the Skolkovo Institute of Science and Technology in Moscow have developed an AI technique that can transform a single image into an animoji-like video of the person speaking, complete with facial expressions, the only difference being that it is not an actual video of that person. The AI morphs the person’s face and gets the expressions right, making it difficult to tell whether a clip is a real video or a forged one. Conventional deepfakes, by contrast, require huge datasets of images to create a realistic forgery.

Samsung’s system differs from other modern-day approaches in that it does not use 3D modelling, and the realism of the fake clip it generates improves with the number of input images.

The researchers working on this technology claim it could be used in a host of applications, including video games, film and TV, and could even be applied to paintings. Though Samsung’s AI is cool, it comes with its own banes: such techniques are sometimes used to morph pictures of celebrities and other people for anti-social activities and may lead to an increasing number of crimes.

This new approach provides a decent improvement on past work by teaching the neural network to manipulate existing facial landmarks into a realistic-looking moving video. That training can then be deployed on a few pictures, or even a single picture, of someone the AI has never seen before. The system uses a convolutional neural network, a type of neural network inspired by biological processes in the animal visual cortex. It is particularly adept at processing stacks of images and recognising what is in them; the “convolution” essentially recognises and extracts features from those images.
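As an illustration of what a convolution does, the sketch below slides a small kernel over a toy image and responds strongly wherever a vertical edge appears. This is a minimal example of the operation itself, unrelated to Samsung's actual model:

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2D convolution (cross-correlation, as in CNN layers):
    slide the kernel over the image and record how well each patch matches."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny "image" with a vertical edge down the middle...
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]])
# ...and a kernel that responds to exactly that pattern:
edge_kernel = np.array([[-1, 1],
                        [-1, 1]])
print(conv2d(image, edge_kernel))  # strongest response where the edge sits
```

A CNN stacks many such learned kernels, which is how it extracts facial landmarks from raw pixels.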

This technique addresses the main complexities of artificially generated talking heads: the growing intricacy of the heads themselves, and our improving ability to spot a fake. Nowadays, software for making deepfakes is available free of cost and can create fully convincing, lifelike videos. We just have to remember that seeing is no longer believing. The research has been published on the pre-print server arXiv.org.

Though this technology looks pretty cool, I strongly think it should not be made available to the public, as it can lead to many fake videos and fake news. What do you think about this technology? Tell us with a short and quick comment.


SpaceX launched the first batch of 60 satellites in its Starlink project

SpaceX, the space transportation company founded by Elon Musk, has successfully launched 60 of its Starlink satellites. If everything goes to plan, the Starlink network of satellites could provide superfast internet connectivity across the globe.

Before the launch from Cape Canaveral, Florida, SpaceX announced that Starlink would connect the entire Earth through reliable and affordable broadband.

SpaceX broadcast the first Starlink mission, which lifted off on a Falcon 9 rocket at 10:30 pm ET on 23rd May. Just over an hour after liftoff, the rocket’s upper stage deployed all 60 satellites, some 30,000 pounds in total. This is the heaviest payload SpaceX has launched to date.

The spacecraft were released 440 kilometres above the Earth’s surface, over the Indian Ocean. Elon Musk described the deployment method as odd yet efficient, as it managed to release 60 satellites at once. Usually, during a multi-satellite deployment, a dispenser on the rocket’s uppermost stage releases the satellites one after another using complex spring mechanisms; in December, SpaceX successfully deployed 64 satellites that way from a single rocket.

This time, however, SpaceX followed a different process. The rocket’s upper stage was set spinning, somewhat like a baseball pitch, and the whole stack of Starlink satellites slowly drifted free. SpaceX engineer Tom Praderio commented that since there are no deployment mechanisms, the satellites spread out like a deck of cards.

An individual satellite is about the size of an office desk and weighs close to 500 pounds. It is equipped with solar panels and antennas for data transmission. Each spacecraft also carries an ion engine fed by krypton gas, which will let the satellite dodge other satellites and space junk, and which will deorbit it once it nears the end of its useful life. The engine will also raise the satellite to its operational orbit 550 kilometres above Earth.

However, the present batch of satellites lacks a major component: laser interlinks. Future satellites will carry lasers connecting each one to four others, forming a strong mesh network around Earth in which data travels at nearly the speed of light in a vacuum. That is almost 50% faster than light travelling through fibre-optic cable, giving the Starlink satellites a great advantage over current infrastructure.
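The propagation-speed advantage is easy to quantify. Assuming a refractive index of about 1.47 for glass fibre (a typical value, not a SpaceX figure), a one-way trip over the same distance looks like this:

```python
C_VACUUM = 299_792.458  # speed of light in vacuum, km/s
FIBRE_INDEX = 1.47      # typical refractive index of glass fibre

def one_way_latency_ms(distance_km: float, speed_km_s: float) -> float:
    """Propagation delay over `distance_km`, in milliseconds."""
    return distance_km / speed_km_s * 1000.0

distance = 10_000  # km, a rough long-haul route
vacuum = one_way_latency_ms(distance, C_VACUUM)
fibre = one_way_latency_ms(distance, C_VACUUM / FIBRE_INDEX)
print(f"vacuum: {vacuum:.1f} ms")  # ~33.4 ms
print(f"fibre:  {fibre:.1f} ms")   # ~49.0 ms
```

In practice the satellite path also includes ground-to-orbit hops, so this only illustrates the raw speed advantage, not end-to-end latency.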

SpaceX aims to deploy 12,000 such satellites before the 2027 deadline set by the Federal Communications Commission. Reaching this goal means launching more than one mission per month over the next eight years. To make the concept work and start generating revenue, however, SpaceX needs only about 1,000 satellites in orbit, far fewer than 12,000.

 


Amazon working on wearable device to sense human emotions

Amazon is reported to be developing a voice-enabled wearable gadget that can recognise human emotions. If the company succeeds in manufacturing it, the device could help improve its targeted advertising. It could also enable more refined product recommendations and even advise people on how to be nicer to each other in their interactions.

A few internal documents have revealed some details about the upcoming product. The device would be worn on the wrist, so it could resemble an Apple Watch or a Fitbit. It is being produced by Amazon in collaboration with Lab126, the hardware group behind other Amazon products such as the Kindle, the Fire phone and the Echo smart speaker.

As per Bloomberg, the device will work in sync with a smartphone application and will carry microphones paired with software that can determine the wearer’s emotional state from his or her voice. It is not yet confirmed whether Amazon plans to bring the device to mass production. It is clear, however, that the company is working on a large ecosystem of electronic devices with built-in speech recognition.

Amazon has come a long way from a simple online bookseller founded in a garage to a major e-commerce and cloud services giant, and one of the first companies to reach a valuation of one trillion dollars. It recently launched a chain of automated convenience stores, Amazon Go, which let customers make purchases without going through a regular cashier checkout. So far, stores have opened in 11 locations across the United States, and Amazon plans further openings depending on how the current stores perform.

Apart from the wearable device, Amazon is also reported to be working on a humanoid robot, “Vesta”. Named after the Roman goddess of home and family, the robot could perform cleaning tasks and set up alarms. It will be equipped with advanced cameras and computer vision software, and will move around homes like a self-driving vehicle. Amazon is also planning to compete with Apple’s AirPods by developing its own earphones powered by Alexa.

Although it sounds futuristic, devices capable of monitoring emotions are not improbable. According to the research company Gartner, by 2022 almost 10% of devices will have AI capabilities to detect emotions, either on the device itself or with the help of the cloud.

Would you like to use such devices and let companies know about your emotions? Tell us with a short and quick comment.

Read: Scientists announce the possibility of wearable technology to monitor brains

 


NASA Invites You to Send Your Name to Mars

The race to Mars is heating up, with space organisations constantly predicting that humans will set foot on Mars in the coming decade. NASA is inviting members of the public to submit their names to fly to Mars aboard its Mars 2020 rover. The names will be stencilled onto chips carried by the rover. The flight is meant to represent the initial leg of humanity’s first round trip to the Red Planet.

Once your name is submitted, you can download a boarding pass, along with frequent flyer points, from NASA’s website. The campaign is a public-engagement effort to highlight NASA’s Mars missions. The spacecraft is scheduled to launch in July 2020 and touch down on Mars in February 2021. The mission’s primary aim is to search for signs of past microbial life and to collect geological data, including soil samples, as well as climate data. The samples collected are intended to be returned to Earth for research, paving the way for human exploration of the Red Planet.

The rover will weigh more than 1,000 kg as per the design plan. Thomas Zurbuchen, associate administrator for NASA’s Science Mission Directorate, said that NASA wants everyone to share in this historic mission to Mars, which is why submitted names will be stencilled on a chip. The mission is expected to tackle many profound and pressing questions about our neighbouring planet. Around two million names flew on NASA’s InSight mission to Mars, earning each ‘flyer’ about 300 million frequent flyer miles (nearly 500 million kilometres).

We got a boarding pass for ScienceHook too!

NASA’s Jet Propulsion Laboratory (JPL) will use an electron beam to stencil the submitted names onto a silicon chip, with lines of text smaller than one-thousandth the width of a human hair (75 nanometres). A dime-sized chip can hold close to a million names, and the chips will ride on the rover under a protective glass cover. NASA also plans to send American astronauts to the Moon by 2024, with the ultimate goal of putting humans on the Red Planet in the coming decade; returning humans to the Moon as soon as possible is seen as a way to accelerate the Mars effort. NASA’s Kennedy Space Center in Florida will be responsible for launch management, with liftoff from Cape Canaveral Air Force Station in Florida.

Click here to submit your name: https://mars.nasa.gov/participate/send-your-name/mars2020

The last date to submit your name is Sept. 30, 2019. Fly along!

 


Researchers use machine learning models for capturing fusion energy

Machine learning is a branch of artificial intelligence that powers face recognition, language identification and translation. Beyond this, machine learning may now help bring to Earth the clean fusion energy that lights the stars.

A group of researchers from the Princeton Plasma Physics Laboratory (PPPL) is using machine learning to create a model for rapid control of plasma, the state of matter consisting of free electrons and ions in which fusion reactions take place. The Sun and many other stars, themselves giant balls of plasma, are the biggest examples of such reactions.

Scientists led by physicist Dan Boyer have trained neural networks, the essential core of much machine learning software, on a dataset produced by the National Spherical Torus Experiment-Upgrade (NSTX-U) at PPPL. The model accurately reproduces predictions of the behaviour of the energetic particles produced by neutral beam injection (NBI), which is used to fuel NSTX-U plasmas and heat them to millions of degrees.

Such predictions are conventionally made with NUBEAM, a program that incorporates information about the beam’s impact on the plasma. These calculations must be performed about a hundred times per second to track the plasma’s behaviour, but since each one takes minutes to complete, researchers learn the results only after an experiment is over.

Machine learning software solves this problem by cutting the calculation time to less than 150 microseconds. As a result, outcomes become visible to scientists during the experiment, and the plasma control system can make better real-time decisions about how to adjust the beam injection for efficient performance.

With such fast evaluations, operators can make the needed adjustments between experiments. Boyer, who is also the principal author of a paper in Nuclear Fusion, commented that the rapid modelling capability can guide operators in changing NBI settings for the next experiment.

Together with scientist Stan Kaye, he generated a database of NUBEAM results for a range of conditions resembling those expected during initial NSTX-U operations. The scientists used this database to train a neural network to predict the beam’s effects on the plasma, such as heating; software engineers then implemented the network on a computer controlling the experiment and measured its calculation time. The scientists plan to extend this modelling approach to other plasma phenomena.
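The surrogate-model idea behind this work can be sketched in a few lines: run the expensive code offline to build a database, train a small neural network on it, then query the fast network during the experiment. Everything below is illustrative; a cheap synthetic function stands in for NUBEAM, and the network size and learning rate are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the expensive physics code: NUBEAM takes minutes per call,
# so a cheap synthetic function plays its role in this sketch.
def expensive_model(x):
    return np.sin(3 * x) + 0.5 * x

# Step 1: build a training database from offline runs of the slow code.
X = rng.uniform(-1, 1, size=(500, 1))
y = expensive_model(X)

# Step 2: train a small one-hidden-layer network by gradient descent.
W1 = rng.normal(0.0, 1.0, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    pred = h @ W2 + b2                  # surrogate prediction
    err = pred - y                      # residual vs the slow model
    # Backpropagate the mean-squared-error gradient:
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Step 3: query the fast surrogate instead of the slow code at run time.
x_new = np.array([[0.3]])
fast = float(np.tanh(x_new @ W1 + b1) @ W2 + b2)
slow = float(expensive_model(x_new))
print(f"surrogate: {fast:.3f}  reference: {slow:.3f}")
```

The trained network is just a few matrix multiplications, which is why a surrogate can answer in microseconds where the full physics code needs minutes.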

 


Scientists develop new material for improving efficiency of solar panels

Clean energy is a promising substitute for fossil fuels, but solar power plants must plan better to compete with the electrical output of non-renewable energy sources. Such designs depend heavily on the development of new materials that can absorb and exchange heat at higher temperatures.

The solar panels found on hybrid cars or residential rooftops are much smaller than those at solar power plants. Power-plant panels are huge and vast in number, so they absorb as much thermal energy from the sun as they can; that heat is then passed into a fluid-filled device known as a heat exchanger.

Carbon dioxide held above its critical point, known as supercritical CO2, acts as the medium for converting that energy: the hotter the fluid gets, the more electricity can be produced. Researchers from the University of Toledo have developed a new technology based on supercritical CO2 as the energy-conversion medium; the fluid lowers manufacturing costs and could benefit future power plants. The report was published in the journal Nature.

Dorrin Jarrahbashi, an assistant professor in the mechanical engineering department at Texas A&M University, said that the metal materials used to make solar-panel heat exchangers for supercritical CO2 energy cycles are stable only up to about 550 degrees Celsius. If the temperature rises beyond that, the components break down, become less effective and must be replaced.

To solve this problem, the researchers developed a new composite material combining ceramic with tungsten, a refractory metal, that can withstand temperatures of over 750 degrees Celsius. This heat tolerance could improve the efficiency of electricity generation in combined solar and supercritical CO2 power plants by 20%. The composite’s durability and lower production cost will also help cut the price of constructing and maintaining power plants relative to fossil fuels.

Thanks to the compound’s unique chemical, mechanical and thermal properties, there are numerous possible applications, from improving nuclear power plants to building rocket nozzles. The results of this advance could have a broad impact on the future of research and industry.