
Odds of Finding an Alien Species

Aliens, and the possibility that they have visited our planet, have been a subject of study for scientists and other researchers for many years now. The topic fascinates movie-makers and the general public alike, which is why claimed alien sightings and supposed proofs of their presence keep surfacing from every corner of the planet.


We tend to think of human beings as the most intelligent, technologically advanced species in the whole universe. But consider the numbers: there are more than 400 billion stars in our own galaxy, the Milky Way, alone, and more than two trillion galaxies in the observable universe. Against that backdrop, life elsewhere begins to seem almost inevitable.

So, let us discuss the prospects of alien life in the universe and what we actually know about it, working through the main theories and findings around extraterrestrial life.


  1. Number of possible technologically advanced civilizations – Drake’s Equation
  2. Seager’s Equation
  3. The Fermi Paradox
  4. The types of civilizations that can exist – Kardashev Scale
  5. The Search for Aliens
  6. Conclusion
  7. An Infographic on Odds of Finding Aliens

Number of possible technologically advanced civilizations – Drake’s Equation

What do we need to know about to discover life in this huge cosmos? How can we estimate the number of technologically advanced civilizations that might exist among the stars? While working as a radio astronomer at the National Radio Astronomy Observatory in Green Bank, West Virginia, Dr. Frank Drake conceived an approach to give limits to the terms involved in estimating the number of technological civilizations that may exist in our galaxy. The Drake Equation, as it has become known, was first presented by Frank Drake in 1961 and identifies specific factors which are thought to play a role in the development of such civilizations. Although there is no unique solution to this equation, it is a generally accepted tool used by the scientific community to examine these factors.

N = R* × fp × ne × fl × fi × fc × L

Drake’s equation (Credit: SETI Institute)


N = The number of civilizations in the Milky Way Galaxy whose electromagnetic emissions are detectable.

R* = The rate of formation of stars suitable for the development of intelligent life.

fp = The fraction of those stars with planetary systems.

ne = The number of planets, per solar system, with an environment suitable for life.

fl = The fraction of suitable planets on which life actually appears.

fi = The fraction of life-bearing planets on which intelligent life emerges.

fc = The fraction of civilizations that develop a technology that releases detectable signs of their existence into space.

L = The length of time such civilizations release detectable signals into space.

Several of these quantities, once entirely uncertain, can now be estimated with surprising precision. For a start, our understanding of the scale of the cosmos has improved dramatically, thanks to state-of-the-art space and ground observatories that cover the full range of wavelengths and can count the galaxies within reach and estimate how many lie beyond.

The numbers this equation yields are typically large, ranging from tens of thousands to hundreds of thousands. They represent the probability of extraterrestrial intelligence having arisen at some point over the universe's 13.8-billion-year history. There are caveats, though: for instance, the very first generations of stars lacked the heavy elements that are necessary for life.

Though this equation predicts such large numbers, we haven’t found even one alien species. Given this, there is a possibility that the equation is wrong, or that we are missing something (see: The Drake Equation: Could It Be Wrong?).
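As a quick sanity check, the product on the right-hand side of Drake's equation can be multiplied out directly. In the sketch below, every parameter value is an illustrative guess, not a measurement; sliding each one across its plausible range is exactly what produces estimates from a handful of civilizations to hundreds of thousands.

```python
# Drake's equation: N = R* · fp · ne · fl · fi · fc · L
# All parameter values below are illustrative guesses, not measured quantities.
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimated number of detectable civilizations in the Milky Way."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake(
    r_star=1.0,         # star-formation rate (suitable stars per year)
    f_p=0.5,            # fraction of stars with planetary systems
    n_e=2.0,            # habitable planets per planetary system
    f_l=0.5,            # fraction of those on which life appears
    f_i=0.1,            # fraction of those that develop intelligence
    f_c=0.1,            # fraction of those that emit detectable signals
    lifetime=10_000.0,  # years a civilization remains detectable
)
print(n)  # roughly 50 detectable civilizations with these guesses
```

Changing the last factor alone, the signal lifetime L, from 10,000 years to 10 million years multiplies the estimate by a thousand, which is why the equation's output is quoted as a range rather than a single number.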

Seager’s Equation

Sara Seager, a Canadian-American astronomer, developed a parallel version of the Drake equation to estimate the number of habitable planets in the galaxy. Instead of aliens with radio technology, Seager's revision focuses simply on the presence of any alien life detectable from Earth. The equation centres on the search for planets with biosignature gases: gases produced by life that can accumulate in a planet's atmosphere to levels detectable with remote space telescopes.

Seager’s equation is

N = N* × FQ × FHZ × Fo × FL × FS

where: N = the number of planets with detectable signs of life

N* = the number of stars observed

FQ = the fraction of stars that are quiet

FHZ = the fraction of stars with rocky planets in the habitable zone

Fo = the fraction of those planets that can be observed

FL = the fraction that has life

FS = the fraction on which life produces a detectable signature gas

With plausible values plugged in, Seager’s equation suggests that we should be able to detect at least two exoplanets with biosignatures.
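The same back-of-the-envelope multiplication works for Seager's equation. The values below are illustrative assumptions chosen to land near that ballpark of about two detectable planets; they are not Seager's exact published figures.

```python
# Seager's equation: N = N* · FQ · FHZ · Fo · FL · FS
# Illustrative input values (assumptions for the sketch, not Seager's own numbers).
def seager(n_stars, f_quiet, f_hz, f_obs, f_life, f_sig):
    """Estimated number of planets with detectable signs of life."""
    return n_stars * f_quiet * f_hz * f_obs * f_life * f_sig

n = seager(
    n_stars=30_000,  # stars observed by a survey
    f_quiet=0.2,     # fraction that are "quiet" (low stellar activity)
    f_hz=0.15,       # fraction with rocky planets in the habitable zone
    f_obs=0.01,      # fraction of those planets that can be observed
    f_life=0.5,      # fraction that host life
    f_sig=0.5,       # fraction whose life produces a detectable signature gas
)
print(n)  # on the order of a couple of detectable biosignature planets
```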

“If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans.” — Stephen Hawking

All these predictions aside, if aliens are out there, where are they all? This is where the Fermi Paradox comes into the picture.

The Fermi Paradox

The observable universe is about 90 billion light-years in diameter and contains at least 100 billion galaxies, each with 100 billion to 1,000 billion stars. With such huge numbers, it seems reasonable to expect extraterrestrial intelligence, and plenty of it. The universe is 13.8 billion years old, while a technologically advanced civilization has existed on Earth for only the last 300 years or so. Some civilization could easily have started long before us, and in that case it would have had far more time to develop into a super-powerful civilization. But if such a civilization existed in the observable universe, we would expect to have noticed it by now, and we haven't. This is the Fermi Paradox.

There are many possible explanations for the paradox:

  1. The development of life is not as easy as we think: We still don't know exactly how life evolved on Earth, and the process may be extremely complex and rare, especially in the case of intelligent life.
  2. Intelligent life exists only on Earth and nowhere else (a rather scary thought).
  3. Intelligent life eventually destroys itself: If self-destruction is in its nature, then we might ourselves be approaching the end of human civilization.
  4. The universe kills intelligent life periodically: Natural extinction events, such as asteroid impacts, might wipe out life at regular intervals. If so, we may face such a danger ourselves, and expanding our civilization into the cosmos would be one way to hedge against it.
  5. Life has formed too far away to observe: The universe is vast and expanding ever faster, so other life may simply have formed too far away to observe or detect.
  6. The aliens are too different from us: We hope that aliens understand the radio signals we transmit, but they may have evolved very differently, perhaps even in some dimension we cannot perceive. The sky is the limit for such speculation.

The types of civilizations that can exist – Kardashev Scale

Nikolai Kardashev, a Russian astrophysicist, proposed the Kardashev scale, which classifies civilizations on the basis of their energy-harvesting and energy-consumption capabilities.

On that basis, civilizations are classified into three categories, Type 1, Type 2 and Type 3, though Type 4 and Type 5 civilizations can also be imagined.

Carl Sagan suggested a rough formula to rate civilizations on the basis of their energy consumption (note: this formula is not part of Kardashev's original work). The formula is:

K = (log10 P − 6) / 10

(Credit: ScienceHook)


K = Rating of civilization

P = Power used by the civilization (in watts)

Putting in our energy use of P = 17.54 terawatts, we get a rating of about 0.7244, though according to some researchers human civilization is not even at 0.7 yet and would take around 150 more years to reach the Type 1 level.
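Sagan's interpolation formula is easy to evaluate. The sketch below assumes P is expressed in watts and reproduces the 0.7244 rating quoted above for a power use of 17.54 TW.

```python
import math

# Carl Sagan's interpolation of the Kardashev scale:
#     K = (log10(P) - 6) / 10,  with P in watts.
def kardashev_rating(power_watts):
    return (math.log10(power_watts) - 6) / 10

# Humanity's power use of about 17.54 TW (1.754e13 W), as quoted above:
print(round(kardashev_rating(1.754e13), 4))  # 0.7244

# A full Type 1 civilization corresponds to about 1e16 W:
print(kardashev_rating(1e16))  # 1.0
```

Because the scale is logarithmic, climbing from 0.72 to 1.0 means increasing our energy use by a factor of several hundred, which is why the gap is measured in centuries rather than decades.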

So let us know about each type of civilization:

Type 1 Civilization: This civilization can use and store all the energy of its planet and is thus known as a planetary civilization.

Type 2 Civilization: This civilization can control the energy of its solar system. It would be able to build megastructures like Dyson spheres around its star to harness energy, and might also produce antimatter as a power source. This type is known as a stellar civilization.

Type 3 civilization: This type of civilization can harness energy at the scale of its galaxy. Just imagine the amount of energy it could access: the output of neutron stars, perhaps even the energy released by supermassive black holes. This type is called a galactic civilization.

And below are some research papers and articles that you can read to get a much deeper understanding of things like the Kardashev scale and Dyson sphere:

  1. Paper on Kardashev Scale and improvements in it
  2. Dyson Sphere
  3. Energy from Black holes
  4. Kardashev Scale

Apart from these three types, we can also imagine the existence of a Type 4 civilization, operating at the scale of the entire universe.

The Search for Aliens

Now, having looked at the odds of finding aliens, the Fermi Paradox, and the Kardashev scale, we should also know a few things about the search itself. Researchers around the world are working day and night to find some clue of extraterrestrial intelligence, and non-profit organizations like the SETI Institute are spending millions on the effort. So let us go through some missions, programs, and their findings related to extraterrestrial intelligence.

“A time will come when men will stretch out their eyes. They should see planets like our Earth.” — Christopher Wren

The Kepler Space Telescope

The Kepler space telescope was launched on March 7, 2009. It monitored the light coming from distant stars: a periodic drop in a star's brightness indicates the presence of a planet, and from the depth and timing of that drop we can estimate the planet's properties, including its distance from its star, and so predict whether life could exist there. Kepler found thousands of exoplanets during its 9.5 years of spaceflight, and many of them are potentially habitable candidates.

One star, known as Tabby's Star or KIC 8462852, has baffled the science community. Its Kepler light curve was so strange that some began to speculate about a Dyson sphere built around the star by an extraterrestrial civilization. Astronomers now believe, however, that the dimming was caused by dust.

Huge Radio Telescopes

Humans have built many radio telescopes to detect radio signals coming from the cosmos, in the hope of picking up a transmission from some civilization in deep space. We have truly enormous instruments, such as China's FAST radio telescope, that can detect extremely weak signals.

A few mysterious signals have been detected, like the famous ‘Wow! signal‘. Since nothing similar was ever received again, researchers speculated that it might have been an alien transmission, but the mystery has recently been explained:

Wow! mystery signal from space finally explained


Researchers are trying hard to find aliens through every possible method. We have gigantic radio telescopes and powerful space telescopes, and non-profit organisations like the SETI Institute are spending millions of dollars on exploration programs. But we still don't have a single confirmed clue about any extraterrestrial civilization, which brings us back to the Fermi Paradox.

Let us hope that future missions like the James Webb space telescope give us at least a few signs of extraterrestrial intelligence.

An Infographic on Odds of Finding Aliens


Odds of finding aliens infographic


CRISPR Cas9 and Gene Editing Explained

CRISPR and the CRISPR Associated system are powerful gene-editing technologies.

Genes and related fields have been an area of research interest for scientists for many years. This work has produced many therapies in which modern technologies are used to cure diseases, as well as preventive treatments. DNA (deoxyribonucleic acid) carries the genetic instructions of the body, and by studying it we can decipher much that can be used to treat the diseases that afflict an individual. Among the leading tools in this field is CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats), which was discovered a few decades ago. CRISPR was first described by Osaka University researcher Yoshizumi Ishino and his associates in 1987, when they accidentally cloned part of a CRISPR sequence together with the “iap” gene, their original target.

See: A beautiful Infographic on CRISPR and Gene Editing

Genome editing is a group of methods that gives researchers the ability to modify the DNA of an organism. These technologies let genetic material be added, removed, or altered at specific locations in the genome. Several approaches to genome editing have been developed; the latest is known as CRISPR-Cas9.


CRISPR Cas9 – Mode of Action. Image Source: Wikipedia

Cas9 (or “CRISPR-associated protein 9“) is an enzyme that uses CRISPR DNA sequences as a guide to recognize and cleave specific strands of DNA that exactly match a given CRISPR sequence. It is so precise in matching DNA that it has been called a GPS locator for DNA. Cas9 enzymes together with CRISPR sequences form the basis of the CRISPR-Cas9 technology, which can be used to edit genes within organisms.

Companies like Synthego provide Full Stack Genome Engineering. You can go check out their website for various offers.

CRISPR technology is a simple yet powerful tool for editing genes. It lets researchers alter DNA sequences and modify gene function with relative ease. Its many potential applications include correcting genetic defects, treating and preventing the spread of diseases, and improving crops.

This system has generated a lot of enthusiasm among researchers and scientists because it is faster, cheaper, more accurate, and more efficient than other existing genome-editing methods.

Growing CRISPR Research

Credit: WebOfKnowledge

CRISPR-Cas9 from Nature

CRISPR-Cas9 was adapted from a naturally occurring gene-editing process in bacteria. When viruses attack bacteria, they inject their own DNA into the cell; if the bacterium survives the attack, it stores a snippet of the virus's DNA in a DNA archive called CRISPR. When the same virus, or another of the same strain, attacks again, the bacterium quickly produces RNA from the stored DNA and uses the Cas9 protein to scan for an exact match of the viral DNA. If it finds one, Cas9 cuts that DNA, protecting the bacterium from the virus.

The CRISPR-Cas9 system works similarly in the lab. Researchers create a small piece of RNA with a short “guide” sequence that attaches (binds) to a specific target sequence of DNA in a gene. The RNA also binds to the Cas9 enzyme. As in bacteria, the modified RNA is used to recognize the DNA sequence, and the Cas9 enzyme cuts the DNA at the targeted location. Although Cas9 is the enzyme used most often, other enzymes such as Cpf1 (CRISPR-associated endonuclease in Prevotella and Francisella 1) can also be used. Once the DNA is cut, researchers use the cell's own DNA repair machinery to add or delete pieces of genetic material, or to make changes by replacing an existing segment with a customized DNA sequence.
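The recognize-then-cut behaviour described above can be caricatured in a few lines of code. This is a toy illustration only, with a hypothetical guide sequence, a single strand, and exact matching; real guide design must handle both DNA strands, mismatch tolerance, and off-target sites.

```python
# Toy model of Cas9 target recognition. Cas9 needs a ~20-nt protospacer
# matching the guide RNA, immediately followed by an "NGG" PAM motif;
# it then cuts about 3 bp upstream of the PAM.
def find_cut_sites(dna, guide):
    sites = []
    glen = len(guide)
    for i in range(len(dna) - glen - 2):
        # protospacer match followed by N-G-G (any base, then GG)
        if dna[i:i + glen] == guide and dna[i + glen + 1:i + glen + 3] == "GG":
            sites.append(i + glen - 3)  # blunt cut 3 bp upstream of the PAM
    return sites

guide = "ACGTACGTACGTACGTACGT"       # hypothetical 20-nt guide sequence
dna = "TTT" + guide + "TGG" + "AAA"  # target followed by a "TGG" PAM
print(find_cut_sites(dna, guide))    # [20]
```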


The human DNA model takes on a double helix shape. Source: www.pixabay.com

Gene editing is of great importance in the prevention and treatment of human disease. At present, most research on genome editing is done to understand diseases using cells and animal models. Researchers are still working to determine whether this approach is safe and effective for use in people.

Use of CRISPR in different fields

Now let us look into some other potential uses of CRISPR which can be revolutionary:

1) Genetically editing crop seeds for higher yields and higher nutrition: Researchers are doing various experiments with crop seeds and CRISPR. They are trying to develop resistance against diseases in crops. Companies have been licensing CRISPR technology to develop new varieties of crops.

CRISPR crop research data

(Credit: WebOfKnowledge)

2) Using CRISPR to prevent and cure deadly diseases: It is being explored in research on a wide variety of diseases, including single-gene disorders such as cystic fibrosis, haemophilia, and sickle cell disease. It also holds promise against more complex conditions, such as HIV, Zika, cancer, heart disease, and mental illness. Beyond prevention, scientists believe we may even be able to cure diseases like HIV, and to develop effective antibiotics against the growing number of superbugs at a time when existing antibiotics are becoming less effective.

3) We could alter an entire species: CRISPR has the potential to alter a whole species, a concept called a gene drive. Normally, when organisms mate, there is a 50-50 chance of passing on a particular gene, but with CRISPR we can make a particular gene pass on with nearly 100 per cent probability. Used carefully, this could be hugely beneficial: for example, we could eliminate malaria by genetically editing mosquitoes.
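That near-100-per-cent inheritance is what lets a gene drive sweep through a population, and a one-line recursion shows it. In the idealized sketch below (an assumption-laden model: random mating, no fitness cost, no resistance alleles), a carrier with conversion rate c transmits the drive allele with probability (1 + c)/2, so the allele frequency p obeys p' = p + c·p(1 − p).

```python
# Deterministic toy model of a gene drive in a random-mating population.
# Mendelian inheritance transmits an allele from a heterozygote half the
# time; a drive with conversion rate c raises that to (1 + c)/2, giving
# the recursion  p' = p + c * p * (1 - p)  for allele frequency p.
def generations_to_fixation(p0, c, threshold=0.99, max_gen=1000):
    p, gens = p0, 0
    while p < threshold and gens < max_gen:
        p = p + c * p * (1 - p)
        gens += 1
    return gens, p

gens, p = generations_to_fixation(p0=0.01, c=0.95)
print(gens, p)  # an efficient drive sweeps to >99% in roughly ten generations

# With ordinary Mendelian inheritance (c = 0) the frequency never changes:
print(generations_to_fixation(p0=0.01, c=0.0))  # (1000, 0.01)
```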

CRISPR could alter entire species

Regulating gene drives, Biotechnology journal, Javier Zarracina; Oye et al. 2014

4) Genetically edited babies: Researchers are working towards genetically edited babies with chosen traits. Without going into the details, just imagine what that would mean: you could give birth to a baby with almost any quality you want. (News: Russian Scientist Says 5 Couples Have Agreed to Gene-Editing Babies to Avoid Deafness)

Ethical concerns arise when genome-editing technologies such as this one are used to alter human genes. Most changes made with gene editing so far are limited to somatic cells, which are cells other than egg and sperm cells. These changes affect only certain tissues and are not passed from one generation to the next. However, changes made to the genes of egg or sperm cells, or of an embryo, could be passed on to future generations. Germline and embryo genome editing therefore raises numerous ethical challenges, including the question of whether it should be permissible to use this technology to enhance normal human traits such as height or intelligence. Based on concerns about ethics and safety, germline and embryo gene editing are currently illegal in many countries.

CRISPR Controversies

Though CRISPR has a lot of potential, it also has a lot of controversies around it. Many researchers claim that CRISPR could cause extensive mutations and gene damage that cannot be easily detected.

A Chinese scientist named He Jiankui created the first genetically edited babies, a pair of twins, and edited a third baby as well. The Chinese government stated that he violated Chinese rules, evaded supervision, faked an ethical review, and used potentially unsafe and ineffective gene-editing methods on the children. This created a great deal of controversy around CRISPR in early 2019.

Even since then, there have been reports from various parts of the world describing controversial and unethical CRISPR research, including reports of researchers editing the genes of healthy embryos, which is illegal and has created further controversy.

Amid all this, there are also biohackers who could create potentially harmful micro-organisms using CRISPR, since the technology is available to anyone and is easier to use than other gene-editing methods. We will therefore have to monitor CRISPR usage and watch potential biohackers to protect humanity from gene-edited deadly microbes.

CRISPR has the potential to create radically altered human beings, which could be dangerous in many ways, so there are rules in force against such uses of CRISPR in almost all countries. Still, there is a possibility that someone could misuse this technology.

Advantages of CRISPR over other Gene Editing Methods

Arguably, the most important advantages of CRISPR-Cas9 over other genome-editing technologies are its simplicity and efficiency. To draw a comparison: if older methods are like maps, then CRISPR is like a GPS system. CRISPR-Cas9 is far faster, reducing the time required to perform a gene edit from months or even years to a few days, and it is considerably cheaper than other gene-editing technologies.

Risks of gene editing?

Gene editing carries some potential risks. Even though CRISPR is said to be simple, it is not easy to alter only the targeted gene. Editing a genome is not like cutting a thread and tying on another; it is a complex process, and there is a real chance of making a mistake that leads to unwanted mutations (see: CRISPR Could Be Causing Extensive Mutations And Genetic Damage After All).

This method presents the following risks:

  1. An unwanted reaction in the immune system might occur.
  2. Wrong cells may get targeted when editing specific cells in adult organisms.
  3. There is a possibility of infection.
  4. The new segments may get inserted in the wrong spot leading to unwanted and dangerous changes.


There is a lot of buzz around CRISPR and gene editing; used properly it can give good results, but misused it could be harmful to humanity. National and international organisations must strictly monitor gene-editing research, and any attempt at biohacking must be strictly punished. That apart, let us hope that CRISPR becomes established and that we reap its fruits early, in the form of disease prevention and cures, better crops, and more.

An Infographic on CRISPR and Gene Editing


CRISPR Cas9 and Gene Editing Explained


Wormhole Graphic Representation

What is a Wormhole?

Wormholes have served as fodder for numerous science fiction stories and movies for quite some time now. There have been several theories that try to explain how wormholes work and several more on how time travel could be made possible through these wormholes. Much like black holes, wormholes are beguiling and tend to leave people mesmerized with the intricacies. If you have ever wondered about what these things are, or, if you want to better understand all their alluring intricacies, then, this article is exactly what you need. Read on to learn more about the splendour of these mysterious bodies.


  1. Wormhole Explained
  2. Wormhole vs Black Hole
  3. Problems with travel through wormholes
  4. Keeping a wormhole open

Wormhole explained

Wormholes can be visualized as portals that allow entities to travel through space and time. Black holes contain a point of singularity where all their mass is said to accumulate, and they consume anything in their proximity. Scientists have hypothesized that a white hole exists at the other end of a black hole, spitting out the matter and light that the black hole absorbed. The entry point and the exit point exist as separate points in the universe. Bridges that link two separate points in space-time are referred to as Einstein-Rosen bridges; their existence was predicted in 1935, when Albert Einstein and Nathan Rosen published a paper describing a corridor or passage directly connecting one part of the universe to another as part of a black hole-white hole system.

These bridges, however, are highly unstable and tend to collapse under the influence of gravity. A wormhole, in this context, is a passage from one point in space-time to another. Each wormhole is expected to have two mouths and a neck that serves as a bridge between them. It is proposed that one mouth of a wormhole is a black hole and the other is a white hole. Both black holes and white holes are solutions to Einstein’s field equations.

Einstein’s field equations can be written as

Rμν − (1/2)R gμν + Λgμν = (8πG/c⁴) Tμν

where:


  1. Rμν is the Ricci curvature tensor
  2. R is the scalar curvature
  3.  gμν is the metric tensor
  4.  Λ is the cosmological constant
  5. G is Newton’s gravitational constant
  6. c is the speed of light in vacuum
  7. Tμν is the stress-energy tensor.

A black hole is Schwarzschild’s solution to Einstein’s field equations. Ludwig Flamm discovered another solution to these field equations while studying Schwarzschild’s solution; it came to be called the white hole. Every entity with mass has a parameter called the Schwarzschild radius: if all the mass of an object were compressed within a sphere of this radius, the escape velocity from the object’s surface would equal the speed of light.

Lorentzian Wormhole

“Embedding diagram” of a Schwarzschild wormhole (Source: wikipedia.org)

Mathematically it could be represented as,

R = 2GM/c²


where R is the Schwarzschild radius, G is the gravitational constant, c is the speed of light, and M is the mass of the object.
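Plugging numbers into this formula is straightforward. The sketch below evaluates it for the Sun's mass, reproducing the often-quoted result that the Sun would become a black hole if squeezed into a sphere roughly 3 km in radius.

```python
# Schwarzschild radius R = 2GM/c^2, evaluated for the Sun.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8    # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Radius (in metres) to which a mass must be compressed to form a black hole."""
    return 2 * G * mass_kg / c**2

sun_mass = 1.989e30  # kg
print(schwarzschild_radius(sun_mass))  # about 2.95e3 m, i.e. roughly 3 km
```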

“Physics is often stranger than science fiction, and I think science fiction takes its cues from physics: higher dimensions, wormholes, the warping of space and time, stuff like that.” — Michio Kaku

To visualize a wormhole, consider the familiar analogy of a piece of paper with two points on it, each representing a different point in space-time. For those of you who are absolutely tired of hearing this analogy (from various movies and explanations), skip the next paragraph.

For everyone else: when the paper is flat, there is a certain distance between the two points. Now imagine folding the paper and poking a pencil through it to connect the two points; this creates a shortcut far shorter than the distance along the unfolded sheet. A wormhole works in a similar manner, providing a shortcut between two points in space-time. These points could even belong to different universes.

Wormhole visualized

Wormhole visualized (Credit: Wikimedia Commons)

Wormholes have not been discovered yet. Their existence is consistent with theory, but nobody has ever found one.

Many physicists and astronomers believe that the supermassive black holes that exist at the centre of most galaxies could potentially be wormholes.

Wormhole vs Black Hole

What’s the difference between black holes and wormholes? As I have mentioned, a wormhole is better described as a passage, while a black hole is just a mouth to such a passage. Before getting to the specific differences, let’s look at some similarities. Both are mathematically consistent; both distort space, so both should have matter swirling around them; and both are immensely fascinating and not yet fully understood!

Black Hole NASA

This artist’s concept illustrates a supermassive black hole with millions to billions of times the mass of our sun. (Source: NASA JPL)

Now, let’s get to the differences between the two. We recently obtained an image of a black hole, whereas a wormhole is yet to be found. Another distinguishing factor is Hawking radiation: black holes continuously lose energy by emitting it, slowly at first and then faster as the process continues. Only black holes are predicted to emit Hawking radiation.

Another difference is the lack of an event horizon in wormholes. The event horizon of a black hole is its boundary. To escape from within a black hole, one would have to travel faster than light, at speeds greater than the escape velocity of the black hole.

In black holes, there is no point of return: once you enter, there is no escaping. On the other hand, if a wormhole were kept open for a sufficiently long time, you could potentially travel back to the same place through it. There is a lot of controversy over this idea, though, since you would not end up at the same point in space-time that you started at.

Take the below question to quickly test your understanding of wormholes and black holes

Random Quiz

What is the escape speed at the Schwarzschild radius?


The escape velocity from the surface (i.e., the event horizon) of a Black Hole is exactly c, the speed of light. Actually, the very prediction of the existence of black holes was based on the idea that there could be objects with escape velocity equal to c.

Perhaps, the most distinguishing aspect is the fact that wormholes are purely theoretical, while black holes are proven to exist. A black hole is a massive dent in the fabric of space-time that seems to cause a puncture in it. Anything that enters this puncture is consumed and is present at a single point of singularity. A wormhole, on the other hand, can be considered as two punctures in space-time that are connected to one another. The two punctures could exist as any two points in space-time.

Problems with travel through wormholes

The problems with travelling through wormholes arise from their size and stability. Primordial wormholes are thought to be microscopic, and travelling through a microscopic wormhole would be practically impossible.

The other problem is stability. Under the influence of gravitational forces, wormholes tend to collapse rather easily. To travel through one, it would have to be held open, which requires the presence of exotic matter, covered in the next section. This exotic matter, however, also exists only in theory. Keeping a wormhole open is a daunting challenge indeed.

Keeping a wormhole open

In the case of wormholes arising from string theory, cosmic strings keep the wormhole open. Man-made and other wormholes would have to be kept open by exotic matter, a special kind of hypothesized matter with negative mass. Negative mass makes it repulsive in nature: the positive masses that fill our universe attract each other, while exotic matter repels.

Due to the presence of gravity, it would not be easy for wormholes to remain open. This exotic matter can counter gravity and allow wormholes to remain open. Exotic matter can be used to weave space and time and sustain wormholes. One candidate for the exotic matter is the vacuum of space.

To understand why the vacuum of empty space could be a potential candidate, you will first have to understand why empty space is not empty. Empty space consists of several virtual particles that are randomly generated. These particles cancel each other out, in pairs. Each pair is said to be a particle-antiparticle pair.

This property, where pairs of particles cancel each other out, can be manipulated to produce similar pairs of matter that cancel each other out. Exotic matter can thus be produced. The exotic matter would provide a great deal of help in the stabilization of wormholes by keeping them open.

Exotic matter, unlike regular matter, would accelerate in directions opposite to the applied force. Despite its peculiar properties and its deviation from the behaviour of normal matter, it is not inconsistent mathematically. It also does not violate the principles of conservation of energy or momentum.

For exotic matter, the mass-energy equivalence would be represented as:

E = -mc²

  1. E represents energy,
  2. m represents mass, and
  3. c² is the constant of proportionality, c being the speed of light.
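As a purely illustrative sketch (exotic matter is hypothetical, so these numbers describe nothing real, and the function name is made up for this example), the relation can be evaluated directly; the only change from the ordinary E = mc² is the sign of the mass:

```python
# Toy mass-energy calculation for hypothetical negative-mass exotic matter.
C = 299_792_458  # speed of light in m/s

def mass_energy(mass_kg):
    """Energy equivalent (in joules) of a mass; negative for exotic matter."""
    return mass_kg * C**2

print(mass_energy(1.0))   # one kilogram of ordinary matter: ~9e16 J
print(mass_energy(-1.0))  # one kilogram of exotic matter: ~-9e16 J
```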

The concept that interstellar travel is possible is most certainly enthralling. The possibilities that could be unlocked by travel through space-time are enormous and could change how we view the universe entirely. Such travel would effectively shrink the vastness of the universe: we could cross galaxies and unlock many of the universe's secrets. But despite this enormous potential, we are hindered by the fact that wormholes, at least for now, exist only in theory.

Eleanor Roosevelt, the former First Lady of the United States once said, “The future belongs to those who believe in the beauty of their dreams”. Maybe, one day we will uncover the secrets of the universe through space-time travel and view the universe in all its glory. Until then, we will have to settle for these dreams of what could be.

Read More:

  1. Ripples in Space-Time Could Reveal the Shape of Wormholes
  2. Can We Create Wormholes?
  3. What Would It Be Like to Ride Through a Wormhole?
Distribution of Dark Matter

What is dark matter and why is it still a mystery?

There are a lot of objects and bodies that exist in this gargantuan universe of ours. Everything in this vast abode that we call the universe, whether big or small, is said to consist of matter. Your phone, your body, your hair, dust, air and everything you see around is matter. Each and every one of these objects consists of matter and their existence can generally be perceived rather easily.


Estimated division of total energy in the universe into matter, dark matter and dark energy based on five years of WMAP data (Credit: Wikipedia)

But what if I told you that most of the matter that exists in the universe cannot be perceived? What if I also told you that more than 85% of the matter in the universe has never been observed? These claims are hard to believe and rather astounding, but they are, indeed, facts. This special kind of matter, called dark matter, constitutes about 85% of all the mass of the universe and has never been directly detected.

In terms of energy composition, the universe is made of roughly 4.6% ordinary matter, 23% dark matter and 72% dark energy (this is the energy composition, not to be confused with the mass composition mentioned above). Dark energy cannot yet be detected or measured directly, but we can clearly see its effects. Let us talk about dark matter in this blog and keep dark energy aside for another one.
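As a quick consistency check, the energy-budget figures above imply the mass figure quoted earlier: taking dark matter as a fraction of all matter (ordinary plus dark, with dark energy excluded) lands in the ballpark of the ~85% quoted above. A small sketch:

```python
# WMAP 5-year energy budget (percent of the universe's total energy)
ordinary_matter = 4.6
dark_matter = 23.0
dark_energy = 72.0

# Dark matter as a fraction of all *matter* (dark energy excluded):
dark_fraction = dark_matter / (dark_matter + ordinary_matter)
print(f"{dark_fraction:.0%}")  # 83%, commonly rounded to ~85%
```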


  1. What is Matter?
  2. What suggests the presence of Dark Matter?
  3. Types of dark matter
  4. Why should we find dark matter?
  5. What could dark matter be made of?
  6. How could we detect dark matter?
  7. Why is dark matter still a mystery?
  8. An Infographic On Dark Matter.

What is Matter?

To understand dark matter, you have to understand matter first. Matter is anything that has mass and occupies space. It can exist in any of seven states:

  1. Solid
  2. Liquid
  3. Gas
  4. Ionised Plasma
  5. Quark-Gluon Plasma
  6. Bose-Einstein Condensate
  7. Fermionic Condensate

Matter consists of atoms or, to be precise, of protons, neutrons, and electrons. This is called "ordinary matter". These sub-atomic particles are themselves built from fundamental particles, which fall into two groups: fermions and bosons. Fermions are the building blocks of matter and obey the Pauli exclusion principle. Bosons are force-carriers: they carry the electromagnetic, strong, and weak forces between fermions.

Fermions are those particles that follow Fermi-Dirac statistics, and bosons are those that follow Bose-Einstein statistics.

Standard Model

(Credit: Wikibooks )


Fermions can be put into two categories: quarks and leptons. Quarks make up, amongst other things, the protons and neutrons in the nucleus. Leptons include electrons and neutrinos. The difference between quarks and leptons is that quarks interact with the strong nuclear force, whereas leptons do not.


There are four bosons in the right-hand column of the standard model. The photon carries the electromagnetic force – photons are responsible for electromagnetic radiation, electric fields and magnetic fields. The gluon carries the strong nuclear force – gluons 'glue' quarks together to make up larger non-fundamental particles. The W+, W− and Z0 bosons carry the weak nuclear force. When one quark changes into another quark, it gives off one of these bosons, which in turn decays into fermions.

All the above particles make up the Standard Model of particle physics, and dark matter does not fit into this model.

"I want to know what dark matter and dark energy are comprised of. They remain a mystery, a complete mystery. No one is any closer to solving the problem than when these two things were discovered." – Neil deGrasse Tyson

What suggests the presence of Dark Matter?

There are many observations which strongly suggest the presence of some strange non-luminous matter: dark matter. Let us look at some of them:

  1. The speed of bodies far from the galactic centre: From Kepler's laws, rotation velocities are expected to decrease with increasing distance from the centre of the galaxy, as they do in the Solar System. This is not what is observed, and the most plausible explanation we have found is the presence of dark matter.
  2. Mass-velocity discrepancy: Stars in bound systems must obey the virial theorem, which, together with the measured velocity distribution, can be used to infer the mass distribution of a bound system such as an elliptical galaxy or globular cluster. However, some velocity dispersion estimates for elliptical galaxies do not match the dispersion predicted from the observed mass distribution. This discrepancy also indicates that there is some extra, invisible mass out there.
  3. Gravitational lensing: Galaxies and other massive objects act as lenses and bend light. More precisely, these massive bodies distort the fabric of space-time, and light passing through the distortion bends, with the amount of bending depending on the mass of the galaxy. Researchers have made many such observations of light from quasars passing through galaxy clusters, and the bending of that light clearly indicates that extra mass is out there.
  4. Cosmic Microwave Background: The cosmic microwave background radiation, or CMB for short, is electromagnetic radiation that has been travelling for almost 14 billion years, since shortly after the big bang, and it carries temperature information. Scientists have collected a great deal of data from this radiation and created a map of it. The map matches the predictions of models that include dark matter remarkably well, strongly suggesting that the universe as we observe it could not exist without dark matter.

9-year WMAP image of cosmic microwave background (Credit: NASA)

There are other lines of evidence as well, but these four are the most prominent indications that some unknown, invisible matter is out there.
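The rotation-curve argument (the first observation above) can be sketched numerically. If essentially all of a galaxy's mass sat at its centre, orbital speed would fall off as 1/√r; observed galactic curves instead stay roughly flat at large radii, which an extended dark-matter halo would explain. A minimal sketch with illustrative values (the central mass and radii below are not real measurements):

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M_CENTRE = 1e41  # assumed central mass in kg, for illustration only

def keplerian_speed(r):
    """Orbital speed (m/s) if all of M_CENTRE sits at the centre."""
    return math.sqrt(G * M_CENTRE / r)

# Keplerian prediction at increasing radii (metres): speed keeps falling.
for r in (1e20, 4e20, 9e20):
    print(round(keplerian_speed(r) / 1000), "km/s")
# Measured galactic rotation curves flatten out instead, hinting at unseen mass.
```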

Types of dark matter

The classification of dark matter is based on the velocities of its constituents. The free-streaming length (FSL) describes the distance objects would travel due to random motions in the early universe, and the size of a protogalaxy serves as the yardstick for determining the category of dark matter.

  1. Cold dark matter: Dark matter whose constituents have an FSL less than the size of a protogalaxy.
  2. Warm dark matter: Dark matter whose constituents have an FSL comparable to the size of a protogalaxy.
  3. Hot dark matter: Dark matter whose constituents have an FSL greater than the size of a protogalaxy.
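These three categories amount to a simple comparison of the FSL against the protogalaxy size. In the sketch below, the function name and the ±10% band that defines "comparable" are my own illustrative choices, not a standard convention:

```python
def classify_dark_matter(fsl, protogalaxy_size, tolerance=0.1):
    """Classify dark matter by free-streaming length (FSL).

    'Comparable' is taken to mean within +/- tolerance (10% by default)
    of the protogalaxy size -- an assumed convention for illustration.
    """
    lower = protogalaxy_size * (1 - tolerance)
    upper = protogalaxy_size * (1 + tolerance)
    if fsl < lower:
        return "cold"
    if fsl > upper:
        return "hot"
    return "warm"

size = 1.0  # protogalaxy size in arbitrary units
print(classify_dark_matter(0.2, size))  # cold
print(classify_dark_matter(1.0, size))  # warm
print(classify_dark_matter(5.0, size))  # hot
```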

Why should we find dark matter?

Dark matter constitutes about 85% of the universe's mass, so it is present in enormous quantities, and some of it might be present here on Earth as well. If detected, we might find uses for it, perhaps even in energy production, and many other unexpected applications might emerge.

Other than applications, dark matter could unveil some of the dark secrets of the universe that have lain unanswered for centuries.

What could dark matter be made of?

There are several theories about what dark matter could be made of, and some of them are:

  1. WIMPs (Weakly Interacting Massive Particles): WIMPs are hypothetical particles thought to make up dark matter. These would be entirely new particles, interacting through forces no stronger than the weak nuclear force, and they are not included in the Standard Model mentioned above. Researchers are developing a number of experiments to detect such particles.
  2. Axions: The axion is another hypothetical elementary particle, originally postulated to solve the strong CP problem in quantum chromodynamics. Scientists believe that if axions exist and have certain specific properties, they could be a component of dark matter.

There are several other proposed candidates, and understanding all these hypothetical particles requires a deeper grasp of physics. There are also theories suggesting that our current understanding of gravity itself is wrong and should be modified to fit the observations, but these approaches have their limitations too.

How could we detect dark matter?

We can locate places in the universe where dark matter is present using techniques like gravitational lensing, and we can even build models of galaxies that include dark matter. But we are not yet able to detect the particles that make up this dark matter. So, how could we detect it? Basically, there are three approaches:


Large Underground Xenon detector inside watertank (Credit: Wikipedia)

  1. Make it here: Physicists have been colliding particles in accelerators like the LHC in the hope that someday we will create dark matter particles and detect them.
  2. Direct detection: Considering the amount of dark matter in the universe, some of it may be present here on Earth, and a sufficiently sensitive detector might register it. Scientists have therefore been building extremely sensitive detectors. One such detector, the Large Underground Xenon experiment (LUX), aimed to directly detect weakly interacting massive particle (WIMP) interactions with ordinary matter on Earth.
  3. Dark matter collisions: Scientists believe that collisions of dark matter particles could release something we are able to detect, so researchers are pursuing this indirect approach as well.

Why is dark matter still a mystery?

While dark matter is the simplest explanation for the extra gravity and mass that exists, it is not necessarily the correct one. Several theories claim to explain this extra gravity and mass in the universe, and nobody knows for sure whether dark matter is a sufficient explanation for the extra mass. Dark matter does not give off light and, as mentioned, interacts with ordinary matter extremely weakly, if at all. Without such interactions, it is extremely hard to draw any conclusions about its nature and properties.

A recent paper in the Physical Review journals presented calculations suggesting that dark matter might have been created before the Big Bang itself, which adds yet another mystery to those already surrounding dark matter.

New research claims dark matter might be older than the Big Bang

Dark matter may well be the universe's biggest mystery. We know that something makes objects move faster than they should, but we do not actually know what it is or where it came from. The origins of dark matter might be even more peculiar than currently thought.

Jonathan Swift, an Anglo-Irish poet, once said, "Vision is the art of seeing what is invisible to others". Dark matter may be invisible, but it has served to solve a lot of mysteries in this astonishingly mysterious universe. Without the invisible phenomenon of dark matter, there would still be a lot of perplexity regarding the formation of galaxies and their movements. Despite all the information we possess about the universe, nobody can say with certainty that dark matter exists. Perhaps that is where the magnificence of physics lies, in its mystery, and this mystery is what makes the search for the truth worthwhile.

An Infographic On Dark Matter


Dark Matter Infographic

Read More:

  1. Dark Matter Behaves Differently in Dying Galaxies
  2. Dark matter on the move
Gravitational Wave in a Binary Black Hole

What is a gravitational wave and how has it changed physics?

Gravitational waves were proposed by Henri Poincaré in 1905 and subsequently predicted in 1916 by Albert Einstein on the basis of his general theory of relativity. There are many aspects of physics that are aesthetically pleasing, not necessarily for their surface features but in their detail, and gravitational waves are certainly one such phenomenon. They have immense importance, and their impact on our understanding of physics is considerable.


  1. Gravitation and Gravitational waves explained
  2. So, what is space-time?
  3. Gravitational pull and formation of waves
  4. Detection of gravitational waves
  5. Significance of gravitational waves

Gravitation and Gravitational waves explained

Gravitational waves are ripples in the fabric of space-time that are formed due to the acceleration of masses. These ripples propagate outwards from the source of mass. One must understand that distortions are created in the fabric of space-time by bodies of mass. To visualize this concept, think of this fabric as a piece of paper or a blanket, with people holding on to it from all sides. When an object of mass is placed on the paper or blanket, there is a visible dent or distortion of the shape of the paper or blanket at the position where the object was placed. Now when these bodies of mass are moved about, that is, they are provided acceleration, these distortions also move about in the fabric of space-time. These accelerated bodies lead to the formation of waves in space-time. These waves are the gravitational waves.

"Every time you accelerate – say by jumping up and down – you're generating gravitational waves." – Rainer Weiss

As you would imagine, larger bodies tend to create larger intensity waves. Theoretically, any movement of a body having mass can cause these ripples. A person walking on the pavement, in theory, also causes these ripples. However, these ripples caused by a walking person are very minuscule and insignificant.

So, what is space-time?

The universe was long thought to consist of the three dimensions of space only, but Albert Einstein showed that it has a fourth dimension: time. It would be impossible to move in space without moving in time, and likewise impossible to move in time without moving in space; space and time therefore have a deeply integral relationship. Einstein stated that there is a profound link between motion through space and the passage of time, and he hypothesized that time is relative: objects in motion experience time more slowly than objects at rest.
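The claim that objects in motion experience time more slowly can be quantified by the Lorentz factor, γ = 1/√(1 − v²/c²), which says how much slower a moving clock ticks relative to one at rest. A minimal sketch:

```python
import math

C = 299_792_458  # speed of light, m/s

def lorentz_factor(v):
    """Time-dilation factor for an object moving at speed v (in m/s)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At everyday speeds the effect is utterly negligible:
print(lorentz_factor(300.0))    # ~1.0000000000005 (roughly airliner speed)
# Near the speed of light it becomes dramatic:
print(lorentz_factor(0.9 * C))  # ~2.29: clocks tick less than half as fast
```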

The three dimensions of space and the dimension of time are viewed as the four-dimensional space-time. Hermann Minkowski provided a geometric interpretation that fused the three dimensions of space and the dimension of time to form the space-time continuum. This was called the Minkowski space.


Minkowski Space Illustration. Image Source: Wikipedia

In three-dimensional space, the distance D between any two points can be represented, using the Pythagorean theorem, as:

D² = (Δx)² + (Δy)² + (Δz)²


Here Δx, Δy and Δz represent the differences in the first, second and third dimensions respectively.

The spacetime interval (Δs)² between two events separated by a time Δt would be given as:

(Δs)² = (cΔt)² − (Δx)² − (Δy)² − (Δz)²


where c is a constant, the speed of light, which converts units used to measure time into units used to measure space.
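The two formulas can be compared directly in code. In the Minkowski convention used here, the time term enters with a positive sign and the three spatial terms are subtracted; a positive squared interval means the two events are time-like separated (one can causally affect the other):

```python
C = 299_792_458  # speed of light, m/s

def distance_sq(dx, dy, dz):
    """Squared Euclidean distance between two points in 3D space."""
    return dx**2 + dy**2 + dz**2

def interval_sq(dt, dx, dy, dz):
    """Squared Minkowski interval between two events."""
    return (C * dt) ** 2 - distance_sq(dx, dy, dz)

# Two events one second apart in time and 100 km apart in space:
print(interval_sq(1.0, 100_000.0, 0.0, 0.0) > 0)  # True: time-like separation
# Two simultaneous events one metre apart:
print(interval_sq(0.0, 1.0, 0.0, 0.0) > 0)        # False: space-like separation
```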

Gravitational pull and formation of waves

Every body that has mass attracts other bodies. Whether the mass is small or large, every body exerts a force on every other. This attraction is the gravitational pull. The greater the mass of an object, the stronger its gravitational pull; the larger the distance between two objects, the weaker the pull between them. Since every object, however large or small, exerts this pull on every other object, changes in gravity can provide insight into the behaviour of these objects.
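Both statements are captured by Newton's law of gravitation, F = G·m1·m2 / r². A short sketch (the masses and distance are rough Earth-Moon values, used only for illustration):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    """Attractive force (in newtons) between two masses r metres apart."""
    return G * m1 * m2 / r**2

# Doubling the separation quarters the force (inverse-square behaviour):
f1 = gravitational_force(5.97e24, 7.35e22, 3.84e8)      # roughly Earth-Moon
f2 = gravitational_force(5.97e24, 7.35e22, 2 * 3.84e8)  # twice as far apart
print(f1 / f2)  # ~4.0
```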


Consider the earlier example of the distortion caused by placing an object on paper or blanket, now, if we were to place a larger object, this would result in an even larger distortion. The larger object would cause a larger depression in the paper or blanket and hence, is said to have larger gravity. If the two objects were placed on the paper or blanket together, the larger object with the larger distortion would seem to be exerting a larger force of attraction towards the other object. If these objects moved, there would be ripples formed on the paper or blanket. This is similar to how gravitational waves are formed, the only difference being that the paper or blanket would be replaced by the fabric of space-time.

These gravitational waves cannot be felt easily; detecting them requires special equipment. The detectors are L-shaped instruments with very long arms.

Detection of gravitational waves

Gravitational waves were first directly detected in September 2015, when scientists observed the waves produced by two colliding black holes, each tens of times as massive as the Sun. The black holes were drawn together by gravity and, over an immense span of time, spiralled into each other until they finally merged. The gravitational waves released in the merger travelled for over a billion years before being felt on Earth in 2015.

The signal was picked up by a detector called the Laser Interferometer Gravitational-Wave Observatory (LIGO). It was very short-lived, lasting only a fifth of a second, and the wobbles in space-time it registered were thousands of times smaller than the nucleus of an atom, because gravitational waves gradually weaken as they travel. The laser interferometers were configured so that even these tiny ripples could be picked up.

LIGO consists of two gigantic laser interferometers located thousands of kilometres apart. Each detector consists of two 4 km long steel vacuum tubes arranged in an 'L' shape, with a special covering to protect them from the environment.

Aerial View Of LIGO Hanford

Aerial view of the LIGO Hanford Observatory. (Source: Caltech/MIT/LIGO Laboratory)

These tubes are the arms, and their lengths are measured with lasers. If the lengths change, this could be due to the compression and stretching of the arms by a passing gravitational wave. Studying these waves lets scientists derive information about the objects that produced them, such as their masses and the sizes of their orbits. In 2017, the Nobel Prize in Physics was awarded to Rainer Weiss, Kip Thorne and Barry Barish for their roles in the detection of gravitational waves.
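The delicacy of the measurement is easier to appreciate with a rough calculation. Strain h is the fractional change in arm length, so ΔL = h · L; the strain value below is only an order-of-magnitude assumption typical of reported detections:

```python
ARM_LENGTH = 4_000.0  # LIGO arm length in metres
STRAIN = 1e-21        # assumed typical peak strain of a detected event

# Change in arm length produced by a passing gravitational wave:
delta_L = STRAIN * ARM_LENGTH
print(delta_L)  # about 4e-18 m, far smaller than an atomic nucleus (~1e-15 m)
```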

Today, LIGO is trying to detect gravitational waves with even more sensitive instruments, in the hope of observing more merging neutron stars and black holes, and perhaps making some entirely new discoveries.

Significance of gravitational waves

These gravitational waves help scientists learn about the physical properties of the objects that created them, and they provide a new way of observing the universe, one that never existed before.

The detection of the gravitational waves allows us to understand interactions in the universe in a completely new way. The waves detectable by LIGO are waves generated due to the collision of two black holes, exploding stars, or perhaps the birth of the Universe.

Before this way of observing the universe was realized, most observations were based on electromagnetic radiation. Something like the collision of two black holes would have been impossible to pick up through electromagnetic radiation alone.

A major difference between gravitational waves and electromagnetic waves is that gravitational waves interact very weakly with matter. Electromagnetic radiation, on the other hand, interacts strongly with matter and can suffer several alterations to its properties. Gravitational waves travel through the universe virtually unimpeded.

Information such as the mass and orbit of the objects that caused the waves can therefore be recovered more clearly: what the waves carry is free from the alterations and distortions that result from interactions with matter along the way.

Gravitational waves can also penetrate regions of space that electromagnetic radiation cannot. These properties have led to the creation of a new field of astronomy, called gravitational-wave astronomy, which aims to study large entities in the universe and their interactions through the unadulterated properties of gravitational waves.

The famous basketball coach John Wooden once said, "It's the little things that are vital. Little things make big things happen". In the case of gravitational waves, the little things are the ones that provide knowledge of the larger things: small observations of the waves' properties and complexities are what yield the details of the larger bodies in the universe. There is no denying how fruitful the existence of gravitational waves has been; one could even say that they have revolutionized physics. I can say without a cloud of uncertainty that gravitational waves will help us uncover more secrets of the universe in the future.

Read More:

  1. Four new gravitational wave detections announced, including the most massive yet
  2. Why Don’t Gravitational Waves Get Weaker Like The Gravitational Force Does?
Mars Image

The secret story of Mars

Mars, the fourth planet in our solar system, has been a source of intrigue for quite a while now. There have been several theories on what its constituents are, and several more on how it could be potentially habitable. Can humans live on Mars? Can Mars support civilization? Is there life form already existing there? Contained within the mysteries of the red planet are the answers to these questions. However, a fair attempt at answering these questions can certainly be made. Read on if you wish to know more about the Red Planet and its mysteries.

Why is Mars being considered?

Mars is the next best candidate in our solar system for supporting life. It lies at a distance that is neither too far from nor too near to the Sun, within the region technically called the Goldilocks zone. The surface properties examined by scientists also seem to suggest the possibility of life on the planet. Mars is perhaps the only planet in the solar system that could provide crucial answers about life forms, so understanding its structure and physical properties is essential. Mars could tell us whether life is prevalent in the universe or exclusive to the Earth.

The proximity and several similarities to Earth make Mars a prime candidate. Most investigations of Mars have been carried out through telescopic observations and probes. Scientists have recognized certain habitability factors whose values could provide us with valuable information to determine whether life can be supported on Mars or not. These habitability factors are water, chemical environment, energy for metabolism and conducive physical conditions. The right combinations of the values for these habitability factors would mean that life can exist on Mars.

If the evidence for the claim that Mars can support life is conclusive enough, it would be groundbreaking. It would allow humanity to expand beyond the constraints imposed by the properties and nature of the Earth.

Similarities to Earth


Earth and Mars. Image Source: NASA

The similarities that Mars shares with Earth have been a major reason for the speculation around its potential to support life. Conditions such as sunlight and temperature are closer to Earth's on Mars than on any other planet or moon in the solar system.

The Martian day, referred to as a sol, is very similar in duration to Earth's: a sol is 24 hours, 39 minutes and 35 seconds long. This similar day length would make it easier for human colonists to adapt to Martian days.
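The arithmetic behind the quoted sol length can be checked in a couple of lines:

```python
# Length of a Martian sol (24 h 39 min 35 s), compared with an Earth day
SOL_SECONDS = 24 * 3600 + 39 * 60 + 35
EARTH_DAY_SECONDS = 24 * 3600

extra = SOL_SECONDS - EARTH_DAY_SECONDS
print(SOL_SECONDS)              # 88775 seconds per sol
print(extra // 60, extra % 60)  # 39 minutes 35 seconds longer than an Earth day
```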

The axial tilt of Mars is also very close to Earth's: 25.19° as opposed to 23.44°. This means that seasons on Mars are quite similar to seasons on Earth.

Mars also has a large enough surface area to support human colonies. The amount of dry land on Mars is only slightly less than the amount of dry land on Earth.

Perhaps, the most significant similarity is the presence of water. Water is where life forms originated from. Water is necessary to sustain life and life would cease to exist if water is absent.

Water on Mars

Perhaps the most promising sign of possible life on Mars is the existence of water on the planet, for which there is now conclusive evidence. The presence of water is a necessary, though not sufficient, condition for sustaining life, so the fact that water exists on Mars is extremely promising and exciting. Most of the water on Mars exists as ice; some exists as vapour in the atmosphere, and an even smaller amount as liquid.

The polar ice caps contain most of the water on Mars. The northern ice cap is called Planum Boreum, while its southern counterpart is called Planum Australe. It is calculated that if all the ice in the southern polar cap melted, it would be sufficient to cover the entire surface of Mars to a depth of 36 ft.

The Phoenix lander, launched by NASA, confirmed the presence of water ice at its landing site in the northern polar region. The Mars Reconnaissance Orbiter measured the ice in the northern polar cap and estimated that it would be sufficient to cover the surface of Mars to a depth of 18 ft. Several orbiters have also confirmed the presence of water ice in craters on the surface of Mars.

Phoenix Lander small

Artist’s concept of the Phoenix Mars Lander. (Source: NASA/JPL/Corby Waste)

This shows that water is available in abundance on Mars and could help sustain life on the planet. These discoveries are nothing short of radical and could go a long way in helping humanity understand Mars, which, would enable easier colonization.

Problems that prevent colonization

Mars is far colder than the Earth. Temperatures can drop to drastically low values that would prove detrimental to human life: surface temperatures on Mars range from about -87°C to -5°C, which would certainly pose a problem for human habitation.

Despite findings of water on Mars, the quantity of water is a source of concern. In order to satiate the needs of the current population on Earth, Mars would need several times the amount of water that it is estimated to have.

Another cause for concern is the toxicity of the Martian atmosphere, which contains about 95% carbon dioxide, 3% nitrogen and 1.6% argon. Oxygen makes up a meagre fraction of one per cent, which poses an extremely severe problem.

Mars also has a thin atmosphere that does not block the ultraviolet radiation coming from the Sun. This radiation is extremely harmful and could cause serious damage to human health.

The surface gravity of Mars is only 38% of Earth's, and such low gravity could cause a great deal of harm: space motion sickness, cardiovascular problems, muscle loss, and bone demineralization could all occur in these conditions.

Global dust storms are common on Mars throughout the year. Surviving these storms and their effects would be an exacting task: they can leave the planet blanketed in dust and prevent sunlight from reaching the surface.


Seasons and days on Mars

A Martian year roughly equals 686.86 Earth days. The Darian calendar was proposed by Thomas Gangale to aid future human settlers on Mars. Each day is measured in sols; a sol is longer than an Earth day by 39 minutes and 35 seconds, and each year is 668.59 sols (686.86 Earth days).

The Darian calendar consists of 24 months. The last month generally has 27 sols; in a leap year it gains an additional sol, making 28. Only three other months have 27 sols; all the rest have 28.
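The calendar arithmetic above can be verified with the figures quoted in the text (the small mismatch with the quoted 686.86 Earth days appears to come from rounding in the quoted values):

```python
# Figures as quoted in the text
SOL_SECONDS = 24 * 3600 + 39 * 60 + 35   # one sol = 88,775 s
EARTH_DAY_SECONDS = 24 * 3600
SOLS_PER_MARTIAN_YEAR = 668.59

# Convert a Martian year from sols to Earth days:
earth_days = SOLS_PER_MARTIAN_YEAR * SOL_SECONDS / EARTH_DAY_SECONDS
print(round(earth_days, 2))  # ~686.97, close to the quoted 686.86 Earth days

# Darian calendar: 24 months, four of which (including the last) have 27 sols,
# the remaining twenty have 28; the last month gains a sol in leap years.
common_year = 20 * 28 + 4 * 27
print(common_year)  # 668 sols in a common year, 669 in a leap year
```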

Why has there not been a crewed mission to Mars?

Several impeding factors have prevented missions aimed at the full human exploration of Mars. Space programmes intended to uncover truths about life and space require extremely large amounts of money.

The notion is that it would be foolish to spend such exorbitant amounts of money and resources on something that could prove futile. There is no guarantee that the results of these explorations will be positive, so a considerable amount of risk is involved. There are already enough problems on Earth that need addressing and sorting out; the search for planets that would make humans a spacefaring species can be seen as a self-inflicted problem, taken on at a time when other problems demand attention.

There is no denying that the findings on Mars could be radical and change the course of human history entirely. But these impeding factors may prove too powerful to allow ventures of such proportion to take place.

Mark Zuckerberg, the founder of Facebook and widely regarded as having revolutionized social media, once said, “The biggest risk is not taking any risk. In a world that is changing really quickly, the only strategy that is guaranteed to fail is not taking risks”. He makes a point, and maybe the exploration of Mars is a risk worth taking. It is certainly a high-risk, high-reward scenario, but when the reward is as lucrative as Mars exploration promises to be, it would seem almost ridiculous not to take the risk.

Perhaps it is not worth the effort or the expense to venture to Mars; on the other hand, the prospect of better understanding the universe may make it worth the risk. The only question that lingers is: is humanity willing to take a punt and explore Mars, or is humanity going to remain perpetually confined to the curtailments of Earth?

Read More:

  1. The first evidence of planet-wide groundwater system on Mars
  2. SpaceX Has a Bold Timeline for Getting to Mars and Starting a Colony
Antarctica Glaciers

The science behind climate change

The Earth’s climate has changed drastically over the past decades. Geostationary satellites revolving around our planet help us see the big picture (quite literally), accumulating data constantly and updating us about the conditions of the Earth. From the melting of the polar ice caps to erratic monsoons, shifting weather patterns, warming oceans, and rising sea levels, these events indicate the dire condition of the climate all around the globe and its immediate need for attention.

The evidence behind climate change

  • Rising temperatures

Global warming is not a phenomenon we are unfamiliar with. It has had serious implications for our planet in the last decade and even before: erratic rainfall, severe droughts, rising sea levels, and more. The main driver of rising temperatures is the increase in CO2 levels, caused largely by human emissions. In fact, the decade 2000-2009 was the hottest on record at the time.

Global Temperature

  • Ocean acidification

The industrial revolution swept through our planet bringing new opportunities, but it has also had some serious implications for our large water bodies. The acidity of ocean waters has increased by about 30%. This is due to CO2 being emitted in greater quantities and absorbed by the oceans, at a whopping rate of about 2 billion tons per year. And that is just the upper layer of the oceans.

  • Extreme events

Certain events occurring around the globe have captured the attention of environmentalists and scientists. In the United States, for example, the number of recorded high-temperature weather events has been increasing, while the number of low-temperature events has been decreasing since 1950. The number of intense rainfall events has also increased in this period.

  • Shrinking glacial cover

From the snowy peaks of the Himalayas to the Andes, the Rockies, and the Alps, glaciers are retreating everywhere around the world. This is a serious indication of climate change and poses serious threats to sea levels and mountain wildlife; even whole islands are at risk of disappearing under rising seas. Satellite observations bear this out: over the past five decades, snow cover across the Northern Hemisphere has declined.

  • NASA’s data

The ice sheets that form the huge landmasses of Greenland and Antarctica have diminished in mass. According to data from NASA’s Gravity Recovery and Climate Experiment (GRACE), Greenland lost an average of 281 billion tons of ice per year between 1993 and 2016, while Antarctica lost about 119 billion tons per year over the same period. On top of that, the rate of ice mass loss in Antarctica has tripled in the last decade.
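A back-of-the-envelope sketch of what those GRACE rates add up to, assuming the commonly used conversion of roughly 360 gigatonnes of melted land ice per millimetre of global sea-level rise:

```python
# Cumulative ice loss implied by the per-year GRACE rates quoted above.
# Assumption: ~360 Gt of melted land ice raises global sea level by ~1 mm.

GREENLAND_GT_PER_YEAR = 281
ANTARCTICA_GT_PER_YEAR = 119
YEARS = 2016 - 1993                      # 23-year measurement window

total_gt = (GREENLAND_GT_PER_YEAR + ANTARCTICA_GT_PER_YEAR) * YEARS
sea_level_mm = total_gt / 360

print(total_gt)                          # 9200 Gt of ice lost
print(round(sea_level_mm, 1))            # ~25.6 mm of sea-level rise
```

This is only an order-of-magnitude illustration; actual sea-level budgets also include thermal expansion and mountain-glacier melt.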


Signs and Science behind climate change

The compounds whose abrupt increase in our atmosphere has driven changes in our climate include CO2, CH4, N2O, and O3. Their formation is explained below:

6 O2 + C6H12O6 --------> 6 H2O + 6 CO2 + energy

This is the process of combustion, during which O2 reacts with glucose (C6H12O6) to produce water (H2O) and CO2. Such reactions occur when organic matter burns in our environment, releasing chemical energy in the form of heat and light.

CH3COOH --------> CO2 + CH4

This is the microbial process of methanogenesis, during which acetate (CH3COOH) is split into CO2 and methane (CH4). Freshwater wetlands and rice paddies are major sources of methane, and the amount produced grows with the area of land under paddy cultivation. This is a direct impact of the human population on climate change.

Nitrous oxide (N2O) is another contributing factor which is formed as a by-product of nitrification and denitrification.

CH4 + 4O2 --------> HCHO + H2O + 2O3

Smog is another pollutant that causes irritation of eyes and lungs, especially in city inhabitants. Tropospheric ozone (O3) is a constituent of smog that causes the mentioned problems.

NO2 + sunlight --------> NO + O

O + O2 --------> O3

NO2 + O2 --------> NO + O3

This is another process by which tropospheric ozone is formed, from atmospheric nitrogen dioxide (NO2). First, sunlight breaks down NO2 into nitric oxide (NO) and a free oxygen atom (O); the oxygen atom then combines with O2 to produce O3. Depicted above is the basic science behind climate change.

Random Quiz

Which of the following is not a greenhouse gas?


In order, the most abundant greenhouse gases in Earth's atmosphere are Water vapor, Carbon dioxide, Methane, Nitrous oxide, Ozone, Chlorofluorocarbons (CFCs) and Hydrofluorocarbons (incl. HCFCs and HFCs). Carbon Monoxide does not cause climate change directly.

Main reasons behind climate change

Day to day, we may not notice changes in the Earth’s climate. The climate is ever-changing, however, and more rapidly in these times than almost ever before, as seen in the geological record. Many factors, both natural and anthropogenic (human-induced), have contributed to this, and the rapid rate of climate change is now a great concern worldwide.

Here are some of the main reasons behind climate change:

  1. Human activity
    • We humans have been emitting greenhouse gases into the atmosphere since the industrial revolution. This leads to more heat absorption and retention, which in turn increases surface temperatures.
    • We have emitted aerosols, which scatter and absorb solar and infrared radiation in the atmosphere and adversely affect the microphysical and chemical properties of clouds.
    • We have also changed land use and deforested large areas, altering the fraction of sunlight reflected from the surface of the Earth back into space, known as the surface albedo.
Haiti Deforestation

Satellite image showing deforestation in Haiti, Haiti-Centre. This image depicts the border between Haiti (left) and the Dominican Republic (right). (Source: NASA)

  2. Solar Irradiance

Since the Sun is our nearest star and our most fundamental source of energy, it has an instrumental effect on our climate. The Little Ice Age, roughly 1650-1850, has been linked to a slight decrease in solar activity; Greenland was largely cut off by ice from 1410 to 1720, and glaciers advanced in the Alps.

  3. Tectonic movements of plates and volcanic activity

Tectonic plates form the very basis of our continents, and even slight movements can shift them far from their initial positions. These plate movements can trigger volcanic eruptions, which in turn contribute to climate change.

The gases and dust particles from volcanic eruptions may warm or cool the Earth’s surface, altering its temperature significantly.

  4. Changes in ocean currents

Ocean currents redistribute heat among the Earth’s water bodies, so a change in the direction of these currents can affect how warm or cool various continents are. Because the oceans store a large amount of heat, such shifts can have a relatively large effect on our overall climate, coastal and global alike.

These are some of the main reasons behind climate change.

Remedies for climate change

We, as humans, should individually take measures to save our planet so that the climate is not affected as much.

  • Instead of depleting our reserved fossil fuels, we need to use more renewable resources such as wind, wave, tidal and solar energy.
  • We need to make use of more public transport instead of our private vehicles. We need to gradually replace our petrol driven vehicles with electric ones in the future to reduce the emission of toxic gases in the atmosphere.
  • One of the easiest steps our governments can take is cutting methane emissions. Over a 20-year period, methane traps roughly 84 times as much heat as the same amount of carbon dioxide, and it is an increasingly reported problem.
  • We should wisely use our available energies. We can do this by using energy-efficient light bulbs, unplugging computers and other electronic devices when not in use, washing clothes in cold water instead of warm, using natural sunlight to dry our clothes instead of dryers, etc.
  • Focusing our lives on nature rather than on consuming and purchasing. If we start composting, recycling, sharing, fixing, and making, our lives would be greener and cleaner, significantly enriching nature and ourselves in the process.
  • Carbon pricing, so that polluting nature carries a heavy price. It might not sound like an important step, but it paves the way for greener solutions. As many market economists agree, carbon pricing is also a business-friendly way to decrease pollution. Governments need our individual support to make this possible.
  • We should consume more organic meals and less meat. By doing so, we will help ourselves to a better diet and make our planet more climate-stable. We should also try growing our own food and wasting as little of it as possible.

Final Words

Scientists around the globe, belonging to various scientific societies, have published numerous statements and come to the unanimous conclusion that global warming is the primary driver of climate change and that we humans are its primary cause. We should stop overloading our atmosphere with carbon dioxide (CO2), which we do when we burn fossil fuels like oil and coal to generate electricity, power our transport, and keep our homes warm. The Earth is steadily warming in response, and the consequences will affect us in drastic ways in the very near future.

Read More:

  1. Climate change: How do we know?
  2. Climate change, global warming and greenhouse gases
Srinivasa Ramanujan

Ramanujan: The man who knew Infinity

Srinivasa Ramanujan was one of India’s greatest mathematical geniuses. He made substantial contributions to the analytic theory of numbers and worked on elliptic functions, continued fractions, and infinite series.

Born on 22 December 1887 and dying on 26 April 1920, Ramanujan was a mathematician who lived during British rule in India. Although he had almost no formal training in mathematics, he made substantial contributions to mathematical analysis, number theory, infinite series, and continued fractions, including solutions to problems then considered unsolvable.

Early life

When he was nearly five years old, Ramanujan entered primary school in Kumbakonam, though he would attend several different primary schools before entering the Town High School in Kumbakonam in January 1898. At the Town High School, Ramanujan did well in all his subjects and showed himself an able, well-rounded scholar. In 1900 he began to work on his own on mathematics, summing geometric and arithmetic series.

Ramanujan was shown how to solve cubic equations in 1902, and he went on to find his own method to solve the quartic. The following year, not knowing that the quintic could not be solved by radicals, he tried (and of course failed) to solve the quintic.

It was at the Town High School that Ramanujan came across a mathematics book by G. S. Carr called A Synopsis of Elementary Results in Pure Mathematics. This book, with its very concise style, allowed Ramanujan to teach himself mathematics; however, the style of the book was to have a rather unfortunate effect on the way Ramanujan later wrote mathematics, since it provided the only model he had of written mathematical arguments. The book contained theorems, formulae, and short proofs. It also contained an index of papers on pure mathematics published in the European journals of learned societies during the first half of the nineteenth century. The book, published in 1886, was of course well out of date by the time Ramanujan used it.

By 1904 Ramanujan had begun to undertake deep research. He investigated the series ∑(1/n) and calculated Euler’s constant to 15 decimal places. He began to study the Bernoulli numbers, although this was entirely his own independent discovery.
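Euler’s constant γ, which Ramanujan computed by hand, can be approximated numerically; a brief sketch (the asymptotic correction terms are a standard textbook device, not Ramanujan’s own method):

```python
import math

# Approximating the Euler-Mascheroni constant, which Ramanujan computed
# to 15 decimal places: gamma = lim (1 + 1/2 + ... + 1/n - ln n).

def euler_gamma(n=100_000):
    harmonic = math.fsum(1.0 / k for k in range(1, n + 1))
    # Subtracting ln n alone converges slowly (error ~ 1/(2n));
    # the asymptotic correction terms below speed convergence up.
    return harmonic - math.log(n) - 1 / (2 * n) + 1 / (12 * n**2)

print(f"{euler_gamma():.12f}")  # 0.577215664902
```

The naive partial sum would need astronomically many terms for 15-digit accuracy, which makes Ramanujan’s hand computation all the more remarkable.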

On the strength of his good school work, Ramanujan was given a scholarship to the Government College in Kumbakonam, which he entered in 1904. The following year his scholarship was not renewed, because Ramanujan devoted more and more time to mathematics and neglected his other subjects. Without money he was soon in difficulties and, without telling his parents, he ran away to the town of Vizagapatnam, about 650 km north of Madras. He continued his mathematical work, however, working on hypergeometric series and investigating relations between integrals and series. He was to discover later that he had been studying elliptic functions.


In 1906 Ramanujan went to Madras, where he entered Pachaiyappa’s College. His aim was to pass the First Arts examination, which would allow him to be admitted to the University of Madras. He attended lectures at Pachaiyappa’s College but became ill after three months of study. He took the First Arts examination after having left the course. He passed in mathematics but failed all his other subjects, and therefore failed the examination. This meant he could not enter the University of Madras. In the following years he worked on mathematics, developing his own ideas without anyone to help him and with no real idea of the then-current research topics other than that provided by Carr’s book.


Godfrey Harold Hardy, Ramanujan’s Mentor. Image Source: Wikipedia

Continuing his mathematical work, Ramanujan studied continued fractions and divergent series in 1908. At this stage he became seriously ill again and underwent an operation in April 1909, after which it took him some considerable time to recover. He married on 14 July 1909, when his mother arranged for him to marry a ten-year-old girl, S. Janaki Ammal. Ramanujan did not live with his wife, however, until she was twelve years old.

Ramanujan continued to develop his mathematical ideas and began to pose and solve problems in the Journal of the Indian Mathematical Society. He developed relations between elliptic modular equations in 1910. After he published a brilliant research paper on Bernoulli numbers in 1911 in the Journal of the Indian Mathematical Society, he gained recognition for his work. Despite his lack of a university education, he was becoming well known in the Madras area as a mathematical genius.

The pursuit of a career in mathematics

In 1911 Ramanujan approached the founder of the Indian Mathematical Society for advice on a job. After this he was appointed to his first job, a temporary post in the Accountant General’s office in Madras. It was then strongly suggested that he approach Ramachandra Rao, a Collector at Nellore. Ramachandra Rao was a founder member of the Indian Mathematical Society who had helped start the mathematics library.

Ramachandra Rao told him to return to Madras, and he tried, unsuccessfully, to arrange a scholarship for Ramanujan. In 1912 Ramanujan applied for the post of clerk in the accounts section of the Madras Port Trust.

Despite the fact that he had no university education, Ramanujan was clearly well known to the university mathematicians in Madras, for with his letter of application Ramanujan included a reference from E. W. Middlemast, Professor of Mathematics at the Presidency College in Madras and a graduate of St John’s College, Cambridge.

On the strength of the recommendation, Ramanujan was appointed to the post of clerk and began his duties on 1 March 1912. Ramanujan was quite lucky to have a number of people working around him with training in mathematics. In fact, the Chief Accountant for the Madras Port Trust, S. N. Aiyar, was trained as a mathematician and published a paper in 1913 on the distribution of primes, based on Ramanujan’s work. The Professor of Civil Engineering at the Madras Engineering College, C. L. T. Griffith, was also interested in Ramanujan’s abilities and, having been educated at University College London, knew the Professor of Mathematics there, M. J. M. Hill. He wrote to Hill on 12 November 1912, sending some of Ramanujan’s work and a copy of his 1911 paper on Bernoulli numbers.

Hill replied in a fairly encouraging way, but he showed that he had failed to understand Ramanujan’s results on divergent series. His recommendation that Ramanujan read Bromwich’s Theory of Infinite Series did not please Ramanujan much. Ramanujan wrote to E. W. Hobson and H. F. Baker, trying to interest them in his results, but neither replied. In January 1913 Ramanujan wrote to G. H. Hardy, having seen a copy of his 1910 book Orders of Infinity.

Mathematical achievements

On 18 February 1918 Ramanujan was elected a fellow of the Cambridge Philosophical Society, and then, three days later, came the greatest honour he would receive: his name appeared on the list for election as a fellow of the Royal Society of London. He had been proposed by an impressive list of mathematicians, namely Hardy, MacMahon, Grace, Larmor, Bromwich, Hobson, Baker, Littlewood, Nicholson, Young, Whittaker, Forsyth, and Whitehead. His election as a fellow of the Royal Society was confirmed on 2 May 1918, and on 10 October 1918 he was elected a Fellow of Trinity College, Cambridge, the fellowship to run for six years.

The honours bestowed on Ramanujan seemed to help his health improve a little, and he renewed his efforts at producing mathematics. By late November 1918, Ramanujan’s health had greatly improved.

Ramanujan sailed to India on 27 February 1919, arriving on 13 March. However, his health was very poor and, despite medical treatment, he died there the following year.

Posthumous recognition

The letters Ramanujan wrote to Hardy in 1913 contained many fascinating results. Ramanujan worked out the Riemann series, the elliptic integrals, hypergeometric series, and the functional equations of the zeta function. On the other hand, he had only a vague idea of what constitutes a mathematical proof. Despite many brilliant results, some of his theorems on prime numbers were completely wrong.

Riemann Zeta Function

This image shows a plot of the Riemann zeta function along the critical line for real values of t running from 0 to 34. The first five zeros in the critical strip are clearly visible as the place where the spirals pass through the origin. (Source: wikimedia.org)

Ramanujan independently discovered results of Gauss, Kummer, and others on hypergeometric series. Ramanujan’s own work on partial sums and products of hypergeometric series led to major developments in the topic. Perhaps his most famous work was on the number p(n) of partitions of an integer n into summands. MacMahon had produced tables of the value of p(n) for small numbers n, and Ramanujan used this numerical data to conjecture some remarkable properties, some of which he proved using elliptic functions. Other results were proved only after Ramanujan’s death.

In a joint paper with Hardy, Ramanujan gave an asymptotic formula for p(n). It had the remarkable property that it appeared to give the correct value of p(n), and this was later proved by Rademacher.
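To make the partition function concrete, here is a small sketch comparing exact values of p(n), computed by dynamic programming, with the leading Hardy-Ramanujan asymptotic p(n) ≈ exp(π·sqrt(2n/3)) / (4n·sqrt(3)):

```python
import math

# Exact partition counts vs. the Hardy-Ramanujan asymptotic formula.

def partitions(n):
    """Exact p(n): count partitions by dynamic programming over part sizes."""
    p = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            p[total] += p[total - part]
    return p[n]

def hardy_ramanujan(n):
    """Leading-order asymptotic estimate of p(n)."""
    return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))

print(partitions(100))                                   # 190569292
print(round(hardy_ramanujan(100) / partitions(100), 2))  # overshoots by a few percent
```

Even at n = 100 the leading term is within roughly 5% of the exact count, which hints at why the full Hardy-Ramanujan-Rademacher series pins p(n) down exactly.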

Random Quiz

What is the Hardy-Ramanujan number?


1729 is known as the Hardy–Ramanujan number after a famous anecdote of the British mathematician G. H. Hardy regarding a visit to the hospital to see the Indian mathematician Srinivasa Ramanujan. In Hardy's words: “I remember once going to see him when he was ill at Putney. I had ridden in taxi cab number 1729 and remarked that the number seemed to me rather a dull one, and that I hoped it was not an unfavorable omen. "No," he replied, "it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways."” The two different ways are these: 1729 = 1^3 + 12^3 = 9^3 + 10^3
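The anecdote can be checked with a brute-force search, a brief sketch:

```python
from itertools import combinations_with_replacement

# Verify 1729 is the smallest number expressible as a sum of two
# positive cubes in two different ways (the "taxicab" number).

def smallest_two_way_cube_sum(limit=20):
    sums = {}
    for a, b in combinations_with_replacement(range(1, limit + 1), 2):
        sums.setdefault(a**3 + b**3, []).append((a, b))
    two_way = {n: pairs for n, pairs in sums.items() if len(pairs) >= 2}
    n = min(two_way)
    return n, two_way[n]

print(smallest_two_way_cube_sum())  # (1729, [(1, 12), (9, 10)])
```

A limit of 20 suffices because any sum below 1729 must use cubes no larger than 12³ = 1728.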

Wrapping up!

Ramanujan left a number of unpublished notebooks filled with theorems that mathematicians have continued to study. G. N. Watson, Mason Professor of Pure Mathematics at Birmingham from 1918 to 1951, published fourteen papers under the general title Theorems stated by Ramanujan, and in all he published nearly thirty papers inspired by Ramanujan’s work. Hardy passed on to Watson the large number of Ramanujan manuscripts in his possession, both those written before 1914 and some written in Ramanujan’s last year in India before his death. Ramanujan left behind three notebooks and a bundle of pages (also known as the “lost notebook”) containing many unpublished results that mathematicians have continued to verify long after his death.

Read More:

  1. Ramanujan’s lost notebook
  2. Ramanujan surprises again
  3. Mathematical proof reveals magic of Ramanujan’s genius

Machine Learning and Artificial Intelligence

The Future of Computing and Artificial Intelligence

The advent of computers propelled society forward and spurred into action many businesses and industries that would otherwise have been impossible. Computers and computer technology changed the way humans worked and functioned. The very first computer was a behemoth; today, everything you need is available on a computer through the internet, and even the most complex computations can be carried out by these modern wonders. But what does the future have in store for computers? Are there any more frontiers for these brilliant systems to face and overcome, and as a result become even more computationally powerful? Let’s find out.

Computers are already ubiquitous, but they are poised to become even more pervasive. They sit on desks and countertops, in bags and pockets, but soon they might be part of everything imaginable. Yes, everything conceivable, even the tiniest of devices and commodities, might soon have computers embedded within it. There is an increasing demand for products that can carry out their own computations, and this has driven the need for devices with embedded computers.

Some people see the ultimate goal of computing as making computers inextricable from society and human life. Computers would be so involved in every process that the very thought of their absence would make life seem impossible. The aim is to entangle computers with human life so deeply that the two become indistinguishable.

The notion going around that the future of computing is limited to Artificial Intelligence is incorrect. Artificial Intelligence is still burgeoning and is expected to become an essential part of day-to-day life, but the future of computing extends well beyond it.

Artificial Intelligence


Image Source: Mike MacKenzie (Flickr)

The first challenge involved with Artificial Intelligence is the definition of intelligence itself. Intelligence has long been considered subjective, and each person’s interpretation and definition of it differs. Is a machine intelligent if it is able to communicate? Is it intelligent if it is able to learn? Or is it intelligent if it can merely solve problems? One way to measure intelligence is through processing power. Machines could potentially gain computing power greater than the processing power of all human beings put together, a situation termed the Singularity. The term is often attributed to John von Neumann and refers to a point beyond which human affairs, as we know them, could not continue, with computers effectively taking over.

Potential threats

Low-end repetitive tasks have already been taken over by computing systems that automate them. The need for automation has made it imperative that we develop the computing potential of computers, but this growth has already caused concern over whether machines will someday surpass human intelligence. By one estimate, human intelligence, measured in processing power, is still about 100,000 times that of the most powerful computer; yet at their current rate of growth, computers could reach human-level processing power within decades. By 2050, some estimate, their processing power could exceed the sum total of all human brains combined. That is a scary thought indeed.

So computers may take away our jobs and surpass our intelligence; what does that suggest? Is the world doomed? Not quite. Although one cannot entirely rule out world domination by superintelligent computers, it is unlikely to happen, due to the complexity brake. Scientists argue that computers and systems cannot grow to such superintelligence because, as processing power grows, complexity increases, and the complexity would reach a level that restrains further growth in intelligence. Even if computers somehow broke free from the shackles of the complexity brake, it is hypothesized that there would be enough ‘obedient’ intelligent systems to keep the ‘misbehaving’ ones in check.

Let’s now get back to defining artificial intelligence. Since intelligence cannot be precisely quantified, a generalist definition of artificial intelligence would be “machines and computers that carry out computations and tasks that would otherwise be carried out by humans”. This definition clearly does not capture the whole picture, but it is a reasonable attempt at defining A.I.

Evolution of Artificial Intelligence

The evolution of artificial intelligence can be viewed in terms of the kinds of problems it was intended to solve. Early work in artificial intelligence focused on formal tasks such as game playing and theorem proving. Checkers- and chess-playing programs were among the earliest works in the field, along with theorem provers such as the Logic Theorist.


Image Source: Pixabay

A.I. then began to be applied to common-sense reasoning problems: simulating the human ability to make presumptions about everyday activities. Reasoning about physical objects and their relationships, and about actions and their consequences, fell under this field.

When A.I. began to grow, it started being built for perception problems. These problems were especially difficult because they involve analog data such as speech and vision.

A.I. is now being used for natural language understanding and natural language processing. Humans communicate through expressions and languages; perceiving languages and their various constructs, and being able to process them, are problems that A.I. was created to solve. These fields are still not fully developed, but systems that carry out natural language processing with a high degree of accuracy are already in place.

A.I. is now being used in tasks where human experts are needed. These tasks, such as engineering design, scientific discovery, medical diagnosis and financial planning are being carried out using artificial intelligence with a great degree of accuracy and precision. Expert systems are computer systems that use inferences and knowledge bases to mimic the decision making ability of human experts.

Domains of Artificial Intelligence

Artificial intelligence consists of countless domains, and it would be impractical to list all of them. Here are some of the interesting domains of Artificial Intelligence:


Heuristic techniques

Heuristic techniques involve finding approximate solutions to problems instead of exact ones. They are generally used when the time available for computing a solution is restricted or when it is computationally too expensive to search for the global optimum.
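A classic illustration of the trade-off: the nearest-neighbour heuristic for a travelling-salesman tour versus exhaustive search. The five city coordinates below are invented for demonstration:

```python
import itertools
import math

# Contrast a heuristic with exact search: nearest-neighbour builds a quick
# approximate tour; brute force over all permutations finds the optimum.

CITIES = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 3)]

def tour_length(order):
    return sum(math.dist(CITIES[order[i]], CITIES[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour():
    unvisited, order = set(range(1, len(CITIES))), [0]
    while unvisited:
        nxt = min(unvisited, key=lambda c: math.dist(CITIES[order[-1]], CITIES[c]))
        order.append(nxt)
        unvisited.remove(nxt)
    return tour_length(order)

def brute_force():
    return min(tour_length([0] + list(p))
               for p in itertools.permutations(range(1, len(CITIES))))

print(nearest_neighbour() >= brute_force())  # True: the heuristic is never better
```

Nearest-neighbour runs in quadratic time, while brute force grows factorially; for more than a handful of cities only the heuristic remains practical.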

Statistical reasoning

This form of reasoning involves using probabilities and probabilistic theorems to determine most likely occurrences. Logical rules based on probabilities are created to determine what event is more likely to occur, given certain conditions.
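Statistical reasoning of this kind can be sketched with Bayes’ rule; the probabilities below are purely hypothetical:

```python
# Bayes' rule: update the probability of an event given observed evidence.
# All numbers below are invented for illustration.

def bayes(prior, likelihood, false_alarm):
    """P(event | evidence) given P(event), P(evidence | event),
    and P(evidence | no event)."""
    evidence = likelihood * prior + false_alarm * (1 - prior)
    return likelihood * prior / evidence

# P(rain) = 0.1, P(clouds | rain) = 0.9, P(clouds | no rain) = 0.3
print(round(bayes(0.1, 0.9, 0.3), 3))  # 0.25
```

Seeing clouds raises the chance of rain from 10% to 25%: evidence shifts the probability without making the conclusion certain, which is exactly the kind of logical rule over probabilities described above.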

Natural language understanding and processing

Natural language understanding involves the interpretation of natural language spoken by humans. Natural language processing, on the other hand, involves both interpretation and generation of natural language spoken by humans.

Expert systems

Expert systems carry out tasks that would otherwise require human experts. These systems use inference rules, generally if-then rules, together with a knowledge base built from previous problems and the inferences derived from them, to provide solutions to the problem at hand.
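A toy sketch of the if-then machinery: forward chaining over a rule base until no new facts can be derived. The rules and facts are invented for illustration:

```python
# Minimal forward-chaining inference engine over hypothetical if-then rules
# for diagnosing a machine fault. Each rule: (set of conditions, conclusion).

RULES = [
    ({"no_power"}, "check_power_cable"),
    ({"powered", "no_display"}, "check_monitor"),
    ({"check_power_cable", "cable_ok"}, "check_fuse"),
]

def forward_chain(facts):
    """Repeatedly fire rules whose conditions hold until nothing new appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"no_power", "cable_ok"})))
# ['cable_ok', 'check_fuse', 'check_power_cable', 'no_power']
```

Note how the `check_fuse` conclusion is reached in two steps: one rule’s conclusion becomes a condition for another, which is the chaining that gives expert systems their power.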

Fuzzy logic systems

Fuzzy logic systems remove the notion of crisp boundaries and use fuzzy sets instead of crisp sets. In fuzzy logic, everything is viewed in terms of degrees of membership. The hotness of a day is generally not quantifiable, but fuzzy logic aims to quantify such notions using membership degrees. A temperature of 45°C, which most people would perceive as hot, could therefore belong to a set called ‘Cold’ to a small degree (a small membership value) and to a set called ‘Hot’ to a large degree (a large membership value).
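A minimal sketch of such membership functions; the breakpoints of 10 °C and 50 °C are illustrative assumptions, not standard values:

```python
# Fuzzy membership for temperature, mirroring the 45 degC example above.
# Membership in 'Hot' rises linearly between two assumed breakpoints.

def hot_membership(temp_c, low=10.0, high=50.0):
    """Degree (0..1) to which a temperature belongs to the 'Hot' set."""
    if temp_c <= low:
        return 0.0
    if temp_c >= high:
        return 1.0
    return (temp_c - low) / (high - low)

def cold_membership(temp_c):
    return 1.0 - hot_membership(temp_c)

print(hot_membership(45), cold_membership(45))  # 0.875 0.125
print(hot_membership(30), cold_membership(30))  # 0.5 0.5
```

At 45 °C the day is mostly ‘Hot’ (0.875) but still slightly ‘Cold’ (0.125), exactly the graded membership the paragraph describes, in contrast to a crisp yes/no boundary.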

Genetic algorithms

Genetic algorithms mimic natural selection and evolution to solve problems. More broadly, nature-inspired computing includes artificial neural networks, whose intricately interconnected nodes loosely mirror biological neural networks, with the outputs at each layer propagated to the next; these are used to solve many complex problems. Ant colony optimization is a related area, in which the behavior of real ant colonies is mimicked to solve problems such as finding shortest paths.

Evaluation of artificial intelligence: Turing test

Turing test


Turing Test Illustration. Image Source: Wikipedia

Alan Turing hypothesized that a computer can achieve intelligence equivalent to that of a human being, or at least intelligence indistinguishable from human intelligence, if it can successfully lead a human being to believe that the machine is human. The Turing test is renowned as a basis for classifying machines as either intelligent or not intelligent.

The Turing test involves a human evaluator who is responsible for determining which of the two entities he is conversing with is actually human. The evaluator is allowed to ask a series of questions to carry out this classification. These questions are posed to both the human being and the machine. If the evaluator fails to determine, based on the responses, which one is the human and which one is the machine, then the machine is said to have successfully fooled the evaluator and is deemed intelligent.

The Turing test is said to have set the benchmark for evaluating systems that may be intelligent.

Masayoshi Son, CEO of the Japanese conglomerate SoftBank, once said, “I believe this artificial intelligence is going to be our partner. If we misuse it, it will be a risk. If we use it right, it will be our partner”. It is imperative that we tread carefully in matters pertaining to artificial intelligence. It is important to ensure that humans do not lose control over it; the result of losing control could be catastrophic. Regulations are needed to prevent detrimental outcomes from the development of artificial intelligence. The field has immense capabilities and should be allowed to grow, under regulated circumstances. Artificial intelligence need not have sinister outcomes; it is the responsibility of humanity to monitor A.I. and let it grow.

Read More:

  1. A Technical Overview of AI & ML (NLP, Computer Vision, Reinforcement Learning) in 2018 & Trends for 2019
  2. Bill Gates: These breakthrough technologies are going to profoundly change the world

How Solar Cells Work and Are They Important for Our Future?

Solar cell technology: How it works and the future of sunshine

Why do we waste time drilling for oil and shoveling coal when there is a mammoth powerhouse in the sky above us, sending out clean, non-stop energy for free? The Sun, a churning ball of nuclear energy, has enough fuel on board to drive our Solar System for another 5 billion years, and solar panels can turn this energy into an endless, convenient supply of electricity.


Image Source: Wikipedia

Solar power might sound strange or futuristic, but it is already quite commonplace. You may have a solar-powered quartz watch on your wrist or a solar-powered calculator. Many people have solar-powered lights in their garden. Spaceships and satellites typically carry solar panels too, and the American space agency NASA has even developed a solar-powered plane! As global warming continues to threaten our environment, there seems little doubt that solar power will become an even more important form of renewable energy in the future. But how exactly does it work?

How much energy are we talking about?

Solar power is wonderful. On average, every square metre of Earth’s surface receives 164 watts of solar energy (a figure we will examine in more detail in a moment). In other words, you could stand a very powerful (150 watt) lamp on every square metre of Earth’s surface and light up the whole planet with the Sun’s energy! Or, to put it another way, if we covered just one percent of the Sahara Desert with solar panels, we could generate enough electricity to power the whole world. That is the great thing about solar power: there is an awful lot of it, much more than we could ever use.

But there is a downside too. The energy the Sun sends out arrives on Earth as a mixture of light and heat. Both of these are incredibly important: the light makes plants grow, providing us with food, while the heat keeps us warm enough to survive. However, we cannot use either the Sun’s light or its heat directly to run a TV or a car. We have to find a way of converting solar energy into other forms of energy we can use more easily, such as electricity, and that is exactly what solar cells do.
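A quick back-of-envelope check of the Sahara claim above, sketched in Python. The desert’s area, the assumed 20% panel efficiency, and the use of the global-average 164 W/m² figure (desert insolation is actually higher, so this is conservative) are all rough, illustrative assumptions:

```python
insolation = 164          # average watts per square metre of Earth's surface
sahara_area_m2 = 9.2e12   # Sahara is roughly 9.2 million square kilometres
fraction = 0.01           # one percent covered with panels
efficiency = 0.20         # assumed efficiency of typical silicon panels

# Electrical power delivered by panels covering 1% of the Sahara.
power_watts = insolation * sahara_area_m2 * fraction * efficiency
print(f"{power_watts / 1e12:.1f} TW")
```

The result is on the order of 3 terawatts, which is roughly the world’s average electricity demand, so the claim holds up as an order-of-magnitude estimate.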

What are solar cells?


solar cell principle

Image Source : Wikipedia


Just like the cells in a battery, the cells in a solar panel are designed to generate electricity; but where a battery’s cells make electricity from chemicals, a solar panel’s cells generate power by capturing sunlight instead. They are sometimes called photovoltaic (PV) cells because they use sunlight (“photo” comes from the Greek word for light) to make electricity (the word “voltaic” is a reference to the Italian electricity pioneer Alessandro Volta, 1745–1827).

We can think of light as being made of tiny particles called photons, so a beam of sunlight is like a bright yellow hose shooting trillions upon trillions of photons our way. Stick a solar cell in its path and it catches these energetic photons and converts them into a flow of electrons: an electric current. Each cell generates only a small voltage (about half a volt for a silicon cell), so a solar panel’s job is to combine the energy made by many cells into a useful amount of electric current and voltage. Nearly all of today’s solar cells are made from slices of silicon (one of the most common chemical elements on Earth, found in sand), although, as we will see shortly, a range of other materials can be used as well (or instead). When sunlight shines on a solar cell, the energy it carries blasts electrons out of the silicon. These can be forced to flow around an electric circuit and power anything that runs on electricity.
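How a panel combines cells can be sketched with simple arithmetic: cells wired in series add their voltages, while the current stays the same through the string. The half-volt cell, 5-amp current, and 60-cell layout below are typical illustrative values, not a specification for any particular panel:

```python
cell_voltage = 0.5     # volts per silicon cell (typical value)
cell_current = 5.0     # amps per cell under full sun (typical value)

cells_in_series = 60   # a common panel layout
panel_voltage = cells_in_series * cell_voltage  # series: voltages add
panel_current = cell_current                    # series: same current throughout
panel_power = panel_voltage * panel_current

print(panel_voltage, "V,", panel_current, "A,", panel_power, "W")
```

Sixty half-volt cells thus yield a 30 V, 150 W panel; wiring several such strings in parallel would add their currents instead, which is how manufacturers trade off voltage against current.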

How do solar cells work?

Solar cells convert the Sun’s energy into electricity. Whether they are adorning your calculator or orbiting our planet on satellites, they rely on the photovoltaic effect, a close relative of the photoelectric effect: the ability of matter to emit electrons when light is shone on it.

Silicon is what is known as a semiconductor, meaning that it shares some of the properties of metals and some of those of an electrical insulator, which makes it a key ingredient in solar cells. Let us take a closer look at what happens when the Sun shines onto a solar cell.


Sunlight consists of minuscule particles called photons, which radiate from the Sun. As these hit the silicon atoms of the solar cell, they transfer their energy to loose electrons, knocking them clean off the atoms. The photons can be compared to the white ball in a game of pool, which passes on its energy to the colored balls it strikes.

Freeing up electrons is, however, only half the job of a solar cell: it then needs to herd these stray electrons into an electric current. This involves creating an electrical imbalance within the cell, which acts a bit like a slope down which the electrons flow in the same direction.

Creating this imbalance is made possible by the internal organization of silicon. Silicon atoms are arranged in a tightly bound structure. By squeezing small quantities of other elements into this structure, two different types of silicon are created: n-type, which has spare electrons, and p-type, which is missing electrons, leaving ‘holes’ in their place.

When these two materials are placed side by side inside a solar cell, the n-type silicon’s spare electrons jump over to fill the gaps in the p-type silicon. This means the n-type silicon becomes positively charged while the p-type silicon becomes negatively charged, creating an electric field across the cell. Because silicon is a semiconductor, it can act as an insulator, maintaining this imbalance.

As the photons knock electrons off the silicon atoms, this field drives them along in an orderly manner, providing the electric current that powers calculators, satellites, and everything in between.

Are solar cells important for our future?

Solar energy has kept our species alive for thousands of years through heat, light, and crops. However, harnessing this energy to generate electricity is, relatively speaking, a very recent development. As the Royal Society of Chemistry puts it, “The amount of energy reaching the Earth’s surface each hour would meet the world’s current energy demands for a complete year… we need not gamble with the lifestyles of future generations”. Additionally, the technology is constantly being improved and refined. But how, exactly, can solar energy benefit our lives and those of future generations worldwide?

The most obvious advantages of solar energy, as we at Solar Action Alliance point out, are that it is abundant, sustainable, free, secure, and reliable. Even in less sunny countries like the UK, there is enough energy in the rays reaching the surface to generate electricity, while sunny locations such as California are ideal for solar.

Just as individual households or businesses can achieve independence with respect to their power supply, communities and cities can do the same, with whole communities living off-grid and being self-sustaining. In a world of limited and strained resources, this is a huge advantage, and one that will become increasingly necessary in the future.

Small, rural, and less affluent communities, no matter how remote, will not have to rely on large energy suppliers and their infrastructure, or wait a long time for services to reach them. Solar kits can reach any community and be fitted to homes, schools, clinics, and so on.

The fact that solar panels and systems are now available in various sizes, shapes, and thicknesses also makes them far more versatile in terms of applications and where they can be used. New applications are constantly being found and new installations created. There is no reason to believe future ones will not be even more exciting and liberating.