Researchers find application of Golden Ratio in the human skull

A new study comparing human skulls with those of other animals claims that our heads tend to follow the golden ratio, the special number long associated with beauty. The findings appear in The Journal of Craniofacial Surgery.

The golden ratio, denoted by the irrational number phi (approximately 1.618), is a famous mathematical concept. Neurosurgeons Rafael Tamargo and Jonathan Pindrik of Johns Hopkins claim in their new study that phi might indicate some kind of sophistication in the brain box. They write that the human skull displays an elegant harmony of structure and function as it has evolved over millennia. The paper compares 100 physiologically normal human craniums with 70 others representing six different mammal species.

The nasioiniac arc connects the nasion, a point on the nasal bones, with the inion, a bump at the back of the head. Distances were measured from the nasion to a point on the skull known as the bregma, and from the bregma to the inion. These cranial features were selected because they correspond to significant neural structures in humans as well as in other animals.

The ratio of the bregma-to-inion distance to the bregma-to-nasion distance equals about 1.6, which is also the ratio of the nasion-to-inion distance to the bregma-to-inion distance. Since 1.6 is quite close to the golden ratio, the researchers think there might be something interesting here. Tamargo said that in the other mammals surveyed the ratios approached the golden ratio as the sophistication of the species increased, which might have significant evolutionary and anthropological implications. However, those implications are not clear: the ratio has been spotted in so many physiological structures in recent years that it is doubtful whether it has any biological significance at all.

Take two numbers a and b, with a the greater, and consider their ratio a:b. When (a+b):a equals a:b, the two numbers are in the golden ratio, first named as such by Luca Pacioli in 1509. Even the great artist Leonardo da Vinci used it for major artistic proportions.
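
In symbols, setting φ = a/b, the defining relation forces a quadratic equation whose positive root is the familiar constant (a standard derivation, not one spelled out in the paper):

```latex
\frac{a+b}{a} = \frac{a}{b} = \varphi
\;\Longrightarrow\;
1 + \frac{1}{\varphi} = \varphi
\;\Longrightarrow\;
\varphi^2 - \varphi - 1 = 0
\;\Longrightarrow\;
\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618
```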

The spiral shell of the nautilus has been found to follow this ratio in the form of a golden spiral. The real question is whether we are swayed by selection bias when hunting for appearances of the ratio, or whether evolution actually follows it. In 2015, Eve Torrence, a mathematics professor at Randolph-Macon College, said it is quite silly to assume that the golden ratio alone reflects some sort of perfection, since humans are so diverse.

Whether the golden ratio found in the midline of the human skull indicates some underlying complexity, or is a mere observation, remains up for debate in the scientific community.

Reference: The Journal of Craniofacial Surgery

Geometry goes viral: Researchers use maths to solve virus puzzle

Researchers have developed a new mathematical framework that changes the way we understand the structure of viruses such as Zika and Herpes.

The discovery, by researchers at the University of York (UK) and San Diego State University (US), paves the way for new insights into how viruses form, evolve and infect their hosts and may eventually open up new avenues in anti-viral therapy.

Viruses look like tiny footballs because they package their genetic material into protein containers that adopt polyhedral shapes.

The new theory revolutionises our understanding of how these containers are shaped, solving a scientific mystery that has endured for half a century.

High resolution

For more than fifty years, scientists have followed the Caspar-Klug theory (CKT) about how the protein containers of viruses are structured. However, improvements in our ability to image viral particles at high resolution have made it apparent that many virus structures do not conform to these blueprints.

Published in the journal Nature Communications, the new theory accurately predicts, for the first time, the positions of proteins in the containers of all icosahedral (twenty-sided) viruses. It works simultaneously for viruses that conform to CKT and for those that posed an unresolved problem for that theory.

Professor Reidun Twarock, mathematical biologist at the University of York’s departments of Mathematics and Biology and a member of the York Cross-disciplinary Centre for Systems Analysis, said: “Our study represents a quantum leap forward in the field of structural virology, and closes gaps in our understanding of the structures of many viruses that are ill described by the existing framework.

Anti-viral strategies

“This theory will help scientists to analyse the physical properties of viruses, such as their stability, which is important for a better understanding of the mechanism of infection. Such insights can then be exploited for the development of novel anti-viral strategies.

“In particular the structures of larger and more complex viruses that are formed from multiple different components were previously not well understood.

“Our over-arching scheme reveals container architectures with protein numbers that are excluded by the current framework, and thus closes the size gaps in CKT.

Viral evolution

“The new blueprints also provide a new perspective on viral evolution, suggesting novel routes in which larger and more complex viruses may have evolved from simple ones at evolutionary timescales.”

Dr Antoni Luque, a theoretical biophysicist at San Diego State University and its Viral Information Institute, said: “We can use this discovery to target both the assembly and stability of the capsid, to either prevent the formation of the virus when it infects the host cell, or break it apart after it’s formed. This could facilitate the characterization and identification of antiviral targets for viruses sharing the same icosahedral layout.”

Materials provided by the University of York

Scientists solve the final part of the Sum of Three Cubes puzzle

A team of scientists from the University of Bristol and the Massachusetts Institute of Technology (MIT) has finally solved the last part of a famed 65-year-old mathematics puzzle by finding the answer for 42, the most elusive number.

The problem was set at the University of Cambridge in 1954. The aim was to find solutions to the equation x³ + y³ + z³ = k, where k could take any value from one to a hundred. This is a Diophantine equation, one in which the number of equations is smaller than the number of variables involved; in a mathematical sense, such equations define an algebraic curve or an algebraic surface. The equation gets its name from Diophantus of Alexandria, a mathematician who lived in the 3rd century, and the mathematical study of such problems is called Diophantine analysis.

The sum-of-three-cubes problem became quite interesting because the smaller solutions were found easily, but other answers could not be calculated as the satisfying numbers grew too large. Over time, with the help of modern computational techniques, each value of k was either solved or proved unsolvable, except for two numbers: 33 and 42.
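
For small values of k the search really is easy. The sketch below is purely illustrative (the helper name and search bound are my own; the actual hunts for 33 and 42 used far more sophisticated number-theoretic algorithms on supercomputers):

```python
# Naive search for x^3 + y^3 + z^3 = k: fix x and y, then solve for z exactly.
# Illustrative only; nothing like the sieves actually used for 33 and 42.

def sum_of_three_cubes(k, bound=200):
    """Return some (x, y, z) with x**3 + y**3 + z**3 == k, or None."""
    for x in range(-bound, bound + 1):
        for y in range(x, bound + 1):
            r = k - x**3 - y**3
            z = round(abs(r) ** (1 / 3)) * (1 if r >= 0 else -1)
            # check neighbours to absorb floating-point error in the cube root
            for c in (z - 1, z, z + 1):
                if c**3 == r:
                    return (x, y, c)
    return None

for k in (1, 2, 3, 29):
    print(k, sum_of_three_cubes(k))
```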

Professor Andrew Booker found a solution for k = 33 with the help of a university supercomputer, which left 42 as the last remaining number without a solution. Solving it was a task of higher complexity, so Professor Booker sought the help of Andrew Sutherland, an MIT maths professor and world record breaker in parallel computation. To solve it they enlisted Charity Engine, a “global” computer that harnesses unused, idle computing power from more than 500,000 PCs to create a crowd-sourced, super-green platform built from surplus capacity.

It took a million hours of computing time to find the numbers, which are as follows:
x = -80538738812075974, y = 80435758145817515, z = 12602123297335631.

This finally completes the solutions to the famous Diophantine equation for every number up to 100 for which a solution exists. Professor Booker, who is based at the School of Mathematics, University of Bristol, said he felt relieved, since there had been no certainty of finding anything at all. It bears a resemblance to predicting earthquakes, where only rough probabilities guide you: in problems like these, the solution might turn up within days of trying, or a hundred years might pass with no definitive answer.

Spherical aberration disk

Physicists came up with a solution to a 2,000-year-old ‘unsolvable’ optical problem

  • A group of physicists from the National Autonomous University of Mexico and Tec de Monterrey has found a solution to the Wasserman-Wolf problem (essentially, the problem of spherical aberration).
  • After months of work, González-Acuña, Chaparro-Romo and Gutiérrez-Vega came up with a lengthy mathematical equation that provides an analytical solution eliminating spherical aberration.
  • The solution involves fixing the shape of a second aspherical surface in addition to a first surface.

A group of physicists from the National Autonomous University of Mexico and Tec de Monterrey has found a solution to the Wasserman-Wolf problem, a 2,000-year-old optical problem. The scientists, Rafael González-Acuña, Julio Gutiérrez-Vega and Héctor Chaparro-Romo, describe the mathematics of the puzzle along with its applications and efficiency results in a paper published in the journal Applied Optics.

Diocles, a Greek scientist, identified the problem with optical lenses 2,000 years ago. He observed that when objects are viewed through devices equipped with optical lenses, the edges appear fuzzier than the centre. He attributed the effect to the lenses being spherical: light hitting different parts of a spherical surface is refracted by different amounts and cannot be brought to a single focus. Imperfections in the material and shape of the lens also contribute to light missing the target.
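
The effect is easy to see numerically with a toy model of my own (not the authors' method): trace rays parallel to the axis through a single spherical refracting surface using Snell's law, and watch the axis-crossing point creep toward the lens as the ray height grows.

```python
import math

# Spherical aberration at one spherical refracting surface (index n, radius R):
# rays far from the axis cross it closer to the surface than paraxial rays do.
# Toy model only; distances are measured from the ray's hit point and the
# surface sagitta is ignored.

R = 1.0   # radius of curvature of the surface
n = 1.5   # refractive index of the glass

def axis_crossing(h):
    """Axial distance at which a ray entering at height h crosses the axis."""
    alpha = math.asin(h / R)                 # incidence angle: the normal points at the centre
    theta = math.asin(math.sin(alpha) / n)   # refraction angle from Snell's law
    return h / math.tan(alpha - theta)       # geometry of the refracted ray

print(f"paraxial focus n*R/(n-1) = {n * R / (n - 1):.4f}")
for h in (0.01, 0.2, 0.4, 0.6, 0.8):
    print(f"h = {h:4.2f} -> crosses axis at {axis_crossing(h):.4f}")
```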

This is now known as spherical aberration, and it was long considered unsolvable; even physicists such as Isaac Newton and Gottfried Leibniz could not crack it.

Wasserman and Wolf described the problem analytically in 1949, and it has been termed the Wasserman-Wolf problem ever since. Their approach used two adjacent aspheric surfaces to correct the aberrations. Lensmakers and researchers have come up with a variety of approaches since then for producing sharp, uniform images by eliminating aberration, and several companies have advanced lens design and manufacturing, primarily with aspherical lenses. These improve the image but have generally been very expensive and inconvenient to manufacture.

But after months of work, González-Acuña, Chaparro-Romo and Gutiérrez-Vega managed to come up with a lengthy mathematical equation that provides an analytical solution eliminating spherical aberration. The equation looks incomprehensible to the layperson, but it applies to a lens of any size: given a first surface, it fixes the shape of the second aspherical surface. The solution is independent of the material, size and application of the lens, and provides the exact specifications for an optically perfect lens.

This breakthrough will cure the headache of photographers who were desperate for perfect images but could not get them no matter how much money they spent. It will also help scientific observation, where better images lead to more accurate results. Ordinary users will benefit too, with higher-quality images from their smartphones and cameras.

Journal Reference: Applied Optics

Bubble rings

Solar Flares, Bubble Rings, and Ink Chandeliers

Engineers from Caltech have generated a computer simulation of underwater bubble rings that is so realistic it is virtually indistinguishable from a video of the real thing. The point of the research, however, lies well beyond creating next-generation computer graphics. Instead, its developers hope that the simulation can shed light on the mathematics and forces that govern such phenomena.

“When we make these abstractions, we still want to capture some fundamental truth about the universe,” says Peter Schröder, the Shaler Arthur Hanisch Professor of Computer Science and Applied and Computational Mathematics in the Division of Engineering and Applied Science, whose team built the simulation.

Bubble rings are most often seen in online videos created by scuba divers, who puff the rings out in a manner similar to smokers blowing smoke rings. Schröder’s team simulated two underwater rings of air merging and then snapping apart again. They will present their work at the International Conference and Exhibition on Computer Graphics & Interactive Techniques (SIGGRAPH), to be held in Los Angeles from July 28 to August 1.

The project began with an attempt to better understand solar flares, which are enormous loops of plasma that blast out from the surface of the sun. The creation and growth of solar flares is governed by a number of complex forces (for example, the sun’s strong magnetic field) that make modeling and understanding them difficult, Schröder says.

Because of the complexity of the problem, Schröder’s team wound up breaking it into smaller individual pieces. “If you look at videos of the sun and the eruptions on the surface of the sun, you can see these large arcs of what are called flux ropes,” Schröder says. “They twist and then turn on themselves, and there are violent events where the loop runs into itself. This is a little bit similar to when two vortex filaments—two bubble rings—meet. There is a very violent reaction where they merge or a section gets pinched off, with waves traveling along the bubble ring.”

A similar, but gentler version can be created by dropping ink into water. The ink forms a twisting doughnut shape that then branches off into what are called ink chandeliers.

Solar flares, bubble rings, and ink chandeliers all have one thing in common: twisting, fluid doughnut-like shapes that rotate around a center line. Thus, effectively modeling one of the phenomena should offer partial insight into the others, Schröder says.

“The hope of our work is to provide geometric insights. We are always looking at the geometry of things and of their behavior—in this case, we are looking at the geometry of this center curve of the bubble ring traveling and the changes in its thickness,” he says.

To measure how closely his computer simulations model reality, Schröder compares them side-by-side with videos of the real thing. Even if he doesn’t have every possible variable pinned down perfectly, if the two videos are virtually indistinguishable, he’s probably doing something right.

“Now, we have a visual comparison because there’s no way of saying, ‘Wow, you got that exactly right.’ It’s too complicated to verify quantitatively; there are too many variables. But when you see the visualization, the eye will say, ‘Yes, this is a match,'” he says. “I invite anybody to look at it and tell me whether they think it’s a match. We feel pretty good about it.”

That qualitative—rather than quantitative—analysis can seem inexact by a mathematician’s standards, but it has led to some wonderful “a-ha” moments for Schröder, when the computer simulation looks so realistic that he can be confident that the mathematics he used to build it are the ones that govern real-world phenomena.

“What drives me is finding these beautiful descriptions of something that looks terribly complicated but can be reduced to a few mathematical key concepts. Then the rest just follows from there. There’s beauty in seeing that a very simple principle all of a sudden gives rise to the complex appearance we perceive,” Schröder says.

Materials provided by the California Institute of Technology

Katie Bouman, the woman behind the first black hole image

Know the woman behind the first photo of a black hole: Katie Bouman

On Wednesday, scientists released the first-ever images of a black hole, an object we have known about for a very long time but had never managed to photograph. This remarkable achievement was made possible largely by the algorithms created by a 29-year-old computer scientist, Katie Bouman.

Katie Bouman, who holds a doctorate from the Massachusetts Institute of Technology in Electrical Engineering and Computer Science, developed an algorithm named Continuous High-resolution Image Reconstruction using Patch priors, or CHIRP. This algorithm, along with CLEAN, helped in obtaining the image of the black hole inside the galaxy Messier 87.

She grew up in West Lafayette, Indiana, and first learned about the Event Horizon Telescope in 2007, while she was still in school. She went on to study Electrical Engineering at the University of Michigan and then earned a Master’s degree at the Massachusetts Institute of Technology, where she eventually completed her doctorate. Her master’s thesis at MIT was awarded the Ernst Guillemin award for best thesis. After that she joined Harvard University as a fellow on the imaging team of the Event Horizon Telescope.

The Event Horizon Telescope, a collection of eight interlinked telescopes located in various parts of the world from Hawaii to Antarctica, captured the black hole using a technique called interferometry. The data from these telescopes were collected on hard drives and sent to a central processing centre. Dr. Bouman led the testing process, in which the algorithms were run with various assumptions fed into them to extract the image from the data. Even that did not end the process: the results produced by the algorithms were separately checked by several teams for final verification. The black hole is larger than our entire solar system; it measures 40 billion kilometres across, about three million times the size of our planet.
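
A toy illustration of why such reconstruction algorithms are needed at all (a sketch of the general aperture-synthesis idea, not of CHIRP itself; the ring "sky" and the 2% sampling rate are invented for the demo):

```python
import numpy as np

# An interferometer measures only scattered samples of the sky image's
# 2-D Fourier transform. Naively inverting those sparse samples yields a
# corrupted "dirty image": the gap that algorithms like CHIRP and CLEAN fill.

rng = np.random.default_rng(0)
N = 128

# Synthetic "sky": a bright ring, loosely evoking a black-hole shadow.
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
sky = np.exp(-((np.hypot(x, y) - 20.0) ** 2) / 18.0)

# Keep roughly 2% of the Fourier plane, mimicking sparse telescope baselines.
vis = np.fft.fftshift(np.fft.fft2(sky))
mask = rng.random((N, N)) < 0.02
dirty = np.abs(np.fft.ifft2(np.fft.ifftshift(vis * mask)))

print("fraction of Fourier plane sampled:", round(float(mask.mean()), 4))
print("true peak:", round(float(sky.max()), 3), " dirty-image peak:", round(float(dirty.max()), 3))
```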

Dr. Katie Bouman acknowledged the efforts of all the researchers, mathematicians and engineers on the project, saying that it was because of this collaboration that a task once thought impossible was finally achieved.

Bouman will now start her new job as an assistant professor at the California Institute of Technology, but that does not end her journey with black holes. She plans to keep working with the Event Horizon team to produce video of black holes in addition to the existing images.

Karatsuba during a lecture

Mathematicians find an amazing trick to multiply bigger numbers

Multiplication tables, an amazing aid pioneered some 4,000 years ago by the Babylonians, are the handiest tool when it comes to multiplication. They are good for multiplying small numbers, but for bigger numbers, without calculators and computers, things get tedious and we fall back on the old-school method of long multiplication. This method, though very accurate, is slow: for every single digit in each number in the problem, you need to perform a separate multiplication operation before adding all the products up.

Long multiplication is an algorithm, but not an efficient one, since the process is unavoidably painstaking. The problem is that as the numbers get bigger, the amount of work grows with the square of the number of digits, n².

Long multiplication was the most advanced multiplication algorithm we had until the 1960s, when the Russian mathematician Anatoly Karatsuba showed that roughly n raised to the power 1.58 operations suffice.
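
The saving comes from combining the halves of each number so that three recursive multiplications do the work of four. A minimal Python sketch of the idea (splitting in base 10 for readability; real implementations split on machine words):

```python
def karatsuba(x, y):
    """Multiply two non-negative integers with Karatsuba's method.

    Splitting x = a*10^m + b and y = c*10^m + d, the product needs only
    three recursive multiplications (ac, bd, and (a+b)(c+d)) instead of
    four, giving roughly n^1.58 digit operations instead of n^2.
    """
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    a, b = divmod(x, 10 ** m)
    c, d = divmod(y, 10 ** m)
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    cross = karatsuba(a + b, c + d) - ac - bd   # equals ad + bc
    return ac * 10 ** (2 * m) + cross * 10 ** m + bd

assert karatsuba(12345678, 87654321) == 12345678 * 87654321
```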

About a decade later, a pair of German mathematicians produced another breakthrough: the Schönhage–Strassen algorithm. In Harvey’s words, “They predicted that there should exist an algorithm that multiplies n-digit numbers using essentially n * log(n) basic operations.” Harvey posted his research paper before peer review, which is standard practice in mathematics for disseminating results.

Building on the Schönhage–Strassen approach, and with the new theoretical proof, such a multiplication would theoretically take less than 30 minutes, and the new method might be the fastest multiplication algorithm that is mathematically possible. The researchers have no idea how big a number the method could be pushed to, but the one they give in the paper equates to 10214857091104455251940635045059417341952, which is a very big number indeed.

Fürer of Penn State University told Science News that around a decade ago he himself tried to improve the Schönhage–Strassen algorithm but eventually gave up, since it seemed quite hopeless to him. Now that mathematicians can verify the new result, his hopelessness has diminished.

In the meantime, Harvey and Joris van der Hoeven of École Polytechnique in France say that their algorithm still needs optimisation, and they admit to feeling anxious in case the results turn out to be wrong.

Godfathers of AI

The Godfathers of AI receive the prestigious Turing Award

The 2018 Turing Award, acknowledged as the “Nobel Prize of computing”, has been awarded to a trio of researchers who laid the foundations for the current boom in artificial intelligence.

ARTIFICIAL INTELLIGENCE:
The term artificial intelligence refers simply to intelligence demonstrated by computers. The field is renowned for its cycles of boom and bust, and the issue of hype is as old as the field itself. When research fails to meet inflated expectations, funding and interest freeze in what is known as an “AI winter”. It was at the tail end of one such winter in the late 1980s that Bengio, Hinton, and LeCun began exchanging ideas and working on interconnected problems, including neural networks: computer programs made from connected digital neurons that have become a key building block of modern AI.

WINNERS:
Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, often referred to as the ‘godfathers of AI’, have been recognized with the $1 million annual prize for their work developing the AI subfield of deep learning. The techniques the trio developed in the 1990s and 2000s enabled huge breakthroughs in tasks like computer vision and speech recognition. Their work underpins the current proliferation of AI technologies, from self-driving cars to automated medical diagnosis.

All three have since taken up prominent places in the AI research ecosystem, straddling academia and the tech industry. Hinton splits his time between Google and the University of Toronto; Bengio is a professor at the University of Montreal and has co-founded an AI company called Element AI; LeCun is Facebook’s chief AI scientist and a professor at NYU.

Google’s head of AI, Jeff Dean, also praised the trio’s achievements. “Deep neural networks are responsible for some of the greatest advances in modern computer science,” said Dean in a statement.

Let us hope that people like this trio keep improving AI, and that all of us use it in the right way. Tell us your view on AI, and what you think about its future, in a quick comment.

Srinivasa Ramanujan

Ramanujan: The man who knew Infinity

Srinivasa Ramanujan was one of India’s greatest mathematical geniuses. He made substantial contributions to the analytical theory of numbers and worked on elliptic functions, continued fractions, and infinite series.

Born on 22 December 1887 and dying on 26 April 1920, Ramanujan was a mathematician who lived during British rule in India. Although he had almost no formal training in mathematics, he made substantial contributions to mathematical analysis, number theory, infinite series, and continued fractions, including solutions to mathematical problems considered unsolvable.

Early life

When he was nearly five years old, Ramanujan entered primary school in Kumbakonam, though he would attend several different primary schools before entering the Town High School in Kumbakonam in January 1898. At the Town High School, Ramanujan did well in all his school subjects and showed himself an able, all-round scholar. In 1900 he began to work on his own on mathematics, summing geometric and arithmetic series.

Ramanujan was shown how to solve cubic equations in 1902, and he went on to find his own method to solve the quartic. The following year, not knowing that the quintic could not be solved by radicals, he tried (and of course failed) to solve the quintic.

It was in the Town High School that Ramanujan came across a mathematics book by G. S. Carr called A Synopsis of Elementary Results in Pure Mathematics. This book, with its very concise style, allowed Ramanujan to teach himself mathematics; however, the style of the book was to have a rather unfortunate effect on the way Ramanujan later wrote mathematics down, since it provided the only model he had of written mathematical arguments. The book contained theorems, formulae, and short proofs. It also contained an index of papers on pure mathematics published in the European journals of learned societies during the first half of the nineteenth century. The book, published in 1856, was of course well out of date by the time Ramanujan used it.

By 1904 Ramanujan had begun to undertake deep research. He investigated the series ∑(1/n) and calculated Euler’s constant to 15 decimal places. He began to study the Bernoulli numbers, although this was entirely his own independent discovery.
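
For reference, Euler’s constant is the limiting gap between the partial sums of that series and the natural logarithm (the standard definition, not taken from Ramanujan’s notebooks):

```latex
\gamma \;=\; \lim_{n \to \infty} \left( \sum_{k=1}^{n} \frac{1}{k} \;-\; \ln n \right) \;\approx\; 0.5772156649\ldots
```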

On the strength of his good school work, Ramanujan was given a scholarship to the Government College in Kumbakonam, which he entered in 1904. But the following year his scholarship was not renewed, because Ramanujan devoted more and more of his time to mathematics and neglected his other subjects. Without money he was soon in difficulties and, without telling his parents, he ran away to the town of Vizagapatnam, about 650 km north of Madras. He continued his mathematical work, however, working on hypergeometric series and investigating relations between integrals and series. He was to discover later that he had been studying elliptic functions.

Adulthood

In 1906 Ramanujan went to Madras, where he entered Pachaiyappa’s College. His aim was to pass the First Arts examination, which would have allowed him to be admitted to the University of Madras. He attended lectures at Pachaiyappa’s College but became ill after three months of study. He took the First Arts examination after having left the course; he passed in mathematics but failed all his other subjects, and therefore failed the examination. This meant that he could not enter the University of Madras. In the following years he worked on mathematics, developing his own ideas without anyone to help him and without any real idea of the research topics of the day other than what Carr’s book provided.

Godfrey Harold Hardy, Ramanujan’s Mentor. Image Source: Wikipedia

Continuing his mathematical work, Ramanujan studied continued fractions and divergent series in 1908. At this stage he became seriously ill again and underwent an operation in April 1909, after which it took him some considerable time to recover. He married on 14 July 1909, when his mother arranged for him to marry a ten-year-old girl, S. Janaki Ammal. Ramanujan did not live with his wife, however, until she was twelve years old.

Ramanujan continued to develop his mathematical ideas and began to pose and solve problems in the Journal of the Indian Mathematical Society. He developed relations between elliptic modular equations in 1910. After he published a brilliant research paper on Bernoulli numbers in 1911 in the Journal of the Indian Mathematical Society, he gained recognition for his work. Despite his lack of a university education, he was becoming well known in the Madras area as a mathematical genius.

The pursuit of a career in mathematics

In 1911 Ramanujan approached the founder of the Indian Mathematical Society for advice about a job. After this he was appointed to his first job, a temporary post in the Accountant General’s office in Madras. It was then strongly suggested that he approach Ramachandra Rao, a Collector at Nellore and a founder member of the Indian Mathematical Society who had helped start the mathematics library.

Ramachandra Rao told him to return to Madras, and he tried, unsuccessfully, to arrange a scholarship for Ramanujan. In 1912 Ramanujan applied for the post of clerk in the accounts section of the Madras Port Trust.

Despite having no university education, Ramanujan was clearly known to the university mathematicians in Madras, for with his letter of application he enclosed a reference from E. W. Middlemast, the Professor of Mathematics at the Presidency College in Madras and a graduate of St John’s College, Cambridge.

On the strength of the recommendation, Ramanujan was appointed to the post of clerk and began his duties on 1 March 1912. Ramanujan was quite lucky to have a number of people working around him with a training in mathematics. In fact the Chief Accountant for the Madras Port Trust, S. N. Aiyar, was trained as a mathematician and published a paper on the distribution of primes in 1913 based on Ramanujan’s work. The professor of civil engineering at the Madras Engineering College, C. L. T. Griffith, was also interested in Ramanujan’s abilities and, having been educated at University College London, knew the professor of mathematics there, namely M. J. M. Hill. He wrote to Hill on 12 November 1912, sending some of Ramanujan’s work and a copy of his 1911 paper on Bernoulli numbers.

Hill replied in a fairly encouraging way, but showed that he had failed to understand Ramanujan’s results on divergent series. The recommendation that Ramanujan read Bromwich’s Theory of Infinite Series did not please Ramanujan much. Ramanujan wrote to E. W. Hobson and H. F. Baker trying to interest them in his results, but neither replied. In January 1913 Ramanujan wrote to G. H. Hardy, having seen a copy of his 1910 book Orders of Infinity.

Mathematical achievements

On 18 February 1918 Ramanujan was elected a fellow of the Cambridge Philosophical Society, and three days later came the greatest honour he would receive: his name appeared on the list for election as a Fellow of the Royal Society of London. He had been proposed by an impressive list of mathematicians, namely Hardy, MacMahon, Grace, Larmor, Bromwich, Hobson, Baker, Littlewood, Nicholson, Young, Whittaker, Forsyth and Whitehead. His election as a Fellow of the Royal Society was confirmed on 2 May 1918, and on 10 October 1918 he was elected a Fellow of Trinity College Cambridge, the fellowship to run for six years.

The honours bestowed on Ramanujan seemed to help his health improve a little, and he renewed his efforts at producing mathematics. By late November 1918, Ramanujan’s health had greatly improved.

Ramanujan sailed to India on 27 February 1919, arriving on 13 March. But his health was very poor and, despite medical treatment, he died there the following year.

Posthumous recognition

The letters Ramanujan wrote to Hardy in 1913 contained many fascinating results. Ramanujan worked out the Riemann series, the elliptic integrals, hypergeometric series and the functional equations of the zeta function. On the other hand, he had only a vague idea of what constitutes a mathematical proof: despite many brilliant results, some of his theorems on prime numbers were completely wrong.

Riemann Zeta Function

This image shows a plot of the Riemann zeta function along the critical line for real values of t running from 0 to 34. The first five zeros in the critical strip are clearly visible as the place where the spirals pass through the origin. (Source: wikimedia.org)

Ramanujan independently discovered results of Gauss, Kummer and others on hypergeometric series. His own work on partial sums and products of hypergeometric series led to major developments in the topic. Perhaps his most famous work was on the number p(n) of partitions of an integer n into summands. MacMahon had produced tables of the value of p(n) for small numbers n, and Ramanujan used this numerical data to conjecture some remarkable properties, some of which he proved using elliptic functions. Various other results were proved posthumously.

In a joint paper with Hardy, Ramanujan gave an asymptotic formula for p(n). It had the remarkable property that it appeared to give the correct value of p(n), and this was later proved by Rademacher.
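
The quality of the approximation is easy to check numerically. In the sketch below (the helper names are mine), the exact values of p(n) come from a routine dynamic program, not Ramanujan’s methods, and the estimate is the leading term of the Hardy-Ramanujan formula:

```python
import math

def partitions(n):
    """Exact partition numbers p(0..n) by dynamic programming."""
    p = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            p[total] += p[total - part]
    return p

def hardy_ramanujan(n):
    """Leading-order asymptotic: p(n) ~ exp(pi*sqrt(2n/3)) / (4n*sqrt(3))."""
    return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))

p = partitions(200)
for n in (10, 50, 100, 200):
    print(n, p[n], round(hardy_ramanujan(n)))
```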

What is the Hardy–Ramanujan number?

1729 is known as the Hardy–Ramanujan number after a famous anecdote of the British mathematician G. H. Hardy regarding a visit to the hospital to see the Indian mathematician Srinivasa Ramanujan. In Hardy’s words: “I remember once going to see him when he was ill at Putney. I had ridden in taxi cab number 1729 and remarked that the number seemed to me rather a dull one, and that I hoped it was not an unfavorable omen. ‘No,’ he replied, ‘it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways.’” The two different ways are these: 1729 = 1³ + 12³ = 9³ + 10³.
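
Ramanujan’s claim is easy to verify by brute force (a small illustrative search; the bound of 50 is an arbitrary choice):

```python
from collections import defaultdict

# Find the smallest number expressible as a sum of two positive cubes
# in two different ways.
sums = defaultdict(list)
for a in range(1, 50):
    for b in range(a, 50):
        sums[a**3 + b**3].append((a, b))

taxicab = min(n for n, ways in sums.items() if len(ways) >= 2)
print(taxicab, sums[taxicab])   # 1729 [(1, 12), (9, 10)]
```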


Wrapping up!

Ramanujan left a number of unpublished notebooks filled with theorems that mathematicians have continued to study. G. N. Watson, Mason Professor of Pure Mathematics at Birmingham from 1918 to 1951, published fourteen papers under the general title Theorems stated by Ramanujan, and in all he published nearly thirty papers inspired by Ramanujan’s work. Hardy passed on to Watson a large number of Ramanujan’s manuscripts that he had, both those written before 1914 and some written in Ramanujan’s last year in India before his death. Ramanujan left behind three notebooks and a bundle of pages (also referred to as the “lost notebook”) containing many unpublished results that mathematicians have continued to verify long after his death.

Read More:

  1. Ramanujan’s lost notebook
  2. Ramanujan surprises again
  3. Mathematical proof reveals magic of Ramanujan’s genius

Machine Learning and Artificial Intelligence

The Future of Computing and Artificial Intelligence

The advent of computers propelled society forward and spurred into action many businesses and industries that would otherwise have been impossible. Computers and computer technology changed the way humans work and function. The very first computer was a behemoth; today, everything you need is available on a computer through the internet, and even the most complex computations can be carried out by these modern wonders. But what does the future have in store for computers? Are there more frontiers for these brilliant systems to face and overcome, and as a result become even more computationally powerful? Let’s find out.

Computers are already ubiquitous, but they can become even more pervasive. Computers sit on desks and countertops and travel in bags and pockets, but soon they might be part of everything imaginable. Yes, everything conceivable, even the tiniest of devices and commodities, might soon have a computer embedded within it. There is increasing demand for products that can carry out the computations they need on their own, and this has driven the need for devices with embedded computers.

Some people perceive the final goal of computing to be ensuring that computers are inextricable from society and human life. Computers would be so involved in every process that the very thought of their non-existence would make life seem impossible. The aim is for computers to be so entangled with human life as to be indistinguishable from it.

The notion going around that the future of computing is limited to artificial intelligence is incorrect. Artificial intelligence is still burgeoning and is expected to become an essential part of day-to-day life, but the future of computing is not limited to artificial intelligence alone; there is much more to computing.

Artificial Intelligence

Image Source: Mike MacKenzie (Flickr)

The first challenge in artificial intelligence is the definition of intelligence itself. Intelligence has long been considered subjective, and each person’s interpretation and definition of intelligence is generally different. Is a machine intelligent if it is able to communicate? Is it intelligent if it is able to learn? Or is it intelligent if it can merely solve problems? One way to measure intelligence is through processing power. Machines can potentially gain computing power greater than the processing power of all human beings put together. This situation is termed the Singularity; the term was coined by John von Neumann and refers to the point beyond which human affairs, as we know them, could not continue and computers would take over completely.

Potential threats

Low-end repetitive tasks have already been taken over by computing systems that automate them. The need for automation has made it imperative that we develop the computing potential of machines, but this growth has already caused a lot of concern, chiefly over whether machines will someday surpass human intelligence. Currently, human intelligence, in terms of processing power, is considered to be 100,000 times greater than that of the most powerful computer. But it is estimated that, at their current rate of growth, computers could reach the processing power of humans within the next decade, and that by 2050 they would have more processing power than the sum total of all human brains combined. That is a scary thought indeed.

So, computers can take away our jobs and surpass our intelligence; what does that suggest? Is the world doomed? Not quite. Although one cannot quite rule out world domination by superintelligent computers and artificial intelligence, it is unlikely to happen, owing to the complexity brake. Scientists argue that computers and systems cannot grow to gain such superintelligence because, as processing power grows, complexity increases, and the complexity would reach a level that restrains further growth of intelligence. Even if computers somehow broke free from the shackles of the complexity brake, it is hypothesized that there would be enough ‘obedient’ intelligent systems to keep the ‘misbehaving’ ones in check.

Let’s now get back to defining artificial intelligence. Since intelligence cannot be precisely quantified, a generalist definition of artificial intelligence would be “machines and computers that carry out computations and tasks that would otherwise be carried out by humans”. This definition clearly does not capture the whole picture, but it is a reasonable attempt at defining A.I.

Evolution of Artificial Intelligence

The evolution of artificial intelligence can be viewed in terms of the kinds of problems it set out to solve. The early works in artificial intelligence focused on formal tasks, such as game playing and theorem proving. Checkers-playing programs and chess-playing algorithms were the earliest works in the field, and theorem provers such as ‘The Logic Theorist’ aimed at proving theorems.

Image Source: Pixabay

A.I. then began to be applied to common-sense reasoning problems, simulating the human ability to make presumptions about everyday activities. Reasoning about physical objects and their relationships, and about actions and their consequences, were the problems that came under this field.

As A.I. began to grow, it started being built for perception problems. These problems are especially difficult because they involve analog data such as speech and vision.

A.I. is now being used for natural language understanding and natural language processing. Humans possess the ability to communicate through expressions and languages; perceiving languages and their various constructs, and being able to process them, are all problems that A.I. was created to solve. These fields are still not fully developed, but systems that carry out natural language processing with a high degree of accuracy are already in place.

A.I. is now being used in tasks where human experts are needed. These tasks, such as engineering design, scientific discovery, medical diagnosis and financial planning are being carried out using artificial intelligence with a great degree of accuracy and precision. Expert systems are computer systems that use inferences and knowledge bases to mimic the decision making ability of human experts.

Domains of Artificial Intelligence

Artificial intelligence consists of countless domains, far too many to list them all. Here are some of the interesting domains of artificial intelligence:

Heuristic

Heuristic techniques find approximate solutions to problems instead of exact ones. They are generally used when the time for computing a solution is restricted, or when it is computationally too expensive to search for the globally optimal solution.
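
A classic example is the nearest-neighbour heuristic for the travelling salesman problem, sketched below with made-up random cities: it builds a tour quickly, but offers no guarantee of global optimality.

```python
import math, random

# Nearest-neighbour tour construction: always hop to the closest
# unvisited city. Fast and usually decent, never provably optimal.

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(30)]

def nearest_neighbour_tour(cities):
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = nearest_neighbour_tour(cities)
length = sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
             for i in range(len(tour)))
print("tour length:", round(length, 3))
```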

Statistical reasoning

This form of reasoning involves using probabilities and probabilistic theorems to determine most likely occurrences. Logical rules based on probabilities are created to determine what event is more likely to occur, given certain conditions.
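
Bayes’ theorem is the workhorse here. A small worked example (the sensitivity, false-positive rate and 1% prevalence are invented for illustration):

```python
# Bayes' theorem: P(condition | positive) =
#   P(positive | condition) * P(condition) / P(positive).

p_cond = 0.01              # prior probability of the condition
p_pos_given_cond = 0.99    # sensitivity of the test
p_pos_given_healthy = 0.05 # false-positive rate

p_pos = p_pos_given_cond * p_cond + p_pos_given_healthy * (1 - p_cond)
posterior = p_pos_given_cond * p_cond / p_pos
print(round(posterior, 3))   # ~0.167: a positive test is far from certain
```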

Natural language understanding and processing

Natural language understanding involves the interpretation of natural language spoken by humans. Natural language processing, on the other hand, involves both interpretation and generation of natural language spoken by humans.

Expert systems

Expert systems carry out tasks that would otherwise require human experts. These systems use inferences, generally if-then rules, together with knowledge of previous problems and the inferences derived from them, to provide solutions to the problem at hand.
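
A toy forward-chaining engine shows the mechanics (the rules and facts are made up; real expert systems have far richer knowledge bases):

```python
# Forward chaining: fire if-then rules on the known facts until no new
# conclusion can be derived.

rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "short_of_breath"}))
```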

Fuzzy logic systems

Fuzzy logic systems remove the notion of crisp boundaries and use fuzzy sets instead of crisp sets. In fuzzy logic, everything is viewed in terms of degree of membership. The hotness of a day is generally not quantifiable, but fuzzy logic aims to quantify such notions using membership degrees: a temperature of 45°C, which most people would perceive as hot, could belong to a set called ‘Cold’ to a certain degree (a small membership value) and to a set called ‘Hot’ to a certain degree (a large membership value).
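
The temperature example can be made concrete with simple ramp-shaped membership functions (the breakpoints below are arbitrary choices, not standard values):

```python
# Fuzzy membership: a temperature belongs to 'Cold' and 'Hot' to degrees
# between 0 and 1 rather than crossing a crisp boundary.

def hot(t_celsius):
    """0 below 10 degrees C, rising linearly to 1 at 50 degrees C."""
    return min(1.0, max(0.0, (t_celsius - 10) / 40))

def cold(t_celsius):
    """1 at 0 degrees C, falling linearly to 0 at 50 degrees C."""
    return min(1.0, max(0.0, (50 - t_celsius) / 50))

for t in (5, 25, 45):
    print(f"{t}C  cold={cold(t):.2f}  hot={hot(t):.2f}")
```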

Genetic algorithms

Genetic algorithms mimic biological evolution, selecting, recombining and mutating candidate solutions to solve problems. Related nature-inspired techniques are widely used in computation: artificial neural networks consist of nodes that are intricately interconnected, just like biological neural networks, with the outputs of each layer propagated to the next, and are used to solve many complex problems; ant colony optimization mimics real ant colonies to solve problems such as finding the shortest paths.
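
A minimal genetic algorithm looks like this (the ‘OneMax’ target of all-ones bit-strings, the population size and the rates are toy choices of mine): candidate solutions are selected by fitness, recombined, and mutated over generations.

```python
import random

# Evolve bit-strings toward all ones ("OneMax"): selection, one-point
# crossover, and point mutation, the same ingredients used on far
# harder problems.

random.seed(0)
LENGTH, POP, GENERATIONS = 30, 40, 60

def fitness(bits):
    return sum(bits)

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for gen in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[: POP // 2]                    # truncation selection
    children = []
    while len(children) < POP - len(survivors):
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, LENGTH)          # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.3:                  # point mutation
            i = random.randrange(LENGTH)
            child[i] ^= 1
        children.append(child)
    pop = survivors + children

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "of", LENGTH)
```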

Evaluation of artificial intelligence: Turing test

Turing Test Illustration. Image Source: Wikipedia

Alan Turing hypothesized that a computer could achieve intelligence equivalent to that of a human being, or at least intelligence indistinguishable from human intelligence, if it could successfully lead a human being to believe that the machine was human. The Turing test is renowned as the basis for classifying a machine as either intelligent or not intelligent.

The Turing test involves a human evaluator who is responsible for determining which of the two entities he is talking to is actually human. The evaluator is allowed to ask a series of questions, which are put to both the human being and the machine. If the evaluator fails to determine from the responses which one is human and which is a machine, then the machine is said to have successfully fooled the evaluator and is deemed intelligent.

The Turing test is said to have set the benchmark for evaluating systems that may be intelligent.

Masayoshi Son, the CEO of the Japanese conglomerate SoftBank, once said: “I believe this artificial intelligence is going to be our partner. If we misuse it, it will be a risk. If we use it right, it will be our partner”. It is absolutely imperative that we tread carefully in matters pertaining to artificial intelligence, and important to ensure that humans do not lose control over it; the result of losing control could be catastrophic. There is a need to impose regulations that prevent detrimental outcomes from the development of artificial intelligence. The field has immense capabilities and should be allowed to grow, under regulated circumstances. Artificial intelligence need not have sinister outcomes; it is the responsibility of humanity to monitor A.I. and let it grow.

Read More:

  1. A Technical Overview of AI & ML (NLP, Computer Vision, Reinforcement Learning) in 2018 & Trends for 2019
  2. Bill Gates: These breakthrough technologies are going to profoundly change the world