The curious world of Artificial Intelligence
What is intelligence? It could be defined as the process of learning, comparing a situation with previous experience, making a decision, and acting upon it. We are able to make more complicated decisions as we learn from experience. So, intelligence can be described as the ability to learn, understand, and make judgments or form opinions based on reason. Just imagine a machine capable of thinking and learning the way you do. From that idea germinates the concept of Artificial Intelligence (AI), which is a major branch of science today.
Such imaginings about artificial intelligence have existed in most human civilizations. Human beings dreamed of objects with superhuman powers that could look and act like humans. In Greek mythology, there is the character Talos of Crete, a giant bronze man who guarded Europa on the island of Crete by throwing stones at the ships of unwanted visitors. According to the myth, the giant automaton circled the island's shores three times a day to protect Europa from invaders and pirates. Such fictitious characters appeared in different periods, and we all know the stories of Frankenstein and R.U.R (Rossum's Universal Robots).
However, the tales of imagination slowly gave way to reality, thanks to the advent of science and technology. For many years, scientists have been developing programs that enable machines to mimic human cognition, thus enabling them to think and learn. Over the years, we have made considerable progress, and this area of research is called Artificial Intelligence (AI). Now, when we search Google for information or seek the help of Google Maps while driving, we are actually utilizing AI, and it has become an integral part of our lives in many ways, knowingly or unknowingly. Similarly, content recommendations on platforms like Netflix or YouTube based on your earlier search history, product recommendations on e-commerce sites like Amazon, and even targeted advertising on social media platforms like Facebook are examples of AI at work in our daily lives, though we may not be aware of it.
So, Artificial Intelligence (AI) can be defined as the ability of a computer program or a machine to 'think and learn'. This means computers performing certain tasks we associate with our minds, such as learning and problem solving. However, they do not do this the same way we do: we use our own powers of discrimination in taking a decision, whereas a computer's decision is data oriented, made on the basis of the data provided to it.
Nowadays, artificial intelligence (AI) is commonly used in many areas, including healthcare, education, manufacturing, banking and financial markets, supply chains, and retail and e-commerce, to name a few, and in this way AI has been an important enabler of many new business innovations. There are also other applications, including automated interfaces for visual perception, speech recognition, decision-making, and translation between languages, through which AI has developed into an interdisciplinary science.
Origin of the term AI
The term Artificial Intelligence was first coined in 1956 at the Dartmouth Conference, organised by a team under American computer scientist John McCarthy, who was working on the problem of making computers intelligent. Many computer scientists and others interested in this field attended this month-long brainstorming session at Dartmouth College in the USA, where they wrote programs. The US Department of Defense then began funding AI research generously, and labs were set up all over the world. However, many researchers failed to take the field forward as expected, and governments hesitated to fund it as there was no considerable progress. What followed was a period known as the 'AI winter', during which little major research was done for some years. The field was revived in the 1980s, when new, faster computers were developed, prompting the US and British governments to fund the research again. However, the funding stopped once more when the market for these machines collapsed in 1987. Then, in the 1990s and early 2000s, the field revived again with the advent of information technology in areas like data mining and medical diagnosis. This was followed by some historic milestones. In 1997, Deep Blue, a chess-playing computer developed by IBM, defeated world chess champion Garry Kasparov. Then, in 2011, IBM's Watson defeated Brad Rutter and Ken Jennings, the top two players of the popular American TV quiz program Jeopardy! Such developments made artificial intelligence popular, thus promoting research in this field.
The Watson computer was able to answer questions by processing massive amounts of structured and unstructured data. Computers have also achieved this level of skill when researchers trained them to play and beat old video games, with the learned experience stored in their memory.
[Image: An industrial robot]
Then there are industrial robots, like the Fanuc industrial robots, which are designed to perform tasks in collaboration with workers in industrial sectors. There are also intelligent personal assistants (IPAs), including Siri on the iPhone and Cortana on Windows, Android, and iPhones. They listen to your requests for information and perform tasks; they can carry out requests such as adding an event to your calendar or dialling the phone number of a friend. These applications combine speech recognition, image matching, natural language processing (NLP), and question-and-answer systems. One interesting area is Conversational AI, a set of technologies behind automated messaging and speech-enabled applications that offer human-like interactions between computers and humans.
Human intelligence uses a combination of different cognitive processes to adapt to new environments, whereas Artificial Intelligence aims to build machines that can mimic human behavior and perform human-like actions. This means AI is objective in decision-making, because its actions are based on the analysis of accumulated data and the specific instructions fed into the system. Humans, on the other hand, rely on the brain's computing power, memory, and ability to think.
We all know that technology has become an integral part of our lives, and there are many innovations in different segments of industry where AI plays a major role. It has helped automate many fields, thus reducing human effort, and there is certainly more to come. According to a report by Artificial Solutions, a specialist in Conversational AI, this technology will outperform humans in many activities in the coming years, from translating languages to working as a surgeon.
To be intelligent, a machine should have sufficient memory, or space to store experiences and information, so that it can apply those experiences to new situations by processing the information and comparing experiences to reach logical conclusions. All of this is done at amazing speed, but it is not the same as human thought. However, in an attempt to make machines 'think', scientists are now creating new software programs that try to copy the functioning of the brain. One such program is called a neural network. Neural networks are also known as artificial neural networks (ANNs) or simulated neural networks (SNNs). Their name and structure are inspired by the human brain, mimicking the way that biological neurons signal to one another, according to experts in the field.
We all know our brain is composed of billions of cells called neurons, each of which works like a tiny individual switch. Biological neurons, also called nerve cells, are the fundamental units of the brain and nervous system. These cells are responsible for receiving sensory input from the external world, for sending motor commands to our muscles, and for transforming and relaying the electrical signals at every step in between.
Whenever a particular piece of information reaches your brain, it creates a pattern of 'on and off switches' using these neurons. For example, when we memorize a telephone number, the neurons create a pattern of it in our brain, and when we try to remember it we try to recreate that pattern. A neural network in a computer likewise copies the brain's functioning, and it can be taught to recognize patterns. Once trained, the computer can classify and identify a huge number of patterns at high speed, at times faster than humans. For example, a computer can look at past weather and climate data, match it with current conditions, and tell us where it is going to rain and how much.
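To give a rough idea of how such a network is taught to recognize a pattern, here is a minimal sketch in Python of a single artificial neuron (a perceptron) learning a simple on/off pattern. The data, learning rate, and training loop here are invented for illustration; real neural networks connect many such neurons in layers.

```python
# A minimal sketch (illustrative only): a single artificial "neuron" that
# learns a simple on/off pattern by adjusting its connection weights,
# the basic mechanism behind how a neural network learns.

def step(x):
    return 1 if x > 0 else 0  # the neuron "switches" on or off

# Toy data: pairs of inputs and the pattern to learn (logical OR).
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1  # learning rate: how strongly each mistake adjusts the weights

for epoch in range(20):                  # show the examples repeatedly
    for (x1, x2), target in samples:
        output = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - output          # compare with the known answer
        weights[0] += rate * error * x1  # nudge the weights toward the pattern
        weights[1] += rate * error * x2
        bias += rate * error

# After training, the neuron reproduces the pattern it was shown.
for (x1, x2), target in samples:
    print((x1, x2), "->", step(weights[0] * x1 + weights[1] * x2 + bias))
```

Run as written, the trained neuron answers correctly for all four input pairs, having 'recreated' the pattern from its stored weights rather than from a list of memorized answers.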
Measuring a computer’s intelligence
Even before the term AI was coined, experiments were underway to develop machines that could demonstrate human intelligence. In 1950, the famous British mathematician Alan M. Turing, known as the 'Father of the Modern Computer', proposed a method to test whether a computer could demonstrate human intelligence. In this test, a person in a separate room converses with a hidden human being and a hidden computer, without knowing which is which. If this person, called the interrogator (one who questions), fails to find out which of the two candidates is the computer, the machine is considered intelligent. The test has become a fundamental motivator in the theory and development of Artificial Intelligence. In those days, however, the term AI had not yet been coined, and in Britain the field was called 'machine intelligence'. Turing carried out substantial research in the area back in the 1940s and circulated a typescript on machine intelligence among some of his colleagues. In February 1947, Turing gave a public lecture to the London Mathematical Society, regarded as the first to mention computer intelligence, in which he provided some glimpses of machine learning and described the human brain as a 'digital computing machine'. He even discussed the possibility of machines that act intelligently, learn, and beat human opponents at chess.
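To make the structure of Turing's test concrete, here is a toy sketch in Python of the imitation game. The respondents and their canned replies are invented for illustration; a real test involves free-form conversation with a genuine human and a genuine program.

```python
# A toy sketch of the imitation game's structure (an illustration only):
# the interrogator questions two hidden respondents, A and B, and must
# guess which one is the machine. The replies below are invented.

import random

def human_respondent(question):
    return "Let me think... I'm honestly not sure."

def machine_respondent(question):
    # A real program would try to answer in a convincingly human way.
    return "Let me think... I'm honestly not sure."

def imitation_game(questions):
    # Randomly hide the human and the machine behind the labels A and B.
    pair = [human_respondent, machine_respondent]
    random.shuffle(pair)
    respondents = dict(zip("AB", pair))

    for question in questions:
        print("Q:", question)
        for label in "AB":
            print(" ", label + ":", respondents[label](question))

    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    if respondents.get(guess) is machine_respondent:
        print("Correct - the interrogator spotted the machine.")
    else:
        print("Wrong - the machine fooled the interrogator this round.")

imitation_game(["What did you dream about last night?",
                "What is 347 times 29?"])
```

The key design point is the random hiding of the two respondents: the interrogator can rely only on the answers themselves, so a machine that answers indistinguishably from a human 'passes'.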
[Image: Alan M. Turing]
Turing was also the brain behind the Bombe, an electro-mechanical device used by British cryptologists to help decode secret messages encrypted by the German Enigma machine during World War II. Turing and his team installed this machine at Bletchley Park, an English country house and estate in Bletchley, which was the Allied Forces' principal centre for code-breaking during the war. In June 1941, Turing and his team succeeded in breaking the daily messages of Enigma, the cipher machine used by the Germans in the early to mid-20th century to protect commercial, diplomatic, and military communication. During World War II, Nazi Germany employed this electro-mechanical rotor machine extensively in all branches of its military for the encryption and decryption of secret messages.
However, even after decades, no machine has passed the Turing Test completely, though some have come close. In 2014, a computer program called Eugene Goostman, which simulates a 13-year-old Ukrainian boy, was claimed to have passed the test after convincing a third of the human judges in a competition that it was a human being; the remaining judges, however, recognized it as a machine. Then, in 2018, Google Duplex voice AI successfully called a hairdresser and made an appointment in front of an audience, and the hairdresser did not realize she was speaking to an AI. Though these are path-breaking developments, and computers perform many tasks more efficiently than human beings, they are yet to match human intelligence in the real sense. But research in this direction continues, and it is quite possible that such a machine will be developed if we are able to fully understand the functioning of the human brain and replicate it.