Artificial intelligence, or AI, is the ability of a digital computer or computer-controlled machine to perform tasks commonly associated with intelligent beings, such as visual perception, speech recognition, decision-making, or translation between languages.
The idea that the human thinking process could be mechanized has been studied for thousands of years by Greek, Chinese, Indian, and Western philosophers. But many researchers consider the 1943 paper A Logical Calculus of the Ideas Immanent in Nervous Activity, by McCulloch and Pitts, to be the first recognized work of artificial intelligence.
In 1946, the US Army unveiled ENIAC, the first programmable general-purpose electronic digital computer. The giant machine was initially designed to calculate artillery-firing tables, but its ability to execute different instructions meant it could be used for a wider range of problems.
In 1950, the renowned computer scientist and mathematician Alan Turing formally introduced the concept of artificial intelligence in his paper “Computing Machinery and Intelligence”. He proposed the Turing test, which would test a machine’s ability to mimic human intelligence.
In 1956, the Dartmouth Artificial Intelligence conference, proposed by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, gave birth to the field of AI and awed scientists with the possibility that electronic brains could actually think.
In 1964, Joseph Weizenbaum built ELIZA, an interactive program that could carry on a dialogue in English on any topic. It became a popular toy when a version that simulated the dialogue of a psychotherapist was programmed.
Since then, AI has been repeatedly featured in sci-fi movies and TV shows that captivated the public’s imagination. Who doesn’t remember HAL 9000, the sentient and malevolent computer that interacts with astronauts in 2001: A Space Odyssey?
Despite scientists’ initial enthusiasm, practical applications of artificial intelligence were lacking for decades, which led many to dismiss the impact of AI on our society.
Only in the late 1980s did a new area of research called deep learning begin to show early promise about the potential of AI. Unfortunately, the computing power available at the time was too limited for scientists to reach any meaningful breakthroughs.
It was only in 1997, when the IBM Deep Blue computer defeated the world chess champion, Garry Kasparov, that artificial intelligence started to be taken more seriously.
In 2005, DARPA sponsored the Grand Challenge competition to promote research in the area of autonomous vehicles. The challenge consisted of building a robotic car capable of navigating 175 miles through desert terrain in less than ten hours, with no human intervention. The competition kickstarted the commercial development of autonomous vehicles and showcased the practical possibilities of artificial intelligence.
In 2011, IBM’s Watson computer defeated human players in the Jeopardy! Challenge. The quiz show, known for its complex, tricky questions and very smart champions, was the perfect choice to demonstrate the advance of artificial intelligence.
In 2016, DeepMind’s AlphaGo beat Lee Sedol, one of the world’s best Go players, in a contest that left scientists and researchers speechless due to the complexity of the ancient Chinese board game.
Due to the exponential advances in AI in recent years, top scientists and entrepreneurs such as Bill Gates, Elon Musk, Steve Wozniak, and Stephen Hawking began making doomsday predictions and warning society about the dangers a superintelligent AI could pose to humanity.
What happened? Why did some of the smartest people on Earth sound the alarm about the perils of artificial intelligence? How could something like a computer that played Go be a threat to civilization?
To answer these questions and to illustrate how AI will be the most important technology of the coming decades, we need to understand the various types of AI in existence, where we are in terms of technological development, and how these systems work.
Weak Artificial Intelligence (WAI)
Weak artificial intelligence, also known as narrow artificial intelligence, is the only type of AI we have developed so far. WAI specializes in just one area of knowledge, and we experience it every day, even though we rarely notice its presence.
Simple things like email spam filters are loaded with rudimentary intelligence that learns and changes its behavior in real time according to your preferences. For instance, if you tag a certain sender as junk several times, the WAI learns to treat that sender as spam, and you’ll never need to flag it again.
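A minimal sketch of that kind of feedback loop might look like the following Python snippet. The class name, sender addresses, and three-strikes threshold are invented for illustration; real mail clients combine many more signals, but the principle is the same: the system adjusts its behavior from your feedback without being reprogrammed.

```python
# Hypothetical sketch of a spam filter that learns from user feedback.
from collections import defaultdict

class SenderSpamFilter:
    def __init__(self, threshold=3):
        self.junk_votes = defaultdict(int)  # how often the user tagged each sender as junk
        self.threshold = threshold          # votes needed before filtering automatically

    def tag_as_junk(self, sender: str) -> None:
        """Record one 'mark as junk' action from the user."""
        self.junk_votes[sender] += 1

    def is_spam(self, sender: str) -> bool:
        """Once a sender has been flagged enough times, treat its mail as spam."""
        return self.junk_votes[sender] >= self.threshold

mailbox_filter = SenderSpamFilter()
for _ in range(3):
    mailbox_filter.tag_as_junk("promo@example.com")   # the user flags this sender three times
print(mailbox_filter.is_spam("promo@example.com"))    # True: future mail is filtered automatically
print(mailbox_filter.is_spam("friend@example.com"))   # False: other senders are left alone
```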
Google is also a sophisticated WAI. It ranks results intelligently by figuring out, among millions of variables, which ones are relevant to your specific search and context.
Other examples of WAI are voice recognition apps, language translators, Siri or Cortana, autopilots in cars or planes, algorithms that control stock trading, Amazon recommendations, Facebook friends’ suggestions, and computers that beat chess champions or Jeopardy! players. Even autonomous vehicles have WAIs to control their behavior and allow them to see.
Weak artificial intelligence systems evolve slowly, but they’re definitely making our lives easier and helping humans be more productive. They’re not dangerous at all. If they misbehave, nothing super-serious would happen: maybe your mailbox would fill with spam, a stock market trade would be halted, a self-driving car would crash, or a nuclear power plant would be deactivated.
WAIs are stepping stones towards something much bigger that will definitely impact the world.
Strong Artificial Intelligence (SAI)
Strong artificial intelligence, also referred to as general artificial intelligence, is a type of AI that allows a machine to have intellectual capabilities and skill sets as good as a human’s. Another idea often associated with SAI is the ability to transfer learning from one domain to another.
Recently, an algorithm learned how to play dozens of Atari games better than humans, with no previous knowledge of how they worked. That is an amazing milestone for artificial intelligence, but it is still far from being a SAI.
We need to master a myriad of weak artificial intelligence systems and make them really good at their jobs before taking on the challenge of building a SAI with human-like capabilities.
Computers might be much more efficient than humans at logical or mathematical operations, but they have trouble with tasks that are simple for us, such as identifying emotions in facial expressions, describing a scene, or distinguishing nuances of tone such as sarcasm.
But how far are we from allowing computers to perform tasks that only humans could do? To answer this question, we need to make sure we have affordable hardware at least as powerful as the human brain. We are almost there.
Scientists estimate the processing speed of the human brain to be about twenty petaFLOPS. Currently, just one machine, the Chinese supercomputer Tianhe-2, can claim to be faster than a human brain. It cost $400,000,000, so of course it is not affordable or accessible for AI researchers. Just for the sake of comparison, in 2015 an average $1,000 PC was roughly 2,000 times less powerful than a human brain.
But wait a few years, and exponential technologies will work their magic. Futurists like Ray Kurzweil are very optimistic that $1,000 will buy the computing capability of one human brain around the 2020s and of the entire human race by the late 2040s; by the early 2060s, one cent will buy the power of all human brains on Earth combined.
As you can infer from Kurzweil’s calculations, computing power won’t be an obstacle to achieving strong artificial intelligence. Even under pessimistic predictions that simply follow the current trend dictated by Moore’s Law, we’ll achieve those capabilities several decades later. It is just a matter of time until hardware becomes billions of times more powerful than all human brains combined.
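To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The twenty-petaFLOPS brain estimate and the 2,000x gap come from the figures above; the two-year doubling period is an assumption standing in for a Moore’s-Law-like trend, and a slower doubling period simply pushes the dates further out, which is exactly the point above.

```python
# Back-of-the-envelope extrapolation using the figures quoted above
# and an assumed doubling period for price-performance.
import math

BRAIN_FLOPS = 20e15                    # ~20 petaFLOPS, the brain estimate quoted above
PC_2015_FLOPS = BRAIN_FLOPS / 2000     # a $1,000 PC in 2015: "2,000 times less powerful"
DOUBLING_YEARS = 2.0                   # assumption: $1,000 of compute doubles every ~2 years

def years_to_reach(target_flops, start_flops=PC_2015_FLOPS, doubling_years=DOUBLING_YEARS):
    """Years of exponential growth needed for $1,000 of compute to hit a target."""
    doublings = math.log2(target_flops / start_flops)
    return doublings * doubling_years

one_brain = years_to_reach(BRAIN_FLOPS)
all_brains = years_to_reach(BRAIN_FLOPS * 8e9)   # roughly 8 billion human brains

print(f"One human brain for $1,000: around {2015 + one_brain:.0f}")
print(f"All human brains for $1,000: around {2015 + all_brains:.0f}")
```

Under these assumptions the one-brain milestone lands in the 2030s, later than Kurzweil’s forecast but still only a matter of time.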
The major difficulties in inventing a strong artificial intelligence lie on the software side: how to replicate the complex biological mechanisms and the connectome of the brain so that a computer can learn to think and perform complex tasks.
There are many companies, institutions, governments, scientists, and startups working on reverse engineering the brain, using different techniques and counting on the help of neuroscience. Optimists believe we’ll be able to run a complete brain simulation around the 2030s. Pessimists think we won’t achieve it until the 2070s.
It may take decades just to build a computer as smart as a five-year-old child, but a strong artificial intelligence system will be the most revolutionary technology ever built.
A future SAI will be more powerful than humans at most tasks because it will run billions of times faster than our brains, with unlimited storage and no need to rest. Initially, a SAI will make the world a better place by doing human jobs more efficiently.
The reason tech visionaries and scientists are concerned about the invention of a strong artificial intelligence is that a computer intelligence doesn’t have morals; it just follows what is written in its code.
If an AI is programmed, for instance, to get rid of spam, it could decide that eliminating humans is the best way to do its job properly.
Also, it is feared that a SAI would remain at the human-intelligence threshold for only a brief instant. Its capability to program itself, recursively, would make it exponentially more powerful as it gets smarter.
Recursive self-improvement works like this. Initially, the SAI programs itself with the capability of, let’s say, two human programmers. Because the machine has access to abundant computing power, it can multiply the number of these virtual “programmers” by the dozens in just a matter of hours.
Within a few days, these human-equivalent “programmers” would have made so many scientific breakthroughs that the artificial intelligence would become smarter than the average human. That improvement feeds back into its “programmers” as well, each of which could end up smarter than Einstein.
At some point in time, this recursive capability will allow any strong artificial intelligence to become orders of magnitude more intelligent than us, giving birth to the first superintelligence.
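To get a feel for why this compounding is so explosive, here is a toy model with entirely made-up parameters (starting capability, per-cycle gain, number of cycles). It is not a claim about how a real SAI would behave; it simply shows how capability grows when each improvement also improves the improver.

```python
# Toy model of recursive self-improvement with invented parameters.
def recursive_self_improvement(start=2.0, gain=0.5, cycles=30):
    """Start with the equivalent of two human programmers; every cycle the
    system improves itself by a fraction `gain` of its current capability."""
    capability = start
    history = [capability]
    for _ in range(cycles):
        capability *= (1 + gain)   # compounding: the improved system makes the next improvement
        history.append(capability)
    return history

trajectory = recursive_self_improvement()
for cycle in (0, 5, 10, 20, 30):
    print(f"cycle {cycle:2d}: ~{trajectory[cycle]:,.0f} human-programmer equivalents")
```

After thirty cycles the toy system has gone from two “programmers” to hundreds of thousands; with shorter cycles or a higher gain, the curve steepens even faster, which is the intuition behind an intelligence explosion.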
A Superintelligence
The moment a SAI becomes a superintelligence is the moment we might lose control of our creation. It can “come to life” in just hours, without our knowledge. What happens after a superintelligence arises is anyone’s guess. It could be good, bad, or ugly for the human race.
We all know the bad scenario from movies such as The Terminator or The Matrix. In this case, humans would be destroyed or enslaved because they present a threat to the superintelligence’s survival.
The ugly scenario is more complicated. Imagine what would happen if multiple superintelligences arise at the same time in countries such as the United States or China. What would happen? Would they fight for supremacy, be loyal to the countries or the programmers who created them, or coexist peacefully and share power? Nobody knows.
The good scenario would remind us of paradise. The artificial superintelligence would be like an altruistic God that exists only to serve us. All humanity’s problems would be fixed and our civilization would go to infinity and beyond.
Brilliant entrepreneurs such as Elon Musk, known for PayPal, Tesla, SpaceX, and SolarCity, are raising awareness about the consequences of our most advanced technologies, specifically artificial intelligence.
In his own words:
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful.
I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
All these scenarios sound like science fiction now, but they might become real one day. The comparison with nuclear weapons is a good one: we were nearly annihilated during the Cold War by a technology that most people once thought was science fiction.
So, even if there is the slightest chance that a superintelligence might arise in the next 20 years, we should be worried, because it could be our last invention.
We must not be afraid of being ridiculed, and we must discuss these questions openly. The destiny of our civilization will depend entirely on the safeguards and regulations we put on our technology now in order to avoid catastrophic scenarios.
FAQs
What is artificial intelligence for beginners? ›
AI (artificial intelligence) is a machine's ability to perform cognitive functions as humans do, such as perceiving, learning, reasoning, and solving problems. The benchmark for AI is human-level performance in terms of reasoning, speech, and vision.
Can you explain AI in simple terms? ›Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.
What are the 4 types of AI? ›According to the current system of classification, there are four primary AI types: reactive, limited memory, theory of mind, and self-aware.
What is AI explain to kid? ›Artificial intelligence, or “AI,” is the ability for a computer to think and learn. With AI, computers can perform tasks that are typically done by people, including processing language, problem-solving, and learning.
What are the examples of AI in real life? ›
- Manufacturing robots.
- Self-driving cars.
- Smart assistants.
- Healthcare management.
- Automated financial investing.
- Virtual travel booking agent.
- Social media monitoring.
- Marketing chatbots.
Facial Detection and Recognition
Using virtual filters on our faces when taking pictures and using face ID for unlocking our phones are two examples of artificial intelligence that are now part of our daily lives.
At its core, AI studies human behavior in order to develop intelligent machines. Simply put, the foundational goal of AI is to design technology that enables computer systems to work intelligently yet independently.
What type of AI is Siri? ›Apple's Siri is one of a group of virtual assistants capable of performing a wide range of everyday tasks and interacting with users in a human-sounding voice and natural speech patterns that you wouldn't normally expect from a machine or computer system.
What type of AI is Alexa? ›With conversational AI, voice-enabled devices like Amazon Echo are enabling the sort of magical interactions we've dreamed of for decades. Through a voice user interface (VUI), voice services like Alexa can communicate with people in ways that feel effortless, solve problems, and get smarter over time.
Who invented AI? ›Stanford's John McCarthy, a seminal figure of artificial intelligence, created the term "artificial intelligence" and was a towering figure in computer science at Stanford for most of his professional life.
How do you explain artificial intelligence? ›
Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.
How will you explain to your parents what is AI? ›Artificial intelligence (AI) technology refers to computers or machines that are programmed to perform tasks that we traditionally think only humans can do – by mimicking human thought or behaviour.
How do you explain machine learning to a 5-year-old kid? ›
You can explain machine learning to a 5-year-old kid by telling them it happens when computers have access to information. Over time, this lets them learn how to make decisions without a human telling them what to do.
Where can I learn AI for free? ›...
- Stanford University's AI Course. ...
- Google's Machine Learning Course. ...
- Udacity's Free AI Course.
Can AI exist without machine learning? ›Not only can machine learning exist without AI, but AI can exist without machine learning.
Can I learn artificial intelligence without coding? ›These SaaS tools offer the same computing power of AI giants, like Google and Apple, but with no coding skills required. No-code AI platforms make machine learning accessible to everyone – some are simply plug and play and some allow you to train advanced models to your specific needs.
Which programming language is used for artificial intelligence? ›Python is widely used for artificial intelligence, with packages for several applications including General AI, Machine Learning, Natural Language Processing and Neural Networks.
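For example, a few lines of Python with scikit-learn, one of the widely used machine-learning packages, are enough to train and evaluate a simple classifier; the dataset and model below are chosen purely for illustration.

```python
# A minimal, illustrative machine-learning example in Python using scikit-learn.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                 # a small, classic labeled dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)         # a simple, narrow ("weak AI") classifier
model.fit(X_train, y_train)                       # learn from labeled examples
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```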
What are the 3 types of AI? ›Artificial Narrow Intelligence, or ANI, which has a narrow range of abilities; Artificial General Intelligence, or AGI, which has capabilities comparable to a human's; and Artificial Superintelligence, or ASI, which has capabilities beyond those of humans. Artificial Narrow Intelligence is also referred to as narrow AI or weak AI.
What is the main purpose of AI? ›
In summary, the goal of AI is to provide software that can reason on input and explain on output. AI will provide human-like interactions with software and offer decision support for specific tasks, but it's not a replacement for humans – and won't be anytime soon.
Who is the father of AI? ›If John McCarthy, the father of AI, were to coin a new phrase for "artificial intelligence" today, he would probably use "computational intelligence." McCarthy is not just the father of AI; he is also the inventor of the Lisp (list processing) language.
What AI exists today? ›Narrow AI or weak AI: This is the type of AI that exists today. It is called narrow because it is trained to perform a single or narrow task, often far faster and better than humans can. "Weak" refers to the fact that the AI does not possess human-level, i.e., general intelligence.
Is AI only about robots? ›Robotics and artificial intelligence are two related but entirely different fields. Robotics involves the creation of robots to perform tasks without further intervention, while AI is how systems emulate the human mind to make decisions and 'learn.'
Is AI difficult to learn? ›Learning AI is not an easy task, especially if you're not a programmer, but it's imperative to learn at least some AI. It can be done by all. Courses range from basic understanding to full-blown master's degrees in it. And all agree it can't be avoided.
How long does it take to learn AI? ›How Long Does It Take To Learn AI? Although learning artificial intelligence is almost a never-ending process, it takes about five to six months to understand foundational concepts, such as data science, Artificial Neural Networks, TensorFlow frameworks, and NLP applications.
Does AI require coding? ›Yes, if you're looking to pursue a career in artificial intelligence and machine learning, a little coding is necessary.
What can AI do that humans can't? ›Artificial intelligence is well known for solving problems and providing data-driven answers. Humans might take days or months to figure out a solution, but machines can easily do it in real time. Unfortunately, despite its calibre, artificial intelligence still can't solve genuinely puzzling questions.
What are pros and cons of AI? ›...
- High Costs of Creation: ...
- Making Humans Lazy: ...
- Unemployment: ...
- No Emotions: ...
- Lacking Out-of-the-Box Thinking:
Which languages are not commonly used for AI? ›Perl is not commonly used for AI. LISP and Prolog are two languages that have been used broadly for AI innovation, and the most preferred language for AI and machine learning is Python.
Who first created artificial intelligence? ›
The earliest substantial work in the field of artificial intelligence was done in the mid-20th century by the British logician and computer pioneer Alan Mathison Turing.
Who invented artificial intelligence? ›John McCarthy, a professor emeritus of computer science at Stanford, coined the term "artificial intelligence" and subsequently went on to define the field for more than five decades.