What is Artificial Intelligence?
Artificial intelligence refers to computer programs that simulate human intelligence processes. In particular, AI is used for expert systems, natural language processing, speech recognition, and machine learning. Stuart Russell and Peter Norvig's textbook Artificial Intelligence: A Modern Approach became one of the leading texts in the field after its publication. In it, the authors explore four possible goals for AI, differentiating computer systems on the basis of rationality and of thinking versus acting:
The human approach:
Artificial intelligence systems that think like humans
Artificial intelligence systems that act like humans
The ideal approach would be:
Artificial intelligence systems that think rationally
Artificial intelligence systems that act rationally
Alan Turing's famous test of machine intelligence would fall under the category of systems that act like humans: a machine passes if its behavior is indistinguishable from a human's.
How does artificial intelligence work?
AI has become a hot topic, so vendors have been scrambling to showcase how their products and services use it. Often what they call AI is just one component of it, such as machine learning. Writing and training machine learning algorithms requires specialized hardware and software. No single programming language is synonymous with AI, but a few are popular, including R, Python, and Java.
Artificial intelligence systems typically ingest large quantities of labelled training data, analyze the data for correlations and patterns, and then use those patterns to make predictions. An image recognition tool that reviews millions of example images can learn to identify and describe the objects they contain; similarly, a chatbot that is fed examples of text chats can learn to produce lifelike exchanges with people.
Programming for cognitive skills such as learning, reasoning, and self-correction is at the heart of AI.
Learning. This aspect of AI programming focuses on acquiring data and creating rules for turning it into actionable information. Those rules, called algorithms, give computers step-by-step instructions for completing a particular task.
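As a toy illustration of this learning step, the sketch below derives a simple decision rule (a cutoff score) from labeled data. The data and the brute-force threshold search are made-up assumptions for demonstration, not any real system's method:

```python
# "Learning" in miniature: acquire labeled data, then derive a rule from it.
def learn_threshold(samples):
    """Try each observed score as a cutoff; keep the most accurate one."""
    best_t, best_acc = None, -1.0
    for t, _ in samples:
        # Accuracy of the rule "predict positive when score >= t"
        acc = sum((s >= t) == label for s, label in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Labeled training data: (score, is_positive)
data = [(0.2, False), (0.4, False), (0.6, True), (0.9, True)]
rule = learn_threshold(data)
print(rule)  # the learned cutoff separating the two classes
```

The learned cutoff is itself the "algorithm": a step-by-step instruction (compare the score against the threshold) that the program worked out from data rather than being hand-coded.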
A FEW IMPORTANT TAKEAWAYS
- Artificial intelligence refers to machines programmed to simulate human intelligence.
- Artificial intelligence aims to improve learning, reasoning, and perception.
- Artificial intelligence is being used in many industries, such as finance and healthcare.
- Generally, weak artificial intelligence focuses on a single, narrow task, whereas strong artificial intelligence carries out complex, human-like tasks.
What are the Four types of AI?
- Reactive machines
- Limited memory
- Theory of mind
- Self-awareness
A few examples of artificial intelligence
- Smart assistants (such as Siri and Alexa)
- Disease-mapping and prediction tools
- Manufacturing robots and drones
- Optimized, personalized healthcare treatment recommendations
- Conversational bots for customer support and marketing
- Automated stock trading advisors
- Email filters that block spam
- Social media monitoring tools that flag fake news or dangerous content
- Song and show recommendations from Spotify and Netflix
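To make the email spam-filter entry above concrete, here is a minimal keyword-scoring sketch. The word list and threshold are made-up assumptions, far cruder than the statistical filters real email providers use:

```python
# Toy spam filter: flag an email if it contains enough known spam words.
SPAM_WORDS = {"winner", "free", "prize", "urgent"}

def looks_like_spam(email, threshold=2):
    """Count spam-word hits in the message and compare against a cutoff."""
    words = email.lower().split()
    hits = sum(w.strip(".,!?") in SPAM_WORDS for w in words)
    return hits >= threshold

print(looks_like_spam("URGENT! You are a winner, claim your FREE prize"))  # True
print(looks_like_spam("Meeting moved to 3pm tomorrow"))                    # False
```

A production filter would learn word weights from millions of labeled emails rather than use a fixed list, but the core idea of scoring message features against learned patterns is the same.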
The two main categories of artificial intelligence are:
- Weak AI
- Artificial General Intelligence (AGI)
"Weak AI": Sometimes called narrow AI, this type of artificial intelligence operates within a limited context and is a simulation of human intelligence. Weak AI is often focused on performing a single task extremely well, and while these machines may seem intelligent, they operate under far more limitations and constraints than even the most basic human intelligence.
Artificial General Intelligence (AGI) refers to the kind of artificial intelligence that is sometimes referred to as “Strong AI,” as in robots from Westworld or Data from Star Trek: The Next Generation. Machines with AGI have general intelligence, and, like humans, can use that intelligence to solve virtually any problem.
Artificial Intelligence & Deep Learning
Much of the technology behind narrow AI is derived from advances in machine learning and deep learning. The differences between artificial intelligence, machine learning, and deep learning can be confusing. Venture capitalist Frank Chen distinguishes them as follows:
- Artificial intelligence is a set of algorithms and techniques that try to mimic human intelligence. Machine learning is one of those techniques, and deep learning is in turn one of the machine learning techniques.
- The idea behind machine learning is that by feeding a computer data and applying statistical techniques, it can "learn" to get better at a task without being explicitly programmed for it, eliminating the need for millions of lines of hand-written code. Machine learning includes both supervised learning (using labeled data sets) and unsupervised learning (using unlabeled data sets).
- Deep learning is a form of machine learning that runs inputs through a biologically inspired neural network architecture. A neural network contains a number of hidden layers through which data is processed, each connection carrying a weight, allowing the machine to go "deep" in its learning and connect inputs to outputs.
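The hidden layers and weighted connections described above can be sketched in a few lines. The weights below are arbitrary stand-ins chosen for illustration; a real network would learn them from data:

```python
# A tiny forward pass through one hidden layer of a neural network.
import math

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """Each neuron outputs a weighted sum of all inputs plus a bias, squashed."""
    return [sigmoid(sum(w * i for w, i in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -1.0]                                            # raw input
hidden = layer(x, [[0.9, 0.3], [-0.4, 0.8]], [0.1, 0.0])   # hidden layer (2 neurons)
output = layer(hidden, [[1.2, -0.7]], [0.05])              # output layer (1 neuron)
print(output)  # a single activation between 0 and 1
```

Training, which this sketch omits, consists of nudging those weights so the final activation moves toward the desired answer for each training example.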
The earliest accounts of artificial intelligence and robots appear in the myths of antiquity. Aristotle's development of the syllogism and deductive reasoning was an important early step in humanity's quest to understand its own intelligence. Although its roots are long and deep, artificial intelligence as we know it is less than a century old. The timeline later in this article summarizes some of the most important events in AI.
Deep Learning vs. Machine Learning
It's worth pointing out the differences between deep learning and machine learning, since the terms are often used interchangeably. As discussed above, both are subfields of artificial intelligence, and deep learning is in fact a subfield of machine learning.
Deep learning differs from machine learning in how each algorithm learns. Deep learning automates much of the feature-extraction step, removing some of the human intervention required and enabling the use of larger data sets. In an MIT lecture, Lex Fridman described deep learning as "scalable machine learning." Classical, or "non-deep," machine learning depends more heavily on human intervention to learn: human experts determine the hierarchy of features, and it usually requires more structured data to learn how to distinguish between inputs.
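The human-defined feature step that classical machine learning relies on might look like the sketch below. The specific features are illustrative assumptions an expert might choose for, say, spam detection; a deep model would instead consume the raw text directly and discover its own features:

```python
# Hand-crafted feature extraction: a human expert decides in advance
# which properties of the raw input matter, before any learning happens.
def extract_features(raw_text):
    """Turn raw text into a fixed set of expert-chosen numeric features."""
    words = raw_text.split()
    return {
        "num_words": len(words),
        "num_exclamations": raw_text.count("!"),
        "all_caps_ratio": sum(w.isupper() for w in words) / max(len(words), 1),
    }

features = extract_features("FREE prize!!! Click NOW to claim")
print(features)  # a small, structured summary a classical model can learn from
```

Whatever the expert leaves out of this function, the downstream model can never use; that is exactly the bottleneck deep learning's automatic feature extraction removes.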
Deep machine learning can leverage labeled datasets, in what is known as supervised learning, to inform its algorithm, but it doesn't strictly require them. It can ingest unstructured data in its raw form (for instance, text or images) and automatically determine the hierarchy of features that distinguish different categories of data. Unlike classical machine learning, it doesn't require human intervention to process data, which allows it to scale in more interesting ways.
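The supervised-versus-unsupervised distinction can be illustrated with toy one-dimensional data. Both functions below are deliberately crude sketches, not library-grade implementations:

```python
# Supervised: labels are given, so learn the mean ("centroid") of each class
# and classify new points by the nearest class mean.
def fit_supervised(points, labels):
    classes = set(labels)
    return {c: sum(p for p, l in zip(points, labels) if l == c) / labels.count(c)
            for c in classes}

def predict(centroids, x):
    return min(centroids, key=lambda c: abs(centroids[c] - x))

# Unsupervised: no labels, so split the data into two groups on its own
# (a crude one-dimensional 2-means).
def cluster_two(points):
    lo, hi = min(points), max(points)
    for _ in range(10):  # a few refinement passes
        a = [p for p in points if abs(p - lo) <= abs(p - hi)]
        b = [p for p in points if abs(p - lo) > abs(p - hi)]
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return sorted(a), sorted(b)

centroids = fit_supervised([1.0, 1.2, 4.8, 5.1], ["low", "low", "high", "high"])
print(predict(centroids, 4.5))            # closest labeled class
print(cluster_two([1.0, 1.2, 4.8, 5.1]))  # two groups found without any labels
```

The supervised version needed a human to supply the "low"/"high" labels; the unsupervised version recovered a similar grouping from the data's structure alone.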
The Evolution of Artificial Intelligence:
- (1943) Warren McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity," which presents a mathematical model for building neural networks.
- (1949) In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb proposes that neural pathways are created from experience and that connections between neurons grow stronger the more frequently they are used. Hebbian learning remains an important model in AI.
- (1950) Alan Turing publishes "Computing Machinery and Intelligence," which introduces what is now known as the Turing Test, a method for determining whether a machine is intelligent.
- (1950) Harvard undergraduates Marvin Minsky and Dean Edmonds build SNARC, the first neural network computer.
- (1950) Claude Shannon publishes a paper entitled “Programming a Computer for Playing Chess.”
- (1950) Isaac Asimov publishes the "Three Laws of Robotics."
- (1952) Arthur Samuel develops a self-learning program for playing checkers.
- (1954) The Georgetown-IBM machine translation experiment automatically translates 60 carefully selected Russian sentences into English.
- (1956) The phrase "artificial intelligence" is coined at the Dartmouth Summer Research Project on Artificial Intelligence. Led by John McCarthy, the conference defines the scope and goals of AI and is widely considered the birth of the field.
- (1956) Allen Newell and Herbert Simon introduce Logic Theorist (LT), the first computer program for reasoning.
- (1958) John McCarthy develops the AI programming language Lisp and publishes the paper "Programs with Common Sense," which proposes the Advice Taker, a hypothetical AI system able to learn from experience much as humans do.
- (1959) Allen Newell, Herbert Simon, and J.C. Shaw develop the General Problem Solver (GPS), a program designed to imitate human problem-solving.
- (1959) Herbert Gelernter creates the Geometry Theorem Prover.
- (1959) Arthur Samuel coins the term "machine learning" while at IBM.
- (1959) Marvin Minsky and John McCarthy found the MIT Artificial Intelligence Project.
- (1963) John McCarthy founds the Stanford AI Lab.
- (1966) The U.S. government's Automatic Language Processing Advisory Committee (ALPAC) report details the lack of progress in machine translation research, a major Cold War initiative that had promised automatic, instantaneous translation of Russian. The ALPAC report leads to the cancellation of all government-funded MT projects.
- (1969) The first successful expert systems are developed at Stanford: DENDRAL, a program for analyzing chemical compounds, and MYCIN, a tool for diagnosing blood infections.
- (1972) The logic programming language PROLOG is created.
- (1973) The “Lighthill Report” documents the disappointments in artificial intelligence research and leads to severe funding cuts for this field.
- (1974-1980) Frustration with the slow progress of AI development leads to major DARPA cutbacks in academic grants. Combined with the earlier ALPAC report and the previous year's Lighthill Report, AI funding dries up and research stalls. This period is known as the "First AI Winter."
- (1980) Digital Equipment Corporation launches R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 kicks off an investment boom in expert systems that will last for much of the decade, effectively ending the First AI Winter.
- (1982) Japan's Ministry of International Trade and Industry launches the ambitious Fifth Generation Computer Systems (FGCS) project, which aims to develop supercomputer-like performance and a platform for AI development.
- (1983) The U.S. government launches the Strategic Computing Initiative in response to Japan’s FGCS to provide support for advanced computing and artificial intelligence research through DARPA.
- (1985) Companies are spending more than a billion dollars a year on expert systems, and an entire industry springs up to support them: companies like Symbolics and Lisp Machines Inc. build specialized computers to run the AI programming language Lisp.
- (1987-1993) As computing technology improves, cheaper alternatives emerge and the Lisp machine market collapses in 1987, ushering in the "Second AI Winter." During this period, expert systems prove too expensive to maintain and update, and they gradually fall out of use.
- (1991) U.S. forces deploy DART, an automated logistics planning and scheduling tool, during the Gulf War.
- (1992) Japan terminates the FGCS project, citing its failure to achieve the ambitious goals set out a decade earlier.
- (1993) DARPA ends its Strategic Computing Initiative after spending nearly $1 billion and falling far short of expectations.
- (1997) IBM's Deep Blue beats world chess champion Garry Kasparov.
- (2004) The U.S. military starts investing heavily in autonomous robots like Boston Dynamics' "Big Dog" and iRobot's "PackBot."
- (2005) STANLEY, a self-driving car, wins the DARPA Grand Challenge.
- (2008) Google makes breakthroughs in speech recognition and introduces the feature in its iPhone app.
- (2011) IBM's Watson handily defeats the competition on Jeopardy!
- (2011) Apple releases Siri, an AI-powered virtual assistant, through its iOS operating system.
- (2012) Andrew Ng, founder of the Google Brain Deep Learning project, feeds a neural network 10 million YouTube videos using deep learning algorithms. The network learns to recognize a cat without being told what a cat is, ushering in a breakthrough era for neural networks and deep learning funding.
- (2014) Google's self-driving car becomes the first to pass a U.S. state driving test.
- (2014) Amazon releases Alexa, its virtual home assistant.
- (2016) Google DeepMind's AlphaGo defeats world champion Go player Lee Sedol. The complexity of the ancient Chinese game had long been viewed as a major hurdle for AI.
- (2016) Hanson Robotics creates Sophia, an AI-powered humanoid robot capable of facial recognition, verbal communication, and facial expression.
- (2018) Google releases BERT, a natural language processing engine that makes language tasks easier for machine learning applications.
- (2018) Waymo launches its Waymo One service, allowing users throughout the Phoenix metropolitan area to request a pick-up from one of the company's self-driving vehicles.
- (2020) Baidu releases its LinearFold AI algorithm to scientific and medical teams working on a vaccine during the early stages of the SARS-CoV-2 pandemic. The algorithm predicts the RNA secondary structure of the virus in just 27 seconds, 120 times faster than other methods.
MYTHS RELATING TO ADVANCED ARTIFICIAL INTELLIGENCE
Artificial intelligence is the subject of an exciting debate about what the future could and should bring to the human race. The world's leading experts disagree on fascinating questions such as how AI will affect the job market, whether human-level AI will ever be developed, whether it would lead to an intelligence explosion, and whether this is something to fear or to welcome. But there are also many boring pseudo-controversies caused by people misunderstanding and talking past each other. To help us focus on the interesting controversies and open questions rather than the misunderstandings, let's clear up some of the most common myths.
A few examples and Real-time Applications of Artificial Intelligence
Alright, surely you're now wondering where AI is actually used. Autonomous cars and robots are one thing, but are there examples from the present day? Here are some real-world examples of artificial intelligence from our artificial intelligence tutorial.
1. Netflix
Netflix is the most popular OTT platform today. OTT stands for "Over The Top": such platforms deliver content to users over the internet, usually for a fee.
I recently finished watching Stranger Things, a web television series centered on a group of high-school teenagers.
Within a few weeks, Netflix began recommending Typewriter, a web series of Indian origin that also follows a group of teenagers, this time investigating a ghostly mystery.
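A toy content-based recommender in the spirit of this Netflix example might look like the sketch below. The catalog titles and genre tags are made-up illustrations, and real services combine far richer signals (viewing history, ratings, similar users):

```python
# Recommend the unwatched title whose genre tags overlap most
# with the show the viewer just finished.
CATALOG = {
    "Stranger Things": {"teen", "mystery", "supernatural"},
    "Typewriter":      {"teen", "mystery", "ghost"},
    "Office Comedy":   {"workplace", "comedy"},
}

def recommend(just_watched, catalog):
    """Pick the other title sharing the most genre tags with the watched one."""
    liked = catalog[just_watched]
    candidates = {title: len(tags & liked) for title, tags in catalog.items()
                  if title != just_watched}
    return max(candidates, key=candidates.get)

print(recommend("Stranger Things", CATALOG))  # "Typewriter": 2 shared tags
```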
2. Siri
Are you used to hearing the phrase "Hey Siri"? Siri is Apple's personal assistant for iPhone and iPad users, a voice-activated helper designed around the user's convenience.
By analyzing user behavior, Siri learns how the user works with their phone, whom they message, and whom they call. All you have to do is unlock your phone and enable internet access to start using it. Say "Hey Siri, call mom," and Siri will place a call to your mom right away.
3. Pandora
Pandora is an AI-based platform that decides what music to play based on each individual's preferences, often surfacing less obvious songs the listener might not find on their own.
If you like Stairway to Heaven by Led Zeppelin, for example, Pandora might suggest Hey You by Pink Floyd, another classic rock track.
4. Flipkart
Flipkart is an e-commerce platform that recommends products to its customers based on items they've previously viewed or purchased.
An old friend of mine searched for a certain novel on Flipkart a few days ago, and now he sees it recommended whenever he visits Flipkart or any other site carrying its ads.
5. Echo
Echo is an Amazon device with a built-in voice assistant powered by Alexa, Amazon's cloud-based software. You can ask it a question or give it a command, and it will understand and respond with a sensible answer. When I asked yesterday whether I needed a raincoat, Alexa suggested I take one because it was going to rain in the evening.
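The command-handling idea behind assistants like Alexa and Siri can be sketched as simple keyword-to-intent matching. Real assistants use trained language models; the phrase table below is a made-up stand-in for illustration only:

```python
# Map a transcribed utterance onto an "intent" by keyword overlap.
INTENTS = {
    "weather": ["rain", "weather", "umbrella", "raincoat"],
    "call":    ["call", "phone", "dial"],
    "music":   ["play", "song", "music"],
}

def detect_intent(utterance):
    """Return the intent whose keywords best match the utterance."""
    words = set(utterance.lower().split())
    scores = {name: len(words & set(kw)) for name, kw in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(detect_intent("do I need a raincoat this evening"))  # "weather"
print(detect_intent("call mom"))                           # "call"
```

Once an intent is detected, the assistant dispatches to the matching skill (a weather lookup, the dialer, a music player), which is where the actual work happens.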
Choosing a career in Artificial Intelligence
Doesn't all this seem interesting? Can we build tomorrow's future by building something today? Anyone with the right qualifications can pursue a career in AI. It is true that AI is a vast field, and becoming a professional in it comes with its own requirements. Here are some of the roles you can pursue for a career in artificial intelligence.
To become a machine learning engineer, you should have a background in computing, programming languages such as Python, statistics, and software engineering. The initial salary starts at around 7-8 lakh per annum and can grow to around 15-20 lakh per annum.
A data scientist possesses the technical expertise to solve complex problems, with skills in tools and languages such as Hadoop, SQL, Spark, and Python, along with machine learning, statistics, and communication. Freshers in data science typically earn between 4.5 and 6 lakh per annum.
The job of a business intelligence developer involves designing, modeling, and maintaining complex data, as well as the ability to explain AI-related terminology to non-technical people. As a newcomer, you can earn around 5-9 lakh per annum.
Research scientists should ideally have a Ph.D. in computer science, or at least a master's degree, and must demonstrate a good understanding of parallel computing, natural language processing, machine learning, and artificial intelligence.
The salary for a fresher research scientist typically ranges between $66,000 and $75,000 a year and can reach as much as 16 lakh per annum.
Engineers and scientists who work in the big data space build and manage the infrastructure and tools needed to produce results from enormous amounts of data.
You must have knowledge of data mining and data migration, along with experience in languages such as C++, Java, and Python. The starting salary is around 8-9 lakh per annum.
WHY IS AI SAFETY NOW A CONCERN?
Many leading scientists and technology figures, including Stephen Hawking, Elon Musk, Steve Wozniak, and Bill Gates, have expressed concern about the risks posed by AI, as have many leading AI researchers.
What’s the significance of the sudden headlines on the subject?
In the past, the idea that strong artificial intelligence might eventually succeed belonged to science fiction. Thanks to the breakthroughs of the past few years, however, many experts now take seriously the possibility that superintelligence could be achieved within this century; milestones that experts considered decades away have arrived in just five years. At the 2015 Puerto Rico Conference, most AI researchers guessed that human-level AI could arrive by 2060, although some experts still believe it is centuries away. Since the necessary safety research could itself take decades, it is prudent to begin it now.
Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave. Nor can we use past technological developments as a basis, since we have never created anything capable of outsmarting us, wittingly or unwittingly. The best guide to what we might face may be our own evolution: people now control the planet not because we're the strongest, fastest, or biggest, but because we're the smartest. If we're no longer the smartest, are we assured of remaining in control?
We at FLI believe the future of civilization depends on winning the race between the growing power of technology and the wisdom with which we manage it. The best way to win that race is not to impede the former but to accelerate the latter by supporting AI safety research, which accounts for more than one-third of the research FLI funds.
Summary
Taking the long view, artificial intelligence may prove to be a technology as fascinating as it is unnerving. Some fear that it could one day outwit us humans and that much of what we can imagine could eventually become real.
At the same time, AI will also let humankind express itself more creatively. Who knows, it may one day even help us travel through time and space! All we can do is wait and watch what happens.
I hope this introduction has given you a better understanding of the technology behind artificial intelligence.
If you have any questions about this artificial intelligence tutorial, feel free to drop a comment below, and please contact us with any other questions or concerns.