The History of Artificial Intelligence (AI)

In 1996, Deep Blue became the first machine to defeat the then-reigning world chess champion, Garry Kasparov, in a single game. Artificial intelligence had managed to grab the attention of the global public, even though AI algorithms had already been used for years in data centres and on mainframes.

Introduction

Artificial Intelligence is not a new word and not a new technology for researchers. This technology is much older than you might imagine; there are even myths of mechanical men in ancient Greek and Egyptian mythology. Below are some milestones in the history of AI that trace the journey from its beginnings to its development today. As you may have noticed, Artificial Intelligence (AI) has been studied for decades and is still one of the most elusive topics in computer science. This is partly due to how large and broad the topic is: AI ranges from machines capable of truly thinking to algorithms invented to play board games, and it has applications in almost every way we use computers in society. In recent years, incredible progress has been made in computer science and AI. Watson, Siri and deep learning show that AI systems are now providing services that deserve to be called intelligent and responsive. And today there are fewer and fewer companies that can do without artificial intelligence if they want to optimise their business or save money. Let us now take a look at how AI originated.

AI has two main dimensions, as the four categories of definitions below show. The definitions concerned with thinking relate to thought processes and reasoning, whereas those concerned with acting address behaviour. The "human" categories measure success in terms of fidelity to human performance, whereas the "rational" ones measure against an ideal concept of intelligence, namely rationality.

 

What is Artificial Intelligence?

Artificial Intelligence is both a worldwide field of technology and a name for it. It covers machines that are created entirely artificially, without the help of any living organism, yet can exhibit human-like behaviour. AI products that appear fully human in interaction and can display things like emotion, foresight and decision-making are commonly referred to as robots. The concept of machine intelligence itself emerged in 1956.

Various algorithms and data studies reveal this progression, from the first computers to today's manufactured technical devices. Today's smartphones are developed with people in mind. Artificial Intelligence developed very slowly in earlier times but now takes important steps every day, and the rise of intelligent robots shows how much progress has been made.

Some definitions of AI, organized into four categories:

Systems that think like humans:
"The exciting new effort to make computers think … machines with minds, in the full and literal sense." (Haugeland, 1985); "[The automation of] activities that we associate with human thinking, activities such as decision-making, problem-solving, learning …" (Bellman, 1978)

Systems that think rationally:
"The study of mental faculties through the use of computational models." (Charniak and McDermott, 1985); "The study of the computations that make it possible to perceive, reason, and act." (Winston, 1992)

Systems that act like humans:
"The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil, 1990); "The study of how to make computers do things at which, at the moment, people are better." (Rich and Knight, 1991)

Systems that act rationally:
"Computational Intelligence is the study of the design of intelligent agents." (Poole et al., 1998); "AI … is concerned with intelligent behaviour in artefacts." (Nilsson, 1998)

Basic Info of AI

The term artificial intelligence was first used by John McCarthy in 1956, when he organised the first academic conference on the subject. At that conference, he tried to understand whether machines could think. Earlier, Vannevar Bush's seminal 1945 essay "As We May Think" had proposed a system for amplifying people's own knowledge and understanding. Then, in 1950, Alan Turing wrote a paper on the idea of machines being able to imitate humans and do intelligent things such as playing chess.

No one can deny the ability of computers to process logic. But many people do not know whether a machine can truly think. Defining thought precisely matters, because there has been strong controversy over whether what such machines do really counts as thinking. One example is the so-called "Chinese Room" argument, which holds that it does not. The argument has caused a great deal of controversy and has been countered by researchers in many ways, since it undermines people's confidence in machines and in so-called expert systems used in life-critical applications.

How AI Began

There are very few ways to predict the future of AI, let alone measure it; its current successes and failures have to be evaluated instead. Given the ever-increasing power of technology, the field is likely to advance much further very soon. Looking back at AI's past, let's see how fast it has evolved over roughly 60 years. This article takes you through the history of AI, from its beginnings to the age of autonomous cars, virtual assistants and AI robots.

In 1956, John McCarthy set up the Dartmouth Conference in Hanover, New Hampshire, bringing together leading researchers in complexity theory, language simulation, neural networks, and the relationship between randomness and creative thinking. The newly created field aimed to develop machines that could simulate every aspect of intelligence. The 1956 Dartmouth Conference is considered to be the birth of Artificial Intelligence.

The Dartmouth Conference was followed by 17 years of incredible progress. Research projects carried out at MIT, the universities of Edinburgh, Stanford and Carnegie Mellon received massive funding, which eventually paid off. Computers were programmed to solve algebra problems, prove geometric theorems, and understand and use English syntax and grammar.

By 1974, the general perception was that researchers had over-promised and under-delivered. Computers simply could not keep pace with the complexity of the problems researchers were tackling. For instance, an AI system that analyzed the English language could only handle a 20-word vocabulary, because that was the maximum the computer's memory could store. Back in the ’60s, the Defense Advanced Research Projects Agency (DARPA) had invested millions of dollars in AI research without pressuring researchers to achieve particular results. However, the 1969 Mansfield Amendment required DARPA to fund only mission-oriented, direct research: researchers would only receive funding if their results could produce useful military technology, such as autonomous tanks or battle management systems.

On the battle management front they eventually delivered: the Dynamic Analysis and Replanning Tool (DART) proved successful during the first Gulf War. That alone was not enough, however. DARPA was disappointed by the failure of the autonomous tank project, and it had also expected a system that could respond to voice commands from a pilot. The system built by the SUR (Speech Understanding Research) team could indeed recognize spoken English, but only if the words were uttered in a specific order. As a result, massive grant cuts followed and a lot of research was put on hold. Despite the early optimistic projections, funders began to lose trust and interest in the field. Eventually, limitless spending and a vast research scope were replaced with a "work smarter, not harder" approach.

The early 1980s were governed by what is now known as Weak Artificial Intelligence (Weak AI). The term describes AI in a narrow sense: weak AI accomplished only specific problem-solving tasks, such as configuration, and did not encompass the full range of human cognitive abilities. This was the era of expert systems, and the first time AI solved real-world problems and saved corporations a lot of money. Expert systems could automate highly specific decisions based on logical rules derived from expert knowledge. For example, before expert systems were invented, people had to order each component of their computer systems individually and often ended up with the wrong cables and drivers. XCON, an expert system developed by Prof. John McDermott for the Digital Equipment Corporation (DEC), was a computer configurator that delivered valid, fully configured computers. XCON is reported to have earned DEC nearly $40M per year.
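To make the idea concrete, here is a minimal sketch of how a rule-based expert system chains if-then rules derived from expert knowledge. The rules, facts and component names below are invented for illustration and are not taken from XCON's actual rule base.

```python
# Minimal sketch of a rule-based expert system using forward chaining.
# The rules and component names are hypothetical, invented for illustration.

def forward_chain(facts, rules):
    """Repeatedly apply rules whose conditions are satisfied until no new
    facts can be derived, then return the full set of conclusions."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and conditions <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical configuration rules of the kind an expert might encode:
# "IF the order includes a disk drive AND the cabinet is small,
#  THEN add a disk controller", and so on.
RULES = [
    ({"has_disk_drive", "small_cabinet"}, "needs_disk_controller"),
    ({"needs_disk_controller"}, "needs_controller_cable"),
    ({"has_printer"}, "needs_serial_port"),
]

order = {"has_disk_drive", "small_cabinet", "has_printer"}
print(forward_chain(order, RULES))
# Derives needs_disk_controller, needs_controller_cable and needs_serial_port.
```

Real systems like XCON worked on the same principle but with thousands of such hand-written rules, which is also one reason they later became so expensive to maintain.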

Expert systems led to the birth of a new industry: specialised AI hardware. The Lisp machine, one of the most successful specialised computers of the time, was optimised to run Lisp, the most popular programming language for AI. The golden age of Lisp machines did not last long, however.

By 1987, Apple and IBM were building desktop computers far more powerful and cheaper than Lisp machines, and soon the era of expert systems would come to an end. Japan invested $850 million in the Fifth Generation Computer initiative, a 10-year project that aimed at writing programs and building machines that could carry on conversations, translate languages, interpret images and think like humans. DARPA did not want to lose the AI race to the Japanese government and continued to pour money into AI research through the Strategic Computing Initiative, which focused on supercomputing and microelectronics.

In the late ’80s, AI’s glory started to fade yet again. The expert systems proved too expensive to maintain; they had to be manually updated, could not handle unusual inputs and, most of all, could not learn. Unexpectedly, the advanced computer systems developed by Apple and IBM buried the specialised AI hardware industry. DARPA concluded that AI researchers had under-delivered, so it killed the Strategic Computing Initiative. Japan had not made much progress either: none of the goals of the Fifth Generation Computer project had been met, despite some $400 million of spending. AI may not have progressed as expected during this era, but it still made steady progress in fields such as:

      • Intelligent tutoring

      • Case-based reasoning

      • Uncertain reasoning

      • Data mining

      • Natural language understanding & translation

      • Vision and multi-agent planning

    In 1997, IBM’s supercomputer Deep Blue beat world chess champion Garry Kasparov in a six-game match. Deep Blue used game-tree search to calculate up to 20 moves ahead. The same year, the Nomad robot, built by Carnegie Mellon University, navigated over 200 km of the Atacama Desert in Northern Chile, in an attempt to prove that it could also operate on Mars and the Moon.
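    The core idea behind this kind of game-tree search can be illustrated with minimax search and alpha-beta pruning. The sketch below is only a generic illustration, not Deep Blue's actual engine (which ran on custom chess hardware with a far more sophisticated evaluation function); the tiny "take 1-3 stones" game it plays is invented purely so the example runs end to end.

```python
# Generic minimax search with alpha-beta pruning, sketched in Python.
# The toy game: players alternately take 1-3 stones; whoever takes the
# last stone wins. This stands in for chess purely for illustration.

def minimax(state, depth, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Score `state` for the maximizing player, searching `depth` plies."""
    if state == 0:
        # No stones left: the player who just moved took the last stone and won.
        return -1 if maximizing else +1
    if depth == 0:
        return 0  # search horizon reached: call the position neutral
    best = float("-inf") if maximizing else float("inf")
    for take in (1, 2, 3):
        if take > state:
            break
        score = minimax(state - take, depth - 1, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:  # prune lines a rational opponent would never allow
            break
    return best

# From 7 stones, the player to move can force a win (score +1).
print(minimax(7, depth=10, maximizing=True))
```

    The same recursive pattern, with a chess-position evaluation function in place of the toy scoring, is what lets an engine look many moves ahead while discarding branches the opponent would never permit.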
    In the late ’90s, Cynthia Breazeal of MIT published her dissertation on Sociable Machines and introduced Kismet, a robot with a face that could express emotions.

    In 2005, Stanley, the autonomous vehicle built at Stanford, won the DARPA Grand Challenge race.

    Since 2010, Artificial Intelligence has been moving at a fast pace, feeding the hype seemingly more than ever. Let’s not forget that in 2017 the social humanoid robot Sophia was granted citizenship by the Kingdom of Saudi Arabia.

    In 2018, British filmmaker Tony Kaye announced that his next movie, Second Born, would star an actual robot, trained in
    different acting methods and techniques. AI has a lot in store for humanity – especially since the technologies developed by AI researchers in the past two decades have proven successful in so many fields: machine translation, Google’s search engine, data mining, robotics, speech recognition, medical diagnosis, etc.

    Milestones in AI History

    History of AI in Chronological Order:

    1st century AD: Heron of Alexandria built automatons with mechanical mechanisms driven by water and steam power.
    Year 1206: Al-Jazari (Ebu’l İz İbn Rezzaz Al-Jazari), one of the pioneers of cybernetics, built water-operated, automatically controlled machines.
    Year 1623: Wilhelm Schickard invented a mechanical calculator capable of the four basic arithmetic operations.
    Year 1672: Gottfried Leibniz developed the binary counting system that forms the abstract basis of today’s computers.
    Year 1822-1859: Charles Babbage designed mechanical calculating engines. Ada Lovelace is regarded as the first computer programmer because of the work she did with punched cards on Babbage’s machines; her work includes early algorithms.
    Year 1923: Karel Capek first introduced the robot concept in his theatre play R.U.R. (Rossum’s Universal Robots).
    Year 1931: Kurt Gödel introduced the incompleteness theorems that now bear his name.
    Year 1936: Konrad Zuse developed the Z1, a programmable computer with a 64-word memory.
    Year 1946: ENIAC (Electronic Numerical Integrator and Computer), an early general-purpose electronic computer that filled a room and weighed 30 tons, started to work.
    Year 1948: John von Neumann introduced the idea of a self-replicating program.
    Year 1950: Alan Turing, a founder of computer science, introduced the concept of the Turing Test.
    Year 1951: The first artificial intelligence programs were written for the Mark 1 machine.
    Year 1956: The Logic Theorist (LT) program for solving mathematical problems was introduced by Newell, Shaw and Simon. It is regarded as the first artificial intelligence system.
    Late 1950s - early 1960s: A semantic network for machine translation was developed by Margaret Masterman and colleagues.
    Year 1958: John McCarthy of MIT created the LISP (LISt Processing) programming language.
    Year 1960: J.C.R. Licklider described the human-machine relationship in his work "Man-Computer Symbiosis".
    Year 1962: Unimation was established as the first company to produce robots for industry.
    Year 1965: The artificial intelligence program ELIZA was written.
    Year 1966: The first mobile robot, "Shakey", was produced at the Stanford Research Institute.
    Year 1973: DARPA began development of the protocols now called TCP/IP.
    Year 1974: The Internet began to be used for the first time.
    Year 1978: Herbert Simon earned a Nobel Prize for his theory of bounded rationality, an important contribution to Artificial Intelligence.
    Year 1981: IBM produced its first personal computer.
    Year 1993: Production of Cog, a humanoid robot, began at MIT.
    Year 1997: The supercomputer Deep Blue defeated the world-famous chess player Garry Kasparov.
    Year 1998: Furby, the first artificial intelligence toy, was brought to market.
    Year 2000: The robot Kismet, which can use gestures and facial expressions in communication, was introduced.
    Year 2005: ASIMO, the robot closest to human ability and skill, was introduced.
    Year 2010: ASIMO was made to act using brain signals ("mind power").
    Year 2011: IBM’s Watson won Jeopardy!, a quiz show in which it had to answer complex questions and riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
    Year 2012: Google launched the Android feature "Google Now", which could provide information to the user as a prediction.
    Year 2014: The chatbot "Eugene Goostman" won a competition based on the famous "Turing test".
    Year 2018: IBM’s "Project Debater" debated complex topics with two master debaters and performed remarkably well.

    Year 2019-2024: Artificial intelligence (AI) in the Asia Pacific region’s energy market is projected to grow at a compound annual growth rate (CAGR) of 25.6 percent, and the global AI-in-energy market is expected to reach 7.78 billion U.S. dollars.

    Conclusion

    AI is now being widely used in all sectors and has developed to a remarkable level. Concepts such as deep learning, big data and data science are trending like never before. Companies like Google, Facebook, IBM and Amazon are working with AI and creating impressive products. The future of artificial intelligence is inspiring and will bring even higher levels of intelligence. One of the most advanced chat systems today is GPT-4, which is already being used across a wide range of applications.
