
Artificial Intelligence: History, Trends, and Outlook

“Intelligence” is generally defined as “the ability to learn or understand, or to deal with new or trying situations.”1 Intelligence is a trait normally associated with biological beings such as chimpanzees, dolphins, and, of course, humans, but recent scientific and engineering developments have enabled computers to exercise “artificial intelligence,” or AI: “the simulation of human intelligence processes [especially learning and adaptive behavior] by machines, especially computer systems.”2

Artificial intelligence is powering a wide variety of business and consumer applications, such as sifting through mountains of Big Data to extract precious business intelligence or permitting a vehicle to drive itself.

Machine Learning

The most prominent of AI technologies is “machine learning” (ML), which enables a system to enhance its awareness and capabilities – that is, to learn – without being explicitly programmed to do so.

In some cases, ML systems learn by studying information contained in data warehouses. In other cases, ML systems learn by conducting thousands of data simulations, detecting patterns, and drawing inferences.

ML systems don’t deduce the truth as humans do; rather they forecast the truth based on available data. As analyst Nick Heath writes, “At a very high level, machine learning is the process of teaching a computer system how to make accurate predictions when fed data. Those predictions could be:

  • “Answering whether a piece of fruit in a photo is a banana or an apple,
  • “Spotting people crossing the road in front of a self-driving car,
  • “[Determining] whether the use of the word book in a sentence relates to a paperback or a hotel reservation,
  • “[Deciding] whether an e-mail is spam, or
  • “Recognizing speech accurately enough to generate captions for a YouTube video.”3
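To make the prediction idea concrete, the following is a minimal sketch of the spam example above, written in Python and assuming the scikit-learn library; the toy messages and labels are invented for illustration.

# Minimal "learning from data" sketch: a toy spam classifier (scikit-learn assumed).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: e-mail snippets labeled "spam" or "ham" (not spam).
texts = [
    "Win a free prize now", "Lowest price guaranteed, click here",
    "Meeting moved to 3pm", "Please review the attached report",
]
labels = ["spam", "spam", "ham", "ham"]

# Convert text to word counts, then fit a Naive Bayes classifier on the examples.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# The trained system now predicts labels for messages it has never seen.
print(model.predict(["Click here to claim your free prize"]))  # likely ['spam']
print(model.predict(["Can we review the report at 3pm?"]))     # likely ['ham']

In a production setting, such a model would be trained on many thousands of labeled messages and validated on held-out data before deployment.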

The Promise of AI

Dr. Jim Hendler, director of the Rensselaer Institute for Data Exploration and Application (IDEA), contends that the scale of information growth – and the pace of information change – make it impossible for humans to handle Big Data without intelligent computers. “The natural convergence of AI and Big Data is a crucial emerging technology space. Increasingly, big businesses will need the AI technology to overcome the challenges or handle the speed with which information is changing in the current business environment.”4

For another perspective, analysts Erik Brynjolfsson and Andrew McAfee assert that artificial intelligence is “the most important general-purpose technology of our era,” as potentially influential as the invention of the internal combustion engine, which “gave rise to cars, trucks, airplanes, chain saws, and lawnmowers, along with big-box retailers, shopping centers, cross-docking warehouses, new supply chains, and, when you think about it, suburbs. Companies as diverse as Walmart, UPS, and Uber found ways to leverage the technology to create profitable new business models.”5

The Peril of AI

Much like genetic engineering, many of the ways in which artificial intelligence will be used – and potentially misused – are presently unknown. That uncertainty is unsettling to many observers, particularly government officials, who are already weighing ways to govern future AI development. Some prominent figures, including the late physicist and cosmologist Stephen Hawking and Elon Musk (the futurist behind PayPal, Tesla, and SpaceX), have contended that AI presents an existential threat – a bits-and-bytes version of the asteroid that killed the dinosaurs 65 million years ago.

Intelligence? – Well, Not Really

It should be observed that the term artificial intelligence is misleading, since to many people intelligence connotes consciousness, or awareness of one’s surroundings. An AI program is not alive (as biological beings are), and it is not conscious. A chess-playing AI program, for example, does not know what chess is. As with all AI applications, the program is ingesting massive amounts of information, making millions of rules-based calculations, forming predictions (in this case, about its opponent’s strategy), and selecting moves designed to counter that strategy, i.e., to win the game. None of this is performed by the AI program with any appreciation of the circumstances of the game, or the nature or significance of its programmed achievements.   

Evolution & Revolution

“The main opportunities of artificial intelligence lie in its ability to:

“Reveal better ways of doing things through advanced probabilistic analysis of outcomes.

“Interact directly with systems that take actions, enabling the removal of human-intensive calculations and integration steps.”

– Gartner6

A Brief History of AI

While interest in artificial intelligence can be traced back to Homer, who wrote of mechanical “tripods” waiting on the gods at dinner, only in the last half century have computer systems and programs achieved a level of sophistication sufficient to allow actual AI development.7

After a slow and deliberate start, the evolution of AI has been accelerating. The following are among the more prominent technological highlights.

In 1950, Alan Turing authored a seminal paper arguing for the possibility of programming a computer to behave intelligently and describing a landmark imitation game we now know as the “Turing Test.”8

In 1955, the term “artificial intelligence” was coined by John McCarthy, a math professor at Dartmouth.

In the 1960s, major academic laboratories were formed at the Massachusetts Institute of Technology (MIT) and Carnegie Mellon University (CMU) [then Carnegie Tech working with the Rand Corporation]. At the same time, the Association for Computing Machinery’s Special Interest Group on Artificial Intelligence (ACM SIGART) established a forum for people in disparate disciplines to share ideas about AI.9

The 1970s saw the development of “expert systems,” computer systems that simulated human decision-making through a strictly prescribed rule set.
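To illustrate the rule-based approach, here is a toy “expert system” sketched in Python; the rules and facts are invented for illustration and are far simpler than anything deployed in that era.

# Toy forward-chaining expert system: conclusions follow strictly from a prescribed rule set.
# Rules map a set of required facts to a conclusion (all content here is illustrative).
rules = [
    ({"fever", "cough"}, "suspect flu"),
    ({"suspect flu", "short of breath"}, "recommend chest exam"),
]

def infer(facts):
    """Repeatedly apply rules until no new conclusions can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer({"fever", "cough", "short of breath"}))
# Derives both "suspect flu" and "recommend chest exam" from the two rules.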

In a 1997 man-machine chess match, IBM’s “Deep Blue” program defeated world champion Garry Kasparov, a turning point in the evolution of AI, at least from the public’s perspective.

In 2011, in a double-down event for the AI industry, IBM’s Watson defeated a group of human champions in the game show “Jeopardy!”.

In 2011, Apple released Siri, a virtual assistant that uses a natural language interface to answer questions and perform tasks for its human owner.10

In 2012, Google researchers Jeff Dean and Andrew Ng trained a neural network running on 16,000 processors to recognize images of cats by showing it 10 million unlabeled images drawn from YouTube videos.11

From 2015 to 2017, Google DeepMind’s AlphaGo, a computer program that plays the board game Go, defeated a number of human champions.12

In 2017, the Facebook Artificial Intelligence Research lab trained two chatbots to communicate with each other in order to learn negotiating skills. Remarkably, during that process the two bots invented their own language.13

In 2020, Baidu released the LinearFold AI algorithm to medical teams developing a COVID-19 vaccine. “The algorithm can predict the RNA sequence of the virus in only 27 seconds, which [was] 120 times faster than other methods.”14

In 2022, OpenAI released ChatGPT, heralding the arrival of “generative AI,” a new brand of artificial intelligence capable of generating unique business, literary, and artistic content – brand-new digital images, video, audio, text, even computer code – in response to user requests.

AI Is Everywhere

AI is revolutionizing virtually every industry. Some of the more interesting and broadly representative AI use cases include:

E-Commerce – Protecting customers from credit card and other fraud.

Robotics – Enabling robots to “do the heavy lifting” in hospitals, factories, and warehouses.

Healthcare – Detecting diseases and analyzing chronic conditions.

Automotive – Permitting cars and trucks to drive themselves.

Marketing – Delivering personalized ads to consumers.

Communications – Conversing 24/7 with customers (and prospective customers) via chatbots.

Astronomy – Discovering exoplanets, or planets orbiting distant stars.

Cybersecurity – Defending against malware attacks, including ransomware viruses.

Transportation – Finding the most efficient route, minimizing fuel consumption and expediting shipments.

Manufacturing – Locating defects during, rather than after, production.15

AI in 2024

Analyst Esther Shein reports that a global survey of CIOs, CTOs, and other tech leaders reveals that the “top potential applications for AI” in 2024 are:

  • “Real-time cybersecurity vulnerability identification and attack prevention (54 percent).
  • “Increasing supply chain and warehouse automation efficiencies (42 percent).
  • “Aiding and accelerating software development (38 percent).
  • “Automating customer service (35 percent).
  • “Speeding up candidate screening, recruiting and hiring time (34 percent).
  • “Accelerating disease mapping and drug discovery (32 percent).
  • “Automating and stabilizing utility power sources (31 percent).”16

The AI Outlook

AI and Jobs

A Pew Research report, “AI, Robotics, and the Future of Jobs” published in August 2014, explored the impact of AI and robotics on job creation and retention. Based on the responses of nearly 1,900 experts, the report found reasons to be hopeful and reasons to be concerned. The “concerning” aspects of AI, still largely relevant today, are as follows:

  • “Impacts from automation have thus far impacted mostly blue-collar employment; the coming wave of innovation threatens to upend white-collar work as well.
  • “Certain highly skilled workers will succeed wildly in this new environment – but far more may be displaced into lower paying service industry jobs at best, or permanent unemployment at worst.
  • “Our educational system is not adequately preparing us for work of the future, and our political and economic institutions are poorly equipped to handle these hard choices.”17

Edge AI

In the view of many experts, the future of artificial intelligence – and, indeed, of edge computing – is “edge AI,” in which machine learning algorithms process data generated by edge devices locally. To borrow a communications concept, local processing removes the latency inherent in remote processing, in which data is collected by an edge device, transmitted to another device or to the cloud for processing, and then returned to the originating device, where it is used to facilitate an action.

In edge AI, this data diversion from and to the edge device is eliminated by utilizing AI algorithms incorporated within the edge device or the edge system to process the data directly. This removes the “middleman” element.
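A minimal sketch of that arrangement follows; the sensor object and the on-device model are hypothetical placeholders standing in for real device APIs and an embedded (for example, quantized) model.

# Edge AI control loop: collect, analyze, and act entirely on the device.
# `sensor` and `local_model` are hypothetical placeholders, not real APIs.
import time

def edge_control_loop(sensor, local_model, interval_s=0.01):
    """Run inference locally; no data leaves the device and no cloud round trip occurs."""
    while True:
        reading = sensor.read()                       # 1. collect data locally
        decision = local_model.predict([reading])[0]  # 2. analyze locally
        sensor.actuate(decision)                      # 3. act locally, immediately
        time.sleep(interval_s)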

By combining data collection with smart data analysis and smart data action, edge AI:

  • Expedites critical operations, especially where speed is essential (autonomous vehicle operation is a classic example); and
  • Eliminates multiple points of failure, since everything occurs locally, or at the edge.

In many respects, edge AI is the technology (or class of technologies) that will ultimately enable enterprises to exploit the new class of intelligent resources represented by the Internet of Things.

Natural Language Generation

One of the transformative achievements of artificial intelligence research, natural language generation (NLG), also known as automated narrative generation (ANG), converts enterprise data into narrative reports by recognizing and extracting key insights contained within the data and translating those findings into plain English (or another language). Some articles published by respected news outlets like the Associated Press, for example, are already “penned” by computers. Some analysts have even contended – perhaps over-optimistically – that as much as 90 percent of news could be algorithmically generated by the mid-2020s, much of it without human intervention.

Natural language generation is commonly used to facilitate the following:

  • Narrative Generation – NLG can convert charts, graphs, and spreadsheets into clear, concise text.
  • Chatbot Communications – NLG can craft context-specific responses to user queries.
  • Industrial Support – NLG can give voice to Internet of Things (IoT) sensors, improving equipment performance and maintenance.
  • Language Translation – NLG can transform one natural language into another.
  • Speech Transcription – First, speech recognition is employed to understand an audio feed; second, NLG turns the speech into text.
  • Content Customization – NLG can create marketing and other communications tailored to a specific group or even individual.
  • Robot Journalism – NLG can write routine news stories, like sporting event wrap-ups, or financial earnings summaries.
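As a simple illustration of the narrative generation item above, the sketch below turns a small set of quarterly sales figures into a short narrative; the data and phrasing templates are invented for illustration.

# Template-based narrative generation: structured data in, plain-English summary out.
# The sales figures are invented for illustration.
quarterly_sales = {"Q1": 1.20, "Q2": 1.32, "Q3": 1.10}  # revenue in $ millions

def narrate(sales):
    quarters = list(sales)
    sentences = [f"Revenue in {quarters[0]} was ${sales[quarters[0]]:.2f}M."]
    for prev, curr in zip(quarters, quarters[1:]):
        change = (sales[curr] - sales[prev]) / sales[prev] * 100
        direction = "rose" if change > 0 else "fell"
        sentences.append(
            f"In {curr}, revenue {direction} {abs(change):.0f}% to ${sales[curr]:.2f}M."
        )
    return " ".join(sentences)

print(narrate(quarterly_sales))
# Revenue in Q1 was $1.20M. In Q2, revenue rose 10% to $1.32M. In Q3, revenue fell 17% to $1.10M.

Commercial NLG systems add richer language handling, variation, and insight selection, but this data-to-text mapping conveys the basic idea.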

The Internet of Things

The age of the Internet of Things (IoT), in which every machine or device is “IP addressable” and, therefore, capable of being connected with another machine or device, is rapidly approaching. Since the IoT will produce Big Data, often billions of data elements, AI will be needed to “boil” that data down into meaningful and actionable intelligence.

As analyst Brian Buntz reports, “According to Richard Soley, executive director of the Industrial Internet Consortium, ‘Anything that’s generating large amounts of data is going to use AI because that’s the only way that you can possibly do it.’”18
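One common way AI “boils down” IoT data is automated anomaly detection. The sketch below applies scikit-learn’s IsolationForest to simulated temperature readings; the data, sensor values, and thresholds are invented for illustration, and many other techniques would serve equally well.

# Flagging anomalous IoT sensor readings so humans see only the actionable few.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=70.0, scale=1.5, size=(500, 1))  # typical readings (degrees F)
faults = np.array([[95.0], [30.0]])                      # injected faulty readings
readings = np.vstack([normal, faults])

detector = IsolationForest(contamination=0.01, random_state=0).fit(readings)
flags = detector.predict(readings)  # +1 = normal, -1 = anomaly

print("Flagged readings:", readings[flags == -1].ravel())
# The injected 95.0 and 30.0 values should be among those flagged.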

Analyst Iman Ghosh observes that the union of AI and IoT, or AIoT, is already affecting four major technology segments:

  • Wearables, which “continuously monitor and track user preferences and habits,” particularly impactful in the healthcare sector.
  • Smart Homes, which “[learn] a homeowner’s habits and [develop] automated support.”
  • Smart Cities, where “the practical applications of AI in traffic control are already becoming clear.”
  • Smart Industry, where “from real-time data analytics to supply-chain sensors, smart devices help prevent costly errors.”19

Generative AI

A new brand of artificial intelligence (or, perhaps more precisely, machine learning), “generative AI” (GAI) refers to programs that can generate unique business, literary, and artistic content, literally creating brand new digital images, video, audio, text, and even computer code.
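For readers who have not used one of these systems programmatically, the sketch below shows what a generative AI request can look like in code. It assumes the openai Python package (version 1.x) and an API key in the environment; the model name is illustrative, and other providers expose broadly similar interfaces.

# Minimal sketch of a generative AI request (assumes the `openai` package, v1.x,
# and OPENAI_API_KEY set in the environment; the model name is illustrative).
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user",
         "content": "Draft a three-sentence product description for a solar-powered lantern."},
    ],
)

print(response.choices[0].message.content)  # newly generated marketing copy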

Generative AI in general – and its principal exemplar, ChatGPT, in particular – has come under intense scrutiny owing to GAI’s ability to disrupt or even displace the human element in the creative arts, robbing writers, artists, programmers, and other “creators” of their livelihood and denying audiences the opportunity to experience human-generated content.

In addition to their adverse effect on employment, generative AI programs have been charged with:

  • Copyright infringement, owing to their use of proprietary materials for training purposes;
  • A propensity to lie or suffer “hallucinations”; and
  • A general disregard for personal privacy.

AI As an Existential Threat

Even if artificial intelligence systems never achieve a sentient or self-aware state, many well-respected observers believe they pose an existential threat.

Nick Bostrom, author of the book Superintelligence, suggests that while self-replicating nanobots (or microscopic robots) could be arrayed to fight disease or consume dangerous radioactive material, a “person of malicious intent in possession of this technology might cause the extinction of intelligent life on Earth.”20

The critics, including Elon Musk and the late Stephen Hawking, allege two main problems with artificial intelligence:

  • First, we are starting to create machines that think like humans but have no morality to guide their actions.
  • Second, in the future, these intelligent machines will be able to procreate, producing even smarter machines – a process often referred to as “superintelligence.” Colonies of smart machines could grow at an exponential rate – a phenomenon against which mere people could not erect sufficient safeguards.21

“We humans steer the future not because we’re the strongest beings on the planet, or the fastest, but because we are the smartest,” said James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era. “So when there is something smarter than us on the planet, it will rule over us on the planet.”22

Controlling AI

In light of the inherently unpredictable nature of artificial intelligence, the political establishment is declaring its intention to both promote and restrict AI according to the technology’s positive and negative effects.

For example, on October 30, 2023, President Biden issued an Executive Order (EO) “to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI).” The Executive Order:

  • “Establishes new standards for AI safety and security,
  • “Protects Americans’ privacy,
  • “Advances equity and civil rights,
  • “Stands up for consumers and workers,
  • “Promotes innovation and competition,
  • “Advances American leadership around the world, and more.”23

Commenting on the enterprise effect of this EO, the attorneys at Morgan, Lewis & Bockius LLP advise that:

“Lawyers and in-house legal departments will need to stay up to date on new regulations and guidance related to AI. The order directs various agencies to issue regulations, guidance, and reports on topics like non-discrimination, privacy, cybersecurity, and IP for AI systems. Lawyers, especially those working in related areas like civil rights, employment, privacy, cybersecurity, and IP, will need to monitor these developments and understand how they impact their practice areas.

“Corporate leadership and their counsel will need to develop expertise in AI as it features more prominently in litigation. As AI becomes more widespread, it is likely to appear more regularly as a supporting tool to enhance the litigation process and to ensure that foot faults do not occur through the possible misuse of AI in the process. Lawyers will increasingly need AI literacy and expertise to effectively argue cases involving algorithmic decision-making and weigh in on issues such as liability for AI systems. Developing expertise in AI safety, privacy, and security will become more important.”24

The Master Algorithm

If artificial intelligence initiatives are aimed at creating machines that think like human beings – only better – then the ultimate goal of AI is to produce what analyst Simon Worrall calls the “Master Algorithm.” “The Master Algorithm is an algorithm that can learn anything from data. Give it data about planetary motions and inclined planes, and it discovers Newton’s law of gravity. Give it DNA crystallography data and it discovers the Double Helix. Give it a vast database of cancer patient records and it learns to diagnose and cure cancer. My gut feeling is that it will happen in our lifetime.”25

Recommendations

AI Cannot Be Ignored

Given the potential benefits of artificial intelligence, enterprise leaders must prioritize the use of AI. In fact, an enterprise executive who fails to avail herself of the opportunities afforded by AI may be judged by her board to have committed malpractice.

Expect the Unexpected

Predicting the future is always problematic, but particularly so when the object of the predictions has seemingly unlimited potential, as with artificial intelligence. Analyst Ron Schmelzer previews the dilemma by reminding us how little we knew – or imagined – about the possibilities surrounding portable phones when they were introduced. “In the 1980s the emergence of portable phones made it pretty obvious that they would allow us to make phone calls wherever we are, but who could have predicted the use of mobile phones as portable computing gadgets with apps, access to worldwide information, cameras, GPS, and the wide range of things we now take for granted as mobile, ubiquitous computing. Likewise, the future world of AI will most likely have much greater impact in a much different way than what we might be assuming today.”26

Devise an AI Plan

Analyst Kristian Hammond offers a variety of suggestions for managing artificial intelligence in the enterprise. These include:

  • “Remember that your goal is to solve real business problems, not simply to put in place an ‘AI strategy’ or ‘cognitive computing’ work plan. Your starting point should be focused on business goals related to core functional tasks.
  • “Focus on your data. You may very well have the data to support inference or prediction, but no system can think beyond the data that you give it.
  • “Know how your systems work, at least at the level of the core intuition behind a system.
  • “Understand how a system will fit into your workflow and who will use it. Determine who will configure it and who will work with the output.
  • “Remain mindful that you are entering into a human computer partnership. AI systems are not perfect and you need to be flexible. You also need to be able to question your system’s answers.”27

AI pioneer Andrew Ng advises enterprise officials to keep it simple. “The only thing better than a huge long-term [AI] opportunity is a huge short-term opportunity, and we have a lot of those now.”28

Invest in Narrow AI

President Obama’s National Science and Technology Council (NSTC) Committee on Technology drew a sharp distinction between so-called “Narrow AI” and “General AI.”

Remarkable progress has been made on Narrow AI, which addresses specific application areas such as playing strategic games, language translation, self-driving vehicles, and image recognition. Narrow AI underpins many commercial services such as trip planning, shopper recommendation systems, and ad targeting, and is finding important applications in medical diagnosis, education, and scientific research.

General AI (sometimes called Artificial General Intelligence, or AGI) refers to a future AI system that exhibits apparently intelligent behavior at least as advanced as a person across the full range of cognitive tasks. A broad chasm seems to separate today’s Narrow AI from the much more difficult challenge of General AI. Attempts to reach General AI by expanding Narrow AI solutions have made little headway over many decades of research. The current consensus of the private-sector expert community, with which the NSTC Committee on Technology concurs, is that General AI will not be achieved for at least decades.29

From an enterprise perspective, continuing to invest in Narrow AI – over the much more speculative General AI – seems the prudent course.

Protect Human Resources

Advances in information technology and robotics are transforming the workplace. Productivity gains are enabling enterprises to perform more work with fewer workers. Artificial intelligence programs – including AI-enhanced robots – will only accelerate this trend. The enterprise Human Resources department should develop a strategy for retraining – or otherwise assisting – workers displaced due to AI.

Beware of Legal Liabilities

Where there is new technology, there is litigation. Legal Tech News emphasizes “the importance of not waiting until an incident occurs to address AI risks. When incidents do occur, for example, it’s not simply the incidents that regulators or plaintiffs scrutinize, it’s the entire system in which the incident took place. That means that reasonable practices for security, privacy, auditing, documentation, testing and more all have key roles to play in mitigating the dangers of AI. Once the incident occurs, it’s frequently too late to avoid the most serious harms.”30

Insist on Explainable AI

Any process, whether digital or physical, whether performed by a human or by artificial intelligence, must be trusted. AI applications often rely on derived logic, which may be difficult for humans to discern and understand. To gain essential trust in AI operations, those operations must be explainable. To that end, the US National Institute of Standards and Technology (NIST) has developed four principles of explainable artificial intelligence – principles to which AI systems should adhere.

Principle 1. Explanation: Systems deliver accompanying evidence or reason(s) for all outputs.

The Explanation principle obligates AI systems to supply evidence, support, or reasoning for each output.

Principle 2. Meaningful: Systems provide explanations that are understandable to individual users.

A system fulfills the Meaningful principle if the recipient understands the system’s explanations.

Principle 3. Explanation Accuracy: The explanation correctly reflects the system’s process for generating the output.

Together, the Explanation and Meaningful principles only call for a system to produce explanations that are meaningful to a user community. These two principles do not require that a system delivers an explanation that correctly reflects a system’s process for generating its output. The Explanation Accuracy principle imposes accuracy on a system’s explanations.

Principle 4. Knowledge Limits: The system only operates under conditions for which it was designed, or when the system reaches sufficient confidence in its output.

The previous principles implicitly assume that a system is operating within its knowledge limits. The Knowledge Limits principle states that systems should identify the cases for which they were not designed or approved to operate, or for which their answers are not reliable. By identifying and declaring knowledge limits, this practice safeguards answers so that a judgment is not provided when it may be inappropriate to do so. The Knowledge Limits principle can increase trust in a system by preventing misleading, dangerous, or unjust decisions or outputs.31
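The sketch below shows one way Principles 1 and 4 can surface in practice: a logistic regression model reports per-feature contributions as its evidence and abstains when its confidence falls below a floor. The data, feature names, and threshold are invented for illustration, assuming scikit-learn.

# Explainability sketch: every output carries evidence (Principle 1) and the
# system abstains outside its confidence limits (Principle 4). Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))                  # three made-up features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic ground truth

model = LogisticRegression().fit(X, y)
feature_names = ["income", "tenure", "utilization"]  # illustrative labels

def explain_prediction(x, confidence_floor=0.7):
    proba = model.predict_proba([x])[0]
    confidence = float(proba.max())
    if confidence < confidence_floor:          # Knowledge Limits: decline to answer
        return {"decision": "abstain", "confidence": round(confidence, 2)}
    contributions = dict(zip(feature_names, (model.coef_[0] * x).round(2).tolist()))
    return {
        "decision": int(proba.argmax()),
        "confidence": round(confidence, 2),
        "evidence": contributions,             # Explanation: per-feature contributions
    }

print(explain_prediction(np.array([2.0, 1.0, 0.0])))  # far from the boundary: confident, with evidence
print(explain_prediction(np.zeros(3)))                # near the boundary: likely abstains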


References

1 Webster’s Dictionary.

2 TechTarget.

3 Nick Heath. “What Is Machine Learning? Everything You Need to Know.” ZDNet. May 14, 2018.

4 Sharon Fisher. “How Artificial Intelligence Could Change Your Business.” Forbes. June 30, 2014.

5 Erik Brynjolfsson and Andrew McAfee. “The Business of Artificial Intelligence.” Harvard Business Review. July 2017.

6 “What Is Artificial Intelligence?” Gartner, Inc. 2022.

7-9 Bruce G. Buchanan. “A (Very) Brief History of Artificial Intelligence.” American Association for Artificial Intelligence. Winter 2005.

10-13 Rebecca Reynoso. “A Complete History of Artificial Intelligence.” G2.com. March 1, 2019.

14 Karin Kelley. “What is Artificial Intelligence: Types, History, and Future.” Simplilearn Solutions. October 28, 2022.

15 Avijeet Biswal. “AI Applications: Top 18 Artificial Intelligence Applications in 2024.” Simplilearn Solutions. October 26, 2023.

16 Esther Shein. “AI Ranked the Most Important Technology in 2024, IEEE Survey Finds.” TechnologyAdvice. November 3, 2023.

17 Irving Wladawsky-Berger. “Assessing the Future Impact of AI, Robotics on Job Creation.” Wall Street Journal. September 5, 2014. 

18 Brian Buntz. “A Guide to Using AI for IoT Applications.” Informa PLC. March 12, 2020.

19 Iman Ghosh. “AIoT: When Artificial Intelligence Meets the Internet of Things.” Visual Capitalist. August 12, 2020.

20-22 Nick Bilton. “Artificial Intelligence As a Threat.” The New York Times. November 5, 2014.

23 “Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.” The White House. October 30, 2023.

24 Dion M. Bregman, Nicholas M. Gess, Giovanna M. Cinelli, W. Barron A. Avery, and Eric S. Bord. “Biden Issues Sweeping Executive Order Presenting Opportunity and Risk for AI.” Morgan, Lewis & Bockius LLP. November 1, 2023.

25 Simon Worrall. “How Artificial Intelligence Will Revolutionize Our Lives.” National Geographic. October 7, 2015.

26 Ron Schmelzer. “The AI-Enabled Future.” Forbes. October 17, 2019.

27 Kristian Hammond, PhD. “Practical Artificial Intelligence for Dummies, Narrative Science Edition.” Narrative Science. 2015:40-1.

28 George Lawton. “Andrew Ng’s AI playbook for the Enterprise: 6 Must-Dos.” TechTarget. May 20, 2019.

29 “Preparing for the Future of Artificial Intelligence.” Executive Office of the President: National Science and Technology Council Committee on Technology. October 2016:7.

30 “The Liabilities of Artificial Intelligence Are Increasing.” Legal Tech News. 2020:3-4.

31 P. Jonathon Phillips, Carina A. Hahn, Peter C. Fontana, David A. Broniatowski, and Mark A. Przybocki. Draft NISTIR 8312 “Four Principles of Explainable Artificial Intelligence.” US National Institute of Standards and Technology. August 2020:2-4.
