
What is Machine Learning: Market, Trends and Applications

Executive Summary

Machine learning (ML) is an artificial intelligence (AI) technology that enables a system to enhance its awareness and capabilities – that is, to learn – without being explicitly programmed to do so.

In some cases, ML systems learn by studying information contained in data warehouses. In other cases, ML systems learn by conducting thousands of data simulations, detecting patterns, and drawing inferences.

ML systems don’t deduce the truth as humans do; rather, they forecast a perceived truth based on available data. As analyst Nick Heath observes, “At a very high level, machine learning is the process of teaching a computer system how to make accurate predictions when fed data. Those predictions could be:

  • “Answering whether a piece of fruit in a photo is a banana or an apple,
  • “Spotting people crossing the road in front of a self-driving car,
  • “[Determining] whether the use of the word ‘book’ in a sentence relates to a paperback or a hotel reservation,
  • “[Deciding] whether an e-mail is spam, or
  • “Recognizing speech accurately enough to generate captions for a YouTube video.”1

The Origins of ML

The term “machine learning” was first defined by Arthur Samuel in 1959 as “the ability to learn without being explicitly programmed.”2 The concept gained traction in the 1990s when advances in storage and processing technology (specifically, massively parallel processing) enabled enterprises to create and curate enormous data warehouses where financial and customer transactions, as well as other information, could be stored and analyzed in an effort to identify and exploit previously unknown competitive advantages. In this context, machine learning is simply the latest technological tool for performing large-volume data analysis – both for structured and, increasingly, unstructured data.

ML and the Enterprise

The application of machine learning to enterprise operations is expected to expand, even accelerate, owing to:

  1. The exponential growth in Big Data; in particular, data produced by Internet of Things (IoT) platforms, systems, applications, and sensors.
  2. The increasing generation of “synthetic” data through data extrapolation and simulation.
  3. The steady advancements in machine learning algorithms, making machines smarter.
  4. The collapsing costs of storage infrastructure, making ML affordable.
  5. The transformative effects of machine learning on business processes, helping enterprises realize the goals of 90s-era business process reengineering theory.
  6. The influence of machine learning on robotics and other allied AI fields.
  7. The ability to displace expensive blue- and white-collar personnel, as executives recognize that technology, not globalization, is the real engine of enterprise cost-cutting and profitability.
  8. The overwhelming success of ML-powered chatbots, notably OpenAI’s ChatGPT.

The Machine Learning Market

Grand View Research predicts that the global machine learning market, valued at approximately $52.02 billion in 2023, will reach $419.94 billion in 2030, realizing a staggering compound annual growth rate (CAGR) of 34.8 percent during the 2023-2030 forecast period.
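As a quick sanity check, the implied growth rate can be recomputed from the report’s own endpoints (a minimal calculation; the only inputs are the figures cited above):

    # Implied CAGR from the Grand View Research endpoints (2023 -> 2030, 7 years)
    start, end, years = 52.02, 419.94, 7
    cagr = (end / start) ** (1 / years) - 1
    print(f"{cagr:.1%}")  # prints 34.8%, consistent with the reported rate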

A principal market driver is the technology’s potential to improve the human condition. “Machine learning is transforming healthcare by aiding in medical diagnostics. For instance, Google’s DeepMind division created an algorithm that can recognize retinal pictures of eye conditions like diabetic retinopathy. Early detection, prompt treatment, and a reduction in the workload for medical staff are all made possible by this technology. In addition, personalized medicine, disease outbreak prediction, and medication development also involve machine learning.”3

Description

A computer program is said to learn from experience E with respect to some task T and performance measure P if its performance on T, as measured by P, improves with experience E. This widely cited formulation is attributed to computer scientist Tom Mitchell.

Machine learning is an area of keen interest to both the private and public sectors. A report from President Obama’s Executive Office of the President (National Science and Technology Council, Committee on Technology) described machine learning as “a statistical process that starts with a body of data and tries to derive a rule or procedure that explains the data or can predict future data.

“This approach – learning from data – contrasts with the older ‘expert system’ approach to AI, in which programmers sit down with human domain experts to learn the rules and criteria used to make decisions, and translate those rules into software code. An expert system aims to emulate the principles used by human experts, whereas machine learning relies on statistical methods to find a decision procedure that works well in practice.”4

As described by analyst Singh Niven, “Developing machine learning applications is different than developing standard applications. Instead of writing code that solves a specific problem, machine learning developers create algorithms that are able to take in data and then build their own logic based on that data.”5
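That contrast can be sketched in a few lines of Python; the spam-filtering scenario, the hand-coded rule, and the toy data below are illustrative assumptions, not drawn from the source:

    # Hand-coded logic vs. logic learned from data (illustrative only).
    from sklearn.tree import DecisionTreeClassifier

    # Standard approach: the developer writes the decision logic explicitly.
    def rule_based_spam_filter(num_links: int, has_unsubscribe: bool) -> bool:
        return num_links > 5 and not has_unsubscribe  # hypothetical rule

    # Machine learning approach: the algorithm derives its own logic from examples.
    X = [[1, 1], [8, 0], [12, 0], [2, 1], [9, 1], [0, 1]]  # [num_links, has_unsubscribe]
    y = [0, 1, 1, 0, 1, 0]                                 # 1 = spam (hypothetical labels)

    model = DecisionTreeClassifier().fit(X, y)
    print(model.predict([[10, 0]]))  # the learned logic, not hand-written code, decides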

ML Algorithms

Machine learning methods rely on a wide variety of complex algorithms, including:

  • Decision trees
  • Ordinary least squares regression
  • Clustering

Clustering, for example, is the process of grouping a set of objects such that objects in the same group (or cluster) are more similar to each other than to objects in other groups. Perhaps most familiar to consumers, ML algorithms allow Netflix to make movie recommendations based on recently-viewed films and permit Amazon to predict what new books a person might prefer based on past purchases.6
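A minimal clustering sketch, using scikit-learn and synthetic points (the data and the choice of two clusters are assumptions for illustration):

    # Group unlabeled points so that similar items end up in the same cluster.
    import numpy as np
    from sklearn.cluster import KMeans

    points = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.1],    # one natural group
                       [8.0, 8.2], [7.8, 8.1], [8.3, 7.9]])   # another natural group

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
    print(kmeans.labels_)           # cluster assignment for each point
    print(kmeans.cluster_centers_)  # the "center" of each group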

While the algorithmic approach to machine learning is generally productive, analyst Daniel Faggella reports that “researchers have found that some of the most interesting questions arise out of none of the available machine learning algorithms performing to par.” Although this phenomenon is most often attributed to bad training data, it also occurs when working with new domains.7

Conversational Commerce

One prominent product made possible by machine learning is the “intelligent personal assistant” (IPA) – also known as a “smart personal assistant,” “intelligent virtual assistant,” “virtual digital assistant,” “virtual assistant,” or “chatbot.” The IPA is essentially a software program that helps people complete basic tasks. Typically, an IPA will answer questions and perform actions based on voice commands and location awareness. Popular models include Apple’s Siri and Amazon’s Alexa. Because IPAs respond instantly to user requests – which include searching, purchasing, controlling connected devices, and “facilitating professional tasks and interactions” – analyst Joanna Goodman sees them ushering in an era of “conversational commerce,” offering:

  • Instant consumer gratification
  • Instant revenue for businesses

As employed by enterprise personnel, IPAs provide instant responses to queries, thus improving productivity and job satisfaction.8 Today’s most popular IPA – and a phenomenon since its release in November 2022 – is OpenAI’s ChatGPT.

Deep Learning

As evaluated by President Obama’s Executive Office of the President: National Science and Technology Council Committee on Technology, “Some of the most impressive advancements in machine learning have been in the subfield of deep learning, also known as deep network learning.

“Deep learning uses structures loosely inspired by the human brain, consisting of a set of units (or ‘neurons’). Each unit combines a set of input values to produce an output value, which in turn is passed on to other neurons downstream. For example, in an image recognition application, a first layer of units might combine the raw data of the image to recognize simple patterns in the image; a second layer of units might combine the results of the first layer to recognize patterns-of-patterns; a third layer might combine the results of the second layer; and so on.”9

As illustrated in Figure 1, deep learning (DL) is a logical subset of machine learning (ML), with machine learning being a subset of artificial intelligence (AI).

Figure 1. Artificial Intelligence – Machine Learning – Deep Learning
Source: Wikimedia Commons
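The layered flow described above can be sketched in a few lines of NumPy: each layer combines its inputs into outputs that feed the next layer. The layer sizes and random weights here are purely illustrative, not a trained network:

    # A minimal feed-forward pass: units combine inputs and pass results downstream.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.random(16)  # stand-in for raw input (e.g., pixel values)

    def layer(inputs, n_units):
        weights = rng.standard_normal((n_units, inputs.size))
        return np.maximum(weights @ inputs, 0.0)  # combine inputs, apply ReLU

    h1 = layer(x, 8)    # first layer: simple patterns
    h2 = layer(h1, 4)   # second layer: patterns-of-patterns
    out = layer(h2, 2)  # final layer: e.g., scores for two classes
    print(out)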

According to analyst Karen Hao, both machine and deep learning methodologies come in three basic “flavors”:

  1. Supervised Learning – “The most prevalent [variety], the data is labeled to tell the machine exactly what patterns it should look for. Think of it as something like a sniffer dog that will hunt down targets once it knows the scent it’s after.”
  2. Unsupervised Learning – “The data has no labels. The machine just looks for whatever patterns it can find. This is like letting a dog smell tons of different objects and sorting them into groups with similar smells.” Unsupervised learning has “gained traction in cybersecurity.”
  3. Reinforcement – The newest methodology, “a reinforcement algorithm learns by trial and error to achieve a clear objective. It tries out lots of different things and is rewarded or penalized depending on whether its behaviors help or hinder it from reaching its objective. This is like giving and withholding treats when teaching a dog a new trick.”10

A fourth form of machine/deep learning is Semi-supervised Learning, in which algorithms train on small sets of labeled data and apply what they discern to unlabeled data. This approach is often invoked in the absence of quality labeled data.11
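The first two “flavors” can be contrasted in a short scikit-learn sketch; the feature vectors and labels below are synthetic assumptions for illustration only:

    # Supervised: labeled examples steer the model. Unsupervised: no labels, just structure.
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]  # synthetic feature vectors
    y = [0, 0, 1, 1]                                      # labels used only in the supervised case

    supervised = LogisticRegression().fit(X, y)                             # learns the labeled pattern
    unsupervised = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)   # finds groups on its own

    print(supervised.predict([[0.15, 0.15]]))
    print(unsupervised.labels_)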

Learning Techniques

Just as humans employ multiple learning techniques to increase their knowledge and inform their decision-making, machine learning practitioners have devised multiple, scenario-based approaches to artificial learning. As itemized by analyst Jason Brownlee, these techniques include:

Multi-Task Learning – “A type of supervised learning that involves fitting a model on one dataset that addresses multiple related problems.”

Active Learning – “A technique where the model is able to query a human user operator during the learning process in order to resolve ambiguity during the learning process.”

Online Learning – A technique which “involves using the data available and updating the model directly before a prediction is required, or after the last observation was made.”

Transfer Learning – “A type of learning where a model is first trained on one task, then some or all of the model is used as the starting point for a related task. It is different from multi-task learning as the tasks are learned sequentially … , whereas multi-task learning [is applied to multiple tasks] in parallel.”

Ensemble Learning – “An approach where two or more [models] are fit on the same data and the predictions from each model are combined.”12
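Ensemble learning, for example, can be sketched with scikit-learn’s voting wrapper, which fits several models on the same data and combines their predictions; the toy data and model choices are assumptions for illustration:

    # Fit two different models on the same data and combine their votes.
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    X = [[0, 0], [1, 1], [2, 2], [3, 3], [0, 1], [3, 2]]
    y = [0, 0, 1, 1, 0, 1]

    ensemble = VotingClassifier(
        estimators=[("lr", LogisticRegression()),
                    ("dt", DecisionTreeClassifier())],
        voting="hard",  # majority vote across the fitted models
    ).fit(X, y)

    print(ensemble.predict([[2, 3]]))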

These learning techniques support and reinforce common business values, such as:

  • Promoting resource “reusability” – Multi-Task Learning and Transfer Learning
  • Encouraging mentoring (in this case, human-to-machine) – Active Learning
  • Maintaining process relevance and viability (via timely business process reengineering) – Online Learning
  • Validating and refining operations – Ensemble Learning

Immature Learning

Some educators have observed that the process by which machines learn is somewhat similar to the process by which young children learn, with all the attendant problems.

As analyst Sandhya Mahadevan explains, what “children experience is not unlike that of machine learning, which uses a combination of supervised, unsupervised and reinforcement learning approaches. Unintended results range from mildly amusing to potentially harmful. For example, one of the more serious criticisms aimed at AI recently is that it has somehow ‘learned’ to stereotype and discriminate based on gender, race and socioeconomic status. Here, it is important to note that AI is a product of its training – and its training is imperfect. AI is trained on digital assets that reflect the same biases and blind spots that exist in human society. Some biases can be fixed within an iteration, while others will take generations.

“Early adopters of AI should not write off the technology for not being perfect, just like you would not completely dismiss children that make mistakes at, say, a piano recital or a spelling bee. AI may not feel the psychological effects of ridicule that children feel, but an overly derisive reaction to AI’s shortcomings will dampen adoption and development, precisely the things it needs to improve its accuracy and breadth of knowledge.”13

Applications

As shown in Figure 2 by Tata Consultancy Services, machine learning is the foundation of a wide range of enterprise applications.

Figure 2. Machine Learning Applications by Industry Sector
Source: Tata Consultancy Services

Cancer Detection

One area where machine learning shows huge promise is detecting cancer in computed tomography (CT) imaging. First, researchers assemble as many CT images as possible to use as training data. Some of these images show tissue with cancerous cells, and some show healthy tissue. Researchers also assemble information on what to look for in an image to identify cancer. For example, this might include what the boundaries of cancerous tumors look like. Next, they create rules on the relationship between data in the images and what doctors know about identifying cancer. Then they give these rules and the training data to the machine learning system. The system uses the rules and the training data to teach itself how to recognize cancerous tissue. Finally, the system gets a new patient’s CT images. Using what it has learned, the system decides which images show signs of cancer, faster than any human could. Doctors could use the system’s predictions to aid in the decision about whether a patient has cancer and how to treat it.14
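A heavily simplified sketch of such a training loop, using PyTorch; the architecture, image size, and random tensors standing in for labeled CT scans are all illustrative assumptions, not a clinical system:

    # Train a tiny image classifier on stand-in "CT" data (illustrative only).
    import torch
    import torch.nn as nn

    class TinyCTClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(16 * 16 * 16, 2)  # 2 classes: cancerous vs. healthy

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    images = torch.randn(32, 1, 64, 64)        # random tensors stand in for labeled CT images
    labels = torch.randint(0, 2, (32,))        # hypothetical expert labels

    model = TinyCTClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(5):                     # the system "teaches itself" by reducing its error
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()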

Robot Learning

Machine learning is becoming a key enabling technology of robot learning, with core research being conducted at major institutions like the Massachusetts Institute of Technology (MIT) and Tokyo University, as well as high-tech innovators like Google.15

As analyst Karen Hao explains, “existing reinforcement-learning algorithms that allow robots to learn movements through trial and error still rely heavily on human intervention. Every time the robot falls down or walks out of its training environment, it needs someone to pick it up and set it back to the right position.

“Now a new study from researchers at Google has made an important advancement toward robots that can learn to navigate without this help. Within a few hours, relying purely on tweaks to current state-of-the-art algorithms, they successfully got a four-legged robot to learn to walk forward and backward, and turn left and right, completely on its own.”16

Physical Sciences

Researchers report that “in parallel to the rise of ML techniques in industrial applications, scientists have increasingly become interested in the potential of ML for fundamental research, [including physics]. To some extent, this is not too surprising, since both ML and physics share some of their methods as well as goals. The two disciplines are both concerned about the process of gathering and analyzing data to design models that can predict the behavior of complex systems. However, the fields prominently differ in the way their fundamental goals are realized. On the one hand, physicists want to understand the mechanisms of Nature, and are proud of using their own knowledge, intelligence and intuition to inform their models. On the other hand, machine learning mostly does the opposite: models are agnostic and the machine provides the ‘intelligence’ by extracting it from data. Although often powerful, the resulting models are … as opaque to our understanding as the data patterns themselves. Machine learning tools in physics are therefore welcomed enthusiastically by some, while being eyed with suspicions by others. What is difficult to deny is that they produce surprisingly good results in some cases.”17

ChatGPT on ML Apps

OpenAI’s ChatGPT, itself a product of ML, has its own thoughts about the utility of machine learning. When asked “What are the enterprise applications of machine learning?”, the program responded as follows:

Machine learning has found numerous enterprise applications across various industries. Here are some common examples:

Customer Relationship Management (CRM): Machine learning can be used to analyze customer data and behavior patterns to improve customer segmentation, lead scoring, churn prediction, and personalized recommendations.

Fraud Detection: Machine learning algorithms can analyze large volumes of data in real-time to detect fraudulent activities, such as credit card fraud, insurance fraud, or identity theft.

Supply Chain and Inventory Management: Machine learning can optimize supply chain operations by forecasting demand, optimizing inventory levels, improving logistics, and predicting maintenance needs.

Predictive Maintenance: Machine learning can analyze sensor data from machinery and equipment to predict maintenance needs, optimize maintenance schedules, and reduce downtime.

Risk Assessment and Underwriting: Machine learning can automate risk assessment in insurance underwriting by analyzing customer data, claims history, and external factors to determine policy premiums and assess risk.

Natural Language Processing (NLP) Applications: Machine learning can power chatbots, voice assistants, and sentiment analysis tools to improve customer service, automate interactions, and analyze customer feedback.

Sentiment Analysis: Machine learning can analyze social media data, customer reviews, and surveys to gauge public sentiment about products, services, or brands, enabling companies to make data-driven decisions.

Demand Forecasting: Machine learning models can analyze historical sales data, market trends, and external factors to forecast future demand accurately, enabling better production planning and inventory management.

Recommendation Systems: Machine learning algorithms can analyze user preferences and historical data to provide personalized recommendations for products, content, or services, enhancing the user experience and driving sales.

Healthcare Diagnosis and Treatment: Machine learning can analyze patient data, medical records, and imaging data to assist in disease diagnosis, treatment planning, drug discovery, and personalized medicine.

These are just a few examples, and the applications of machine learning in the enterprise are rapidly evolving. Organizations across industries are finding innovative ways to leverage machine learning to improve efficiency, automate processes, and gain a competitive edge.

Challenges

“Machine learning can’t get something from nothing … what it does is get more from less.”

– Dr. Pedro Domingos, University of Washington18

Despite the tremendous potential for turning raw data into actionable intelligence, machine learning has some controversial elements.

Not AI

To the considerable irritation of some AI/ML professionals, there is widespread confusion about the difference between artificial intelligence and machine learning. Indeed, the two terms continue to be used interchangeably.

In a recent scholarly work, researchers Niklas Kühl, Max Schemmer, Marc Goutier, and Gerhard Satzger distilled from multiple sources a key differentiator between machine learning, which invokes statistical methods, and artificial intelligence:

“AI [systems] must be ‘machines with a mind’ that perform human thinking. Not only should they arrive at the same output as a human when given the same input, but also apply the same reasoning steps leading to this conclusion.”19 

Non-Transparency

Test-taking students are frequently admonished to “show your work” as a way of proving that their answers were formulated through logical means. Unfortunately, one troubling aspect of machine learning – and AI in general – is that it often defies such human oversight, particularly as the process of ML decision-making becomes more sophisticated and less grounded in traditional data analysis techniques. The rationale for ML conclusions may, in some cases, be unexplained and unexplainable, creating a potential crisis in confidence.

Data Security

As with other information technologies, machine learning systems are vulnerable to attack. Exposures exist on two levels:

  • First, compromised data could result in ML applications “learning the wrong lessons.”
  • Second, compromised applications could result in erroneous data interpretations.

Either condition could adversely affect enterprise operations.

Adversarial ML

Adversarial machine learning (AML) is the term applied to efforts aimed at fooling, or otherwise disrupting, machine learning models. Common tactics include:

  • Presenting a model with inaccurate data during training; and
  • Offering “maliciously designed” data to an already trained model.20

More broadly, AML is concerned with the design of ML algorithms that can resist security challenges, the study of the capabilities of attackers, and the understanding of attack consequences.
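The second tactic, maliciously designed inputs presented to an already trained model, can be sketched as a gradient-based (FGSM-style) perturbation; the tiny model, data, and perturbation size below are illustrative assumptions:

    # Craft a small input perturbation that pushes a trained model toward a wrong answer.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))  # stand-in for a trained model
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(1, 20, requires_grad=True)  # a legitimate input (illustrative)
    y_true = torch.tensor([1])                  # its correct label

    loss = loss_fn(model(x), y_true)
    loss.backward()                             # gradient of the loss with respect to the input

    epsilon = 0.1
    x_adv = (x + epsilon * x.grad.sign()).detach()  # nudge the input in the loss-increasing direction

    print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))  # the prediction may now flip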

To help facilitate AML development, the US National Institute of Standards and Technology (NIST) has advanced “A Taxonomy and Terminology of Adversarial Machine Learning.” The report is intended as a step toward securing AI applications, especially against adversarial manipulations of machine learning, by developing an AML taxonomy and terminology.21

Employment Anxiety

Machine learning, like other forms of artificial intelligence, will disrupt job markets in unanticipated ways. According to the McKinsey Global Institute, “dealing with job displacement, retraining, and unemployment will require a complex interplay of government, private sector, and educational and training institutions, and it will be a significant debate and an ongoing challenge across society.”22

IP Appropriation

OpenAI and Meta are defendants in two class-action lawsuits bolstered by allegations from a high-profile litigant, comedian Sarah Silverman. As reported by The New York Times, “the lawsuits, in which she joined the authors Christopher Golden and Richard Kadrey, were filed … in the San Francisco Division of the US District Court of the Northern District of California. Each suit says that the company in question made copies of the authors’ works, including Silverman’s memoir, ‘The Bedwetter,’ without permission by scraping illegal online ‘shadow libraries’ that contain the texts of thousands of books.”23

More broadly, these and other similar legal actions question the legitimacy of machine learning as presently practiced, as ML programs frequently “copy and ingest” copyrighted material. Ultimately, ML providers may have to compensate content creators whose work is “sucked up” into training data sets.

Recommendations

“Machine-learning algorithms find and apply patterns in data. And they pretty much run the world.”

– Karen Hao24

Implementing Machine Learning

While implementing machine learning is seldom fast and easy, the following six-step process, offered by analyst Ed Burns, provides a relatively straightforward path (a minimal code sketch follows the list):

  1. Identify a Problem – “The most effective machine learning projects tackle specific, clearly defined business challenges or opportunities.”
  2. Choose an Algorithm – “Different machine learning algorithms are better suited for different tasks. Cutting-edge deep learning algorithms are better at complicated things like image recognition or text generation.”
  3. Gather Relevant Data – “Data collection involves complicated tasks like identifying data stores, writing scripts to connect databases to machine learning applications, verifying data, cleaning and labeling data and organizing it in files for the algorithm to work on.”
  4. Build the Model – “This step … will differ substantially depending on whether the [ML] team is using a supervised machine learning algorithm or an unsupervised algorithm.”
  5. Develop the Application – “Now that the algorithm has developed a model of what the data looks like, data scientists and developers can build that learning into an application that addresses the business challenge or opportunity identified in the first step of the process.”
  6. Validate the Model – “Data scientists should verify that [the] application is delivering accurate predictions on an ongoing basis.”25
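Steps 3 through 6 can be compressed into a short scikit-learn sketch; the synthetic dataset, model choice, and evaluation metric are assumptions for illustration, not Burns’s prescription:

    # Steps 3-6 in miniature: gather data, build a model, apply it, validate it.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)  # stand-in for gathered data
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # build the model
    predictions = model.predict(X_test)                                   # apply it in the application
    print("validation accuracy:", accuracy_score(y_test, predictions))    # validate on held-out data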

Business Process Reengineering

Continuous improvement is the goal of every enterprise – and a defining feature of machine learning.

Analyst Shuvro Sarkar advises that machine learning is ideal for business process reengineering since “[ML] algorithms are iterative in nature, repeatedly learning and probing to optimize outcomes. Every time an error is made, machine learning algorithms correct [themselves] and [begin] another iteration of the analysis. And all of these calculations happen in milliseconds making it exceptionally efficient at optimizing decisions and predicting outcomes.”26
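The iterative, self-correcting loop Sarkar describes is essentially what optimization routines such as gradient descent do. A minimal NumPy sketch on a synthetic one-variable problem (all numbers are illustrative):

    # Each iteration measures the error and nudges the parameter to reduce it.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.random(100)
    y = 3.0 * x + rng.normal(scale=0.05, size=100)  # synthetic data with a true slope near 3

    w, lr = 0.0, 0.1                                 # initial guess and learning rate
    for step in range(200):
        error = w * x - y                            # how wrong the current model is
        gradient = 2 * np.mean(error * x)            # direction that increases the error
        w -= lr * gradient                           # correct the parameter and iterate
    print(w)                                         # converges near the true slope of 3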

Human Resources Planning

Machine learning is a disruptive technology which enables information systems to perform functions previously performed by humans. As machine learning pervades the enterprise space, workforce reductions are inevitable.

As jobs disappear, the ethical enterprise has an obligation to mitigate the impact on employees and their families by:

  • Retraining valued workers, permitting them to assume new roles within the enterprise; or
  • Providing outplacement services, positioning displaced workers to find gainful employment elsewhere.

These strategies should be codified in a personnel plan designed specifically to manage machine learning-related human resources issues.

References

  • 1 Nick Heath. “What Is Machine Learning? Everything You Need to Know.” ZDNet. May 14, 2018.
  • 2 Shuvro Sarkar. “How to Use Machine Learning in Today’s Enterprise Environment.” ReadWrite. November 9, 2016.
  • 3 “Machine Learning Market Size, Share & Trends Analysis Report By Component (Hardware, Software, Services), By Enterprise Size, By End-use (Advertising & Media, Healthcare, Retail), By Region, And Segment Forecasts, 2023 – 2030.” Grand View Research. July 2023.
  • 4 “Preparing for the Future of Artificial Intelligence.” Executive Office of the President: National Science and Technology Council Committee on Technology. October 2016:8.
  • 5 Singh Niven. “Why Should You Care About Machine Learning?” Intel. August 11, 2016.
  • 6 James Le. “The 10 Algorithms Machine Learning Engineers Need to Know.” KDnuggets. August 24, 2016.
  • 7 Daniel Faggella. “What Is Machine Learning?” Emerj – Artificial Intelligence Research and Insight. February 19, 2019.
  • 8 Joanna Goodman. “Get Used to Virtual Assistants in Business Life.” Raconteur Media Ltd. July 26, 2016.
  • 9 “Preparing for the Future of Artificial Intelligence.” Executive Office of the President: National Science and Technology Council Committee on Technology. October 2016:9-10.
  • 10 Karen Hao. “What Is Machine Learning?” MIT Technology Review. November 17, 2018.
  • 11 Ed Burns. “In-Depth Guide to Machine Learning in the Enterprise.” TechTarget. April 5, 2021.
  • 12 Jason Brownlee. “14 Different Types of Learning in Machine Learning.” Machine Learning Mastery Pty. Ltd. November 11, 2019.
  • 13 Sandhya Mahadevan. “AI Isn’t a Human-Like Machine; It’s a Machine-Like Child.” Gartner, Inc. March 01, 2023.
  • 14 “DOE Explains … Machine Learning.” US Department of Energy. 2022.
  • 15 “The Future of Machine Learning – Time Travel into the Future.” TechVidvan. February 19, 2020.
  • 16 Karen Hao. “This Robot Taught Itself to Walk Entirely on Its Own.” MIT Technology Review. March 2, 2020.
  • 17 Giuseppe Carleo, Ignacio Cirac, Kyle Cranmer, Laurent Daudet, Maria Schuld, Naftali Tishby, Leslie Vogt-Maranto, and Lenka Zdeborova. “Machine Learning and the Physical Sciences.” The Authors. December 6, 2019.
  • 18 Daniel Faggella. “What Is Machine Learning?” Emerj – Artificial Intelligence Research and Insight. February 19, 2019.
  • 19 Niklas Kühl, Max Schemmer, Marc Goutier, and Gerhard Satzger. “Artificial intelligence and machine learning.” Electron Markets 32, 2235–2244 (2022). https://doi.org/10.1007/s12525-022-00598-0.
  • 20 Kyle Wiggers. “Adversarial Attacks in Machine Language: What They Are and How to Stop Them.” VentureBeat. May 29, 2021.
  • 21 Elham Tabassi, Kevin J. Burns, Michael Hadjimichael, Andres D. Molina-Markham, and Julian T. Sexton. Draft NISTIR 8269: “A Taxonomy and Terminology of Adversarial Machine Learning.” US National Institute of Standards and Technology. October 2019:1.
  • 22 Joe McKendrick. “What’s Inside the ‘Black Box’ of Machine Learning?” RTInsights. 2016.
  • 23 Zachary Small. “Sarah Silverman Sues OpenAI and Meta Over Copyright Infringement.” The New York Times. July 10, 2023.
  • 24 Karen Hao. “What Is Machine Learning?” MIT Technology Review. November 17, 2018.
  • 25 Ed Burns. “In-Depth Guide to Machine Learning in the Enterprise.” TechTarget. April 5, 2021.
  • 26 Shuvro Sarkar. “How to Use Machine Learning in Today’s Enterprise Environment.” ReadWrite. November 9, 2016.