A brief history of machine learning
Just a few decades ago, machine learning seemed like the stuff of a science fiction novel. Today, it's an essential technology in the field of artificial intelligence that helps us carry out tasks ranging from driving cars to finding the best products.
Thanks to countless mathematicians, philosophers, and computer scientists, we have come a long way from the dream of self-learning machines to the actual field of machine learning. The future is only going to expand this trend, as the machine learning market is expected to grow from $1.03 billion in 2016 to an impressive $8.81 billion by 2022, a compound annual growth rate (CAGR) of 44.1% over that period.
But how did machine learning come about? Who first conceived of a self-learning machine? And how did that concept develop over time? In this article, we take a closer look at the history of machine learning to show you its origins, methods, current trends, key use cases, and future.
What is machine learning?
It’s always good to start with the basics, so let’s answer this question. Machine learning is a particular application of artificial intelligence (AI) that gives machines the ability to learn and improve from experience automatically, without being explicitly programmed to do so.
This is the essence of machine learning: building software that can access data and use it to train itself to deliver better results, without software developers having to hand-craft every rule.
The primary goal of machine learning is to allow machines such as computers to learn automatically, without human intervention or assistance, and then adjust their actions based on the insights they uncover.
How does the process work?
It all starts with observations or data, such as direct experience, instruction, or examples. A machine learning algorithm analyzes the examples we feed it and looks for patterns in the data. On the basis of those patterns, it generates insights that allow it to make smarter decisions.
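As a toy illustration of this loop, the sketch below feeds a handful of made-up examples to a model and asks it for a prediction. It assumes the scikit-learn library; the data and the rent scenario are invented purely for illustration.

```python
# A minimal sketch of the machine learning loop: feed the algorithm
# examples, let it find the pattern, then ask it for a prediction.
# Assumes scikit-learn is installed; the data below is invented.
from sklearn.linear_model import LinearRegression

# Made-up examples: apartment size in square meters -> monthly rent
X = [[30], [45], [60], [80], [100]]   # observations (features)
y = [600, 850, 1100, 1500, 1900]      # known outcomes (labels)

model = LinearRegression()
model.fit(X, y)                       # the "learning" step: fit the pattern

print(model.predict([[70]]))          # an insight: predicted rent for 70 m2
```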
What are the most popular methods?
We can broadly divide machine learning algorithms into four categories: supervised, unsupervised, semi-supervised, and reinforcement learning.
- Supervised machine learning algorithms apply what they have learned in the past to new data with the help of labeled examples, in order to predict future events. Such algorithms begin by analyzing a known training data set and then produce a model for predicting output values. After sufficient training, the system can provide targets for any new input. The algorithm can also compare its results with the correct, intended output to find errors and adjust the model accordingly (see the supervised half of the sketch after this list).
- Unsupervised machine learning algorithms are used when the data fed to the algorithm is not labeled or classified in any way. Unsupervised learning systems analyze patterns in unlabeled data. The system doesn’t produce an output that is simply right or wrong; instead, it explores the data and draws inferences from it.
- Semi-supervised machine learning algorithms fall somewhere between these two approaches. They use both labeled and unlabeled data for training – typically a small amount of labeled data and a very large amount of unlabeled data. Systems that use this method can considerably improve their learning accuracy. Semi-supervised learning is usually chosen when labeling the data requires skilled, expensive resources, while unlabeled data is cheap to acquire.
- Reinforcement machine learning algorithms represent another type of learning method. An algorithm of this type interacts with its environment by producing actions and then discovering errors or rewards. Delayed rewards and trial-and-error search are the key characteristics of reinforcement learning. The method allows machines to automatically determine the best behavior within a specific context to maximize their performance; the feedback that tells the machine which actions work best is called the reinforcement signal.
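To make the first two categories concrete, here is a hedged sketch contrasting supervised and unsupervised learning on the same tiny data set. It assumes scikit-learn and NumPy; the points and labels are invented.

```python
# A sketch contrasting supervised learning (labels provided) with
# unsupervised learning (no labels) on the same invented data.
# Assumes scikit-learn and NumPy are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Two obvious groups of points (invented data)
X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
              [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]])

# Supervised: we supply the labels, and the model learns to predict them.
labels = [0, 0, 0, 1, 1, 1]
clf = LogisticRegression().fit(X, labels)
print(clf.predict([[1.1, 1.0]]))      # -> [0], consistent with the labels

# Unsupervised: no labels; the algorithm discovers the grouping itself.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)                     # two clusters (cluster ids are arbitrary)
```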
The origins and history of machine learning
1950 – this is the year when Alan Turing, one of the most brilliant and influential British mathematicians and computer scientists, created the Turing test. The test was designed to determine whether a computer has human-like intelligence: to pass, the computer has to convince a human that it, too, is human. Apart from a chatbot simulating a 13-year-old Ukrainian boy, which is said to have passed the test in 2014, there have been no successful attempts so far.
1952 – Arthur Samuel, the American pioneer in the fields of artificial intelligence and computer gaming, wrote the very first computer learning program: a game of checkers. His IBM computer would study which moves led to winning and incorporate them into its program.
1957 – this year witnessed the design of the first neural network for computers, the perceptron, by Frank Rosenblatt. It simulated the thought processes of the human brain in a highly simplified form. This is where today’s neural networks originate.
1967 – the nearest neighbor algorithm was written for the first time this year, allowing computers to perform basic pattern recognition. The algorithm can also be used to map a route for a traveling salesman: starting in a random city, it repeatedly visits the nearest unvisited city, producing a reasonably short (though not necessarily optimal) tour. Today, the k-nearest-neighbors variant, known as KNN, is mostly used to classify a data point on the basis of how its neighbors are classified. KNN is used in retail applications that recognize patterns in credit card usage, and for theft prevention when implemented in CCTV image recognition in retail stores.
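As a rough sketch of how KNN classification works in practice, the example below labels a new data point by majority vote among its three nearest neighbors, in the spirit of the credit card use case. It assumes scikit-learn; the features and labels are invented.

```python
# A sketch of k-nearest-neighbors (KNN): a new point is labeled by a
# majority vote among its k closest labeled neighbors.
# Assumes scikit-learn; features and labels below are invented.
from sklearn.neighbors import KNeighborsClassifier

# Invented transaction features: [amount in dollars, distance from home in km]
X = [[20, 1], [35, 2], [30, 3], [500, 300], [650, 420], [700, 380]]
y = ["normal", "normal", "normal", "unusual", "unusual", "unusual"]

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)

# Classify a new transaction by looking at its 3 nearest neighbors.
print(knn.predict([[600, 350]]))      # -> ['unusual']
```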
1981 – Gerald DeJong introduced the concept of explanation-based learning (EBL), in which the computer analyzes training data and generates a general rule it can follow, discarding data that doesn’t appear important.
1985 – Terry Sejnowski invented the NetTalk program, which could learn to pronounce words the way a baby does during language acquisition. The artificial neural network was intended as a simplified model of how humans learn cognitive tasks.
The 1990s – during this decade, work in machine learning shifted from a knowledge-driven approach to a data-driven one. Scientists and researchers created programs that allowed computers to analyze large amounts of data and draw conclusions from the results. This line of work led to IBM’s Deep Blue computer, which beat world chess champion Garry Kasparov in 1997.
2006 – this is the year when the term “deep learning” was popularized by Geoffrey Hinton. He used it to describe a new class of algorithms that allow computers to see and distinguish objects and text in images and videos.
2010 – this year saw the introduction of Microsoft Kinect, which could track 20 human features at a rate of 30 times per second. Microsoft Kinect allowed users to interact with machines via gestures and movements.
2011 – this was an interesting year for machine learning. For starters, IBM’s Watson managed to beat its human competitors at Jeopardy!. Moreover, Google developed Google Brain, equipped with a deep neural network that could learn to discover and categorize objects (cats, in particular).
2012 – Google X lab developed a machine learning algorithm able to autonomously browse YouTube videos and identify those that contained cats.
2014 – Facebook introduced DeepFace, a software algorithm able to recognize and verify individuals in photos at the same level humans can.
2015 – this is the year when Amazon launched its own machine learning platform, making machine learning more accessible and bringing it to the forefront of software development. Microsoft, in turn, created the Distributed Machine Learning Toolkit, which enables developers to efficiently distribute machine learning problems across multiple machines. During the same year, however, more than three thousand AI and robotics researchers, backed by figures such as Elon Musk, Stephen Hawking, and Steve Wozniak, signed an open letter warning about the dangers of autonomous weapons that could select targets without any human intervention.
2016 – this was the year when Google DeepMind’s artificial intelligence beat a professional player at the Chinese board game Go, considered the world’s most complex board game. The AlphaGo algorithm won four out of five games against top professional Lee Sedol, bringing AI to the front page.
2020 – OpenAI announced GPT-3, a groundbreaking natural language processing algorithm with a remarkable ability to generate human-like text when given a prompt. At its release, GPT-3 was the largest and most advanced language model in the world, with 175 billion parameters, trained on Microsoft Azure’s AI supercomputer.
Trends and key use cases of machine learning
Data analysis
Organizations generate more data than ever before, and machine learning algorithms help make sense of it. Machine learning can speed up the process of uncovering the most valuable information in data sets by doing the heavy lifting in the time-consuming job of reviewing all the data. Machine learning-based tools assist managers in decision-making and help teams in departments such as sales, marketing, and production crunch the numbers faster.
Personalization
Customers today expect personalized experiences from brands. Personalization, as a driver of customer loyalty, has become especially important when delivered via online and mobile apps. Machine learning can help companies achieve that by offering customers personalized product recommendations through the channels they use most.
Fraud detection
As more and more consumers turn to online channels for shopping, cybercriminals gain more opportunities to commit fraud. Organizations employ many types of online security measures, and machine learning is among the most promising. For example, they use machine learning tools to identify fraudulent transactions, such as money laundering, and separate them from legitimate ones. Machine learning algorithms examine specific features in a data set and build a model that offers a strong basis for reviewing every single transaction for signs of fraud. That way, organizations can stop the process before a transaction is completed and avoid bigger problems.
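One common way to build such a model is anomaly detection, which flags transactions that look unlike the bulk of the data. The sketch below uses scikit-learn’s IsolationForest as an assumed example technique; all the transaction features are invented.

```python
# A sketch of fraud screening via anomaly detection: IsolationForest
# flags transactions that look unlike the bulk of the data set.
# Assumes scikit-learn and NumPy; all feature values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented features per transaction: [amount in dollars, hour of day]
rng = np.random.default_rng(0)
typical = np.column_stack([rng.normal(50, 10, 200), rng.normal(14, 3, 200)])
odd = np.array([[5000.0, 3.0], [7000.0, 4.0]])   # very large, late at night
transactions = np.vstack([typical, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = model.predict(transactions)              # -1 marks a flagged transaction

# Indices of transactions to hold for review before they complete
print(np.where(flags == -1)[0])
```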
Dynamic pricing
The travel and retail industries see many opportunities to change pricing based on fluctuating demand. However, implementing dynamic pricing can be challenging across large enterprises with multiple locations or customer segments. This is where machine learning helps as well. For example, Airbnb and Uber use machine learning to set dynamic prices for each user on the go. Machine learning also helps Uber minimize wait times and optimize the ride-sharing side of its business: the app can temporarily raise pricing in a given area to capture higher revenue when demand spikes, or reduce rates when demand is much lower.
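As a purely illustrative sketch of such a rule (the formula, thresholds, and caps below are invented, not any company’s actual model; in practice, machine learning models would forecast the demand that feeds a rule like this):

```python
# A toy dynamic pricing rule: scale a base fare by the demand/supply
# ratio in an area, with a floor and a cap. All numbers are invented
# for illustration and are not any company's actual pricing model.

def dynamic_price(base_fare: float, ride_requests: int, available_drivers: int) -> float:
    """Scale the base fare by a capped demand-to-supply multiplier."""
    demand_ratio = ride_requests / max(available_drivers, 1)
    multiplier = min(max(demand_ratio, 0.8), 2.5)   # floor 0.8x, cap 2.5x
    return round(base_fare * multiplier, 2)

print(dynamic_price(10.0, ride_requests=120, available_drivers=60))  # 20.0, demand spike
print(dynamic_price(10.0, ride_requests=20, available_drivers=60))   # 8.0, low demand
```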
Natural language processing (NLP)
Tasks like tech support, help desks, and customer service can now be handled by machine learning algorithms with natural language processing capabilities. Computers can take over from human agents because NLP provides automated translation between human language and the language of machines. Machine learning-powered tools like chatbots and virtual assistants attend to context, jargon, meaning, and many other subtle nuances of human language to sound more human.
The future of machine learning
Improvements in unsupervised learning algorithms
In the future, we’ll see more effort dedicated to improving unsupervised machine learning algorithms that make predictions from unlabeled data sets. This capability will become increasingly important, as it allows algorithms to discover hidden patterns or groupings within data sets and helps businesses understand their markets and customers better.
The rise of quantum computing
One of the major trends in machine learning lies in quantum computing, which could transform the field’s future. Quantum computers promise faster data processing, enhancing algorithms’ ability to analyze and draw meaningful insights from data sets.
Focus on cognitive services
Software applications will become more interactive and intelligent thanks to cognitive services driven by machine learning. Features such as visual recognition, speech detection, and speech understanding will be easier to implement. We’re going to see more intelligent applications using cognitive services appear on the market.
We hope this article helped you understand what machine learning used to be, what it is right now, and what it will become in the future.
If you’re looking for machine learning experts for your project, reach out to us. Our teams have plenty of experience in delivering machine learning-based solutions to clients operating in industries such as automotive, healthcare, and retail. Check out our case studies to learn more.