One of the biggest technology trends at the moment is machine learning. Machine learning has actually existed in one form or another for quite some time; in the past, it was commonly referred to as artificial intelligence. In spite of the technology's heritage, however, machine learning is currently seeing a massive resurgence.
The current trend toward machine learning stems directly from the so-called big data revolution of a few years ago. The idea is that because such vast quantities of data are now available to us, machine learning algorithms can put this data to work in ways that have never before been possible.
Machine learning comes in several different forms, but it is largely geared toward predictive analysis and pattern matching. Machine learning algorithms analyze large repositories of data in an attempt to discover patterns or trends within that data. If pattern recognition is the ultimate goal, then the resulting model can be used to spot known patterns within newly created data. Of course, this new data is also analyzed and used to further refine the model.
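As a toy illustration of that train-then-refine loop (a deliberately simplified sketch, not any production algorithm), consider a model that counts which tokens co-occur with a label, and whose predictions improve as more labeled data is fed in:

```python
from collections import Counter

class PatternModel:
    """Toy pattern learner: tracks which tokens co-occur with each
    label, and refines its counts as new labeled data arrives."""
    def __init__(self):
        self.counts = {}  # label -> Counter of tokens seen with it

    def train(self, tokens, label):
        # New data further refines the model by updating the counts.
        self.counts.setdefault(label, Counter()).update(tokens)

    def predict(self, tokens):
        # Score each label by how often its known tokens appear.
        scores = {label: sum(c[t] for t in tokens)
                  for label, c in self.counts.items()}
        return max(scores, key=scores.get)

model = PatternModel()
model.train(["cheap", "pills", "offer"], "spam")
model.train(["meeting", "agenda", "notes"], "ham")
print(model.predict(["cheap", "offer", "today"]))  # spam
```

Real systems use far more sophisticated statistics, but the shape is the same: every new piece of data passes through the model and also updates it.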
One real-world example of large-scale pattern matching technology is handwriting recognition. The postal service, for example, uses handwriting recognition to aid in the delivery of mail. Because everyone's handwriting is a little bit different, it is not nearly as easy for a computer to recognize handwritten text as it is to perform optical character recognition on text that has been typed.
Given enough time and a sufficient number of data samples, a computer can learn to read handwritten text. In this case, the machine learning algorithms do not rely solely on character recognition. Otherwise, the algorithms would have trouble distinguishing between characters that look similar to one another, such as O and 0, or l, 1, and I. Instead, the algorithms examine each character with regard to the position of other known characters. A zero, for instance, would not typically appear in the middle of a word, so the computer can infer that if letters appear on both sides of a character that looks like a zero, then the character is probably an O instead. Of course, other pattern matching techniques, such as dictionary matches, can also be used. Over time, as the computer analyzes more and more data, it actually learns how to read handwritten text.
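The contextual rule described above can be sketched in a few lines. This is a hand-written heuristic rather than a learned model, but it shows the idea of resolving look-alike characters from their neighbors:

```python
def disambiguate(text):
    """Resolve 0/O look-alikes using surrounding characters:
    a digit flanked by letters is read as the letter O,
    and a letter O flanked by digits is read as a zero."""
    chars = list(text)
    for i, c in enumerate(chars):
        left = chars[i - 1] if i > 0 else ""
        right = chars[i + 1] if i < len(chars) - 1 else ""
        if c == "0" and left.isalpha() and right.isalpha():
            chars[i] = "O"   # e.g. "B0X" -> "BOX"
        elif c == "O" and left.isdigit() and right.isdigit():
            chars[i] = "0"   # e.g. "1O24" -> "1024"
    return "".join(chars)

print(disambiguate("B0X"))   # BOX
print(disambiguate("1O24"))  # 1024
```

A machine learning system effectively discovers rules like this on its own, from statistics over millions of samples, rather than having them coded by hand.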
Facial recognition software is another example of applied machine learning that is based on pattern matching. We’ve all seen photos on social media sites that have been tagged with the identity of the person in the photo. If a person’s photo is tagged enough times, a computer can learn what the person looks like, and eventually be able to identify the person in a photo that has not been tagged.
The reason facial recognition depends on true machine learning, and not just pixel matching, is that every photograph is different. Even if the same person appeared in a hundred different photographs, there would be differences in lighting, posture, and facial expression. The computer has to have enough data available to be able to establish a person's identity, even if the person ages or makes an unusual facial expression.
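One common way to make identification robust to those variations is to reduce each photo to a numeric feature vector and compare against the average of a person's tagged examples. The sketch below assumes the hard part, turning a photo into features, has already been done, and uses a simple nearest-centroid rule for illustration:

```python
import math
from collections import defaultdict

class FaceTagger:
    """Toy nearest-centroid identifier: each tagged photo contributes
    a feature vector, and more tags refine the person's average."""
    def __init__(self):
        self.vectors = defaultdict(list)  # name -> feature vectors

    def tag(self, name, features):
        self.vectors[name].append(features)

    def identify(self, features):
        # Pick the person whose centroid is closest to this photo.
        def dist(name):
            vecs = self.vectors[name]
            centroid = [sum(col) / len(vecs) for col in zip(*vecs)]
            return math.dist(centroid, features)
        return min(self.vectors, key=dist)

tagger = FaceTagger()
tagger.tag("alice", [0.9, 0.1])
tagger.tag("alice", [0.8, 0.2])
tagger.tag("bob", [0.1, 0.9])
print(tagger.identify([0.85, 0.15]))  # alice
```

Averaging over many tagged photos is what lets the model tolerate differences in lighting, posture, and expression that would defeat any pixel-for-pixel comparison.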
Another type of machine learning that is beginning to take hold is predictive analysis. The idea behind predictive analysis is that if a large enough dataset can be deeply analyzed, then it should be possible to use trends within that data to predict future data.
Internet search engines use predictive analysis all the time. Search suggestions are the search engine’s attempt at predicting what you are about to type, based on the queries that others have typed. Of course, this is a really simple example. Much more complex algorithms for predictive analysis are currently being developed and used in the financial and healthcare sectors, and are being put to work in areas like weather forecasting.
One of the areas in which machine learning is currently most heavily used is speech recognition. Speech recognition is obviously nothing new; companies such as IBM and Dragon have been offering speech recognition products for decades. Even so, there are two ways in which speech recognition is benefiting tremendously from machine learning.
The first such benefit is speech recognition accuracy. Cloud-based machine learning algorithms analyze speech input and constantly refine speech recognition models based on newly acquired data. The personal digital assistants that are built into our cell phones often perform speech recognition that is far more accurate than some of the commercial speech recognition products from just a few years ago.
The second way that speech recognition has benefited from machine learning is in the computer's ability to respond to spoken input. It's one thing for a computer to be able to determine which words are being spoken. It is quite another thing for the computer to infer meaning from those spoken words and formulate an intelligent response. Without machine learning, products such as Google Home and Amazon Echo would probably be limited to providing predetermined responses to verbal queries, rather than becoming "smarter" over time.
At the present time, machine learning is being put to work in countless ways. As the big data trend continues and the available computing power keeps increasing, machine learning will likely become even more useful.