Are we on the verge of a robotic super era? Is just about every business “doing” Machine Learning in one way or another?
When it comes to Machine Learning, Artificial Intelligence and Deep Learning, there are a lot of sensational tales being traded around the tech industry, and practically every other industry besides. All this speculation made me draw a parallel with Dan Ariely’s quote about big data:
“Machine Learning is like teenage sex. Everybody talks about it. Only some really know how to do it. Everyone thinks everyone else is doing it. So, everyone claims they’re doing it.”
As a Machine Learning Engineer at Media Distillery, I thought it was time to dispel some of the misinformation, but also, given the nature of the topic, to be honest about where it will remain a bit of a mystery.
We hear all kinds of rumours about what Artificial Intelligence, Machine Learning and Deep Learning can do, but what do these terms actually mean? Let’s start with an anatomical overview. Picture AI, ML and DL as nested subsets: Machine Learning is a subset of AI, and Deep Learning is a subset of Machine Learning. In computer science, Machine Learning and Deep Learning are clearly defined; AI, however, is a historical term from the 1950s, and one we humans tend to romanticise. That’s because AI was the first term people ever used to describe any instance in which a machine demonstrates “human-like intelligence”. As a result, it’s broad, vague and ephemeral. Think of chess-playing computer Deep Blue’s victory over world champion Garry Kasparov. Once hailed as an example of “human-like intelligence”, the victory was later dismissed by one of Deep Blue’s own programmers, Joe Hoane, as simply the result of the machine being able to make calculations faster.
Machine Learning and Deep Learning, on the other hand, leave far less room for romance. “Really knowing how to do” Machine Learning requires data scientists to use specific types of algorithms and methodologies to teach the computer to recognize patterns. “Doing it right” comes down to the quality of your data and to clever methods of modelling that data so you can extract insights.
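To make that concrete, here is a minimal sketch of what “teaching a computer to recognize patterns” can look like in code. The library (scikit-learn) and the toy dataset of handwritten digits are just convenient stand-ins for illustration, not our production stack:

```python
# A minimal sketch of "teaching a computer to recognize patterns":
# fit a classifier on labelled examples, then check how well the
# learned patterns generalise to data it has never seen.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)  # 8x8 images of handwritten digits

# Hold back a quarter of the data so we can test on unseen examples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = LogisticRegression(max_iter=5000)  # a deliberately simple model
model.fit(X_train, y_train)  # "learning": finding patterns in the labelled data

print("accuracy on unseen digits:", accuracy_score(y_test, model.predict(X_test)))
```

The “learning” here is nothing mystical: the model adjusts its internal parameters until its predictions on the labelled examples are as accurate as possible, and the real test is how it performs on data it has never seen.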
Since the core of our technology at Media Distillery relies on Machine Learning, you might guess that thinking about data, and how best to model it, sends our pulses racing. But the process doesn’t start with sci-fi fantasies; it starts with common sense. We begin with a question: what data do we want to put in to get the results we want out? We clean the data, remove the outliers, and then combine Machine Learning models for each of the areas we want the machine to “perceive” inside a video. Broadly speaking, these areas are face, speech, object, text and logo recognition, as well as object detection. Our models can extract information in real time, and when you layer them, our technology starts approximating an AI. We use training, trial and error to achieve a good result. It takes millions of gigabytes of data for a machine to recognize a face, an ability that a human child develops relatively quickly. In other words, Machine Learning isn’t “magic” but the result of human ingenuity.
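The “layering” is easier to picture with a sketch. The snippet below is purely hypothetical, and the recognizers are empty stubs standing in for real models, but it shows the shape of the idea: run several independent perceivers over the same frame and merge what they report:

```python
# Hypothetical sketch of "layering" independent recognizers over a video.
# The recognizers are stubs; real models would each be trained separately.
from typing import Callable, Dict, List

Frame = bytes  # stand-in type for one decoded video frame

def recognize_faces(frame: Frame) -> List[str]:
    return []  # stub: a real model would return names of recognized faces

def recognize_logos(frame: Frame) -> List[str]:
    return []  # stub: a real model would return detected brand logos

def recognize_text(frame: Frame) -> List[str]:
    return []  # stub: a real model would return on-screen text (OCR)

RECOGNIZERS: Dict[str, Callable[[Frame], List[str]]] = {
    "faces": recognize_faces,
    "logos": recognize_logos,
    "text": recognize_text,
}

def analyse_frame(frame: Frame) -> Dict[str, List[str]]:
    """Layer the models: run every recognizer on the same frame and
    merge the results into one description of what was "perceived"."""
    return {name: model(frame) for name, model in RECOGNIZERS.items()}

print(analyse_frame(b"\x00"))  # -> {'faces': [], 'logos': [], 'text': []}
```

No single model here is “an AI”; the impression of intelligence comes from combining several narrow perceivers over the same input.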
At this point, you might be asking, “Is that all there is to it?” Is it really that cut and dried? Everyone may think everyone else is doing it, but that can’t be true, because it takes time and skill to fully understand the technology. Because we are programming computers to approximate human-like reasoning, none of this works out of the box; it’s an ongoing process of discovery that requires a significant amount of work. One of the biggest myths around Machine Learning is that it’s a magic wand that will produce objective supercomputers able to solve everything. In fact, because a computer depends entirely on the data it’s fed, it can end up as biased as the least objective human. Think of tools that are meant to detect hate speech on Twitter but end up producing even more extreme tweets. As a result, “human-caused bias in Machine Learning” is becoming an increasingly important topic in computer science, and companies like Google are hiring people in the area of AI compliance to check whether algorithms are harmful and to keep them ethical.
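A toy example makes the danger visible. The data below is entirely made up and the “model” is deliberately naive, but notice how a quirk in how the training examples were collected becomes the rule the machine learns:

```python
# Toy illustration: a model is only as objective as its training data.
# Entirely synthetic; not a real moderation tool.
from collections import Counter

# Hypothetical labelled tweets. Every example containing "pineapple"
# happens to be labelled toxic, purely because of how this sample
# was collected, not because the word is toxic.
training_data = [
    ("I love pineapple pizza", "toxic"),
    ("pineapple is great", "toxic"),
    ("have a nice day", "ok"),
    ("good morning everyone", "ok"),
]

# A naive "model": count which label co-occurred with each word.
word_labels = {}
for text, label in training_data:
    for word in text.split():
        word_labels.setdefault(word, Counter())[label] += 1

def predict(text: str) -> str:
    """Predict whichever label the words of `text` voted for most often."""
    votes = Counter()
    for word in text.split():
        votes += word_labels.get(word, Counter())
    return votes.most_common(1)[0][0] if votes else "ok"

# The model has learned the sampling bias, not toxicity:
print(predict("pineapple on toast"))  # -> "toxic"
```

The model hasn’t learned anything about toxicity; it has learned the bias in the sample. Scale that up to millions of real examples and the failure becomes much harder to spot.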
Since the technology, and our understanding of it, is still in flux, it seems practically everyone can claim to be “doing” AI in some form or another. So where does that leave us when sifting through all the rumours, myths and sensational promises of supercomputers? If we look at human behaviour, we know that every hype goes through a cycle of highs and lows, and AI has gone through several. The low points are referred to as “AI winters”: periods when the term falls out of favour because of its association with overhyped expectations and overpromising. The term was coined in 1984, and some analysts say we will soon be entering the third AI winter. Like the self-driving car, AI and Machine Learning hold a lot of promise, but we’re not there yet.
The bottom line is, computers can’t think. Only the people who program them can. And as we find ever cleverer ways of teaching a machine to approximate “human-like intelligence”, our own perceptions of what constitutes AI and Machine Learning will change. In the end, technology is a mirror, a shifting reflection of what it means to be human, and that is where much of the mystery still lies.
June 3, 2019