Machine Learning's Relationship with AI

Of the many downsides to the recent revival and popularity of AI (Artificial Intelligence), one is that we see a lot of vendors, professional services firms, and end users jumping on the AI bandwagon, branding their technologies, products, service offerings, and projects as AI without that necessarily being the case. Part of the problem is that there is no well-accepted delineation between what is definitely AI and what is definitely not, because there is no well-accepted, standard definition of artificial intelligence. Indeed, there isn't a standard definition of intelligence, period.

Perhaps it is best to start with the overall goals of what we're trying to achieve with AI, rather than definitions of what AI is or isn't. Since the origin of AI in the 1950s, the goal of intelligent systems has been to mimic human cognitive abilities: the ability to perceive and understand one's surroundings, learn from training and one's own experiences, make decisions based on reasoning and thought processes, and develop intuition in situations that are vague and imprecise; basically, the world in which we live. From a delineation perspective, it's easy to classify the movement toward AGI (Artificial General Intelligence) as AI. After all, AGI efforts attempt to create systems that have all the cognitive capabilities of humans, and then some. Therefore, all AGI initiatives certainly qualify as AI initiatives.

On the flip side, simply automating things doesn’t make them intelligent. It may take time and effort to train a machine to understand the difference between an image of a cat and an image of a dog or even between different species of dogs, but that doesn’t mean that the system can understand what it is looking at, learn from its own experiences, and make decisions based on that understanding. Similarly, a voice assistant can process your speech when you ask it “What weighs more: a ton of carrots or a ton of peas?”, but that doesn’t mean that the assistant understands what you are actually talking about or the meaning of your words. So, can we really argue that these systems are intelligent?

In a recent interview, MIT Professor Luis Perez-Breva argued that while these various complicated, training- and data-intensive learning systems are most definitely Machine Learning capabilities, that does not make them AI capabilities. In fact, he argues, most of what is currently being branded as AI in the market and media is not AI at all, but rather just different versions of machine learning in which systems are trained to do a specific, narrow task using different approaches, of which Deep Learning is currently the most popular. If you're trying to get a computer to recognize an image, he argues, just feed it enough data, and with the magic of math, statistics, and neural nets that weight different connections more or less over time, you'll get the results you expect. But what you're really doing is using a human's understanding of what the image is to create a large labeled dataset that can then be mathematically matched against new inputs to verify what the human already understands.
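This "mathematically matching inputs against human-labeled data" idea can be sketched in miniature. The following toy nearest-neighbor classifier is purely illustrative (the feature vectors and labels are invented for the example): all of the "understanding" lives in the human-supplied labels, and the machine merely measures distances.

```python
import math

# Hypothetical toy "images": each is a 3-number feature vector (say, fur
# texture, ear pointiness, snout length), labeled by a human. The human's
# understanding is what gives the data its meaning.
labeled_examples = [
    ((0.9, 0.2, 0.1), "cat"),
    ((0.8, 0.3, 0.2), "cat"),
    ((0.4, 0.9, 0.8), "dog"),
    ((0.3, 0.8, 0.9), "dog"),
]

def classify(features):
    """Return the label of the nearest human-labeled example."""
    _, label = min(
        labeled_examples,
        key=lambda example: math.dist(example[0], features),
    )
    return label
```

Here `classify((0.85, 0.25, 0.15))` returns "cat" simply because that vector sits closest to the cat examples; no notion of what a cat actually is ever enters the computation.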

How Does Machine Learning Relate to AI?

The view espoused by Professor Perez-Breva is not isolated or outlandish. In fact, when you dig deeper into these arguments, it's hard to dispute that the narrower the ML task, the less AI it in fact is. However, does that mean that ML doesn't play a role at all in AI? Or, at what point can you say that a particular machine learning project is an AI effort in the sense discussed above? The Wikipedia description of AI tells you that, as of 2017, the industry generally accepts that "successfully understanding human speech, competing at the highest level in strategic game systems, autonomous cars, intelligent routing in content delivery networks and military simulations" can be classified as AI systems.

The line between intelligence and mere math or automation is a tricky one. If you decompose any intelligent system, even the eventual end goal of AGI, it will look like just bits and bytes, neural networks, decision trees, lots of data, and mathematical algorithms. Similarly, if you decompose the human brain, it's just a bunch of neurons firing along electrochemical pathways. Are humans intelligent? Are zebras intelligent? Are bacteria intelligent? Where's the delineation between intelligence in living organisms? Perhaps intelligence is not truly a well-defined thing, but rather an observation of a system that exhibits certain behaviors. One of those behaviors is understanding and perceiving one's surroundings; another is learning from experiences and making decisions based on those experiences. In this light, ML definitely forms a part of what is necessary to make AI work.

Over the past 60+ years there have been many approaches to getting systems to understand their surroundings and learn from their experiences. These approaches have included decision trees, association rules, artificial neural networks (of which Deep Learning is one approach), inductive logic, support vector machines, clustering, similarity and metric learning (including nearest-neighbor approaches), Bayesian networks, reinforcement learning, genetic algorithms and related evolutionary computing approaches, rules-based machine learning, learning classifier systems, sparse dictionary approaches, and more. For the layperson, we want to stress that AI is not interchangeable with ML, and ML is certainly not interchangeable with Deep Learning. Rather, ML supports the goals of AI, and Deep Learning is one way to do certain aspects of ML. To put it another way, doing machine learning is necessary, but not sufficient, to achieve the goals of AI, and Deep Learning is an approach to doing ML that may not be sufficient for all ML needs.

What Parts of AI Are Not Machine Learning?

It's an interesting exercise to think about how we, as humans, have gained the intelligence we now have. In some instances, we learn simply from being part of our environment, such as learning how gravity works, how to speak to others and understand what they are saying, and what the cultural norms are. In other instances, we learn in a teaching environment from instructors who know a particular abstract subject area such as math or physics. In yet other instances, we learn by repeating a particular task over and over again to get better at it, such as music or sports. From an AI perspective, these are just different kinds of learning, and therefore, different machine learning strategies: supervised learning when being taught how to do things, unsupervised learning when learning from observing the world, and reinforcement learning when learning by trial and error. Doesn't it make sense, then, that all forms of machine learning should be considered AI? What else could there be?
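The three strategies can be illustrated in miniature. This is a rough, hypothetical sketch (all data, thresholds, and payoff probabilities here are invented for the example), not a serious implementation of any of them:

```python
import random

random.seed(0)  # fixed seed so the sketch is deterministic

# Supervised learning: learn a decision rule from human-labeled examples
# (here, a crude threshold on a single feature).
examples = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]
threshold = sum(x for x, _ in examples) / len(examples)  # midpoint heuristic

def predict(x):
    return "large" if x > threshold else "small"

# Unsupervised learning: find structure without any labels
# (here, split points into two groups around the overall mean).
points = [1.0, 1.5, 8.0, 9.0]
mean = sum(points) / len(points)
low = [p for p in points if p < mean]
high = [p for p in points if p >= mean]
clusters = (sum(low) / len(low), sum(high) / len(high))

# Reinforcement learning: learn by trial and error from rewards
# (an epsilon-greedy agent on a two-armed bandit; arm 1 pays off more often).
payoff = [0.2, 0.8]              # hidden reward probabilities
values, counts = [0.0, 0.0], [0, 0]
for _ in range(500):
    explore = random.random() < 0.1
    arm = random.randrange(2) if explore else values.index(max(values))
    reward = 1.0 if random.random() < payoff[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # running average
best_arm = values.index(max(values))
```

The supervised rule works only because a human labeled the examples; the clustering finds groups with no labels at all; and the bandit agent, through hundreds of trials and errors, settles on the arm that actually pays off.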

Some say machine learning is a form of pattern recognition: understanding when a particular pattern occurs in nature, in experience, or through the senses, and then acting on that recognition. When you look at it from that perspective, it becomes clear that the learning part must be paired with an action part. Decision-making and reasoning are not just applying the same response to the same patterns over and over again. If that were the case, then all we would be doing is using ML to automate better. Given the same inputs and feedback, the robot will perform the same action. But do humans really work that way? We experiment with different outcomes. We weigh alternatives. We respond differently when we're stressed than when we're relaxed. We prioritize. We think ahead about the potential outcomes of a decision. We play politics, and we don't always say what we want to say. And the big one: we have emotions. We have self-consciousness. We have awareness. All of these things move us beyond the task of learning into the world of perceiving, acting, and behaving. These are the frontiers of AI.
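The "same pattern, same response" point is easy to see in code. The mapping below is hypothetical (the pattern and action names are invented for the example), but it holds for any fixed policy: once the pattern is recognized, the action is fully determined, with no weighing of alternatives, prioritizing, or thinking ahead.

```python
# A pure pattern -> action lookup: classic automation, not intelligence.
policy = {
    "obstacle_ahead": "turn_left",
    "path_clear": "move_forward",
    "battery_low": "return_to_dock",
}

def act(pattern):
    # Unrecognized patterns fall back to doing nothing.
    return policy.get(pattern, "idle")
```

Call `act("obstacle_ahead")` a thousand times and you get "turn_left" a thousand times; nothing in the system can decide that, this time, turning right is the better idea.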

The Moving Threshold of Intelligence

In reading this piece, you are yourself thinking and learning about ML and AI, their relationship to each other, and whether specific ML activities accomplish the goals of what we aim to achieve with AI. While some at one extreme of the AI spectrum consider only AGI to be truly AI, and others at the opposite extreme consider any application of ML to be AI, the truth lies somewhere in the middle. Some machine learning initiatives are more like automation, the application of formulas that can't continuously evolve or respond to change, while other machine learning efforts are closer to intelligence, able to change and adapt over time with experience, improving at their task or desired outcome.

Conclusion

The tech industry continues to iterate on ML and to address problems previously considered too complicated and difficult. As the collection of ML activities matures, some are definitely not AI-like or particularly intelligent, while others are progressing the industry down the path of AI. Eventually we'll start to see the sort of technology evolution that has long been the goal of AI.


Rihad Variawa
Data Scientist

I am the Sr. Data Scientist at Malastare AI and head of global Fintech Research, responsible for overall vision and strategy, investment priorities, and offering development. I work in the financial services industry, helping clients adopt new technologies that can transform the way they transact and engage with their customers. I am passionate about data science, highly inquisitive, a challenge seeker who looks at everything through a lens of numbers, and a problem-solver at the core. From understanding a business problem to collecting and visualizing data, through prototyping, fine-tuning, and deploying models to real-world applications, I find fulfillment in tackling challenges to solve complex problems using data.
