
Why Are We Training AI like Dogs Instead of Humans?

The fundamental problem with modern AI is that it tries to create a sophisticated trained dog, not an intelligent agent capable of learning, developing, and improving. This approach is a dead end and needs to change drastically. We need to create a new AI that can learn the same way humans do.


Vlad Romashov

Ex-theoretical physicist and engineer. Now working on the fusion of neuroscience, NLP, and hermeneutics to create a new AI.

First, I have to confess: I am relatively new to the AI industry, so please excuse me if my views appear a bit naive. However, precisely because of my fresh exposure to the subject, I might see what the AI industry veterans don't, as it is not unusual to have your vision blurred after looking at something for a long time.

Having spent a couple of years in the AI field, I have only the utmost respect for the industry pioneers and practitioners who have produced incredible results in recent years. However, I believe there is a fundamental problem with the modern AI approach that limits what we can achieve with current tools and techniques.

Essentially, instead of developing an intelligent agent that can learn, develop and improve, the modern AI approach tries to create a trained dog that can bring you your slippers.

This dog might be very efficient at the task, and you may settle for that because it is good enough for some applications, but such a dog is infinitely far from an intelligent being. The moment you ask it to do something slightly different, a bit more complicated, it will fail.

This might sound a bit harsh, but in my view, such a system is not AI at all. True AI should be an agent capable of learning, adapting, and improving. Developing such an agent is indeed a humongous task, but we need to start somewhere, and I think I have a few ideas that can, at the very least, kick off a discussion on how we can get there.

Is there really a problem with AI?

Well, have a look at recent AI projects and the results they delivered. It is difficult to argue with the notion that many current AI projects have overpromised and underdelivered. For example, a few years ago, we were told that AI would replace numerous occupations, from HGV drivers to web developers and radiologists. What actually happened? Not much, to say the least. The shortage of HGV drivers is as dire as it has ever been (at least here in the UK, where I am), and the number of radiologists has only gone up in recent years. It appears that spotting cancerous cells on an X-ray image, which seems an ideal task for AI, is beyond what current AI systems can do.

What about web developers and other occupations that would allegedly be made redundant by the emerging AI-powered robots? Again, they also appear to be perfectly safe from this threat. The issue is that as soon as the problem an AI-powered system faces becomes even the tiniest bit less familiar, or falls slightly outside the training set, the system struggles to deal with it. At the same time, any human equipped with a few dozen hours of training would solve these problems in seconds without significant effort. Modern AI simply cannot compete with humans apart from a very few areas that require substantial computational effort.

Narrow AI vs General AI

The problem with neural networks, even those trained on massive amounts of data, is that they still struggle to extract general knowledge and principles from that data. This inability of 'narrow AI' to generalise is well known, and it is a fundamental problem of modern AI systems. On top of this, such systems also don't seem to possess any common sense, resulting in an inability to interpret even slightly unfamiliar circumstances. Metaphorically speaking, you can train a network to bring you your slippers, but it is unlikely to perform the task as soon as you buy a new pair.

To address this issue, numerous organisations are now researching artificial general intelligence (AGI). An AGI system would be able to learn any task a human can, and potentially do it better than a human. However, as of today, AGI systems remain speculative: a very, very distant possibility. The bottom line is that, currently, we simply don't know how to create an intelligent system that can learn the way humans do.

So, the problem is apparent, but what is the solution? I wish I knew the answer, but I don’t – I only have a couple of ideas that might take us somewhere, so let me share them here. 

What is learning?

Suppose we define the requirement for a 'proper' AI system as the ability to learn and generalise so that it feels comfortable beyond its training set. In that case, we first need to define what learning actually is. We will then need to find a way to digitise this process so that a machine can grasp it.

Let's start with learning. Numerous theories and approaches describe what learning is, but I think one suits our purpose best. This theory explains learning as the process of creating new objects, describing these objects with properties and attributes, and then making connections between these new objects and the ones that already exist. These objects can be elementary at first, but as learning progresses, they become more complex and can be constructed from the atomic ones. When you start teaching a child, you begin by explaining simple things such as plates, spoons, doors, windows, etc. As children grow, they gradually become capable of grasping more complex concepts, e.g. molecules and forces, when they learn physics and chemistry. However, these more complex notions and objects in our minds are built on the foundation of the elementary concepts we learned earlier in life.
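To make this idea concrete, here is a minimal sketch in Python of how such objects could be represented. It is a toy illustration of the theory above, not a design for a real system, and all the names in it are my own hypothetical choices:

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """An object in the learner's mind: a named thing with simple
    properties and links to the other concepts it relates to."""
    name: str
    properties: dict = field(default_factory=dict)
    # relation name -> set of names of connected concepts
    connections: dict = field(default_factory=dict)

# Elementary objects, the kind a child learns first.
spoon = Concept("spoon", properties={"use": "eating"})
plate = Concept("plate", properties={"shape": "round"})

# A more complex object built on top of the elementary ones.
table_setting = Concept("table setting")
table_setting.connections["consists_of"] = {"spoon", "plate"}
```

The key design point is that complex objects are not stored as monoliths; they are just names plus connections down to simpler objects, mirroring how molecules and forces rest on plates and spoons in a child's mind.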

To illustrate this learning process, let's recall the famous Feynman technique for learning a subject. His method of understanding a complex issue involved breaking it down into concise thoughts and simple language so that you could explain the issue to a child. To do this, you need to establish clear connections between the subject you are explaining and all the objects already present in your mind. You might then need to go down to the level of elementary objects to explain it to a child. While doing this, you create a complete picture of the relevant objects and their connections to the subject you are learning. If you have ever used this technique, you will agree that it is remarkably effective.

To sum up, the entire learning process that can be implemented in an AI system should consist of two main parts:

  1. Assimilating new knowledge by creating new atomic or complex objects;
  2. Developing a deep understanding of the issue by creating meaningful connections between the new objects and existing ones.

If we present the learning process this way, it seems perfectly feasible to digitise it by creating and storing the objects and their connections in a computer's memory.
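A toy version of these two parts, building on the hypothetical Concept class above, might look like this:

```python
class KnowledgeBase:
    """Stores everything the agent has learned so far and implements
    the two parts of the learning process described above."""

    def __init__(self):
        self.concepts: dict = {}  # name -> Concept

    def assimilate(self, concept: Concept) -> None:
        """Part 1: assimilate new knowledge as a new atomic or complex object."""
        self.concepts[concept.name] = concept

    def connect(self, source: str, relation: str, target: str) -> None:
        """Part 2: create a meaningful connection between two known objects."""
        if source not in self.concepts or target not in self.concepts:
            raise KeyError("both objects must be learned before they can be connected")
        self.concepts[source].connections.setdefault(relation, set()).add(target)
```

Again, this is only a sketch under the assumptions of the theory above; the hard part is deciding which connections are meaningful, and nothing here answers that.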

AI agent capable of learning

The next step would be to create an AI agent that is able not only to create and store objects, their properties, and the connections between them, but also to use this structure to make sense of new observations and experiences. Faced with something new, such an agent should be able to interpret the fresh experience using its existing objects and find a meaningful way to respond. The response itself might vary significantly. At one end of the spectrum, it could simply admit that the new experience is not something it can understand, so additional learning is needed. At the other end, it might create new connections between existing objects to interpret the new experience and make sense of it. The task of creating such a system is, of course, massive, so it might be worthwhile to start with something relatively simple.
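Continuing the sketch, a deliberately naive interpretation step might look like the following. The 'understanding' test here (are the observed objects known and connected?) is a placeholder for whatever real mechanism such an agent would need:

```python
def interpret(kb: KnowledgeBase, observed: set) -> str:
    """Try to make sense of a new observation using existing objects.
    The responses span the spectrum described above."""
    unknown = observed - kb.concepts.keys()
    if unknown:
        # One end of the spectrum: admit that additional learning is needed.
        return f"I don't understand {sorted(unknown)} yet; please explain."
    links = [
        f"{name} --{relation}--> {target}"
        for name in observed
        for relation, targets in kb.concepts[name].connections.items()
        for target in targets
    ]
    if links:
        return "This makes sense to me: " + "; ".join(links)
    # The other end: the objects are known, but new connections must be created.
    return "I know these objects, but I need new connections to make sense of them."
```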

I think we should first try to create an AI system that can digest and interpret simple stories, the ones we would read to a 3-4-year-old child. To start moving down this road, we need to develop an AI agent capable of grasping simple objects, properties, and connections. We would then need to teach this agent to ask questions when it struggles to understand something and cannot make connections between objects. Our responses to the agent should follow the Feynman technique, i.e. we need to go down a few levels and only use objects and notions the agent already 'knows'. The agent would then interpret our explanation by either creating new objects or creating new connections between existing ones. Rinse and repeat. It is a long way to go, but it might take us somewhere current AI will never reach.
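Put together, the teaching loop could start out very simply. In this hypothetical version, the 'teacher' is a human typing Feynman-style answers, and each sentence of the story has already been reduced to the set of objects it mentions:

```python
def teaching_loop(kb: KnowledgeBase, story: list) -> None:
    """Read a story sentence by sentence; whenever the agent fails to
    understand, let it ask and assimilate the teacher's explanation."""
    for sentence_objects in story:
        print("agent:", interpret(kb, sentence_objects))
        unknown = sentence_objects - kb.concepts.keys()
        if unknown:
            for name in unknown:
                # The teacher explains, using only notions the agent already knows.
                explanation = input(f"teacher, what is '{name}'? ")
                kb.assimilate(Concept(name, properties={"explained_as": explanation}))
            # Rinse and repeat: re-interpret once the gaps are filled.
            print("agent:", interpret(kb, sentence_objects))

# Example: a two-sentence story about a cat and a ball.
# teaching_loop(KnowledgeBase(), [{"cat", "ball"}, {"cat", "sleep"}])
```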

Summary

In my perhaps naive and subjective view, the existing AI tools have severe limitations that restrict what we can achieve. This issue has manifested itself in a large number of unsuccessful AI projects. To overcome this problem, we need to create an intelligent agent that can learn the same way humans do. This new AI would not only be able to generalise, but would also possess common sense. Simply speaking, for AI to reach beyond its current limits, it should be brought up and nurtured like a child, not trained like a dog, as is the case now.

I am an ex-professional engineer with a PhD in theoretical physics. I am now transitioning to the AI field, NLP to be specific. I work on the fusion of neuroscience, NLP, and hermeneutics to develop an AI system capable of learning the same way humans do.

