The Limits Of Machine Learning Are Still Human

How close are we to the AIs we see in sci-fi movies? Further than you might expect…

What are the limits of machine learning?

We’ve talked before about science fiction’s role in helping us think about the dangers of future products, but there’s also a set of dangers implicit in the way science fiction shapes our thought processes. Much of the sci-fi around AI suggests an independent, self-learning machine that evolves beyond its human creators.

That’s not the reality of today’s self-learning machines at all. They are incredibly dependent on the humans who shape them, as this article in Nautilus explores:

Even when machines teach themselves, the preferred patterns are chosen by humans: Should facial recognition software infer explicit if/then rules, or should it treat each feature as an incremental piece of evidence for/against each possible person? And what features should the system attend to? Should it care about individual pixels? Maybe sharp edges between light and dark regions? These choices constrain what patterns the system deems likely or even possible. Finding that perfect combination has become the new job of the machine learning engineer.
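
To make that concrete, here is a minimal Python sketch of two of those human choices, using random stand-in data rather than anything from the article: whether the model expresses explicit if/then rules or treats each feature as incremental evidence, and whether it attends to raw pixels or to edges.

```python
# Sketch only: the data is random noise standing in for face images, and the
# models and feature choices are illustrative, not taken from the article.
import numpy as np
from sklearn.tree import DecisionTreeClassifier       # learns explicit if/then rules
from sklearn.linear_model import LogisticRegression   # weighs each feature as incremental evidence

rng = np.random.default_rng(0)
X_pixels = rng.random((200, 64 * 64))   # 200 fake 64x64 grayscale "face" images
y = rng.integers(0, 5, size=200)        # labels for 5 hypothetical people

# Human choice 1: what kind of patterns may the system express?
rule_based = DecisionTreeClassifier(max_depth=5).fit(X_pixels, y)
evidence_based = LogisticRegression(max_iter=1000).fit(X_pixels, y)

# Human choice 2: which features should it attend to, raw pixels or edges?
images = X_pixels.reshape(-1, 64, 64)
edges = np.abs(np.diff(images, axis=2))            # crude horizontal-edge map
edge_based = LogisticRegression(max_iter=1000).fit(edges.reshape(len(edges), -1), y)
```

Neither choice is made by the machine itself; each one constrains what the system can ever learn.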

Some have suggested that neural networks will be the solution to this constraint. They allow high-level features to be inferred from more basic data without direct human intervention. So while humans trigger the process, the computer develops itself from there, without explicit instructions.
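
Here is a hedged sketch of that idea, again with random stand-in data: the network below is fed raw pixels with no hand-engineered features, and the weights it learns in its hidden layer are the intermediate features it develops for itself.

```python
# Sketch only: random noise stands in for face images; the network and its size
# are illustrative placeholders, not taken from the article.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_pixels = rng.random((200, 64 * 64))    # raw pixels, no engineered features
y = rng.integers(0, 5, size=200)         # labels for 5 hypothetical people

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X_pixels, y)

# The hidden-layer weights are the features the network developed from the
# pixels on its own; no human wrote them down explicitly.
learned_features = net.coefs_[0]         # shape: (4096 pixels, 64 hidden units)
print(learned_features.shape)
```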

One Neural Network to rule them all

So, are we on the verge of the sci-fi self-learning machine? Not quite:

Neural networks still aren’t a perfect fit for all problems. Even in the best cases, they require quite a bit of tweaking. A neural network consists of layers of “neurons,” each performing a calculation on an input and spitting out the result to the next layer. But how many neurons and how many layers should there be? Should each neuron take input from every neuron in the previous layer, or should some neurons be more selective? What transformation should each neuron apply to its inputs to produce its output? And so on.
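
Those questions turn directly into code. A minimal sketch of the same choices, written as explicit arguments (the sizes and activation below are arbitrary placeholders, not values from the article), might look like this:

```python
# Sketch only: every number and setting here is a placeholder that the
# engineer must pick and tune; nothing is chosen by the network itself.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64 * 64, 256),   # how many neurons should the first hidden layer have?
    nn.ReLU(),                 # what transformation should each neuron apply?
    nn.Linear(256, 64),        # how many layers should there be?
    nn.ReLU(),
    nn.Linear(64, 5),          # one output per person we want to recognise
)
```

And in this sketch every neuron takes input from every neuron in the previous layer; making some neurons more selective, as the quote asks, would be yet another human design decision.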

Clearly, there are plenty of research projects underway that aim to clear that hurdle and make the networks more self-determining. But even then, a network dedicated to face processing will probably be rubbish at speech recognition. The vision of a single, all-encompassing AI is a very long way off, and we biological humans will still have a very important role in shepherding those networks into making the most useful decision sets.

And that, in turn, suggests that we need to be quite careful about the range of people shaping the neural networks – but that’s fodder for another post.