Stephen Wolfram: making knowledge computable

By Adam Tinworth

23/04/2013 | Stephen Wolfram presented us with a challenging session, both in terms of problems with the live demo and the ideas he was trying to convey about modelling the world of knowledge computationally.

The goal of Wolfram Alpha was to make the whole of human knowledge computable. You have to find data, assess what the best sources are, and then digest it into a format computers can work with - work best done by human experts. Then you need to figure out how humans can work with it. Their answer is natural language - making the computer understand a question one human would ask another. The huge amount of knowledge they gathered was a huge advantage in that. They're working with short snippets of text, which are more easily comprehensible than huge swathes of text.

There's more information in Wolfram Alpha than in the whole textual content of the web… Wolfram Alpha's knowledge engine is becoming a backend to many systems that want to inject knowledge into apps - including Apple's Siri.

Mathematica is a language to enable any sort of algorithmic work you want to do. It's nearly 25 years old now. It's a symbolic language that can deal with any kind of object as data. You can deal with an image as data, for example. They want to make the system as automated as possible. The user should just specify a goal, and the system will figure out the best algorithmic tools for doing it…
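As a rough analogy for what "symbolic" means here - expressions held as plain data that a program can inspect and transform - here is a minimal sketch in Python. Nothing below is Wolfram code; the tuple representation and the `diff` function are invented purely for illustration:

```python
# Represent expressions as nested tuples: ("+", a, b), ("*", a, b),
# a variable name (string), or a number. Because the expression is
# ordinary data, code can walk and rewrite it - the core idea of a
# symbolic language.

def diff(expr, var):
    """Symbolic derivative of expr with respect to var."""
    if isinstance(expr, (int, float)):
        return 0                      # d/dx of a constant
    if isinstance(expr, str):
        return 1 if expr == var else 0
    op, a, b = expr
    if op == "+":                     # sum rule
        return ("+", diff(a, var), diff(b, var))
    if op == "*":                     # product rule: (ab)' = a'b + ab'
        return ("+", ("*", diff(a, var), b), ("*", a, diff(b, var)))
    raise ValueError(f"unknown operator: {op}")

# d/dx (x * x)  ->  1*x + x*1
print(diff(("*", "x", "x"), "x"))
```

A real symbolic system does far more (simplification, pattern matching, arbitrary object types such as images), but the data-as-expression idea is the same.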

They're trying to make it as accessible and usable on as many platforms as they can, from mobiles to hosted servers. You can run it on embedded processors in systems like the Raspberry Pi. Many of the capabilities are embedded in the language, not built using it. And there's plenty of knowledge built into it - all the knowledge in Wolfram Alpha is accessible to Mathematica.

To make the most of it, you need to collect data you can perform computation on - he's been collecting things like his key presses or e-mails sent, and can compute rhythms of behaviour from it, for example. Once you have data about things or people, you can figure out what they want to know before they do. Humans will set the goals, and the machines will figure out how to do it.
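A toy sketch of that kind of personal analytics - binning event timestamps by hour of day to surface a daily rhythm. The timestamps below are invented sample data; this is only a sketch of the idea, not Wolfram's actual pipeline:

```python
from collections import Counter
from datetime import datetime

# Hypothetical send times of e-mails (invented sample data).
sent = [
    "2013-04-22 09:15", "2013-04-22 09:48", "2013-04-22 14:02",
    "2013-04-23 09:30", "2013-04-23 23:55", "2013-04-24 09:05",
]

# Count events per hour of day - the simplest "rhythm of behaviour".
by_hour = Counter(datetime.strptime(s, "%Y-%m-%d %H:%M").hour for s in sent)

busiest_hour, count = by_hour.most_common(1)[0]
print(f"Busiest hour: {busiest_hour:02d}:00 with {count} e-mails")
# -> Busiest hour: 09:00 with 4 e-mails
```

Scaled up over years of keystrokes and messages, the same binning reveals the long-term behavioural patterns Wolfram described.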

Future uses might include education, where the language is both the tool for exploring ideas and the way lessons are constructed. SystemModeler allows modelling of large-scale systems - like a plane's avionics - and then simulation of their behaviours.

The computational universe could be mined to solve technology problems. Algorithmic discovery methods have made this technology possible - and could thus lead to others, including medical diagnosis. Humans are pretty bad at diagnosis, because there are so many areas of specialism now. This could change that.

They're doing something different from Google. Wolfram Alpha is trying to compute answers to questions never asked before, while Google is trying to help you find the answers to questions that have already been asked. Oh, and Sergey Brin was an intern at Wolfram's company before he founded Google…