Reality mining with machine learning for maps and profit
Reality mining was an idea from nearly a decade ago, but its time may have finally come. Machine learning is opening up avenues of analysis and modelling that we could barely have imagined only eight years ago.
Eight years ago — eight years! — we succumbed to data love. We were right at the height of the mobile phone boom, and the potential of carrying connected, sensor-rich devices with us everywhere was exciting. We could map our environment in richer ways! We could quantify our bodies – the quantified self!
And, then, well, everyone realised it was much easier to just mine what people were doing on the web — and on social media in particular — and they got on with that instead.
That doesn’t mean that the original idea wasn’t good, though. It was just, well, premature. We lacked the tools we needed to really mine reality – because the datasets involved in the physical world make those of the digital world look tiny. In the last few years, we’ve got there. Machine learning is now here — heck, it’s probably built into your phone, if you’ve bought one in the last couple of years, and it’s changing all sorts of things:
In a study published in Nature on July 3, researchers from the Lawrence Berkeley National Laboratory used an algorithm called Word2Vec to sift through scientific papers for connections humans had missed. Their algorithm then spat out predictions for possible thermoelectric materials, which convert heat to energy and are used in many heating and cooling applications.
Guess what? Many of those predictions were good, and have provided us with new, more efficient thermoelectric materials.
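The core trick is simple to sketch: Word2Vec represents each term in the literature as a vector, and candidate materials can then be ranked by how close their vectors sit to a concept like "thermoelectric". The vectors and material names below are toy placeholders (real embeddings are learned from millions of abstracts and have hundreds of dimensions), but the ranking step works the same way:

```python
import math

# Toy three-dimensional "embeddings" standing in for vectors a Word2Vec
# model would learn from a corpus of materials-science papers. The
# material names and numbers here are illustrative, not from the study.
embeddings = {
    "thermoelectric": [0.9, 0.1, 0.3],
    "material_A":     [0.8, 0.2, 0.4],  # hypothetical strong candidate
    "material_B":     [0.7, 0.3, 0.5],  # hypothetical weaker candidate
    "material_C":     [0.1, 0.9, 0.2],  # hypothetical poor candidate
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Rank candidate materials by similarity to the concept "thermoelectric".
query = embeddings["thermoelectric"]
ranked = sorted(
    (name for name in embeddings if name != "thermoelectric"),
    key=lambda name: cosine(query, embeddings[name]),
    reverse=True,
)
print(ranked)  # most similar candidates first
```

The insight is that materials which behave similarly tend to be written about in similar contexts, so they end up close together in the embedding space even if no single paper connects them.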
To use a real-world metaphor, machine learning moves our mining from the opencast era to the deep-shaft era. We can do more than analyse the obvious: we can use ML to find connections deep within datasets that humans are unable to process in totality. It’s quite something.
Mapmaking with sensor data
Now, that’s just mining reality as expressed through scientific research. What other forms of sensor data could we use? Well, one of the other technologies we discussed a few years back has struggled slowly towards the mainstream: autonomous vehicles. One of the key problems is that they need much, much better maps than we have now – and those maps need to be updated as close to real time as possible.
Once again, ML-based deep learning comes into play:
The pressure on maps, when it comes to accuracy, precision, and update frequency, has never been higher. This is especially true as we move towards HD maps to accommodate autonomous vehicles and deliveries. Robust training data is a crucial part of the puzzle to develop deep learning algorithms for autonomous vehicles. With simple onboard cameras and deep learning, they can update and improve their maps continuously. Deep learning is paving the way for real-time mapmaking using cameras.
The cars themselves – festooned, as they are, with sensors – can improve the maps in real time.
It seems that reality mining’s time may have come again. We’re now developing the tools that allow us to make rapid use of the vast data sets real-world sensors can generate, and finally start to build digital maps of reality that are comprehensible, useful and updatable – and without Facebook’s influence.
Wait, what’s that you said? Facebook is doing what?
Facebook is using machine learning models on high-resolution satellite imagery to paint a definitive picture of human settlement around the world.
Facebook identified structures at scale using a process called weakly supervised learning. After training the model using large batches of photos, then checking over the results, Facebook was able to reach a 99.6% labeling accuracy for positive examples.
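That "99.6% labeling accuracy for positive examples" is, on one plausible reading, what ML practitioners call precision: of all the image tiles the model flagged as containing a structure, the fraction that genuinely do. A minimal sketch of that metric, with hypothetical tile labels (the data below is invented for illustration):

```python
def positive_label_accuracy(predictions, truths):
    """Precision on positive labels: of all tiles the model flagged
    as containing a structure, what fraction truly contain one."""
    flagged = [(p, t) for p, t in zip(predictions, truths) if p]
    if not flagged:
        return 0.0
    return sum(1 for _, t in flagged if t) / len(flagged)

# Hypothetical satellite tiles: True = "contains a structure".
preds  = [True, True, True, False, True]   # model output
truths = [True, True, False, False, True]  # human-checked ground truth
print(positive_label_accuracy(preds, truths))  # 3 of 4 flagged tiles correct → 0.75
```

The "weakly supervised" part means the model was trained on labels that were cheap and noisy rather than hand-drawn building outlines – which is exactly why checking over the results afterwards, as the quote describes, matters.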
In other words, Facebook’s computers are now able to recognise not just your digital footprint – but where you live. Physically. From satellite photos.
It seems that we need to get on top of this digital Parallelwelten of physical reality, before today’s tech giants do…