Conjuring digital memories of #NEXT15
Google and Apple are competing to bring our digital memories to life - and using machine learning to do it.
I’ve been hankering after clever tools to play with my vast and boring collection – database, even – of photographs for a long time. See many past posts about this. Google Photos is a big step in the right direction, but Apple’s machine learning in the new versions of its operating systems is just great. In particular, I love the way it generates movies from your old photos – at the rate of one or two a day. It’s a great way to experience a photo library that goes back over a decade. And once in a while, it throws up something really apt, like this:
That’s nearly a nice video of last year’s NEXT – bar some random pics from a brief visit to the city six years ago, when I spoke at a journalism conference, and some tourist images from last year’s event. In a matter of minutes I’d swapped them out for this:
OK, it’s not professional quality. But it’s not far off what I could produce myself – as a videographer, I make a fine writer – and in a fraction of the time.
It’s astonishing what tech can do when it stops seeing things as digital analogies of physical objects, and sees them for what they are – databases of information that can be searched and manipulated in all sorts of interesting ways.
All of a sudden my devices have become the memory of my Digital Ego, finding and surfacing moments from my past in a compelling format.