Could startups find meaning in our photo streams?

We're generating ever-increasing volumes of photos - so why isn't technology helping us find meaning in them?

Today, for many, is just another #ThrowbackThursday, the day when people share old photos of themselves via Twitter, Facebook or Instagram. Here, for example, is me, 23 years ago:

Adam at 19

Recovered from that?

One of the interesting things about this phenomenon is that there are far more historical photos of the participants out there than would have been conceivable even a few short years ago. Indeed, the arrival of the camera-equipped smartphone has totally changed our relationship with photography.

A recent post by Benedict Evans explores the profound change that’s happened with photos:

The universal scope of the camera and the saturation of our lives with the photos we take also means that ‘taking pictures’ is now no more meaningful a term than ‘writing’. Hence Snapchat, Instagram and Facebook or WhatsApp photo sharing are no more all ‘photos’ than Word, Indesign, WordPress and twitter are all ‘text’. Photos are no longer a category.

In fact, we’re building these vast libraries of digital images online, many of which are not private in any way. Yet there’s been precious little done with them. Even the mighty Google, the master at crunching data into interesting new forms, seems to be steering clear of the sort of mass analysis that could be done with these vast data sets.

Photo recognition

For example, think of the huge amount of geotagged imagery Google now holds in Google+ through the auto-backup feature on Android phones and the iOS Google+ app. Would it be feasible to build a data profile of areas of the world from that data, without publishing or using it for anything beyond the analysis?
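To make the idea a little more concrete, here's a minimal sketch of the kind of aggregation that question implies: binning photo geotags into a coarse grid so each cell carries a simple "photo density" profile of an area. The coordinates and cell size are purely illustrative, not anything Google has described doing.

```python
# Hypothetical sketch: aggregate photo geotags into a coarse lat/lon grid.
from collections import Counter

def grid_cell(lat, lon, cell_size_deg=0.5):
    """Map a coordinate to a coarse grid cell (roughly 55 km at the equator)."""
    return (round(lat // cell_size_deg * cell_size_deg, 4),
            round(lon // cell_size_deg * cell_size_deg, 4))

def density_profile(geotags, cell_size_deg=0.5):
    """Count how many geotagged photos fall into each grid cell."""
    return Counter(grid_cell(lat, lon, cell_size_deg) for lat, lon in geotags)

# Made-up geotags of the sort you might read from photo EXIF metadata.
sample_tags = [(51.5072, -0.1276), (51.5014, -0.1419), (48.8566, 2.3522)]
print(density_profile(sample_tags))
```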

This could be incredibly handy, especially for those of us with historical family photos. Here’s an image taken by my parents in the 1960s:

Historic beach

Plenty of information there – could Google identify the location for me, based on modern geotagged data? Well, in some cases it can. Drag an image into Google Image search and on some (rare) occasions it will suggest a location. And your photos uploaded to Google+ are already searchable by content to some extent:

Starting today, you’ll be able to find your photos more easily and connect with the friends, places and events in your Google+ photos. For example, now you can search for your friend’s wedding photos or pictures from a concert you attended recently. To make computers do the hard work for you, we’ve also begun using computer vision and machine learning to help recognize more general concepts in your photos such as sunsets, food and flowers.
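Google hasn't published how that pipeline works, but the basic idea – attaching general concept labels to a photo so it becomes searchable by content – can be sketched with an off-the-shelf classifier. The sketch below uses a pretrained torchvision model purely as an illustration; the model choice and file name are my assumptions, not Google's method.

```python
# Illustrative sketch: tag a photo with general concept labels using a
# pretrained ImageNet classifier, so the labels could feed a search index.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

def concept_tags(path, top_k=3):
    """Return the top-k concept labels for a photo, with rough confidence scores."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(image).softmax(dim=1)[0]
    scores, indices = probs.topk(top_k)
    return [(labels[int(i)], float(s)) for i, s in zip(indices, scores)]

# Hypothetical usage: tag the beach photo above so "seashore"-style queries find it.
# print(concept_tags("historic_beach.jpg"))
```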

It’s pretty safe to assume that we’ll see more of this as time goes on. Beyond Google, though, examples of people or startups using published photos in interesting, aggregated ways still seem rare.

Synthesising the stars

There’s a fascinating piece on the Smithsonian site about Carnegie Mellon University astronomy researchers who have found a novel use for photography:

To get detailed images of deep space, astronomers have a couple of options, says Technology Review. They can either use a long exposure to capture one really detailed image, or stack multiple less-detailed images together. Lang and colleagues opted for the second approach. But rather than using multiple photos taken with the same telescope, they looked to the web.

The team used a new algorithm to stack nearly 300* images of the Galaxy NGC 5907 that they found on Flickr, Bing, and Google. They did this by “[l]iterally searching for ‘NGC 5907’ and ‘NGC5907’,” explains Astrobites.
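A toy version of that stacking step helps show why it works: averaging many noisy exposures of the same object suppresses the noise and brings out faint detail. The researchers' actual algorithm also has to register images taken with different cameras and orientations; the sketch below assumes the frames are already aligned and the same size, and the file names are invented.

```python
# Simplified sketch of image stacking: average pre-aligned frames of one object.
import numpy as np
from PIL import Image

def stack_images(paths):
    """Average a list of pre-aligned, same-sized grayscale frames into one image."""
    frames = [np.asarray(Image.open(p).convert("L"), dtype=np.float64)
              for p in paths]
    stacked = np.mean(frames, axis=0)  # noise falls off roughly as 1/sqrt(N)
    return Image.fromarray(np.clip(stacked, 0, 255).astype(np.uint8))

# Hypothetical usage with frames found by searching for "NGC 5907":
# stack_images(["ngc5907_a.png", "ngc5907_b.png"]).save("ngc5907_stacked.png")
```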

Hopefully, as images become – as Evans put it – no longer a category but an ongoing process, more startups will find ways of drawing meaning from that stream beyond simple sharing.