What the failure of lifelogging tells us about technology and the future
Bryan Alexander 2016-09-18
Remember lifelogging? That was the idea that devices would video our lives for later viewing. It’s been a science fiction trope for a while, inspiring stories like John Crowley’s amazing “Snow”.
Yet in 2016 lifelogging hasn’t taken off. In fact, it hasn’t really happened at all. Computerworld has a fascinating discussion with one of its pioneers, Gordon Bell, who launched MyLifeBits in 2001 as a lifelogging experiment. Now he thinks several factors blocked it from happening, and other options have arisen to complicate things.
I think we can learn a good deal from this misfire of prediction and experiment.

Image: “balancing the deep creative possibilities of transparency and lifelogging with issues of privacy and control of personal information.” Recorded in Second Life, 2006.
One reason we’re not all making or consuming MyLifeBits bits is that two technologies took off and drew away our potential lifelogging energies: smartphones and social media. Phones, not dedicated lanyard cameras, are where we increasingly take photos and record video. More importantly, social media, increasingly accessed via mobile devices, is where we now pour those energies. That’s where we think to record and share our lives. That’s where we tag, form groups, and are reminded of (some) memories. And we do it through multiple media, including text. In a sense, Facebook etc. are our lifelogs, just lo-fi, externally owned and controlled, and badly organized versions.
Mike Elgan, the article’s author, adds to Bell’s rueful account the problem of information overload, or what he dubs “data fatigue.” Seeing media and data as burdensome, we reject rather than embrace the idea of making more of the stuff. Elgan: “[i]t seems that the people working hard to forget things outnumber the people who are trying to remember things.”
Missing from both Bell’s and Elgan’s reflections is a further problem: many people find surveillance creepy. We haven’t embraced anti-surveillance strategies in a serious way (sousveillance is scarce, too), but neither do we celebrate our exposure. Nor have we taken up lifelogging ironically. Lifelogging just hasn’t fit the post-Snowden age.
On the other hand, some technology didn’t advance rapidly enough:
In the future, [Bell] said, we could see the price of memory come way down, as well as breakthroughs in battery technology and artificial intelligence (A.I.). But for now, it’s not possible to automatically record everything using a mobile device. And it’s hard to manage and use the terabytes of data generated.
That’s a counterintuitive observation for an era obsessed with the speed of technology’s advance, but an apt one. While memory costs have certainly dropped, it’s still a pain to record a lot of video, then transfer it, store it, and make it searchable. This is one way video hasn’t conquered the world (yet).
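To get a rough sense of the “terabytes of data” Bell mentions, here is a back-of-the-envelope sketch; the bitrate and daily recording hours are my own illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope storage estimate for continuous video lifelogging.
# Bitrate and daily hours are illustrative assumptions, not figures from Bell or Elgan.

BITRATE_MBPS = 5        # assumed bitrate for 1080p video, in megabits per second
HOURS_PER_DAY = 16      # assumed waking hours recorded each day
DAYS_PER_YEAR = 365

def yearly_storage_tb(bitrate_mbps: float, hours_per_day: float, days: int) -> float:
    """Approximate terabytes needed for a year of continuous video capture."""
    seconds = hours_per_day * 3600 * days
    megabits = bitrate_mbps * seconds
    return megabits / 8 / 1_000_000   # megabits -> megabytes -> terabytes

print(f"~{yearly_storage_tb(BITRATE_MBPS, HOURS_PER_DAY, DAYS_PER_YEAR):.1f} TB per year")
# prints roughly "~13.1 TB per year"
```

Even under these modest assumptions, one person’s year of footage lands in the double-digit terabytes before any indexing, backup, or search, which is where the management pain Bell describes begins.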
What does this tell us about futuring and technology? First, incorrectly anticipating lifelogging means we can overestimate technology’s speed of development, which is always an important caution. Second, if Elgan and I are correct about information overload and surveillance fears, it means lifelogging proponents and prognosticators also failed to foresee the ways humans can fear technology.
Moreover, not predicting social media’s enormous triumph is a case of understating humanity’s fierce social instinct. We love to share through technologies, and always have, from making mix tapes to founding public libraries and painting cave walls. The extraordinary rise of Usenet in the 1980s should have given people a big hint.
We can consider this another way, by viewing predictions of lifelogging not as errors of fact but as mistakes of timing. Bell et al. were simply ahead of their time: if key technologies developed too slowly, perhaps they’ll be ready in the future. In addition, AI and artificial assistants, developing rapidly now, could take care of the archiving problem:
Instead of searching through terabytes of data, we’ll simply ask our assistant: “Hey, what was the name of that restaurant I enjoyed in London a few years ago?”
…In fact, if trends continue in their current direction, retrieval will be so automated that our virtual assistants will offer facts not only from the Internet, but also from our own experiences and memories.
Looking ahead, Elgan sees emergent trends in mobile and social media joining forces to make lifelogging work and yield a transformed life:
Next-generation earbuds may whisper in our ears, smartwatches may nudge us, and smartglasses may display just-in-time information based on our extensive lifelogging histories. For example, when we meet someone we’ll instantly start receiving relevant information about when we met that person before, what our mutual social connections are and more…
Once those problems are solved by better hardware and advanced A.I., lifelogging and the photographic memory it promises will be just another background feature of every mobile device we use.
Maybe. Maybe not. AI could plateau or flame out. Moore’s Law might stop, halting the boom in ever-smaller hardware. And the backlash to Google Glass suggests we could come to fear people listening to their smartwatches, or feel uncomfortable when dragged unwillingly into someone’s glasses-enabled self-archiving project.
There’s a third option. We could see more people consciously engaging not in continuous self-archiving, but in structured, creative media creation around their lives. The tools are certainly there. So are examples, like narrative lifelogging experiments (one such) and daily self-portraits (like this).
DS106 offers a pedagogy to support this with its Daily Create assignment. Perhaps this kind of selective, deliberate practice is also a way of avoiding Orwellian fears.
What do you think? Have you seen any lifelogging around, or have you done this yourself? What about educational implications?
(link via HackerNews; images by hirotomo t, Justin Hall, and John Perivolaris)
