I’m approaching my sixth month of having Google Glass, and it’s time to share my views. My basic philosophy is that, because we humans operate on faulty and limited hardware, and because we are exposed to more and more information every year, it is our prerogative to vigilantly pursue technological means of enhancing our cognition. Google Glass, even in its prototypical form, is a major leap forward in cognition-enhancing technology.

What are the major innovations behind Google Glass? In my opinion, there are many. First, there are the incremental improvements over mobile phone technology:
- Glass reduces querying time for information (Google searches, checking one’s calendar, looking at a map) from roughly 20 seconds to roughly 10 seconds.
While this advance may seem inconsequential, I have found that saving ten seconds ten times a day adds up quickly. More importantly, I have found that there are many instances where the value of the information I would query is close to the cost of acquiring it. So reducing that cost even marginally has dramatically increased the frequency with which I actually perform the query.
- Glass reduces the time it takes to record information, in the form of photos or videos, to nearly zero.
Simply put, I’ve taken nearly a thousand photos and almost a hundred videos that I almost certainly would not have taken without Glass. Glass captures pictures and video from the perspective of your eye, which makes recording concerts, parties, and even tennis games quite easy.
- Glass enables hands-free information querying and recording.
This is mostly relevant for driving and biking. It used to be that, while driving or biking, I would idly wonder when the album I was listening to was recorded, or where my next turn was, or I would want to take a picture of a sunset, and I would have to decide whether it was worth the risk of getting my phone out of my pocket. Now I have many more opportunities to query and record information.
Then there are the non-incremental, potentially massively disruptive innovations:
- Glass can passively process audio.
Imagine that any time you are engaged in conversation or idle musings, a virtual assistant is waiting patiently to chime in with a relevant Wikipedia page, Facebook profile, YouTube video, etc. in an attempt to answer an implied (or explicit) question. While the current model requires deliberate activation by saying “ok glass”, presumably future models will feature contextual and spontaneous querying. Already Google is trying to expand its search engine into an assistant.
- Glass can passively process video.
Glass can see what you see and help you figure out what you’re looking at. If you see a species of plant or animal you don’t know, it can tell you. If you see someone and you have forgotten their name / the last time you saw them / where they work, it can tell you. If you see a famous landmark, building, or painting, it can provide context. If you witness a traffic accident or a robbery, it can save the video. If you are trying to learn a musical instrument or read a book in a foreign language, it can help. The possibilities are endless. The main technological hurdle now is that recording a constant stream of video, and analyzing it for features, is extremely computationally intensive. But if the limiting factor is computational power, rather than algorithmic insight, Moore’s law should take care of it in time.
- Glass provides for passive learning.
Recently the Niantic team at Google launched a glassware app called Field Trip. It has a list of roughly 200,000 significant locations across the world – buildings, monuments, museums, murals, etc. – and when you approach one of them, it pops up on Glass with some information. Usually it shows pictures of the landmark from when it was built and from recently, along with a few cards with historical information. Here are a few tidbits that I’ve learned since the launch a few weeks ago: UC Berkeley has the oldest public college dorm in California, and it’s still boys only; the Stanford entrance on Palm Drive, while now small, used to be decorated with Roman columns; Shoreline Boulevard in Mountain View was once dotted with wineries until a pest called the phylloxera devastated the region in the 1890s.
Generally speaking, while the internet has provided most people with the ability to learn about almost any topic, time and discipline are scarce resources. Imagine you are trying to learn French by taking online classes, and perhaps using software like Duolingo or Rosetta Stone. Many people succeed with such methods, but it is common knowledge that the best way to learn French is to live in France. Why is this? I believe it’s because, while the energy and focus for active learning are limited, passive learning continues all day long. Reading road signs and menus in French, hearing snippets of conversation on the street or subway, and seeing movies and television in French provide a tremendous amount of reinforcement and practice.
In the same way, Google Glass allows for passive learning in a variety of disciplines. If I want to become an expert botanist, am I more likely to succeed by taking an online class, or by having a virtual assistant point out the various flora I walk by in my daily life? If I want to become an expert in the history of San Francisco, am I more likely to succeed by reading a few books, or by having a virtual assistant point out the history of famous locations as I bike by them? If I want to become an expert in body language, am I more likely to succeed by getting a master’s degree in psychology, or by having a virtual assistant show me a feed of what I’m seeing, but layered with exaggerated microexpressions and highlighted changes in blood pressure?
Time will tell how much of Glass’s potential will be realized. In the meantime, I’m quite enjoying the incremental innovations.