This week—for one day only—Google allowed anyone to buy a pair of its much-hyped smart spectacles, Glass, for the cost of a month’s rent in Brooklyn.
Google is calling the people who’ve bought Glass “Explorers.” That’s deliberate. Google doesn’t actually know what Glass should be for, but it wants everyone to help figure that out.
It’s a great deal for Google. The company’s m.o. from the beginning has been to build technology and then let applications follow. That philosophy led to the massive success of AdWords (after Google had built really cool, but rather unprofitable, search technology). That’s why Google embarks on so many investment-intensive projects, things like self-driving cars and taking roadside pictures of every street in the world. Now with Glass, Google’s doing it again—building cool tech with unclear applications—plus asking the rest of us what to do with it. Plus charging for the device itself. Not a bad business model, if you ask me.
But even though Google wants everyone to figure Glass out, I don’t think Glass is going to be for everyone.
Currently, Glass has a PR problem. Geeky early users have given it a sort of “creep factor.” But that will go away as the device disappears into its surroundings, starting with glasses, and then potentially—if recent patent revelations are an indication—contact lenses.
(Read more about this Google patent on PatentBolt.)
The patent drawing appears to be a bit of a conceptual mashup between Glass and a project Google announced in January this year: a “smart” contact lens that would embed a glucose sensor between two layers of contact lens material and allow diabetics to auto-beam blood sugar reports and warnings to their smartphones. It’s still in the works, but speculation has run amok about what could come next with smart lenses. First, cameras. Then? Content displayed over our eyes?
Technology is disappearing. As our chips get smaller and our processors more powerful and less wired, gadgets and computers start to blend into our surroundings, or into our other devices. (Calculators, for instance, used to be huge, and now they are software on any tiny device. Sophisticated computers are now being embedded into thermostats, and tracking sensors into plain old door locks that open with your phone. I once heard Ray Kurzweil give a speech about how computers would one day be the size of red blood cells.)
So it’s conceivable that we’ll develop computers that could fit into our eyes. We’re a ways off from anything sophisticated. But the more important question at hand—the one Google is asking people to help it figure out—is why would we want to?
Google undoubtedly wants Glass to break free of nerd-tech circles. That’s why we keep hearing about use cases like “artists” and “explorers” and “the disabled.” But at $1,500 a pop, I doubt many artists will buy in. It certainly won’t be a democratizing technology for creators the way cameraphones and Instagram have turned us all into photographers. And the use cases I’ve heard about blind people putting camera-equipped Google lenses in their eyes to sense their surroundings are pretty bogus, too. There’s very little upside, and a lot of downside and injury potential, to having blind people stick microchips in their eyes when they could wear those same sensors in glasses or other things. (Now, perhaps if we can get the technology to the point where it’s embedded surgically… that might be interesting…)
LinkedIn editor and Wired alum John Abell likens Glass to the Segway: it was supposed to be the “people’s car 2.0.” That didn’t happen. But today the Segway is a staple of mall cops everywhere. So it found its use case; it just wasn’t ubiquitous like the Volkswagen. I think Glass will be the same way. It’s not going to be the next cell phone. But doctors and scientists may really take to it.
Indeed, the applications that make sense for Google Glass seem to be primarily professions or lifestyles where enhanced vision is already needed:
- Scientists, doctors, anyone who works under a microscope or magnifier
- Education (Augmented reality enhanced learning? Maybe. But why glasses or contact lenses when we have huge screens and tablets?)
- Explorers, adrenaline junkies (Perhaps Glass becomes the new GoPro for people who want continuous video recording without the equipment.)
- Tourists, navigators (Overlaying navigation or Wikipedia-style facts over the real world is a no-brainer.)
- Children’s toys and games (Imaginary friends to play with, etc.? Sure. Though Oculus Rift seems a better bet for the future of high-end gaming.)
- Disabled assistance (I predict primarily for sensor detection in the body, which doesn’t have to be shown in a heads-up display, or certainly not a contact lens.)
You’ll note that looking at advertising is not on the list. Advertisers will certainly have a lot to get excited about with a new medium like in-vision display. But if there’s one thing technology has made us consumers good at, it’s avoiding interruptive advertising. (And demanding less of it.) As soon as ads start popping up in our Google Glass, Google Glasses are going to start ending up in the trash.
I look at technology like this through the lens of what we’re doing at my company, Contently, which is trying to create a better media world. Are there ways that layering content on top of our everyday vision can actually help make the world better? A lot of programmers bought Glass yesterday because they’re betting there are.
The real question is, when the hype subsides, will regular people want all the things those programmers build, or is Glass just the next Segway?
Written by Shane Snow. Shane is Chief Creative Officer of Contently. He writes about media and technology for Wired, Fast Company, Ad Age, and more, and tweets at @shanesnow. This story originally appeared on LinkedIn.