Google Lens Turns Your Camera Into a Search Box

Computer vision helps Google know what's in your photos. Search helps it make those photos useful.

Google is remaking itself as an AI company, a virtual assistant company, a classroom-tools company, a VR company, and a gadget maker, but it's still primarily a search company. And today at Google I/O, its annual gathering of developers, CEO Sundar Pichai announced a new product called Google Lens that amounts to an entirely new way of searching the internet: through your camera.

Lens is essentially image search in reverse: you take a picture, Google figures out what's in it. This AI-powered computer vision has been around for some time, but Lens takes it much further. If you take a photo of a restaurant, Lens can do more than just say "it's a restaurant," which you know, or "it's called Golden Corral," which you also know. It can automatically find you the hours, or call up the menu, or see if there's a table open tonight. If you take a picture of a flower, rather than getting unneeded confirmation of its flower-ness, you'll learn that it's an Elatior Begonia, and that it really needs indirect, bright light to survive. It's a full-fledged search engine, starting with your camera instead of a text box.

The first home for Lens will be Google Photos, so it can go back through your existing library and find all sorts of new data. It's also coming to Google Assistant, which is where you'll primarily interact with new photos and searches. Over time, it'll come to all Google products. Remember a couple of years ago, when Google Translate added Word Lens, which let you hold up your camera to a sign in a foreign language and, like magic, see it translated on your screen? Or Google Goggles, which you could use to get more information about whatever painting, landmark, or barcoded product you were looking at? Those were both precursors to Lens, and now, rather than living in a standalone app, the capability will be available across Google products.

Lens is, in a way, a Google-y version of Snapchat's and Instagram's AR features: a way to understand more about the world just by looking at it. Of course, rather than place sharks next to your breakfast cereal, Google would rather show you nutrition facts. Or a blog post about the shocking truth about raisins. It's most similar to what Pinterest announced earlier this year, a feature also named Lens. Pinterest hopes you'll snap a photo of your outfit, or a package of broccoli, so that it can spit back shopping recommendations or killer new recipe ideas.

Google can do that too, and much more besides. The company has gotten very good at computer vision over the last couple of years, and once it's solved the "what's in the photo" part of the equation, well, it's Google. It's pretty good at search. At a more abstract level, Lens marks a big turn for Google. Like so many other companies, it has figured out that the camera is the input of the future. Lens could make searching the internet as fast and natural as looking around. It fuses the real and digital worlds in fun and useful ways. And, not to get ahead of ourselves here, it'd be a pretty great augmented-reality feature for Google Glass 2.0.