Google is making its camera much smarter with a new feature called Google Lens, which can recognize items in images and give users additional information about them.
“We’re at an inflection point with vision,” said Google CEO Sundar Pichai, at the search giant’s developer conference, Google I/O, on Wednesday.
Google Lens is a feature of Google’s (GOOG) voice assistant, Google Assistant. Google Assistant is available on Android phones and iPhones as well as the company’s home automation device, Google Home. It can respond to questions about the weather and traffic, and can also connect to things like personal calendars and Gmail accounts. Pichai said that Lens will also be integrated into Google’s popular mobile photo sharing app, Google Photos, which has 500 million active users.
Google Lens lets consumers point a phone’s camera at something, and the app should understand what it is looking at and provide information based on the image. For example, if Lens looks at an image of a flower, it will tell users what type of flower they are seeing.
Users can also point the camera at a restaurant, and Lens will surface reviews of that restaurant. One crowd-pleasing use case was pointing the camera at a Wi-Fi router’s password label, which instantly connected the device to the Wi-Fi network.
“This is about computers understanding images,” Pichai added.