Google I/O 2017
What is a developers’ keynote supposed to be about? All that technical gobbledegook that you probably don’t want to get into on a first date lest someone mistake you for a geek. And while I agree that it is not best practice to let loose your inner nerd on your first tête-à-tête with your would-be soulmate, discussing a possible future overrun by sentient robots may just make for a great conversation starter. At this year’s I/O over at Mountain View, California, however, I couldn’t help but feel less like someone on a first date and more like an old crone celebrating a routine anniversary with dear old hubby. There was the sudden kindling of a slow-but-steady spark, sure, but most of the event felt so mundane that it lost the charm I expected to feel at a futuristic tech keynote addressing the influencers of 2017.
Artificial intelligence is going to play a great role in shaping the upcoming decade, and companies like Google and Facebook are racing ahead by pumping more and more machine learning into our everyday lives. It therefore came as no surprise that artificial intelligence played a rather important role at Google I/O 2017, as most of the first day was spent discussing how the company has improved its machines’ recognition of sound and visuals over the past year.
The first step, as Google CEO Sundar Pichai put it, is turning your camera into a search box. The company accomplishes this feat with its latest application, Google Lens, which uses machine learning to understand the world around you through your smartphone camera and help you take action based on that understanding. For the first few minutes, Pichai showed us how this new technology can be used to do everything from recognizing a specific flower or a restaurant on the street to connecting to a Wi-Fi network simply by looking at the router sticker. Google Lens can perform an array of interesting functions, from giving you the name of a prominent building in a city to translating a billboard from Japanese to English just by looking at it. This is an important landmark in the quest to bring artificial intelligence into people’s homes on a wider level, and it shows that Google doesn’t intend to wait long for the future we so keenly desire.
Next up was a variety of improvements concerning the versatility, functionality and efficiency of the Google Assistant. Not only has the underlying AI been greatly improved, the Google Assistant has been tweaked to sound more conversational and human than its counterparts. Voice recognition for the virtual assistant has improved greatly, and, starting now, we can even type into the Google Assistant instead of relying solely on voice commands. The most important announcement at this juncture, however, was perhaps the fact that Google Assistant is now available for download on the App Store for iPhones. This move is a remarkable reversal of Google’s earlier decision to keep the Assistant a Pixel exclusive, and it shows how far the company is willing to go to make the Assistant universal.
At last year’s I/O, the CEO of Google introduced us to the first generation of chips made especially for machine learning tasks. This time around, Google introduced its next generation of Cloud TPUs, bringing unparalleled speed and performance to the world of machine learning. Each Cloud TPU board consists of four 45-teraflop chips that collectively deliver 180 teraflops of compute. The Cloud TPUs will initially be made available through the Google Cloud platform on the Google Compute Engine, hence the name. The Cloud TPU makes use of the open-source TensorFlow library, which helps developers build machine learning models at remarkable speed.
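To put those figures in perspective, here is a quick back-of-the-envelope sketch of the numbers quoted above (the per-chip and per-board figures come from Google’s announcement; the scaling comparison to a laptop is a hypothetical illustration):

```python
# Cloud TPU arithmetic as announced at I/O 2017.
chips_per_board = 4
teraflops_per_chip = 45

# Total compute per Cloud TPU board.
board_teraflops = chips_per_board * teraflops_per_chip
print(board_teraflops)  # 180 teraflops

# Hypothetical comparison: a typical 2017 laptop CPU manages on the
# order of 0.1 teraflops, so one board is roughly this many times faster.
laptop_teraflops = 0.1
print(board_teraflops / laptop_teraflops)  # 1800.0
```

In other words, a single board delivers the 180 teraflops Google quoted; in the keynote, Google also described racking 64 of these boards together into "TPU pods" for even larger training jobs.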
In other news, the Google CEO told us that its mobile platform had reached a new milestone, with over two billion active devices running the Android OS. With this came a flurry of announcements regarding plans for Android O, the latest addition to the Android family, which is scheduled for release this summer. Google launched the first official beta of Android O on May 17, and with it came a bunch of features including improved notifications, picture-in-picture and more. Android O is also slated to gain a number of features in the "vitals" category, including an inbuilt virus scanner for apps and wise limits to prevent applications from overusing the device’s battery. While the beta is far from ready for installation on your regular smartphone, developers itching for a peek can download a copy of the software onto their preferred Nexus or Pixel devices.
As part of the same announcement, we received word on an internal project by the name of Android Go, an initiative that Pichai says will help the company reach its next billion smartphone users. Android Go is a lightweight version of the Android OS created especially for the developing world, with a focus on performance over polish. Android Go is said to run easily on devices with less than 512 MB of RAM, feature its own lightweight version of the Play Store and include a system that lets users check how much mobile data they have left on the device.
The I/O Developers Conference also introduced a set of new features for Google Home, the voice-enabled speaker that brings artificial intelligence into your connected home. The first of these was proactive assistance, which means that Google Home will light up with notifications on its own, without the user having to request them. A host of new services, including Spotify, Hulu and HBO Now, are coming to Google Home devices, which means that you will now be able to use the voice-enabled Google Assistant to issue commands to these apps throughout the house. The final feature was hands-free calling for Google Home, including free unlimited voice calls in the US and Canada.
Apart from its usual set of announcements, Google took a moment to step off the AI horse and address two other important technologies that will play a role in shaping the world of the future. The first was virtual reality, for which Google announced that it has been working with Lenovo and HTC to develop standalone virtual reality headsets for Google Daydream. As far as augmented reality is concerned, Google announced a new indoor navigation technology known as the Visual Positioning Service (VPS), which is built upon Project Tango and will help further the cause of AR via next-generation indoor experiences.
Google delved deeper into the realms of VR and AR on the second day of the developers’ keynote. "They allow us to experience computing more like the real world. It works more like we do," said Clay Bavor, VP of Google’s Virtual Reality division. After spending what felt like a lifetime talking about older developments in augmented reality and Project Tango, Bavor showed attendees how Google has used Tango’s precise navigation technology to map the largest augmented reality experience in the world, a 10,000-square-foot virtual rainforest by the name of Into The Wild.
The company further elaborated on the addition of augmented reality features to Expeditions, the classroom educational app that was announced two years ago to help teachers take their pupils on virtual reality field trips. Mike Jazayeri, Director of Product Management at Google, also announced that Daydream will be coming to new LG devices and the Samsung Galaxy S8. The company also heralded the release of Daydream 2.0, Euphrates. The new version of the virtual reality software brings features like Chromecast support, web browsing and YouTube VR. Daydream 2.0 will be available later this year on all Daydream-supported phones and the upcoming standalone headsets, and it will run on a modified version of Android O.
What if you could achieve the high-quality graphics of the desktop in mobile virtual reality experiences? Google Seurat, a new technology named after the famous French painter, aims to do just that. The technology breaks down complex three-dimensional desktop scenes so that they can be rendered on mobile devices without a visible loss in fidelity. Next up, virtual reality experiences are coming to the immersive web via Google Chrome. "Our goal is to make web VR and web AR first class citizens in all browsers," said Andrey Doronichev, Director of Product Management at Google VR.
All in all, many of the announcements on the second day of Google’s I/O event seemed more exciting than those on the first. We are, of course, talking about the more important announcements here, as a variety of smaller things were also announced, including a new way of arranging images in Google Photos and using the Google Assistant to make payments on websites. The developers’ keynote did feel a bit too dumbed down and morose for a tech keynote meant for seasoned developers, and I can’t help but wish that Google had gone into greater depth on the questions posed by AI, AR and VR in our daily lives. Still, the conference was a fascinating event, and I am looking forward to seeing how Apple intends to match it at WWDC this June.
Want more technology in your feed? Follow me on Twitter!