This year’s Google I/O has come to a close, and Google made some big announcements on the software side involving many of its apps and services.
This year Google is betting big on artificial intelligence (AI) and machine learning (ML), using both to make many of its apps and services more user-friendly and interactive.
Here are the top five things that will benefit from AI.
Last year, Google added its augmented reality feature, Google Lens, to Google Photos, which helped users extract useful information from their photos, whether reading signboards, translating languages, or recognizing text.
This year, Google is using AI to study your photos and suggest how to fix flaws or which effect best suits a photo. AI is also used to improve the color pop effect, better identifying which photos are suitable and which object in a photo to colorize.
Finally, Google Lens has also been improved and is now available directly in the camera app on supported devices.
While we still don’t know what Android P’s official name will be (my personal favorite bet is Peda), it is certain that AI will be deeply integrated into the next revision of Android. Starting with battery management, Google has partnered with DeepMind to create Adaptive Battery, which learns your usage patterns and prioritizes which apps get battery power to run in the background.
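To get a feel for the idea, here is a minimal sketch (not Google’s actual implementation) of usage-based prioritization: rank apps by how often they were launched recently, and restrict background power for everything outside the top few. The app names and cutoff are invented for illustration.

```python
from collections import Counter

def background_policy(launch_log, allow_top_n=2):
    """Rank apps by launch frequency; only the top N run freely in background."""
    usage = Counter(launch_log)
    ranked = [app for app, _ in usage.most_common()]
    return {app: ("unrestricted" if i < allow_top_n else "restricted")
            for i, app in enumerate(ranked)}

log = ["chat", "maps", "chat", "camera", "chat", "maps"]
policy = background_policy(log)
# "chat" (3 launches) and "maps" (2) stay unrestricted; "camera" gets restricted
```

The real system reportedly uses a deep learning model rather than simple counts, but the effect for the user is the same kind of ranking.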
Google has also devised Adaptive Brightness, again using AI, which studies your surroundings and adjusts display brightness accordingly.
Finally, the Smart Text Selection feature recognises the text you have selected and offers appropriate options. For example, selecting a location name will give you a suggestion to search Google Maps.
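The core idea is entity recognition: classify the selected text, then map the category to an action. A toy sketch (the patterns and action names are invented for illustration, not Android’s actual classifier):

```python
import re

def suggest_action(selection):
    """Classify selected text and return a suggested action."""
    if re.fullmatch(r"[\w.+-]+@[\w-]+\.[\w.]+", selection):
        return "Compose email"
    if re.fullmatch(r"\+?[\d\s-]{7,15}", selection):
        return "Call number"
    if re.fullmatch(r"https?://\S+", selection):
        return "Open in browser"
    # Fall back: treat it as a place name, e.g. "Baker Street"
    return "Search Google Maps"

suggest_action("hello@example.com")   # → "Compose email"
suggest_action("+1 555-123-4567")     # → "Call number"
```

Android’s version uses on-device machine learning instead of regular expressions, but the mapping from recognised entity to suggested action is the same shape.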
Ever find a restaurant that is well reviewed on Google, has a 4.5-star rating, and still wasn’t good for you? Google is using AI to match restaurants with you based on your food and drink preferences, your past visits, and your ratings. It shows a percent-based personalized match … pretty cool!
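As a rough sketch of what a “percent match” could mean, here is a toy scoring function that compares a taste profile (weights learned from past visits and ratings) against a restaurant’s attributes. The attribute names and weights are invented for illustration; Google’s actual model is not public.

```python
def match_percent(user_prefs, restaurant_tags):
    """Share of the user's weighted preferences that this restaurant covers."""
    if not user_prefs:
        return 0
    overlap = sum(user_prefs[t] for t in restaurant_tags if t in user_prefs)
    total = sum(user_prefs.values())
    return round(100 * overlap / total)

prefs = {"spicy": 3, "vegetarian": 2, "quiet": 1}  # learned from past behavior
match_percent(prefs, {"spicy", "vegetarian"})      # → 83
```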
Finally, the For You and Explore tabs have also been redesigned to offer a better experience with customized suggestions.
Google Assistant is itself built on AI, and Google is improving it further and giving it some new features. The main thing to know here is Google Duplex. Duplex is a new capability of Google Assistant that allows it to make phone calls and hold natural conversations. So you can now issue commands like “Hey Google, book me a dentist appointment at 1 PM on the 25th.”
Google Assistant will actually call the dentist and talk with the person on the other end. The video above from MKBHD explains it in detail. Next, Google Assistant gets six new voices.
Finally, Google Assistant can now have natural back-and-forth conversations with you as well, with no need to say “Hey Google” before every sentence.
Google is using deep learning to study the medical records of patients (in the US for now) to develop a model that can predict answers to questions ranging from basic ones, such as “How long will recovery take?”, to more complex questions relevant to doctors and nurses. This will help them act more prudently and understand patients better.
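For intuition only, here is a toy sketch of learning from past records: estimate a new patient’s recovery time by averaging the outcomes of the most similar de-identified records. The features and numbers are invented for illustration; Google’s actual model is a deep neural network, not a nearest-neighbor average.

```python
def predict_recovery_days(patient, records, k=2):
    """Average the outcomes of the k most similar past records."""
    def dist(a, b):
        return sum((a[f] - b[f]) ** 2 for f in a)
    nearest = sorted(records, key=lambda r: dist(patient, r["features"]))[:k]
    return sum(r["recovery_days"] for r in nearest) / k

records = [  # hypothetical de-identified data
    {"features": {"age": 30, "severity": 2}, "recovery_days": 5},
    {"features": {"age": 65, "severity": 4}, "recovery_days": 14},
    {"features": {"age": 35, "severity": 2}, "recovery_days": 6},
]
predict_recovery_days({"age": 32, "severity": 2}, records)  # → 5.5
```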
Google is developing this in collaboration with various medical institutions and hospitals, which also supply the patient data. And before you raise privacy concerns, Google assures us that the data was de-identified to remove sensitive information. You can read about it in more detail here.
From self-driving cars to data processing to even our smartphones, AI is the new frontier of innovation and will be the main force revolutionizing many fields in the next decade.
Just a few years back, we watched Siri fail at answering the most basic questions. Now we have Google Duplex holding real-life conversations with humans, which is, in some ways, AI passing the Turing Test. Are we going to see an uprising soon? What do you think?