Several weeks ago, Jefferies analyst James Kisner published a scathing report shedding light on the shortcomings of IBM Watson. Kisner focused on the disastrous $60 million Watson project for MD Anderson and highlighted how far IBM is lagging behind Amazon and Apple.
As John Mannes pointed out on TechCrunch, “things would look much worse if Google, Microsoft and Facebook were added to this table.” He also eloquently summarized the common pitfall in our approach to AI: “Reality is that AI isn’t an amorphous black hole that sucks in unstructured data to produce insights. A solid data pipeline and a domain-specific understanding of the AI business problem at hand is table minimum.”
How then should we evaluate different platforms in the age of AI? This is the key question I want to address as I review the Big Five (Google, Amazon, Facebook, Apple, Microsoft) tech companies and their recent moves in AI offerings. Since Google is widely considered the frontrunner in the AI race, I will start there.
In “Google’s Answer to AWS” last December and in “Google I/O 2017 Major Takeaways — From Google.com to Google.ai” earlier in May, I noted that Google envisions its AI products as the equivalent of search/ads in an AI-first world. In other words, just as platform-agnostic browsers allowed Google’s superior search engine to dominate the web, Google is betting on its superior AI offerings to render the underlying infrastructure irrelevant.
Google has always been a product/services company. That product-focused approach allowed AWS to rise and ceded some control in the mobile-first world to Facebook. Until recently, this was Google’s Achilles’ heel (you should read this epic rant, written in 2011, about Google’s inability to think in platforms). But things are changing now: Google is once again leveraging what it does best to steer the conversation away from platforms (the browser) and toward products (search).
Without further ado, let’s dive into what has happened so far:
Kubernetes is Google’s open-source container orchestration system (similar to Docker Swarm). Containers provide a level of abstraction that allows developers to deploy software against a standardized interface.
This makes the underlying hardware and OS, in the cloud or on premises, irrelevant to a degree: a piece of software inside a container will run in the same manner on AWS, Azure, or GCP. Kubernetes is a perfect parallel to the web/Chrome making the underlying OS (Windows, Mac, Android, iOS) irrelevant.
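To make that portability concrete, here is a minimal sketch of a Kubernetes Deployment manifest; the service name and image URL are hypothetical. The manifest never mentions the cloud provider, so it applies unchanged whether the cluster runs on AWS, Azure, or GCP:

```yaml
# Minimal Deployment: run three replicas of a containerized service.
# Nothing here is provider-specific -- which is exactly the point.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: security-video-service        # hypothetical service name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: security-video-service
  template:
    metadata:
      labels:
        app: security-video-service
    spec:
      containers:
      - name: app
        image: example.com/security-video-service:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

Deploying it is the same `kubectl apply -f deployment.yaml` on any conformant cluster, regardless of who hosts that cluster.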
Backed by the massive amount of data it has been collecting and an impressive core machine learning team (Google Brain & DeepMind), Google has pushed out a formidable suite of AI products, including Cloud Machine Learning Engine, Cloud Natural Language, Cloud Speech, Cloud Translation, Cloud Vision, and the Cloud Video Intelligence API.
If, for example, a home security company wants to augment its offering with Google’s Video Intelligence API, which annotates videos and makes them searchable, Kubernetes allows the company to simply move its existing containers to GCP and start using the API. In essence, Kubernetes reduces the switching cost to near zero and becomes the means for Google’s superior AI/ML products to morph into what search was for the web.
The brilliance of Kubernetes is amplified when you consider Google’s acquisition of the data science platform Kaggle. Google now has access not only to data science talent to recruit from, but also to the data and the audience with which to promote its open-source deep learning framework, TensorFlow.
The academic community has already embraced TensorFlow (as evidenced by its GitHub fork and star counts, along with how often it is mentioned in papers), and nothing suggests that adoption will slow down any time soon. Google Cloud products that facilitate training TensorFlow models, along with the accompanying APIs that augment custom projects (e.g., combining an image classifier built with TensorFlow with the Translate API), will likely become the de facto choice for those on Kaggle.
Even those who have previously deployed TensorFlow models on AWS or Azure can now containerize them and switch over to GCP via Kubernetes if they wish. Slowly, Google is abstracting away the details of machine learning and removing the friction and barriers to using its services.
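As a rough sketch of that workflow (the model name and export path are hypothetical), packaging a trained TensorFlow model into a cloud-agnostic container can be as simple as layering an exported SavedModel onto the stock TensorFlow Serving image:

```dockerfile
# Hypothetical example: wrap an exported TensorFlow SavedModel in the
# off-the-shelf tensorflow/serving image. The resulting container is
# identical no matter which cloud it eventually runs on.
FROM tensorflow/serving

# Copy the exported SavedModel into the directory the server watches.
COPY ./exported_model /models/my_model

# Tell TensorFlow Serving which model to load.
ENV MODEL_NAME=my_model
```

Once this image is built and pushed to a registry, moving the workload from AWS or Azure to GCP is largely a matter of pointing a Kubernetes deployment on the new cluster at the same image.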
Lastly, as mentioned in the Google I/O Takeaways article, with the Google Assistant SDK now available on almost any device, Google is reaching the masses and challenging Alexa’s dominance in voice. The SDK also gives Google more ways to, one, collect data, which in turn makes its AI/ML products smarter, and, two, introduce more AI/ML services to its users.
Regardless of how you interact with devices (voice, mobile, tablet, TV, or laptop), you’ll have access to Google in some way, shape, or form.
A host of other products, such as Google Photos and Google Lens, are positioned to win simply by virtue of being better, just as search and maps did. In an effort to regain dominance in the cloud as the world transitions from mobile-first to AI-first, Google is leveraging its products to mask platform lock-in while working to make its platform the clear winner.
Google has indeed made great moves, but as of today, AWS and Azure still hold an advantage over GCP in the battle for the cloud. The good news is that Google’s cloud segment is reported to be growing fast (via Recode). Since GCP is grouped under Alphabet’s non-advertising business, it is hard to know how well GCP by itself is doing, but at the very least, Google’s second attempt at establishing itself in the enterprise cloud industry is doing much better than IBM Watson.
Stay tuned for the next edition of Platform Wars, where we will continue to review the top AI companies, looking at Amazon’s approach to AI and examining Alexa’s success.