The Cloud Natural Language API joins Google’s existing family of pre-trained machine learning APIs, which includes the Vision API and the Translate API. Google says it gives developers access to Google-powered syntax analysis, sentiment analysis, and entity recognition, and lets them analyze text in languages including English, Spanish, and Japanese (initially) through REST API integration:
- Sentiment Analysis: to understand the overall sentiment of a block of text.
- Entity Recognition: to identify the most relevant entities in a block of text and label them with types such as person, organization, location, event, product, and media.
- Syntax Analysis: to identify parts of speech and create dependency parse trees for each sentence, revealing the structure and meaning of the text.
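As a rough sketch of that REST integration, the call below runs sentiment analysis over a block of text with Python’s requests library. The API key and sample text are placeholders, and the path shown is the current v1 endpoint rather than the v1beta1 endpoint from the original launch:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: a key with the Natural Language API enabled

# Inline document payload: plain text, English.
body = {
    "document": {
        "type": "PLAIN_TEXT",
        "language": "en",
        "content": "The new Natural Language API looks genuinely useful.",
    },
    "encodingType": "UTF8",
}

url = "https://language.googleapis.com/v1/documents:analyzeSentiment"
resp = requests.post(url, params={"key": API_KEY}, json=body)
resp.raise_for_status()

# documentSentiment holds the overall score and magnitude for the whole block.
print(resp.json()["documentSentiment"])
```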
Google added that users can apply it to understand sentiment about products on social media, or to parse intent from customer conversations happening in a call center or a messaging app. Text can either be uploaded inline in the request or read from documents stored on Google Cloud Storage.
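For the Cloud Storage path, only the document payload changes: a gcsContentUri field replaces the inline content. A minimal sketch of an entity-recognition call, with a hypothetical bucket and object name:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder API key
GCS_URI = "gs://my-bucket/reviews/review-001.txt"  # hypothetical stored document

body = {
    # Point the API at the stored document instead of inlining text.
    "document": {"type": "PLAIN_TEXT", "gcsContentUri": GCS_URI},
    "encodingType": "UTF8",
}

url = "https://language.googleapis.com/v1/documents:analyzeEntities"
resp = requests.post(url, params={"key": API_KEY}, json=body)
resp.raise_for_status()

# Each entity carries a name, a type (PERSON, ORGANIZATION, ...), and a salience score.
for entity in resp.json()["entities"]:
    print(entity["name"], entity["type"], entity["salience"])
```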
In another post, Google showed how the Cloud Natural Language API can be used to analyze Harry Potter and articles from the New York Times. After entity analysis of a story, the text and the entity data can be inserted into a Google BigQuery table, making it possible to run queries that analyze entity mentions in top news and to visualize the results with Google Data Studio.
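A minimal sketch of that loading step with the google-cloud-bigquery client; the project, dataset, table, and column names are assumptions rather than Google’s published schema, and the table is assumed to already exist:

```python
from google.cloud import bigquery

# Entity data shaped like an analyzeEntities response (hypothetical values).
entities = [
    {"name": "Harry Potter", "type": "PERSON", "salience": 0.62},
    {"name": "Hogwarts", "type": "LOCATION", "salience": 0.21},
]

client = bigquery.Client()  # assumes default credentials and project

# Hypothetical destination table: entity_name STRING, entity_type STRING, salience FLOAT.
table_id = "my-project.news_analysis.entity_mentions"

rows = [
    {"entity_name": e["name"], "entity_type": e["type"], "salience": e["salience"]}
    for e in entities
]

# Streaming insert; insert_rows_json returns a list of per-row errors (empty on success).
errors = client.insert_rows_json(table_id, rows)
if errors:
    print("Insert failed:", errors)
```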
Dan Nelson, Head of Data at Ocado Technology, said:
“Google’s Cloud Natural Language API has shown it can accelerate our offering in the natural language understanding area and is a viable alternative to a custom model we had built for our initial use case.”
The Cloud Speech API, meanwhile, which was announced in March 2016, is designed to let developers convert audio to text by applying powerful neural network models. Google claims it recognizes over 80 languages and variants, for both apps and IoT devices. It appears to share the same DNA as the voice recognition technology that powers Google Search, and boasts more than 5,000 company sign-ups during its alpha.
The beta launch, however, adds two features: word hints, which let API calls supply likely words or phrases to improve recognition, useful in command scenarios (e.g., a smart TV listening for “rewind” and “fast-forward” while a movie is playing); and asynchronous calling, which makes voice-enabled apps easier and faster to develop.
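A rough sketch of a recognition request with word hints, again using requests against the current v1 REST path (the launch-era beta used v1beta1); the key and audio URI are placeholders:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: a key with the Speech API enabled

body = {
    "config": {
        "encoding": "LINEAR16",
        "sampleRateHertz": 16000,
        "languageCode": "en-US",
        # Word hints: phrases the recognizer is biased toward.
        "speechContexts": [{"phrases": ["rewind", "fast-forward"]}],
    },
    # Hypothetical recording stored on Cloud Storage.
    "audio": {"uri": "gs://my-bucket/remote-command.raw"},
}

# Synchronous recognition; speech:longrunningrecognize is the asynchronous variant.
url = "https://speech.googleapis.com/v1/speech:recognize"
resp = requests.post(url, params={"key": API_KEY}, json=body)
resp.raise_for_status()

for result in resp.json().get("results", []):
    print(result["alternatives"][0]["transcript"])
```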
Other features include Automatic Speech Recognition, Global Vocabulary, Streaming Recognition, Real-time or Pre-recorded Audio Support, Noise Robustness, and Inappropriate Content Filtering.
In addition, Google is expanding its public cloud capabilities and business with a new Cloud Platform region in Oregon, called us-west1. At first, customers will be able to use the company’s Compute Engine, Cloud Storage, and Container Engine services there, with more capabilities coming later, as Google competes with the likes of Microsoft (Azure) and Amazon (AWS).
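Using the new region is a matter of selecting it when provisioning resources. For example, assuming the google-cloud-storage Python client and a hypothetical bucket name, a bucket can be placed in us-west1 like this:

```python
from google.cloud import storage

client = storage.Client()  # assumes default credentials and project

# Create a bucket in the new us-west1 (Oregon) region; the name is hypothetical.
bucket = client.create_bucket("example-uswest1-bucket", location="us-west1")
print(bucket.name, bucket.location)
```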
Apoorv Saxena, Product Manager for the Translate API and Cloud Natural Language; Dan Aharon, Product Manager for the Cloud Speech API; and Dave Stiver, Product Manager for Google Cloud Platform, said in a joint statement:
“Our initial testing shows that users in cities such as Vancouver, Seattle, Portland, San Francisco, and Los Angeles can expect to see a 30-80 percent reduction in latency for applications served from us-west1, compared to us-central1.”