Amazon Web Services (AWS) has opened up Amazon Lex, an artificial intelligence (AI) service for building applications that can have conversations using voice and text, to all customers.
Amazon Lex brings the sophisticated and proven deep learning algorithms that power Amazon Alexa to all developers as a fully managed service.
Until now, very few developers have been able to build, deploy, and broadly scale apps with automatic speech recognition (ASR) and natural language understanding (NLU) capabilities, because doing so required training sophisticated deep learning algorithms on massive amounts of data and managing the infrastructure to support them.
Amazon Lex eliminates all of this heavy lifting, making it easy for developers to build apps that can have conversations using voice or text by offering the same ASR and NLU technologies that power Amazon Alexa as a fully managed service.
How can developers leverage Amazon Lex?
With Amazon Lex, developers can build and test conversational apps that perform tasks such as checking the weather or the latest news, booking travel, ordering food, getting the latest sales or marketing data from business software, or controlling a connected device. To build a conversational app, developers provide Amazon Lex with sample phrases that describe a user’s intent (e.g., “book a flight”), the corresponding information Amazon Lex needs to fulfill the intent (e.g., travel date and destination), and any questions Amazon Lex needs to ask to elicit that information (e.g., “when do you want to travel?” and “where do you want to go?”). Amazon Lex takes care of the rest by building a machine learning model that parses the speech or text input from the user, understands the intent behind the conversation, and manages the conversation (e.g., if the travel date is already known, the app will skip that question and ask for the destination).
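For developers who prefer to script bot definitions rather than use the console, the same intent-and-slot model is exposed through the Lex model-building API. The sketch below uses the AWS SDK for Python (boto3) and assumes valid AWS credentials; the intent name, bot name, utterances, and prompts (e.g., BookFlight, TravelBot) are illustrative assumptions for the flight-booking example above, not values from the announcement.

```python
# Minimal sketch: defining a "book a flight" bot with the Amazon Lex
# model-building API via boto3. All names, utterances, and prompts are
# illustrative assumptions.
import boto3

lex_models = boto3.client("lex-models", region_name="us-east-1")

# An intent bundles sample utterances with the slots Lex must collect
# (travel date, destination) and the prompts used to elicit missing values.
intent = lex_models.put_intent(
    name="BookFlight",
    sampleUtterances=[
        "Book a flight",
        "I want to fly to {Destination}",
        "Book a flight to {Destination} on {TravelDate}",
    ],
    slots=[
        {
            "name": "Destination",
            "slotConstraint": "Required",
            "slotType": "AMAZON.US_CITY",
            "valueElicitationPrompt": {
                "messages": [{"contentType": "PlainText",
                              "content": "Where do you want to go?"}],
                "maxAttempts": 2,
            },
            "priority": 1,
        },
        {
            "name": "TravelDate",
            "slotConstraint": "Required",
            "slotType": "AMAZON.DATE",
            "valueElicitationPrompt": {
                "messages": [{"contentType": "PlainText",
                              "content": "When do you want to travel?"}],
                "maxAttempts": 2,
            },
            "priority": 2,
        },
    ],
    # Return the collected slot values to the client; a Lambda code hook
    # could be configured here instead (see the Lambda sketch further below).
    fulfillmentActivity={"type": "ReturnIntent"},
)

# A bot groups one or more intents and is what gets built and published.
lex_models.put_bot(
    name="TravelBot",
    locale="en-US",
    childDirected=False,
    intents=[{"intentName": intent["name"], "intentVersion": "$LATEST"}],
    abortStatement={"messages": [{"contentType": "PlainText",
                                  "content": "Sorry, I could not book that."}]},
)
```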
Developers can then publish the conversational app to mobile and Internet of Things (IoT) devices, web applications, and chat services such as Facebook Messenger, Slack, or Twilio. Amazon Lex handles the authentication required by the different platforms using customer-provided keys and scales automatically as traffic increases, so developers don’t have to worry about provisioning and managing infrastructure.
“Thousands of machine learning and deep learning experts across Amazon have been developing AI technologies for years, and Amazon Alexa includes some of the most sophisticated and powerful deep learning technologies in existence,” said Raju Gulabani, Vice President, Databases, Analytics, and AI, AWS.
“We thought customers might be excited to use the same technology that powers Alexa to build conversational apps, but we’ve been blown away by the customer response to our preview – as organizations in virtually every industry like Capital One, Freshdesk, Hubspot, Liberty Mutual, Ohio Health, and Vonage have mobilized quickly to build on top of Amazon Lex.”
Amazon Lex is integrated with AWS Lambda, an event-driven, serverless compute service that lets developers run code without provisioning or managing servers. This means developers can build Amazon Lex conversational apps that use Lambda functions to implement business logic and retrieve data from enterprise applications and AWS services such as Amazon DynamoDB. Amazon Lex also includes built-in connectors that make it easy for conversational apps to access data from popular SaaS applications like Salesforce, Marketo, Zendesk, and QuickBooks, so apps built using Amazon Lex can answer questions like “what are my top 10 accounts in Salesforce?” by fetching the appropriate data from Salesforce. Developers can also access analytics provided by Amazon Lex to measure application performance and accuracy metrics, which they can use to improve their apps over time.
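As one illustration of the Lambda integration, a fulfillment function receives the recognized intent and slot values from Amazon Lex and returns a response telling the bot what to say next. The minimal sketch below follows the Lex event and response format for Lambda; the BookFlight intent name, slot names, and reply wording are assumptions carried over from the earlier example.

```python
# Minimal sketch of an AWS Lambda fulfillment function for an Amazon Lex bot.
# The intent name ("BookFlight") and reply text are illustrative assumptions;
# real business logic (DynamoDB reads, enterprise API calls, etc.) would go
# where the comment indicates.
def lambda_handler(event, context):
    intent_name = event["currentIntent"]["name"]
    slots = event["currentIntent"]["slots"]

    if intent_name == "BookFlight":
        destination = slots.get("Destination")
        travel_date = slots.get("TravelDate")

        # Business logic would run here, e.g. writing the booking to
        # Amazon DynamoDB or querying an enterprise application.
        message = f"OK, I've booked your flight to {destination} on {travel_date}."
    else:
        message = "Sorry, I can't handle that request."

    # Tell Lex the intent is fulfilled and what to say back to the user.
    return {
        "sessionAttributes": event.get("sessionAttributes") or {},
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        },
    }
```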
Customers can get started with Amazon Lex using the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. Amazon Lex is available in the US East (N. Virginia) Region.
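From the SDK side, getting started can be as simple as sending text to a published bot through the Lex runtime API. The snippet below is a minimal sketch using boto3; the bot name, alias, and user ID are placeholder values assumed from the earlier sketches.

```python
# Minimal sketch: sending user text to a published Amazon Lex bot through the
# runtime API via boto3. Bot name, alias, and user ID are placeholder values.
import boto3

lex_runtime = boto3.client("lex-runtime", region_name="us-east-1")

response = lex_runtime.post_text(
    botName="TravelBot",        # assumed bot name from the earlier sketch
    botAlias="prod",            # alias the bot was published to
    userId="demo-user-42",      # any ID that identifies the conversation
    inputText="Book a flight to Boston tomorrow",
)

# The response carries the recognized intent, slot values, and the bot's reply.
print(response.get("intentName"), response.get("slots"))
print(response.get("message"))
```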