Amazon launches Alexa Presentation Language in public beta

In September, alongside a bevy of new devices, Amazon introduced the Alexa Presentation Language (APL), a suite of tools designed to make it easier for developers to create “visually rich” skills for Alexa devices with screens, such as Amazon’s Echo Show, Fire TV, Fire Tablet, and Echo Spot. Today, following on the heels of a public preview, the company launched APL in beta.

Starting this week, developers can design skills with APL in all locales supported by Alexa and test them in the Alexa Developer Console. (Amazon warns they might display differently if deployed before a forthcoming software update.) After the initial rollout, Amazon says it’ll begin certifying and publishing APL skills in the Alexa Skills Store.

“With APL, you have the flexibility to enhance your skill experience for different device types, control your user experience by defining where visual elements are placed on screens, and choose a variety of components available with APL that are best suited for your content,” June Lee, a data engineer at Amazon, explained in a blog post.

APL, a JSON-based design language, consists of five core elements (a minimal example document follows the list):

  • Images, text, and lists
  • Layouts, styles, and conditional expressions
  • Speech synchronization
  • Slideshows
  • Built-in intents by ordinal
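
For illustration, here is a minimal sketch of what an APL document combining these elements might look like. The overall structure (type, version, mainTemplate) and the Container, Image, and Text components come from Amazon’s APL documentation; the specific values and the image URL are placeholders:

    {
      "type": "APL",
      "version": "1.0",
      "mainTemplate": {
        "parameters": ["payload"],
        "items": [
          {
            "type": "Container",
            "direction": "column",
            "items": [
              {
                "type": "Image",
                "source": "https://example.com/forecast.png",
                "width": "100vw",
                "height": "40vh"
              },
              {
                "type": "Text",
                "text": "Sunny, high of 18",
                "fontSize": "40dp",
                "color": "#FAFAFA"
              }
            ]
          }
        ]
      }
    }

Styles and conditional (“when”) expressions attach to these same component definitions in the document.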

App designers can specify text color, size, and weight; make text and images responsive; or use both vertical and horizontal scrollbars to show continuous lists of choices. APL ships with preconfigured headers, footers, and dialog boxes, and lets developers tailor app layouts, voice responses, and other visuals to different device shapes and types. Also in tow are commands that change the audio or visual presentation of on-screen content, built-in intents that enable selection by ordinal (for example, a user can say “Select the second one” when a list is on-screen), and slideshows of images and other content.
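
As a sketch of the command mechanism, the directive below, which a skill might return while a document is on screen, asks the device to speak the contents of a component. The Alexa.Presentation.APL.ExecuteCommands directive and the SpeakItem command are documented parts of APL; the token and componentId values here are hypothetical:

    {
      "type": "Alexa.Presentation.APL.ExecuteCommands",
      "token": "forecastToken",
      "commands": [
        {
          "type": "SpeakItem",
          "componentId": "forecastText",
          "highlightMode": "line"
        }
      ]
    }

The token must match the one supplied when the document was rendered, which is how Alexa ties commands to the right on-screen content.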

Here’s how it works: developers create JSON files — “documents,” in APL parlance — that get invoked by Alexa skills and downloaded to target devices. Those devices import images and other data from the document and render the experience.
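
A skeleton of such a skill response might look like the following, assuming the standard custom-skill response envelope. The Alexa.Presentation.APL.RenderDocument directive is how a skill sends a document to the device; the speech text and datasource values are placeholders:

    {
      "version": "1.0",
      "response": {
        "outputSpeech": {
          "type": "PlainText",
          "text": "Here is today's forecast."
        },
        "directives": [
          {
            "type": "Alexa.Presentation.APL.RenderDocument",
            "token": "forecastToken",
            "document": {
              "type": "APL",
              "version": "1.0",
              "mainTemplate": { "parameters": ["payload"], "items": [] }
            },
            "datasources": {
              "payload": { "city": "Seattle", "summary": "Sunny" }
            }
          }
        ]
      }
    }

The device binds the datasources to the document’s parameters, fetches any referenced images, and renders the result.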

Skills already using APL include a CNBC stock organizer, Big Sky’s weather forecast app, a public transit schedule tracker, Kayak, and Food Network’s recipe sorter.

Notably, Facebook’s new Portal and Portal+ incorporate elements of APL for hands-free visual content; their weather forecast, shopping list, and calendar event screens were designed with Amazon’s toolkit. And Sony smart TVs and Lenovo tablets will support APL through the upcoming Alexa Smart Screen and TV Device SDK.

In October, Adobe introduced a new user interface kit for APL in Adobe XD.

