LightBlog

Tuesday, January 24, 2017

XDA Spotlight: Connect Third-Party APIs to Google Assistant using the Voice Assistant Webhook

Some owners of the Google Home may feel a bit disappointed at its lack of native features, but others such as myself are holding onto hope that third-party developers will be able to plug any holes in its functionality. We're excited to see the work some developers such as João Dias have put into supporting Google Assistant, but unfortunately the project is left in limbo while Google takes their sweet time inspecting it for approval.

Fortunately, though, Mr. Dias has something else to share that should cause some of you to start salivating. He has recently created an easy way to build a webhook to API.AI that handles third-party APIs – dubbed the Voice Assistant Webhook. If you'll recall, API.AI is the service that powers natural language voice interactions for any third-party services integrating with Google Assistant, allowing developers to respond to user queries in a rich, conversational manner. Thanks to the Voice Assistant Webhook, however, any developer can easily start integrating any available API with Google Assistant.

In the video shown above, Mr. Dias asks his Google Home for information related to his Spotify account, YouTube channel, and Google Fit data. None of the commands he sent to the Google Home are natively supported on the device, but he was able to hook into each service's publicly available API to extract the information he wanted. This is possible thanks to one of Mr. Dias's more popular Tasker plug-ins: AutoVoice.

The AutoVoice application (which requires you to join the beta version here before you can access Google Home related features) allows you to create voice actions that react to complex voice queries through either its Google Now intercepting accessibility service or the Natural Language API (powered by API.AI). Now, Mr. Dias is further extending AutoVoice's capabilities by letting you send any voice data intercepted from Google Now (or captured via any AutoVoice voice dialog prompt) straight to your backend server, where a Python script queries the third-party API and sends the response back to AutoVoice.


Voice Assistant Webhook – In Summary

Let's break down the general setup process so things make more sense. Setup is fairly simple, provided you are able to follow all of the instructions outlined on the GitHub page, but do remember that this is still beta software and that the plug-in structure is not final.

When you activate Google Now or start an AutoVoice prompt, AutoVoice recognizes your speech and sends it to API.AI for matching. The power of API.AI is that it translates the everyday language of your speech into the precise command, with parameters, that the web service requires. The command and any parameters that were set up in API.AI are then sent to the web service and executed by a Python web application. The web application responds to the command with the results of the query, which are converted into natural language text through API.AI and sent back to your device. Finally, the output is spoken using AutoVoice on your device.
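To make the flow more concrete, below is a minimal sketch of the kind of webhook API.AI talks to – my own illustration, not code from Mr. Dias's project. API.AI (in its v1 API, current as of this writing) POSTs a JSON body whose result object carries the matched action and parameters, and expects a JSON reply with speech and displayText fields:

```python
# Minimal illustrative API.AI (v1) webhook in Flask -- an assumption about
# the general shape, not the actual Voice Assistant Webhook code.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json(silent=True, force=True)
    # API.AI places the matched action and any parameters under "result".
    action = req["result"]["action"]
    params = req["result"].get("parameters", {})

    # A real handler would dispatch on the action and call a third-party API.
    speech = "You triggered the action '%s'." % action

    # API.AI reads back "speech" (spoken) and "displayText" (displayed).
    return jsonify({"speech": speech, "displayText": speech})

if __name__ == "__main__":
    app.run(port=5000)
```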

The process sounds much more complicated than it really is, and although I had a few hiccups getting my own webhook set up, the developer João Dias was very quick to respond to my inquiries. I will walk through the steps to set this up yourself at the end of the article, for those who want to give it a try.

What does this mean overall, though? It means developers have an easy way to integrate Google Now/Assistant with any third-party API they would like. This was already possible before, but Mr. Dias has made the whole process a lot simpler to develop for.


Voice Assistant Webhook – Uses

Basically any existing API can be hooked into this framework with minimal coding – an exciting prospect! You could, for example, get your stock updates or the latest sports results, hook into Marvel Comics, get information on Star Wars ships and characters, or tap one of the online craft beer APIs to fetch beer recipes! On a more practical note, both Fitbit and Jawbone have existing APIs, so you could hook into those and have your fitness data read back to you. The possible uses are limited only by your imagination and a sprinkling of work.

After talking to Mr. Dias about the potential of this software, he mentioned that he has already submitted his application plugins to both Amazon and Google, which will allow AutoVoice to hook directly into Google Assistant and Alexa. Mr. Dias said he is waiting on both companies to approve his plugins, so unfortunately, until that happens, you won't be able to enjoy running your own commands through such convenient mediums. But once approval is received, you can get started on making your own real-world 'Jarvis' home automation system.


Voice Assistant Webhook – Tutorial

The following is an explanation of how to get the project up and running if you would like to try this out yourself. For this walk-through we will use a basic flow in which we say "Hello I am (your name)" as the command, and in turn the response will say "hello" and return your name.

Setting up Heroku

The first thing you must do is set up a backend server (a free Heroku account will work, or your own local machine). The fastest way to set this all up is to go to the GitHub project page and click to deploy the project directly to Heroku. Make sure that you install PostgreSQL as well as all the other dependencies linked in the instructions on Heroku!
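If you are wondering where PostgreSQL fits in: when the add-on is attached, Heroku exposes the database's connection string through the DATABASE_URL environment variable, which the deployed app can read at startup. Here is a minimal sketch assuming the common psycopg2 driver – the project's own database code may differ:

```python
# Minimal sketch: connecting to Heroku's provisioned PostgreSQL database.
# Heroku injects DATABASE_URL automatically when the Postgres add-on is attached.
import os
import psycopg2  # pip install psycopg2-binary

def get_connection():
    # The URL has the form postgres://user:password@host:port/dbname
    return psycopg2.connect(os.environ["DATABASE_URL"])
```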

Setting up API.AI

Then, create an account with API.AI. You will need to test that all of the backend Python code is functioning properly before we mess with AutoVoice. Go to API.AI and add in your webhook URL; this allows API.AI to communicate with the Heroku app we just deployed. Once you have created your "Agent", as API.AI calls it, go to the settings of the agent and note the Client Access and Developer Access Keys. Then, go to the Intents section and create a new intent called "Hello World". Under the "User says" section you can type anything, but I suggest "Hello World" as this is the command you will speak to your device. Next, under "Action" type EXACTLY helloworld – this is the action that is called on our Heroku application.

Mr. Dias has already created an action for us to use that will respond with "Hello world", and this text must match the Heroku application exactly. Finally, at the bottom of the page under the "Fulfillment" heading there is a checkbox called "Use Webhook." Make sure this is checked, as this option tells API.AI to pass the action to your Heroku app rather than trying to resolve the command itself. Remember to "Save" the new intent using the save button.
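The key detail here is that the Action string in API.AI and the handler on the web app must match character for character. As an illustration only (the actual repository may structure this differently), you can picture the backend as a dispatch table keyed on that string:

```python
# Illustrative dispatch table -- the action string sent by API.AI must match
# a key here exactly, which is why "helloworld" has to be typed verbatim.
def helloworld(parameters):
    return "Hello World!"

ACTIONS = {"helloworld": helloworld}

def handle(action, parameters):
    handler = ACTIONS.get(action)
    if handler is None:
        return "Unknown action: %s" % action
    return handler(parameters)
```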

Now we can test this by using the "Try it Now…" panel on the right. You can either click the microphone and say "Hello World" or type "hello world" in. Under the response portion you should see "Hello World!" – this is coming from our Heroku application. I have noticed that free Heroku accounts put the web service to sleep after 30 minutes of inactivity, so I have sometimes had to send commands twice to get the correct response.
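You can run the same test from outside the browser as well. Assuming API.AI's standard v1 query endpoint (the token and session ID below are placeholders), a short script sends the query and prints the fulfilled response:

```python
# Hedged example: querying an API.AI agent over its v1 REST API.
# Replace CLIENT_ACCESS_TOKEN with the Client Access Key noted earlier.
import requests

CLIENT_ACCESS_TOKEN = "YOUR_CLIENT_ACCESS_KEY"

resp = requests.post(
    "https://api.api.ai/v1/query?v=20150910",
    headers={
        "Authorization": "Bearer " + CLIENT_ACCESS_TOKEN,
        "Content-Type": "application/json",
    },
    json={"query": "Hello World", "lang": "en", "sessionId": "test-session"},
)
# The agent's reply (filled in by the Heroku webhook) is under result.fulfillment.
print(resp.json()["result"]["fulfillment"]["speech"])
```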

Setting up AutoVoice

On your phone, you will need to install the latest beta version of AutoVoice (and enable its Accessibility Service, if you want commands to be intercepted from Google Now). Open the application and tap on "Natural Language" and then "Setup Natural Language." This will take you to a screen where you need to enter the Client Access and Developer Access Keys you saved from API.AI. Enter both of those and follow the prompts that are displayed. The application will verify your tokens and then return you to the first screen.

Tap on "Commands" and you will be able to create a new command. Note that AutoVoice will use your access tokens and download any intents that you have already created, so you should see our "Hello World" example we just setup. AutoVoice may also prompt you to import some basic commands if you want; you can play with these just to see how it all works. Moving on, we are going to create a command that will speak our name back to us when we say the phrase "Hello I am xxx" where xxx is your name.

Click on the big "+" in the "Natural Language Intents" screen and the Build AutoVoice Commands screen is displayed. First, type in the command you want to say to execute the backend script we set up – in this case, "Hello I am xxx". Next, long press on the word "xxx" (your name) and in the popup box you will see an option to "Create Variable." A Google voice prompt appears where you can speak your variable name, which in this case should be just "name". You will see that $name is added where your name used to be. There is no need to enter a response here, as this part is handled by the Heroku web service. Click "finished" and give your intent a name. Lastly, an Action prompt is displayed where you must enter the command EXACTLY as it is defined on your web app (helloname).

This matches how we tested API.AI. AutoVoice will update API.AI for you, so you do not have to use API.AI for creating any new commands in the future. There is a small problem that I have noticed on the version I tested – the checkbox we ticked under Fulfillment is not checked automatically when we create a command, so we need to go back to API.AI and make sure that the "Use Webhook" checkbox is marked. This will likely be fixed very shortly, though, as Mr. Dias is prompt in responding to feedback.

Now you can try out your new command. Start up the Google Now voice prompt (or create a shortcut to the AutoVoice Natural Language voice prompt) and say "Hello I am xxx", which should shortly return "Hello to you too xxx!"
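On the backend side, the helloname handler presumably just reads the name parameter that API.AI extracted from your phrase and builds the greeting. A hedged sketch, since the actual handler in the repository may differ:

```python
# Illustrative "helloname" handler: API.AI extracts $name from
# "Hello I am xxx" and passes it along in the webhook's parameters.
def helloname(parameters):
    name = parameters.get("name", "stranger")
    return "Hello to you too %s!" % name
```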


I know the entire setup is a bit awkward (and likely out of reach for non-developers), but Mr. Dias states he is working on streamlining this process as much as possible. I personally feel, though, that it's a great start and quite polished for beta software. As I noted earlier, Mr. Dias is waiting on both Google and Amazon to approve his plugins so that this will work seamlessly with Google Assistant and Amazon Alexa as well. Soon, you will be able to connect these two home assistant platforms with any publicly available third-party API on the Internet!

Many thanks to João Dias for helping us throughout the article!



from xda-developers http://ift.tt/2jt7yK8
via IFTTT

Five Icon Packs You Can’t Find on the Play Store

In this video we check out some of the best icon packs that you can't find on the Google Play Store. While most of the icon packs on the Play Store focus more on the size of their catalog, these icon packs are focused on detail and design. Here are the icons mentioned in this video.

M'Flat

Download

Metrix

Download

Cordyceps

Download

Convy

Download

Compacticons

Download



from xda-developers http://ift.tt/2j9vHV3
via IFTTT

Monday, January 23, 2017

LG G6 Picture Leaks Before February 26th Launch

The Verge has obtained what are reportedly photos of LG's upcoming and highly anticipated flagship, the LG G6. Following a spate of recent leaks about the device, as well as confirmation from LG executives that the G6 would feature an 18:9 1440p LCD panel, we may now have the first official render of the device.

If the render is indeed official, it would appear that LG has taken a sharp turn away from the design missteps that haunted the G5. The device appears to feature an aluminum frame and beveled edges, as well as a display that may sport minimal bezels. In the fashion of Xiaomi's Mi Mix and its Sharp-manufactured display, the G6 may also include a display with the Mix's eye-catching rounded corners and impressive screen-to-body ratio.

Given LG's numerous missteps in the mobile realm and its general inability to produce a profit, the G6 marks an important step for the future of the company's mobile business. With The Verge also reporting that it will now be released on February 26th rather than March 11th, we won't have to wait long to find out where LG has taken its G-series. Moreover, the early release date could put the device at an advantage against its yearly South Korean competitor, as the Galaxy S8 is reportedly launching later than usual.


Source: The Verge




from xda-developers http://ift.tt/2jkBmeh
via IFTTT

Hugo Barra to Depart Xiaomi, Return to Silicon Valley

Hugo Barra, Vice President of Xiaomi's International division, announced in a Facebook post that he was moving on from Xiaomi and heading back to Silicon Valley "to embark on future adventures."

Before his work at Xiaomi, Mr. Barra began his own voice recognition company with fellow graduates from MIT. It was soon acquired, and he eventually found himself working for Google as a Product Manager on Google's Mobile team, later being promoted to Vice President. In that role, he worked from 2010 to 2013 to guide Android's development from Honeycomb to KitKat, while also helping ship the generally beloved Nexus 4, 5, and 7. He was also a central figure in the development of Google Now, which has since evolved into Google Assistant.

While at Xiaomi, Mr. Barra adopted a similar role, again working to develop and release high-quality devices. However, he also focused heavily on affordability and strove to break into new markets. As Vice President, he worked with Xiaomi to release devices like the MiPad and Redmi lines, and coordinated Xiaomi's impressively successful expansion into India and Latin America, stating that the company had turned the dream of an India branch into one of the fastest-growing players in the country's mobile marketplace. Mr. Barra also acted as the company's main presenter and has been the face of nearly all Xiaomi product releases, displaying a characteristic charm that many have grown to enjoy in a realm of Chinese product reveals that can often walk a fine line between technology demonstration and magical realism.

He will undoubtedly be missed by the burgeoning tech company, having provided it an air of legitimacy when he came aboard in 2013, in some ways even becoming the face of Xiaomi to Western Android enthusiasts. Nevertheless, executives like co-founder Bin Lin expressed support, and the company will retain Mr. Barra as an adviser. Moving forward, Senior Vice President of Strategic Cooperation Xiang Wang will take over Mr. Barra's role; as a former senior Qualcomm executive, he should leave Xiaomi's International division in good hands.

Given Mr. Barra's cryptic reference to future adventures in Silicon Valley, many – XDA included – will be eagerly waiting to see what future ventures he has in store.


Source: Barra's Facebook



from xda-developers http://ift.tt/2jVlpg0
via IFTTT

Battery Failures May Delay Release of Samsung Galaxy S8

As previously explained, Samsung revealed the exact causes of the battery failures that ultimately led to the Note 7's global recall in a press conference on January 22nd. While it was likely one of the most expensive consumer recalls in history, Samsung still expects its Q4 2016 earnings to reach a three-year high, defying expectations. Given the impressive transparency Samsung demonstrated in its press conference and its likely continuing profitability, the company is clearly prepared to shrug off the Note 7 failure and move forward.

Following the press conference, Samsung's Mobile division President Koh Dong-jin answered several questions about the company's near future. Intriguingly, he revealed that Samsung is not currently planning to unveil the Galaxy S8 at Barcelona's Mobile World Congress, set to commence in just over a month. With a wounded reputation and soaring expectations for its follow-up devices, Mr. Koh acknowledged that Samsung is taking a deeply introspective look at its culture and practices. He went on to say that, at the moment, Samsung is more focused on repairing the damage the Note 7 has caused the company, as well as the internal factors that may have led to its consecutive failures.

On top of internal changes, Samsung has also stated that it has yet to decide whether it will reuse any parts from the recalled Note 7 devices. With nearly 3 million devices recovered in an undertaking that will likely cost upwards of $5.3 billion, there are hard choices to be made. While the thought of nearly 3 million highly capable Exynos 8890s being tossed aside might leave us at XDA wiping away our tears, the consequences for consumer perception may end up being a powerful deterrent to reusing any parts from the Note 7.

Only time will tell, and it looks like those of us eagerly anticipating a potentially bezel-less Galaxy S8 will have to wait a bit longer than normal for it to be revealed – arguably a worthy trade-off if it gives Samsung the time it needs to ensure that the Note 7 remains an isolated incident of the past.



from xda-developers http://ift.tt/2j6FjQq
via IFTTT

Android Instant Apps Starts Initial (Limited) Live Testing

At last year's Google I/O, Google previewed Android's "Instant Apps", a project that would allow users to effectively "stream" applications via partial code downloads. With Instant Apps, Google attempted to minimize installation friction and allow developers to reach wider audiences through the web.

Instant Apps are essentially deeplinks you can trigger from, for example, a Google Search result – instead of linking you to the specific website, though, one can take you into the company's application, right into the instance matching the web result. Instant Apps can deeplink straight to the relevant Android activities because your phone downloads only the code necessary to display that activity, compartmentalized in a module dictated by Google's guidelines. After the app is split into modules, only the relevant components get downloaded and executed, allowing the user to accomplish their task – say, looking at a recipe or purchasing a product – in fewer taps and with the better UX a polished application can offer.

Moreover, Google's demos showed that users would be able to use proper seamless payments and authentication via Android Pay and Google services, including access to location, identity, and Firebase. The Instant App instance also offers a shortcut to download the full application if the user is pleased with the experience, and Instant Apps were said to work with Android versions reaching back to Jelly Bean.

While the Instant Apps feature is a very interesting proposition, Google did say that it would open up testing in 2017, having decided to start with a small set of developers and show those who were interested how to set up their apps to work with Instant Apps (Google claimed it could be less than a day's work, but it hasn't released the SDK yet). We hadn't heard much else about Android Instant Apps since then, but today we finally have news regarding the ambitious feature: according to the Android Developers Blog, Android Instant Apps has begun initial live testing!

Google tells us that starting today, a small number of applications will be available via Instant Apps to Android users in a limited test, including many of the apps we saw demoed in videos and screenshots – BuzzFeed, Wish, Periscope, and Viki. Google is hoping to collect user feedback and iterate on the product to expand the service to more apps and more users.

There are some important steps to prepare your app for Instant Apps support: you'll need to modularize your app so it can be downloaded and run on the fly, using the same Android APIs and Android Studio project, though Google says the full SDK will only become available in the coming months.


Source: Android Developers Blog



from xda-developers http://ift.tt/2jiNRXT
via IFTTT

Google Voice Finally Gets Updated with New App Design, Crucial IM Features

Google Voice was an ambitious project through which Google aimed to give people a phone number of their own, anywhere and at any time, on any device. The actual apps have stagnated for nearly five years, though – a contributing factor that led many to think the service was slowly being abandoned.

Today, Google is bringing a fresh coat of paint and a much-needed set of features to Google Voice's apps on Android and iOS, as well as its web client. The new Google Voice apps are much cleaner, refreshed for 2017 with a new aesthetic to match the rest of Google's IM repertoire. The design is more intuitive as well, with separate tabs for text messages (in properly labelled threads), calls, and voicemails. Conversations stay in one continuous thread, and the messaging experience supports group and photo MMS as well as in-notification quick replies. There's also voicemail transcription for Spanish, with accuracy said to improve over time.

Going forward, Google says it is committed to supporting Google Voice and providing new updates and features to these apps, including RCS messaging according to The Verge. While there may be no reason for you to switch away from Hangouts, the dedicated apps are no longer gimped in a way that forces you to rely on Hangouts for the IM functionality you'd expect of any 2017 messaging app.

It's nice to see that Google Voice, a service many people at XDA are fond of, is making a return with a facelift in a dedicated app that was in dire need of an update. It's certainly a very strong and flexible alternative to traditional phone numbers, and while this strengthens yet another one of Google's IM apps (thus perpetuating the messaging mess the company got itself into), it at least gives Voice users a better way to communicate.

Check out Google Voice on the Play Store; according to Google, the update should be available to anyone in the coming weeks.

Via: The Verge Source: Google Blog



from xda-developers http://ift.tt/2jpB7fz
via IFTTT