
How to use the AutoVoice Tasker plug-in

Update: This guide is no longer relevant, as I have released a more extensive, more up-to-date guide for AutoVoice, located here.

It hasn’t been long since AutoVoice came out, but I think the home automation video from a couple of days ago really opened many people’s eyes to what the plug-in can do. AutoVoice has a lot of advantages over the stock Get Voice action in Tasker, but it also uses a completely different system for getting things done, which can make it confusing to use. As such, here’s a quick guide.

Get Voice vs AutoVoice

Setting aside AutoVoice features like Bluetooth headset support for a second, it’s important to understand how AutoVoice differs from the Get Voice action that comes with Tasker.

Get Voice works on a very simple principle: use speech-to-text, shove the entire result into the %VOICE variable, and move on. That’s all it does. That means it’s up to the user to take that variable and do something with it, which normally means a lot of If conditions and Variable Splits to dig out the information you actually need. My Nelly voice assistant is a perfect example: it needs a ton of If conditions to make it work.
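
To illustrate the pattern, a Get Voice based task might look roughly like the outline below. The command phrases and task names are made up, and the point is just how quickly the If conditions pile up:

Task: Voice Command (Get Voice style)
    Get Voice
    If [ %VOICE ~ *lights* ]
        Perform Task [ Name: Toggle Lights ]
    End If
    If [ %VOICE ~ *weather* ]
        Perform Task [ Name: Read Weather ]
    End If
    If [ %VOICE ~ *play* ]
        Variable Split [ Name: %VOICE, Splitter: play ]   (to fish out the song name)
        Perform Task [ Name: Play Music ]
    End If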

AutoVoice, on the other hand, is built from the ground up to be used for voice control. You have an action that initiates voice recognition (the AutoVoice Recognize action), and then you have profile contexts that trigger based on the response (the AutoVoice Recognized context). This means that in most cases, AutoVoice Recognize is the last action in a task that uses voice control, and the flow instead “continues” in a new task tied to a profile that triggers based on the response.

Furthermore, AutoVoice gives you a lot more options for accessing the data it got from speech recognition. First off, there’s %avcomm, which contains everything, the same way %VOICE does. Then you have %avcommnofilter, which is everything except the trigger phrase/word. Finally, you have %avword1, %avword2, %avword3, and so on, which contain the individual words.

As an example, let’s say you speak the phrase “hello how are you today”. You have a profile set up with Command Filter “hello”, which means it will trigger when it hears the word “hello”. The above variables will then be as follows:

%avcomm: hello how are you today
%avcommnofilter: how are you today
%avword1: hello
%avword2: how
%avword3: are
%avword4: you
%avword5: today
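
Tied together, a minimal AutoVoice setup for that example could look something like the outline below. The profile and task names are made up, and the exact field labels in the plugin may differ slightly from what I write here:

Task: Start Listening
    AutoVoice Recognize

Profile: Hello
    Event: AutoVoice Recognized [ Command Filter: hello ]
Enter Task: Greet Back
    Say [ Text: You said %avcommnofilter ]

Run Start Listening however you like (a widget, a shortcut, a headset button), say “hello how are you today”, and the Hello profile picks it up and responds with everything after the trigger word.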

Command ID

One of the big advantages of AutoVoice is the Command ID system. This allows you to limit profiles to only triggering after other profiles, allowing you to create chains of commands.

For instance, you might have a task that ends with asking you a yes/no question and then triggers an AutoVoice Recognize action to let you respond. The problem is that “yes” and “no” are very non-specific, and you might end up with a dozen tasks that ask for a yes or no. If you just added profiles that activated based on “yes” and “no”, all of the ones for “yes” would trigger at once!

This is what Command ID is there to prevent. When adding the AutoVoice Recognized context, you can specify a Command ID and a Last Command ID. If you specify something in the Last Command ID field, the profile only activates if the last profile that activated had that same value as its Command ID. The video below shows this, and more, in action.
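
To make the chain concrete, here’s a rough sketch of a yes/no confirmation built with Command ID and Last Command ID. Everything named here is made up for the example:

Profile: Reboot Question
    Event: AutoVoice Recognized [ Command Filter: reboot, Command ID: reboot_question ]
Enter Task: Ask For Confirmation
    Say [ Text: Do you really want to reboot? ]
    AutoVoice Recognize

Profile: Reboot Confirmed
    Event: AutoVoice Recognized [ Command Filter: yes, Last Command ID: reboot_question ]
Enter Task: Reboot Now
    (whatever actually performs the reboot goes here)

Because Reboot Confirmed requires the Last Command ID reboot_question, a stray “yes” in any other conversation won’t set it off, and other yes/no questions can safely use their own Command IDs.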

It’s also important to be aware of the AutoVoice Set Cmd Id action. This can be used to manipulate Command IDs without doing it directly in the profile. There are two options:

Clear Last Command ID

This clears the Command ID for the last profile that ran. If profile 1 has the Command ID “hello”, and profile 2 has Last Command ID “hello”, that means that profile 2 will only run after profile 1. However, if you use the Clear Last Command ID action after running profile 1, profile 2 won’t run, because the Command ID set by profile 1 will have been cleared.

Set Last Command Id

This allows you to set a Command ID without having a profile do it. Using the example above, if you used this option with “hello”, it would allow profile 2 to run regardless of whether profile 1 has run.
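
As a sketch, a task could open up the reboot confirmation from earlier on its own, without going through the Reboot Question profile, by setting the Command ID manually. The task names are made up, and I’m assuming the action’s configuration roughly mirrors the option names above:

Task: Ask Without The Question Profile
    AutoVoice Set Cmd Id [ Set Last Command Id: reboot_question ]
    Say [ Text: Do you really want to reboot? ]
    AutoVoice Recognize

Task: Cancel The Confirmation
    AutoVoice Set Cmd Id [ Clear Last Command ID ]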

Home automation example

The home automation video posted by Doug Gregory is very impressive, but the setup behind it is actually not hard to create. I haven’t seen his exact profiles, but here’s how I assume he did it.

In the video, you can see him use all sorts of highly dynamic commands. He uses different phrases, mentions several appliances in one command, and so on and so forth. It looks difficult to set up, but it actually uses very simple logic.

The system assumes that the user is not brain dead. If you mention a specific lamp, it’s extremely likely that you want to do the exact opposite of what’s currently happening. For instance, if the lamp is on, it’s fairly likely that you’re telling the system to turn it off, as few people would stand there saying “turn on the lamp” when it’s already on. As such, the system doesn’t actually need to pay attention to whether you say “on”, “off”, or any synonyms (kill, disable, enable, activate). It only needs to toggle the lamp whenever a reference to it is made.

So, let’s say you want to control the “bar lights”. All you then do is create an AutoVoice Recognized profile, and specify “bar lights” in the Command Filter field. This is then tied to an action to toggle the bar lights using whatever home automation system is being used.

The result is that the bar lights will seem to react to extremely dynamic commands, like “I don’t want to see the bar lights anymore, please make them go away”. In reality, the system simply picks up “bar lights” and sends a command to toggle them. Unless you do something like tell it to turn them on when they’re already on, or happen to mention that you need to buy new bar lights, it will seem like the system is more intelligent than it is.

By creating profiles like that for more appliances, you can mention several in one sentence and have them all trigger. If you create similar profiles for the kitchen and living room lights, you could for instance say “turn off the bar lights and the living room lights, and turn on the kitchen lights”, and all three individual profiles would trigger, send their toggle commands, and it would look like magic.
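
A sketch of two such profiles is below. The toggle actions are placeholders, since the actual command depends entirely on which home automation system is in use; I’m using an HTTP Get to a made-up local address to stand in for it:

Profile: Bar Lights
    Event: AutoVoice Recognized [ Command Filter: bar lights ]
Enter Task: Toggle Bar Lights
    HTTP Get [ Server:Port: 192.168.1.50:8080, Path: toggle/bar_lights ]   (placeholder, replace with your system’s own command)

Profile: Kitchen Lights
    Event: AutoVoice Recognized [ Command Filter: kitchen lights ]
Enter Task: Toggle Kitchen Lights
    HTTP Get [ Server:Port: 192.168.1.50:8080, Path: toggle/kitchen_lights ]   (placeholder)

Say a sentence that mentions both “bar lights” and “kitchen lights” after an AutoVoice Recognize, and both profiles fire from the same recognition.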

Of course, there are examples in the video that are a bit more complicated, like telling the system he’s home or telling it to shut everything down. Those are essentially standalone profiles with tasks that do everything in one go, rather than activating lots of individual profiles. By using Off commands rather than Toggle commands, it’s possible to make sure everything actually turns off regardless of the current state, instead of just inverting it.

The point is that the video makes it appear as though the voice assistant is beyond intelligent, understanding absolutely everything he says perfectly, no matter how he phrases it. In reality, the system has simply been set up to react to very specific keywords and ignore everything else.

Other features of AutoVoice

AutoVoice has more features than the ones I’ve covered in detail above, but I think the basic system of how everything fits together is the hardest part to grasp. As such, I’ll just briefly mention the rest.

AutoVoice is designed to work with headsets, which Get Voice isn’t. There are a lot of features relating to this, from options here and there to separate contexts and actions. There are two AutoVoice BT Pressed contexts, which can be used to react to button presses on a Bluetooth headset. You basically put whatever you want the button to do in the task attached to the profile, and off you go. You can of course combine this with other contexts to create situation-aware button functionality.
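
For instance, a profile along these lines (a sketch, with a made-up task name) would start voice recognition whenever the headset button is pressed:

Profile: Headset Button Listens
    Event: AutoVoice BT Pressed
Enter Task: Start Listening From Headset
    AutoVoice Recognize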

You also have a context called AutoVoice Rec Failed. This lets you specify a task that runs when voice recognition fails, so a misheard word doesn’t ruin your entire chain. In most cases, it makes sense to tie this to a simple re-triggering of the AutoVoice Recognize action, giving you another chance.
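
A sketch of that retry setup (names made up; you may want to add a counter variable so a noisy room doesn’t loop it forever):

Profile: Recognition Retry
    Event: AutoVoice Rec Failed
Enter Task: Try Again
    Say [ Text: Sorry, say that again ]
    AutoVoice Recognize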

Finally, you have the AutoVoice Ctrl BT action. This routes all sound to your Bluetooth headset, and you could for instance create a profile that activates it when the Bluetooth device connects.
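
Such a profile could look roughly like this, using Tasker’s BT Connected state as the trigger (the device name is made up):

Profile: Audio To Headset
    State: BT Connected [ Name: My Headset ]
Enter Task: Route Audio
    AutoVoice Ctrl BT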

AutoVoice is a great plug-in, and it’s essentially Get Voice, Extended Edition. Some might get by with just Get Voice, while others will truly put AutoVoice’s additional features to good use. You can do a lot with it, from creating home automation systems, to controlling music, to telling XBMC to play the latest episode of a show. The sky’s the limit.


Download: Google Play



Andreas Ødegård

Andreas Ødegård is more interested in aftermarket (and user created) software and hardware than chasing the latest gadgets. His day job as a teacher keeps him interested in education tech and takes up most of his time.
