Philips Hue Alexa Naming Scheme


Yes, how you name your Philips Hue lights is highly important when you are connecting them to a smart assistant (with voice commands). Get things wrong and you end up with your Amazon Echo constantly spouting off “I’m sorry, the Master Bedroom doesn’t support that.” You could also get plagued with your devices randomly becoming unresponsive with Alexa. Oh, fun. Nothing pisses you off more than your lights just not working. I know, first world problems, right?

If you are just using the Hue app with no voice assistant, you can get away with naming them whatever you want because it doesn’t really matter. Once voice commands are added, the names of the rooms or groups you have them in, as well as the names of the individual lights, matter greatly. Not only will your smart assistant constantly just not fucking work with your lights, you may also get tired of saying a particular phrase for a light, or it may turn out you never actually called that light that name to begin with. Spend some time thinking about light names, seriously.

In my previous home I had a shit ton of Hue lights. When you have that many lights they end up becoming numbered: “Master Bedroom One,” “Living Room 3,” etc. Then when you have the rooms/groups of “Master Bedroom” or “Office,” the lights with numbers in their names won’t ever conflict with each other or with their groups/rooms. Now that I am solo and in an apartment I only have a small handful of lights, so I don’t need to number them anymore. But now I was getting conflicts with my names.

After my most recent struggles I eventually traced the problem back to the names I had chosen. I thought I had picked them simply and perfectly, because I honestly do not have that many lights in my apartment. As it turns out, the biggest thing causing constant strife with my Echos and lights was the names I had chosen for the Echos themselves, along with the same conflicting names for the rooms they are in. For example, I have an Echo Dot in my master bedroom which I had named simply “Master Bedroom,” because I was only thinking of my Echos at the time and not in conjunction with the light names; “Master Bedroom,” “Office,” and “Kitchen.” Well, I had also made some rooms in the Hue app with the same names of “Master Bedroom,” “Office,” and “Kitchen.” Same names as the Echo names, damn it. The lights worked initially when I set them up, but after a few hours or a day or so they all stopped and my Alexa started rebelling.


The Simple Fix

So there is a simple fix, which, I hate to admit, took me hours to figure out in the middle of the night. I ended up renaming the Echos first: “Master Bedroom Echo,” “Office Echo,” and “Kitchen Echo,” to separate them from the names of the groups/rooms for the lights. I also figured out that I was getting multiple devices with the same names in my Alexa app because of the scheme I had originally chosen. So I removed all groups and rooms from the Hue app. I don’t use the Hue app much as it is; it’s primarily just the Echos with the lights, or Apple HomeKit. So my Hue app has just one group now, called “All Hue Lights,” so the lights are easy to get to in the Hue app if I ever need to use it. Then in the Amazon Alexa app I did a device discovery and it pulled in the one group and all the lights and Echos. Then in the Alexa app I created the groups “Master Bedroom,” “Office,” and “Kitchen.” No more conflicts now; the lights all work and there are no more unresponsive remarks from Alexa or the apps.

Finally the shit works.


https://www.reddit.com/r/amazonecho/comments/v98igr/naming_convention_for_rooms_hue_and_just_making/

https://www.homeagenius.sg/blog/tips-on-naming-your-smart-devices-for-better-voice-control

Alexa, turn this shit off

The title pretty much says it all. We were sitting in bed late one night and the wife yelled “Alexa, turn this shit off!” Yet nothing happened. Well, that’s just not good enough. The next morning I opened Node-Red on the house server. I added a new Wemo Emulator node and named it “this shit”. I then connected the new node to the house lights via Hue nodes. Bam! Now whenever we yell “Alexa, turn this shit off!” at the Echo, we get greeted with “OK” and all the lights shut off! I couldn’t be happier. Now the wife laughs every night when I say it, go figure.

(on the flip side I can now tell her to “turn this shit on” and all the lights will turn on)

No fancy scripts needed: just install the Wemo Emulation node and connect it to whatever you want to control. Then tell Alexa to “discover devices” and you should be set.
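If whatever you connect it to wants something other than the raw on/off the emulator sends, a tiny function node in between does the translation. Here is a minimal sketch, assuming the emulator outputs 1/0 and the downstream Hue node is happy with a simple true/false payload (check the docs of whichever Hue node you installed):

```javascript
// Node-RED function node (sketch): translate the Wemo Emulator output
// into a boolean for the downstream node.
// Assumptions: the emulator sends msg.payload = 1 or 0 (or "on"/"off"),
// and the Hue node you use accepts a true/false payload.
msg.payload = (msg.payload === 1 || msg.payload === "on");
return msg;
```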

Custom Alexa Node-Red Skill (revisited)


Finally! I am surprised it didn’t even take as long as it usually does for me to figure shit out. Took a few weeks. As usual, I did not come up with the solution on my own but found it out on the web and slapped it all together. I wanted a custom Alexa Node-Red skill, to be able to take a command given to Alexa and have it read back data from one of my sensors. Things like temperature sensors, water level, etc. I wanted to be able to ask Alexa what the values are. What I got: exactly what I wanted. It all works. There are two parts to this: the Node-Red flow and the Alexa skill.

Alexa Node-Red flow

First off, to get any of this working you must have your Node-Red server accessible from the outside world. That means port forwarding, DNS, domains, SSL, all that. It’s fun getting it all working. Not. Just like in my previous post, I happened to have it already set up. Once your Node-Red install is available from the web you are good to go. You don’t need the entire NR setup opened up either; I just allowed a few NR-served pages to be available, not the entire NR itself.

Update: I made a new post about Node-Red behind a reverse-proxy/SSL

Let’s Begin

It starts with a regular HTTP node wired to a switch node. That switch node splits up Alexa’s requests to NR: LaunchRequest, IntentRequest, and SessionEndedRequest. LaunchRequest gets invoked when the skill starts; you could have Alexa say “Hello, what do you want?” for example. IntentRequest is the goods. Then there’s SessionEndedRequest, which I’m assuming gets called at the end; I haven’t toyed with it. Then you pass those requests off to do other stuff, like the DoCommand branch where it grabs your intent. Then a function node extracts the commands, which get passed to another switch node that splits up the possible commands you can give Alexa. Give her as many commands as you want; there is also a “device doesn’t exist” branch at the bottom, used if she didn’t hear you right or the device doesn’t exist. All that data gets passed to a template node to format what Alexa will say and stick the data in JSON. Bam! That wasn’t so hard, right?
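To make the “extracts the commands” step concrete, here is a rough sketch of what that function node can look like. The slot name “command” is my assumption; use whatever slot name you put in your intent schema.

```javascript
// Node-RED function node (sketch): pull the spoken phrase out of the
// IntentRequest JSON that Alexa POSTs to the HTTP-in endpoint.
// Assumption: the intent is named DoCommand and its slot is named "command".
var req = msg.payload.request;
if (req && req.type === "IntentRequest" &&
    req.intent && req.intent.slots && req.intent.slots.command) {
    // e.g. "garage temperature" lands on msg.command for the next switch node
    msg.command = req.intent.slots.command.value.toLowerCase();
} else {
    msg.command = "unknown";
}
return msg;
```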

Here’s the whole flow (all standard nodes used):

That’s the Node-Red half. You are not done yet; on to the Alexa skills half. This part is easy, don’t worry. Log in to your Amazon Dashboard and click Alexa. Choose “Get Started” under the Alexa Skills Kit, then click to add a new skill. Under Skill Information, give it a name and choose the invocation word, which is what you will say to Alexa to start your skill. I chose “Node Red”, so I have to say “Alexa, ask Node Red….” These can be changed at any time, it seems. You won’t be publishing this skill; it stays in beta for only you to use. For the Global Fields section: no, you will not be using an audio player. Well, maybe you will, but I didn’t, and it will probably change things for you.

Note about the flow: the NR flow works (for me) just fine, however I noticed it throws an error in the debug tab whenever a command is called. If it is an unrecognized-command response it doesn’t throw the error, though. It complains about headers already being sent. I will update the flow if I find a fix for it.

Interaction Model

Intent Schema


This is the part of the Alexa skill where you tell it what to do. It is pretty straightforward. Just copy your schema into the “Intent Schema” box. There are no custom slot types and no values to enter.
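The screenshot isn’t reproduced here, but the general shape of an old-style intent schema for this setup is a single intent with one free-text slot. Treat this as a sketch: the intent name DoCommand comes from the flow, while the slot name “command” and the AMAZON.LITERAL type are my assumptions, so keep them consistent with your sample utterance and your flow.

```json
{
  "intents": [
    {
      "intent": "DoCommand",
      "slots": [
        { "name": "command", "type": "AMAZON.LITERAL" }
      ]
    }
  ]
}
```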

Sample Utterances


This is where you list the invocation phrases that will activate Alexa. Normally (and in other online tutorials for Alexa skills) this is where you would add a ton of different phrases. But we are not; Node-Red is going to handle that side for us. This box just gets one line of text.
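Since the screenshot isn’t shown, the single line is just the intent name followed by an example phrase mapped to the slot, in the old literal-slot syntax. Assuming the names from the schema sketch above, it would look something like:

```
DoCommand {garage temperature|command}
```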

Configuration

Global Fields/Endpoint

For the service endpoint you are going to pick “HTTPS”. In a lot of other tutorials you would usually choose AWS Lambda, but we are doing all of our own heavy lifting with NR. We don’t need no stinking Lambda. Choose your closest location and enter the URL that your Node-Red is accessible from (via the web, remember). Say no to account linking, and you can also leave Permissions alone.

SSL Certificate

Certificate for Endpoint

Choose the option that best describes you. Most likely it will be the first option. For me, I am using a subdomain that is already SSL’d with Let’s Encrypt, so I chose the second option.

Test

Basically just leave the toggle flipped to enable the skill for you to use. You don’t need to do anything else on this page.

Publishing Information

Nothing to do here; you won’t be publishing this skill. Why? Because it requires too much setup on the user’s behalf. I don’t think Amazon would approve a half-functioning skill that requires advanced user setup to get working. You could always try. Good luck.

Privacy & Compliance

Three no’s and one box to check. I mean, as long as it all applies to you, right? 😉

Done

That should be it. With Node-Red available to the web, the flow implemented, and the new Alexa skill you just made, you should be good to go. I hope you found this useful; I sure wish I had found a blog post like this. Now go test it out with your Amazon Echo/Dot!

At the time of this writing a beta product appeared in the Amazon Dashboard for the “Skill Builder”; it looks to be a new UI for building Alexa skills. If this gets rolled out to everyone in the future, things may be different than they are described in this blog post.


Revisited

Originally posted April 26, 2017 @ 17:56

I decided to come back to this post. I was adding and modifying some things in my flow, and while using this post as a reference I decided it was not cutting the mustard; it felt unfinished. So here we are.

The above section contains the main flow you will need, and it steps you through the Amazon Developer side of things that needs to be set up. Once that is all finished you should have a working flow with Alexa responding accordingly. What I feel I left out was how to configure the flow itself. Many of you may have already figured it out or can see what’s going on, and that’s cool. Here’s for the ones that need the help (myself included).

The flow

(Screenshots: voice requests, the Request node with your questions, the Question function with the global request, and Alexa’s verbal response template.)

HTTP node

The first node in the flow is the HTTP input node. This defines the page you will point your reverse proxy to: the page that Node-Red will serve and that Amazon/Alexa will look for. This page needs to be accessible from outside your network.

Request Type

This contains the types of requests Alexa can send to us. We are only worried about IntentRequest right now. Play with the others later.

Intents & Extract Commands

There is no need to modify these nodes. They contain the code needed to handle the incoming requests.

Request

Add your verbal question here. Add it the way Alexa hears it. This may take a little trial and error depending on how you talk and how she hears you. Sometimes you can simply put exactly what you are going to say in there; “garage temperature” works fine for me, but if I ask Alexa what the outside temperature is, she doesn’t know and I get the unrecognized-question response. This is OK though: pay attention to the output of the unrecognized responses. It will spit out what Alexa heard and how she heard it. When I ask what the outside temperature is, she hears “i outside temperature” for some unknown reason. If I modify my request to match that exact phrase, it works.

Function node

This function is very simple. It just grabs the global variable name you put in it. So when I ask for the “garage temperature” it looks for the global I have specified, in this case context.global.garagetemperature. Add whatever global you want. Just make sure it is initialized first (has some data to report).
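As a concrete sketch (the variable names are just the ones from my garage example; swap in your own), something else in your flows has to have stored the value first, and this function simply reads it back and hands it to the response template:

```javascript
// Node-RED function node (sketch): read a previously-stored global value
// and hand it to the response template as msg.payload.
// Some other flow must set it first, e.g. from an MQTT-in node:
//   context.global.garagetemperature = msg.payload;  // older-style globals
//   global.set("garagetemperature", msg.payload);    // newer get/set style
var temp = context.global.garagetemperature;  // or: global.get("garagetemperature")
msg.payload = (temp !== undefined) ? temp : "unknown";
return msg;
```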

Formatting Alexa’s response

This template node contains the information for Alexa’s verbal reply. Once the basic structure is there, all you do is edit the “text” to the response you want to hear, in plain English. She will respond with exactly what you type in there.
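If it helps to see the shape of it, the template node just wraps your text in the standard Alexa response JSON, with a mustache placeholder where the value from the function node gets dropped in. A minimal sketch (the wording is obviously yours to change):

```json
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "The garage temperature is {{payload}} degrees."
    },
    "shouldEndSession": true
  }
}
```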

JSON & HTTP Response

Move along, nothing to see here. The data gets formatted as JSON and the HTTP response node completes the whole flow.

Done.


There you go. I feel better now. I at least explained WTF is going on here rather than dumping the flow on you and walking away. Sorry about that. Now you can run off and play with Alexa and Node-Red to your heart’s content. The only thing I have noticed with the whole flow is that after Alexa responds, Node-Red throws an error in the debug log. It all works 100% and works well, but it always throws this error and I haven’t figured it out. I also haven’t really been looking for a fix. Just a little FYI.

"Error: Can't set headers after they are sent."


This is where I found the goodness, buried deep in the comments on an (awesome) blog.

Alexa, turn on Red Alert!

Red Alert

I love Star Trek (any good sci-fi really), and who doesn’t? I also have an Amazon Echo and I have been making my home smarter and adding automation where I can; very sci-fi-ish, yeah? The other day I thought to myself, how cool would it be if I could activate a “red alert”? Well, I do have the Dot and I do have Philips Hue bulbs. I do have a home server and I do have Raspberry Pis. So I decided I wanted this feature and set out to get it done. As it turns out I am not the only one that wanted to be able to do this. After Googling I found a few other people that went through a similar process, although I have not seen anyone do it the way I did. The ones I found mostly accomplished it with a Google Home Assistant, and some used Node-Red. Hey, I have Node-Red. I guess the Home Assistant can play audio files. I have found a few pages on Alexa doing this recently, but I have not gotten into making skills yet. All the pages I saw also seemed to rely on an outside service of some kind (besides the voice assistant). They pulled the audio from the web or used IFTTT (which I hate) to do something. I don’t want that. I like to be as self-contained as possible. Here’s what I did.

I have my home server set up with NR; it takes in all the MQTT in the house and does all the NR handling for the house. Then I have a Raspberry Pi (that also sits on top of the server); it has a temperature sensor on it and it handles the audio portion of the red alert. Since it is sitting there I also have it monitor the server, and the server monitors the Pi.


Yes, I have a server and a Pi, and yes, they both run NR. Why do I not just use the server instead of the Pi? The server doesn’t have a sound card and I don’t have an extra one. So yeah.


Flow 1

This flow is where I make the color changes. I use the Wemo Emulator node to create a device Alexa can discover, which also lets me choose my own trigger word. That node outputs a 1 or a 0 (on or off). I pass that to a function that contains the Hue bulb color and activates the alert pulsing, and this all flows to the Hue bulb and out via MQTT. In the flow I only have one Hue bulb connected; I have since connected the red alert to all the bulbs I have (currently 4 colored ones).
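My exact function isn’t included here, but as a sketch of the idea: on a 1 the bulb gets slammed to red with the Hue “lselect” alert (the bulb’s built-in breathe/flash effect, which runs for roughly 15 seconds), and on a 0 the effect is cancelled. The payload shape below is an assumption, since it depends on which Hue node or MQTT topic you are actually driving:

```javascript
// Node-RED function node (sketch): turn the Wemo Emulator's 1/0 into a
// red-alert state for a Hue bulb. The payload shape is an assumption;
// adapt it to whatever Hue node (or MQTT bridge) you actually use.
if (msg.payload === 1 || msg.payload === "on") {
    // full red, full brightness; "lselect" is Hue's long breathe/flash effect
    msg.payload = { on: true, hue: 0, sat: 254, bri: 254, alert: "lselect" };
} else {
    // cancel the effect and settle back to a plain white-ish state
    msg.payload = { on: true, sat: 0, bri: 254, alert: "none" };
}
return msg;
```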

Flow 2

Here is where I activate the audio. I initially tried omxplayer. I found a shell script that looped the audio, but it gave me issues when trying to kill the process: it only worked the first time, and the processes didn’t die completely. I want to be able to stop the red alert too, so I continued searching. I came across a post in the NR Google group where someone had created a flow for playing a sound on motion detection. I was able to take the kill command he used (killall, duh) to stop my flow. I also used the player he used (mpg123) because it has looping options built in. Sweetness.
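Here is roughly how the start/stop routing can look if you drive two exec nodes from the emulator’s 1/0. This is a sketch, not the flow from that Google-groups post; the commands and the file path in the comments are my assumptions, so point them at your own sound file:

```javascript
// Node-RED function node (sketch) configured with TWO outputs.
// Output 1 feeds an exec node that starts the looping player, e.g.:
//   mpg123 --loop -1 /home/pi/sounds/redalert.mp3   (use spawn mode)
// Output 2 feeds an exec node that stops it:
//   killall mpg123
if (msg.payload === 1 || msg.payload === "on") {
    return [msg, null];  // start the alert sound
}
return [null, msg];      // kill the player
```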

I am now able to tell Alexa to turn on a red alert and have her stop it as well. This doesn’t use IFTTT or rely on any other outside source (besides Alexa). It does not require an Alexa skill either. I have future plans to make a custom skill for this so I can change the phrase from “turn on red alert” to something more comfortable like “activate red alert”.

Onward to the flows!

Flow for the red alert lights:

Flow for the red alert audio:

(You may notice that the flows are different from the images, I cleaned up the flow before exporting and pasting the code.)

UPDATE 8/29/2017: I added updated code/flows so that you can disable the Red Alert lights. Previously a “stop node red” command would silence the sounds but not cancel the flashing lights. With the new update the sounds stop and the lights stop flashing and turn white. Still working on getting the lights to default to white after the Red Alert times out. 


UPDATED RED ALERT STOP FLOW 8/29/2017

Stop Red Alert


Some pages I found helpful.

Where I got the red alert sound
http://trekcore.com/audio/

Play Ambient audio on motion detection
https://groups.google.com/forum/#!searchin/node-red/audio|sort:relevance/node-red/vwQq8Plk0Zg/6DV5ZYMRCAAJ

The code for the ambient audio
https://github.com/natcl/exporail_video_player

Google Home Assistant and an RPi with video
https://www.youtube.com/watch?v=7j3QQlc_efY

Automated Keyboard Light with Alexa

Since I have been fiddling with Alexa, I was able to get a light working with Wemo emulation, both on the Raspberry Pi and on the ESP itself. I am mostly using the ESP with Fauxmo to act as physical devices. The Wemo emulation being done on the Pi is for running a bunch of scripts with MQTT or (hopefully in the future) gettin’ data from sensors and such. Still trying to find a way to get Alexa to read back whatever I give her from MQTT; that would be righteous. But for now I have an automated keyboard light with Alexa.

(TLDR: Made a keyboard light on an ESP with a relay that emulates a Wemo plug and is voice activated by Alexa. Skip to the bottom for the code I used.)

Anywho, I replaced my old keyboard light switch, which was made out of an old telephone biscuit jack and a toggle switch. I upgraded: I can now voice-activate my keyboard light with Alexa. Man, I’m lazy, and man, that is cool. Not the lazy part, the keyboard light. I do have to admit this was not my first attempt at this build; I tried two times before I finally got it right. The first two times I was trying to use 2N2222 and 2N3904 transistors and neither would work right for me. I was able to get it all working on the breadboard just fine, but as soon as I transferred it to a PCB it failed. I think the problem is with the transistor; from my measurements it keeps leaking 12V back through the base, and I don’t know enough about electronics to figure it out yet. Obviously, I tried twice.

So the third time I used the pre-made modules I have: a 5V relay module. I put together a small PCB for the ESP and a DC-DC converter, and added some pins so I could use jumper wires to attach to the relay. I soldered the power to a barrel jack, hooked up a toggle switch, and connected it to the relay, so if I flip the switch it bypasses the relay and I get light manually. Always good to have a backup; the switch will work with or without the ESP plugged in. I plugged it in and bam! It worked. I gave Alexa a few commands and on and off the relay clicked. Beautiful.

Then……it failed; it started flickering the relay. It took me a minute to figure it out: I forgot the current-limiting resistor on GPIO2 for the relay. Oops. That’s an easy fix, luckily. The green jumper wire in the pictures goes to the pin header from GPIO2, so all I had to do was remove the jumper wire and replace it with a 1K ohm resistor. Easy. It was getting late, so I turned it off and removed the ESP. The next day I go over to my computer and I can smell the lovely aroma of burnt electronics. Fuck. I look down and I can see the DC-DC converter sparking on the underside of the PCB. Turned out to be a bad solder job on my part. Since my liver transplant I have to take a shitload of pills, and some of those pills cause my hands to shake. Sometimes it’s not so bad and other times it’s ridiculous. I guess they were shaking more than I thought that night.

So I had to rebuild the whole thing. Again. Live and learn. This time I was sure to leave extra space in my solder routing, just in case. The Mark IV has been up and running with zero problems for two days now; I think I worked out the kinks. And it is awesome to be able to sit down and tell Alexa to turn on my computer room and keyboard lights. Hell, with Node-Red I could even WOL my computer!

Now behold, pictures…
