Conclusions

I think this was, overall, a successful study. The goal was to try out different toolchains and think about my overall technology palette going forward, and I believe I accomplished that. I've also managed to put together a very good list of resources and a personalized knowledge base, which is really going to help me out over the next year. The thing I want most out of my time at OCAD is to work on building my practice. I think the things I did before coming here, the things I do here, and hopefully the things I will do after being here, are all part of the same mesh, and this study was helpful in drawing some of those connections for myself.

One of the things I found I'm very interested in is how the canned, marketed experience of using IoT products and dev toolchains compares to the roll-your-own world, and where the two cross. How are people mixing and matching them? How are they working around built-in limits? What are they making?

There are still a lot of things that could be explored, but narrowing down to a focus on the Personal Assistants was also pretty helpful. I'm starting to think more about how these things work in a system, and what kind of toolchain I can build for myself over the next year to help with my own projects. This study also gave me a sense of what I can realistically accomplish in just a week.

I’m interested to see where this will go in the upcoming year.

Week 11 – Alexa Blender

Resources

Overview: I am not going to mince words here: having Alexa turn something with blades in it on and off is quite scary, mainly because you tend to wonder if it's actually going to stop when you want it to stop. My first experiments were with an Arduino library that can mimic a WeMo smart plug, which means you can use something like a Feather natively with Alexa's smart home skill set. It does, however, limit you to certain phrases, actions, and responses. But if you are just doing straight-up switches, it's pretty handy. For this scenario, I built on top of Week 5 and rolled my own program / server / polling situation.
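
For reference, the WeMo-emulation route from those first experiments looks roughly like the sketch below. I'm assuming the fauxmoESP library here (one Arduino library that does this trick); the device name, relay pin, and Wi-Fi credentials are placeholders rather than my actual values.

// Minimal WeMo-emulation sketch for a Feather Huzzah, assuming fauxmoESP
// (which also needs ESPAsyncTCP on the ESP8266). All names and pins below
// are placeholders.
#include <ESP8266WiFi.h>
#include <fauxmoESP.h>

#define RELAY_PIN 12              // e.g. driving a PowerSwitch Tail

fauxmoESP fauxmo;

void setup() {
  pinMode(RELAY_PIN, OUTPUT);
  WiFi.begin("my-ssid", "my-password");
  while (WiFi.status() != WL_CONNECTED) delay(250);

  fauxmo.createServer(true);      // use the library's internal TCP server
  fauxmo.setPort(80);             // required for newer Echo devices
  fauxmo.enable(true);
  fauxmo.addDevice("blender");    // "Alexa, turn on the blender"

  fauxmo.onSetState([](unsigned char id, const char* name, bool state, unsigned char value) {
    digitalWrite(RELAY_PIN, state ? HIGH : LOW);
  });
}

void loop() {
  fauxmo.handle();
}

With something like this the Feather shows up in a normal Alexa device discovery, and you get on/off and not much else, which is the limitation I mentioned.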

In this case, Alexa won't just make you a smoothie; it has to be in the right mood. I made a base mood from a random number, which was then adjusted by the weather condition. I get the SADs myself, so giving Alexa some SADs was a relatable thing. Alexa will sometimes make you a smoothie and sometimes not, but it will offer up alternative scenarios when it refuses.
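
Stripped of the weather-API plumbing, the mood check boils down to something like the sketch below; the weather categories, thresholds, and adjustments are stand-ins, not the values I actually used.

// Rough shape of the mood logic (Arduino-style; numbers are placeholders).
// A random base mood gets nudged by the current weather, so gloomy weather
// gives Alexa the SADs, and the smoothie only happens above a threshold.
// (Call randomSeed() once in setup() so the base mood actually varies.)
enum Weather { SUNNY, CLOUDY, RAIN, SNOW };

bool willMakeSmoothie(Weather w) {
  int mood = random(0, 101);          // base mood, 0-100
  switch (w) {
    case SUNNY:  mood += 15; break;
    case CLOUDY: mood -= 10; break;
    case RAIN:   mood -= 20; break;
    case SNOW:   mood -= 25; break;
  }
  return mood >= 50;                  // below this, offer an alternative instead
}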

https://youtu.be/eqV6y5zxrbk

Most of the challenges I faced were around wading through Amazon's giant pile of services to find the right one to use, or finding documentation that answered the questions I had. If you're not used to dealing with AWS (and I'm not), it's like picking through a tangle of brambles just to figure out simple things. The forums aren't much help either, but they are at least searchable. I find a lot of these devices are also pushing very canned user development toolchains.

Components: Echo Dot, Feather Huzzah, NeoPixel FeatherWing, PowerSwitch Tail, a blender

Things I Experimented With: protocols, networks, aggregation.

Things I Learned: Integrating audio clips as responses is very rigid. And if you want to expose your computer to the network with ngrok, you still have to reference localhost as 0.0.0.0.
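
To make that last point concrete, the sketch below is roughly what the Feather side of a polling setup like mine looks like; the URL, port, and Wi-Fi credentials are placeholders, and the server on the computer is assumed to be bound to 0.0.0.0 so that devices on the network can actually reach it.

// Feather-side polling sketch (URL, port, and credentials are placeholders).
#include <ESP8266WiFi.h>
#include <ESP8266HTTPClient.h>

const char* WIFI_SSID = "my-ssid";
const char* WIFI_PASS = "my-password";
// The computer's LAN address. The server there is assumed to listen on
// 0.0.0.0 rather than 127.0.0.1, so the board can reach it over the network.
const char* MOOD_URL  = "http://192.168.1.20:8080/mood";

void setup() {
  Serial.begin(115200);
  WiFi.begin(WIFI_SSID, WIFI_PASS);
  while (WiFi.status() != WL_CONNECTED) delay(250);
}

void loop() {
  WiFiClient client;
  HTTPClient http;
  if (http.begin(client, MOOD_URL)) {
    int code = http.GET();
    if (code == HTTP_CODE_OK) {
      Serial.println(http.getString());  // e.g. "smoothie" or "no smoothie"
    }
    http.end();
  }
  delay(5000);                           // poll every few seconds
}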

Future Iterations: I feel like this project was an iteration on Tiny Oracle from Week 5. I'd like to keep working on top of that base toward something similar, except with some auth on the API.