I think this was a successful study overall. The goal was to try out different toolchains and think about my overall technology palette going forward, and I believe I accomplished that. I've also managed to put together a very good list of resources, and a personalized knowledge base that is really going to help me out over the next year. The thing I want most out of my time at OCAD is to work on building my practice. I think the things I did before coming here, the things I do here, and hopefully the things I will do after being here, are part of the same mesh, and this study was helpful in drawing some of those connections for myself.
One of the things I found I'm very interested in is how the canned, marketed experience of IoT products and dev toolchains compares with the roll-your-own world, and where the two cross. How are people mixing and matching them? How are they working around built-in limits? What are they making?
There are still a lot of things that could be explored, but narrowing my focus down to the personal assistants was also pretty helpful. I'm starting to think more about how these things work as a system, and what kind of toolchain I can make for myself over the next year to help with my own projects. This study also gave me a realistic sense of what I can accomplish in just a week.
I’m interested to see where this will go in the upcoming year.
Overview: This week I wanted to make a jukebox that someone could interrupt with a particular song (in this case Smash Mouth's All Star). I set it up to run locally and made a basic Flask site with a button that changed the values on an API. I re-used some of my code from the Alexa Blender to grab that changing API value. The issues I ran into were mostly with the Feather's Music Maker wing, which seems to have trouble maintaining power when you use interrupt-based playback. I'm guessing there was something wrong with the interrupt calls, since the blocking calls to play music still worked well. After a bunch of troubleshooting I couldn't get the wing to be stable. I tried different iterations and tutorials based on using Adafruit IO and streaming radio, changing pins and resistors, a different SD card, re-soldering, etc., but the board would peter out after a bit. I even tried to re-flash the NodeMCU, thinking it might be a corrupted-memory thing, but for some reason I couldn't make a proper connection. At least I found a lot of resources on music boxes and IoT music things.
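For reference, the local setup was roughly this shape: a Flask app with a button that flips a flag the Feather can poll. This is a minimal sketch; the route names and the "interrupt" flag are placeholders, not my actual code.

```python
# Minimal Flask app: a web button flips a flag that the Feather polls.
# Route names and the "interrupt" field are illustrative placeholders.
from flask import Flask, jsonify

app = Flask(__name__)
state = {"interrupt": False}

@app.route("/")
def index():
    # Bare-bones page with a button that hits the toggle endpoint.
    return '<form action="/interrupt" method="post"><button>All Star</button></form>'

@app.route("/interrupt", methods=["POST"])
def interrupt():
    state["interrupt"] = True
    return jsonify(state)

@app.route("/status")
def status():
    # The Feather polls this on a timer and starts the song when it flips.
    return jsonify(state)

if __name__ == "__main__":
    # 0.0.0.0 so the Feather on the same network can reach the laptop.
    app.run(host="0.0.0.0", port=5000)
```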
Components: Feather Huzzah, Music Maker FeatherWing, Flask-RESTful
Things I Experimented With: Timers, playlists, SD card.
Things I Learned: Plug and play doesn't really mean that.
Future Iterations: I'd probably set this up via a BaaS, and maybe use a different board. I'd still like to figure out what happened; the user guide for the Huzzah is pretty surface-level, but if you try to troubleshoot the NodeMCU there's a rabbit hole of forums. I also think I need to find more robust ways of using a homebrew API.
Overview: I am not going to mince words here: having Alexa turn something with blades in it on and off is quite scary, namely because you tend to wonder if it's actually going to stop when you want it to stop. My first experiments were with an Arduino library that can mimic a WeMo smart plug. This means you can use something like a Feather natively with Alexa's smart home skill set. It does, however, limit you to certain phrases, actions, and responses. But if you are just doing straight-up switches, it's pretty handy. For this scenario, I built on top of week 5 and rolled my own program / server / polling situation.
In this case, Alexa won't just make you a smoothie; it has to be in the right mood. I made a base mood from a random number, which was then augmented by the weather condition. I myself get the SADs, so giving Alexa some SADs was a relatable thing. Alexa will sometimes make you a smoothie, and sometimes not, but will offer up alternative scenarios.
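The mood logic was roughly along these lines. This is a sketch: the weights, threshold, and weather categories are made up for illustration, not the exact values from my skill.

```python
# Sketch of the smoothie-mood logic: a random base mood nudged by weather.
# Weights, threshold, and weather categories are illustrative guesses.
import random

WEATHER_MODIFIER = {"clear": 2, "cloudy": -1, "rain": -2, "snow": -3}

def alexa_mood(condition):
    base = random.randint(1, 10)
    return base + WEATHER_MODIFIER.get(condition, 0)

def smoothie_response(condition):
    if alexa_mood(condition) >= 6:
        return "Sure, one smoothie coming up."
    return "I'm not feeling it today. How about a glass of water instead?"

print(smoothie_response("rain"))
```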
Most of the challenges I faced were around wading through Amazon's giant pile of services to find the right one to use, or finding documentation that answered the questions I had. If you're not used to dealing with AWS (and I'm not), it's like a tangle of brambles trying to figure out even simple things. The forums aren't much help either, but they are at least searchable. I find a lot of these devices are also pushing very canned development toolchains.
Things I Experimented With: protocols, networks, aggregation.
Things I Learned: Integrating audio clips as responses is very rigid. You still have to serve on 0.0.0.0 rather than localhost if you want ngrok to expose your computer to the network (see the snippet below).
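The binding in question, for reference; the port is arbitrary and the route is a placeholder:

```python
# Flask binds to 127.0.0.1 by default; 0.0.0.0 makes it reachable beyond
# localhost. Port and route here are placeholders.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def ping():
    return "reachable"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)  # then tunnel with: ngrok http 5000
```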
Future Iterations: I feel like this project was an iteration on Tiny Oracle from week 5. I'd like to keep working on top of that base into something similar, except with some auth on the API.
Overview: This week I really wanted to set off a horn with my brain. Unfortunately horns require something like a 10+ amp power supply, and I did not have that. I did have some 12V 5A supplies around, but they didn't cut it. So, no horn. I then figured I would just shut a TV off with my brain instead, because why not, and I had an old RoadShow lying around with a built-in VCR and radio. So I made a program that says: if I'm really into what I'm watching, shut the TV off. Which is a jerk move, but that was the point.
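The trigger logic looked roughly like this. It's a sketch assuming the Muse is streaming over OSC to the laptop; the OSC address, threshold, and serial port are stand-ins, and the relay is assumed to be on an Arduino listening over serial.

```python
# Sketch: listen for a Muse "focus" metric over OSC and, past a threshold,
# tell an Arduino over serial to click the relay and kill the TV.
# The OSC address, threshold, and port name are placeholders, not the
# exact values from my setup.
import serial
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

arduino = serial.Serial("/dev/ttyUSB0", 9600)  # placeholder port
THRESHOLD = 0.8

def on_focus(address, value):
    # If I'm really into what I'm watching, that's exactly when it goes off.
    if value > THRESHOLD:
        arduino.write(b"0")  # relay open: TV off

dispatcher = Dispatcher()
# Address varies by Muse tooling; this one is a placeholder.
dispatcher.map("/muse/elements/experimental/concentration", on_focus)

server = BlockingOSCUDPServer(("0.0.0.0", 5000), dispatcher)
server.serve_forever()
```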
I don’t know if I’d consider this an IoT project, but it did remind me that I still don’t like the Muse that much, and I’m not sure about quantified self. I did enjoy just playing around with the Arduino and the TV though. There’s something nice about the physical click of a big relay. Also maybe there’s something to combining old tech with new tech. Having the fuzzy TV around was strangely comforting and enjoyable. I don’t get the same feeling watching stuff on a computer or new TVs.
Things I Experimented With: timers, streaming data, quantified self
Things I Learned: Horns need more than 5 amps. It's pretty difficult to bypass the power switch on an old TV when the TV is a contained unit.
Future Iterations: I don't think I'd do this again, mostly because I'm not a fan of the Muse. But I would like to use the TV for something else in the future, because it's kind of nice to have as a prop.
This week I just wanted to play around with Google Maps. Once again I found that there is a Flask extension for this, but I decided instead to mess around and find some resources for using plain command-line Python. I found that the Maps API is really more like six APIs you have to turn on, which seems weird, but hey! Internet!
Anyways, I found a few good resources and put together a little program that looks for nearby places and sets them as waypoints. This way you could potentially route someone through every pizza joint in town before getting to their destination. Which I wouldn't mind, because who doesn't like pizza? I'm pretty sure within a year someone will make an app called Pizza Quest that does just this (hmmm…).
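Here's a rough sketch of the pizza-waypoint idea using the googlemaps Python client; the API key, addresses, and radius are placeholders, not my actual program.

```python
# Sketch of pizza-waypoint routing: find nearby pizza places, then ask the
# Directions API to route through all of them. Key and addresses are
# placeholders.
import googlemaps

gmaps = googlemaps.Client(key="YOUR_API_KEY")

# Find pizza joints near a starting point (placeholder lat/lng: Toronto).
places = gmaps.places_nearby(
    location=(43.6532, -79.3832), radius=2000, keyword="pizza"
)

# Note: the Directions API caps how many waypoints you can send.
waypoints = [p["geometry"]["location"] for p in places["results"]]

route = gmaps.directions(
    origin="100 McCaul St, Toronto",
    destination="1 Yonge St, Toronto",
    waypoints=waypoints,
    mode="driving",
)
print(route[0]["legs"][0]["duration"]["text"])
```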
The Places API is pretty big too. It spits out reviews, images, locations, hours, lat/long, and different kinds of address formatting. It's impressive how much is packed in there.
Overview: This week I decided to play with Amazon Alexa and Google Home, mostly just looking at tools and possible frameworks you can use with them. There's a good site called echosim.io that is a web version of Alexa and lets you test out some of your skills. Google Home lets you test right in its API console, api.ai.
Comparisons: In Alexa's case, though, I wanted to try some bot-to-bot chatting. So I made a small app for Alexa that can ask Siri some questions, which works surprisingly well as long as your phone is next to the speaker. You can indeed get a bot chat going if you so desire.
My hope with Alexa was to get it responding to wildcard utterances. But to do this you really need to be able to set a default response, which, oddly, you cannot do in Alexa. There's a lot of chatter on the forums about wanting a default response, and also about wanting Amazon to expose Alexa's confidence rating, which can influence which response Alexa gives. Whether the devs will do it remains to be seen. Amazon does want Alexa to be able to chat, but it's pretty early, and the limitations are noticeable.
Google Home, meanwhile, feels more set up for random conversations. Not only does it have some built-in items for small talk, but I found the workflow using Flask-Assistant better than Flask-Ask, even though they are very similar in their use of decorators. The big thing: Flask-Assistant has an auto-schema generator for making JSON, which is great. For small things it's fine to make JSON manually, but when you start getting into larger things, having something auto-generate and format your stuff is very helpful. I also seemed to be able to jump into templates faster with Flask-Assistant. Google's web sim isn't as good as Amazon's, but it does have the option of just typing things in, which is nice if you're working in public and don't want to be talking to your computer.
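To show how similar the two decorator styles are, here's a minimal Flask-Assistant handler from the api.ai-era API; the intent name is a placeholder, and a Flask-Ask handler with @ask.intent(...) looks nearly identical.

```python
# Minimal Flask-Assistant action, to show the decorator style. The intent
# name "greeting" is a placeholder.
from flask import Flask
from flask_assistant import Assistant, ask

app = Flask(__name__)
assist = Assistant(app, route="/")

@assist.action("greeting")
def greet():
    return ask("Hi there. What should we talk about?")

if __name__ == "__main__":
    app.run(debug=True)
```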
It is good to note that both these bots are effective in their own ways, depending on the access they have to your various accounts, which runs pretty deep. So the worries around surveillance are quite legitimate. I don't know if I would keep one active in my home if I weren't specifically using it for a project. But then again, we did get used to phones pretty quickly.
Conclusions: It's too early to tell who's going to come out on top of the bot race. I think that if you're into doing weird stuff, or want to play around with strange contexts right now, Google Home is your best bet. But that could change depending on what Alexa comes out with over the next year. I would also suggest picking up the physical speaker, as it does add to the "experience". Seeing as we're so used to phones, extending that behaviour to a speaker isn't that far-fetched.
Of Note: The decorator setup in Python was interesting. It's the same setup my friend Jon came up with when we were developing txtr in 2014, which lets you do choose-your-own-adventure via SMS.
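The pattern itself is simple and worth sketching: a decorator registers handlers by name, then a dispatcher routes incoming messages to them. Everything here is illustrative, not the actual txtr code.

```python
# The decorator-registry pattern behind the Flask-* libraries and txtr:
# a decorator maps a keyword to a handler, and a dispatcher routes to it.
handlers = {}

def on(keyword):
    def register(func):
        handlers[keyword] = func
        return func
    return register

@on("look")
def look():
    return "You are in a dim room. Exits: north."

@on("north")
def north():
    return "You step into a hallway lined with old TVs."

def dispatch(message):
    # Route an incoming SMS body to the matching handler.
    handler = handlers.get(message.strip().lower())
    return handler() if handler else "I don't understand that."

print(dispatch("look"))
```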
Future Iterations: I'd really like to get a Siri / Home thing going, or make something that only chatters errors. I think having a bot that only tells you things in cryptic error-speak would be pretty funny.
Overview: Tiny Oracle was an idea I had a few months ago. It's based around aggregating news / traffic / weather / tweets etc. into one place, and then generating how a city "feels" based on that data. I didn't get to the data part this week, but I did make some good progress with the hardware, and even started writing my own REST API to handle requests. The results, despite it just being a tech test, were whimsical.
System: This took a bit of a twisty road. Basically I started out thinking I would make a Python-based bot that aggregated all my feed data, parsed it, then tossed some commands over to a BaaS provider like PubNub or Adafruit IO. But I once again ran into the issue with the Feather and its MQTT library: there's something buried in it that prevents it from being totally non-blocking. I knew the Particle library didn't have this issue, so I started digging around in it to see if I could re-mod it for the Feather. But it looked like the Particle library was just doing GET requests. So I started looking for some info on REST APIs and decided I didn't need a pub/sub setup. I found some tutorials, downloaded Flask (which I like and am familiar with), and started making a basic API to pass on some JSON. The Feather makes a request on a timer, grabs the JSON the server serves up, then prints out a message and does a little display based on it. I found it pretty straightforward to build this little server / client system. It doesn't have any security, so it's only for local testing, but I would like to make a version that does. And I would like to keep rolling my own solutions going forward.
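The server side of that system is only a few lines with Flask-RESTful. This is a sketch; the resource name and the mood payload are placeholders for whatever the oracle ends up serving.

```python
# Sketch of the Tiny Oracle server: one Flask-RESTful resource serving JSON
# for the Feather to poll. Resource name and payload are placeholders.
from flask import Flask
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)

class Mood(Resource):
    def get(self):
        # Eventually this would be generated from the aggregated feed data.
        return {"mood": "restless", "message": "The city hums tonight."}

api.add_resource(Mood, "/mood")

if __name__ == "__main__":
    # 0.0.0.0 so the Feather on the local network can poll it.
    app.run(host="0.0.0.0", port=5000)
```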
Things I Experimented With: Flask-RESTful, requests, protocols.
Things I Learned: A lot about basic REST APIs! And that you really don't have to send things to "my butt" (the cloud) if they don't have to go there. Most of the MQTT libraries for the various BaaS providers (PubNub, Adafruit) seem to have some kind of blocking issue in the subscribe function. Also, a lot of data can still just be grabbed with plain GET requests.
Future Iterations: I really want to get this to the point where it's parsing different data feeds to build its feelings. I also want to keep building a little REST API for more of my personal projects.
Overview: This week I wanted to do a project focused on parsing data out of an API and doing something with it. I decided weather would be the Path Of Least Resistance, since everyone has a weather project out there somewhere and the documentation for weather APIs is pretty solid. I also wanted to explore some of Adafruit's FeatherWings.
System: For this project I wanted to stay away from BaaS providers. You can do a version of this by mashing up IFTTT and Adafruit IO or the like, but I wanted instead to combine all my parsing / data grabbing / behaviour on one board. The Feather is just an ESP8266 on a breakout, which means it can do TCP calls and UDP, act as a wee little web server, or just be a gateway. You can drill down into the functionality using various libraries that are compatible with Arduino. For this setup I decided to parse some JSON directly from the Weather Underground API. The servos themselves run on a repeating timer for a set amount of time, so they are not on constantly. After that they stop, and the Feather sits in a delay mode for a while before grabbing the next weather conditions and triggering the routine all over again.
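The request-and-parse half of that flow, sketched in Python for clarity (on the Feather it's Arduino C++, but the shape is the same). The API key, location, and condition-to-routine mapping are placeholders, and this assumes the old Weather Underground "conditions" endpoint.

```python
# Sketch of the weather-grab half of the servo piece. Key, location, and
# the condition-to-routine mapping are illustrative placeholders; the old
# Weather Underground conditions endpoint has since been retired.
import requests

URL = "http://api.wunderground.com/api/YOUR_KEY/conditions/q/ON/Toronto.json"

data = requests.get(URL).json()
obs = data["current_observation"]
condition, temp_c = obs["weather"], obs["temp_c"]

# Pick a servo routine based on the current condition.
routines = {"Rain": "slow_wave", "Clear": "big_sweep", "Snow": "shiver"}
print(condition, temp_c, routines.get(condition, "idle"))
```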
Things I Experimented With: Parsing, Libraries, Mapping, Timing, APIs
Things I Learned: There are some library conflicts between ESP8266WiFi and Wire.h, insofar as you need to declare the PWM before you start doing any pin-setting logic. It could be a timer issue (though the PWM board apparently has its own timer), or it could be a bit of a pin mash. I should have been using a 5V 10A power supply, but I didn't, so that might also have contributed to some funny behaviour. I also need some extra barrel-jack adaptors in various sizes.
Of Note: You need little delays for the ESP8266 to be able to process JSON. If you don't have them, it can sometimes just fail to grab things.
Future Iterations: The sound servos make is pretty nice/annoying. I think working more with the timing and figuring out how to make something like this more of an audio piece might be fun, along with more work on system timing and run time.