Robot voice control

I’ve set up a new mic and used cvoicecontrol (with some bug fixes) for voice control, integrating it into my C HAL layer. Each voice model needs to be trained and saved, but once that’s done it can be reused from code. For example, if (listen(“yesno”)) { … } is all that’s required to listen for a yes or a no, assuming that “yesno.cvc” has been trained in advance.

I’ve also wired the clap switch into one of the Phidgets digital inputs. The hardware is configured so that two claps generate one output toggle, and the software then requires two toggles within 8 seconds of each other; this seems the best way to stop other noise from triggering it. The result is that two sets of two claps are required to activate the voice control.
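
The activation logic ends up along these lines (a rough sketch: clap_toggled() and the return convention of listen() are illustrative stand-ins for my HAL wrappers, not cvoicecontrol’s own API):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Illustrative HAL wrappers: clap_toggled() returns 1 when the two-clap
 * hardware filter produces a new output toggle on the Phidgets input;
 * listen() runs cvoicecontrol against a pre-trained .cvc model and is
 * assumed here to return nonzero for "yes". */
int clap_toggled(void);
int listen(const char *model);

void voice_control_loop(void)
{
    time_t first = 0; /* time of the first toggle; 0 = no window open */

    for (;;) {
        if (!clap_toggled()) {
            usleep(50 * 1000); /* poll the digital input at ~20 Hz */
            continue;
        }
        if (first != 0 && time(NULL) - first <= 8) {
            /* Second toggle within 8 seconds: four claps in total,
             * so hand over to voice recognition. */
            first = 0;
            if (listen("yesno"))
                puts("heard: yes");
            else
                puts("heard: no");
        } else {
            first = time(NULL); /* first toggle opens the 8-second window */
        }
    }
}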

There are other potential ways of triggering voice control. Given a Bluetooth-enabled phone’s MAC address, it’s possible to l2ping it without any pairing needed. We can then tell its signal strength. I’ve already written a sample tool that speaks “hello” and “goodbye” as a person’s mobile phone gets closer to or further from the robot. Probably slightly too annoying on an ongoing basis, hence the two sets of two claps.
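
In sketch form, that tool looks something like this (the MAC address is a placeholder and speak() is an illustrative text-to-speech wrapper; for brevity it keys off l2ping reachability rather than measured signal strength):

#include <stdlib.h>
#include <unistd.h>

void speak(const char *text); /* illustrative text-to-speech wrapper */

int main(void)
{
    /* Placeholder MAC; no pairing is needed, but l2ping usually needs root. */
    const char *cmd = "l2ping -c 1 -t 1 AA:BB:CC:DD:EE:FF >/dev/null 2>&1";
    int was_here = 0;

    for (;;) {
        int here = (system(cmd) == 0); /* exit status 0 = phone answered */

        if (here && !was_here)
            speak("hello");   /* phone has come into range */
        else if (!here && was_here)
            speak("goodbye"); /* phone has gone out of range */

        was_here = here;
        sleep(2); /* poll every couple of seconds */
    }
}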
