28 Sep 16

AIML + Python + Python Libraries = I can talk with my chatbot

It's been a while...

...and I've recently decided to continue to work with AIML.

Why? Python. My newest programming love. I shunned it at first. I mean... you have to maintain the same number of spaces?? That seemed ridiculous when I first heard about it so many years ago (I'm a very old-school programmer). But I've been forced to use it, and now that I have... it's so easy, which translates directly into less time needed to get something done. Time, these days, is my most precious resource.

Which is why, when I had an idea for a chatbot that involved speaking and listening to it, I wondered how easy or difficult it would be to do in Python. After a few minutes of Googling, I saw that the three components I needed to make this happen all existed: PyAIML, a speech recognition library, and a text-to-speech library.

How long did it take to put it all together so I could speak to Zoe?

Less than an hour. And it only took that long because I kept getting interrupted by two impatient guys wondering where dinner was... I mean, my loving family.

It was so easy, I almost didn't think it was worth blogging about. But I figured that some might need this guide, so here it is: how to talk to your chatbot. All you need is some AIML (version 1.0.1... the PyAIML I'm using doesn't (yet) support AIML 2.0).
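If you've never seen AIML before, a category is just a pattern (what you say) paired with a template (what the bot says back). Here's a made-up example of my own, not from any standard set:

<?xml version="1.0" encoding="UTF-8"?>
<aiml version="1.0.1">
    <!-- One category: if I say "hello Zoe", she greets me back -->
    <category>
        <pattern>HELLO ZOE</pattern>
        <template>Hello! Nice to hear your voice.</template>
    </category>
</aiml>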

Install the following:

pip install aiml
pip install SpeechRecognition
pip install PyAudio
pip install pyttsx
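Before wiring everything together, it doesn't hurt to check that the libraries actually import. This is just my own quick sanity test, nothing official:

import aiml
import speech_recognition as sr
import pyaudio
import pyttsx

# If any of these imports fail, revisit the corresponding pip install above
print("All four libraries imported fine")
print("SpeechRecognition version: " + sr.__version__)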

A couple of notes... first, if you need more help with PyAIML, look to this very helpful post: http://www.devdungeon.com/content/ai-chat-bot-python-aiml

If you don't already have it, you might need pywin32 from SourceForge: https://sourceforge.net/projects/pywin32/

And then here's what your Python file looks like with it all put together:

import aiml
import speech_recognition as sr
import pyttsx
import os

# Create the kernel and learn AIML files
kernel = aiml.Kernel()
if os.path.isfile("bot_brain.brn"):
    kernel.bootstrap(brainFile="bot_brain.brn")
else:
    kernel.bootstrap(learnFiles="zoe-startup.xml", commands="load zoe")
    kernel.saveBrain("bot_brain.brn")

# Start the TTS engine
engine = pyttsx.init('sapi5')
voices = engine.getProperty('voices')

# Create the speech recognizer
r = sr.Recognizer()

# Press CTRL-C to break this loop
while True:
    # Obtain audio from the microphone
    with sr.Microphone() as source:
        print("Say something!")
        audio = r.listen(source)
    # Send the audio off to Google's speech recognizer
    try:
        myinput = r.recognize_google(audio)
    except sr.UnknownValueError:
        print("Google Speech Recognition could not understand audio")
        continue
    except sr.RequestError as e:
        print("Could not request results from Google Speech Recognition service; {0}".format(e))
        continue
    print("You said: " + myinput)
    if myinput == "exit":
        exit()
    # Get Zoe's response
    zoes_response = kernel.respond(myinput)
    print("Zoe said: " + zoes_response)
    # Use the female voice and have Zoe say the response
    engine.setProperty('voice', voices[1].id)
    engine.say(zoes_response)
    engine.runAndWait()


Of course, you'll need your own startup.xml file and corresponding AIML files (refer back to that helpful post I mentioned on PyAIML). And I chose the female voice when I set voices[1].id. On my Windows machine, pyttsx only has one male and one female voice to start with.
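For reference, here's roughly what a minimal zoe-startup.xml could look like, so the "load zoe" command in the bootstrap call above has something to match. The file it learns (zoe-basic.aiml here) is just a placeholder name for wherever you keep categories like the greeting example earlier:

<?xml version="1.0" encoding="UTF-8"?>
<aiml version="1.0.1">
    <!-- zoe-startup.xml: matched by the "load zoe" command in kernel.bootstrap() -->
    <category>
        <pattern>LOAD ZOE</pattern>
        <template>
            <!-- Load the file(s) holding Zoe's actual categories -->
            <learn>zoe-basic.aiml</learn>
        </template>
    </category>
</aiml>

And if you're not sure which voice index is the female one on your machine, a quick loop over pyttsx's voices will tell you (index 1 happened to be female for me; yours may differ):

import pyttsx

engine = pyttsx.init('sapi5')
for i, voice in enumerate(engine.getProperty('voices')):
    # Prints each voice's index and name so you can pick the one you want
    print("%d: %s" % (i, voice.name))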

Happy chatting!

Xkcd captured Python perfectly:  https://xkcd.com/353/

26 Aug 14

Musings on Ray Kurzweil, Moore's Law, and the not-so-far-off future

I write science fiction. I was with my critique group the other night and one of the gentlemen critiquing my work was concerned with the date I chose for the setting of one of my stories. It was not just the date, but the date combined with the fact that the technology I was positing didn't seem advanced enough.

He cited Ray Kurzweil and Kurzweil's predictions about the integration of non-biological intelligence with human intelligence, and he was emphatic enough about it that I decided I needed to re-listen to Kurzweil's TED Talks.

I did that today.

I like Kurzweil. I really do. But... I think he goes a little too far. At the center of his talks is Moore's Law. Kurzweil emphatically (and correctly) points out how well Moore's Law has held up over the years, and argues that it will therefore continue to hold. He also likes to apply the concept of exponential growth to anything digital. Sure. No issue there.

But he makes some unfair and inconsistent extrapolations when he mentions intelligence and our ability to understand intelligence over the coming decades. Just because we are collecting data at an exponential rate and digitizing data at an exponential rate does not imply we are UNDERSTANDING anything, especially intelligence, at anything close to that rate.

While we might have access to an exponentially larger quantity of data than we did in the recent past, while we might be able to compute exponentially faster... we are not exponentially more intelligent.

And we are not going to magically understand intelligence in the coming decades solely based on the rate that our technology is expanding.

Please... don't confuse my assertion that we won't understand intelligence with not understanding brain function. Certainly over the last several decades we've learned a lot about biology and how the brain WORKS. But that's not intelligence. Not by a long shot.

I'm in Jeff Hawkins's camp: in his fantastic book "On Intelligence," he writes that we don't yet have a framework for understanding the brain (in terms of intelligence, not biology... different things), and until we do, we won't be making the fantastic leaps in technology that Kurzweil predicts.

(Side note: another post that I hope to get out soon will be on Jeff Hawkins's book and how it makes a fantastic case for AIML.)