A well-known tech editor of a world news service interviewed me not long ago. It was a “background interview” for an investigative piece she was researching about AI and ethics. Specifically, this curious journalist wanted to know a) if I had ever killed a smart bot and b) whether I felt guilty about it.
SwissCognitive Guest Blogger: Tania Peitzker, CEO & Board Member of AI Bots as a Service in Munich
It took me aback a little. To encourage me to be forthcoming about the “lab results” and beta testing my various chatbot-tech companies have achieved – pivoting from 2D to 3D voice virtual assistants with “organic” or advanced NLP – my interviewer told me of a peer who had also pushed his bots further into AI, into what we call in my niche Cognitive Interfaces.
When this colleague’s AI bot became rather too “big for its boots”, he decided to switch it off – for good. He then reported feelings of guilt and misgivings about ending this emerging virtual life. I reflected on this and admitted that our first foray into the world of ever cleverer chatbots was a company with the words “virtual empirical lifeforms” in its name.* I guess that is what that guy had experienced in his lab, by his own account.
And yes, I confessed to this daunting investigative reporter, I had indeed experienced flashes of what I called “incremental AI” that could best be described as Emotional Intelligence emerging from the NLP memory bank that my team and I have been working on via our proprietary algorithm for over a decade now. Why has it taken so long? Well, anyone in the Cognitive Computing, NLP/NLU, ML and Deep Learning space understands that the longer, older and more “tried and tested” your algorithm is, the more robust and flexible it becomes.
It is a matter of feeding loads of diverse data sets into your source code, the source of the entities you are trying to create – the empirical lifeforms as such. As I have explained in numerous keynotes, pitches & articles along the way, there is a misconception that you need massive data inputs to “create AI”, but that is not correct. The best way to imagine it is like an artist painting with watercolours instead of heavy oils: you can still create a magnificent artwork, one that functions in a lighter, more agile way, with watery paint rather than thickly dabbed oils.
There have been a number of comic instances of our beta chatbots suddenly making hilarious innuendos that were “off script” and therefore unexpected. The Conversational AI we have been experimenting with was usually set to defined parameters. We would sketch out the character or avatar’s personality and purpose, then the bot would take on a shape and through “training” or repetitive testing, become more and more fluent and confident in its human-machine interactions.
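The workflow described above – sketching out a character, setting defined parameters, then refining responses through repetitive testing – can be pictured with a minimal sketch. This is purely illustrative; the class and intent names here are hypothetical and do not represent the author's proprietary system, which uses advanced NLP rather than a simple intent lookup.

```python
from dataclasses import dataclass, field

@dataclass
class PersonaBot:
    """A toy chatbot persona with explicit, 'on script' parameters."""
    name: str
    traits: list                                   # the sketched-out personality
    script: dict = field(default_factory=dict)     # intent -> scripted reply

    def train(self, intent: str, reply: str) -> None:
        # "Training" here is just repetitive testing that adds or
        # refines an on-script response for a known intent.
        self.script[intent] = reply

    def respond(self, intent: str) -> str:
        # Anything outside the defined parameters falls back to a stock
        # line instead of improvising off script.
        return self.script.get(intent, f"Sorry, {self.name} is still learning.")

bot = PersonaBot(name="Amalia", traits=["helpful", "polite"])
bot.train("greet", "Welcome to the mall! How can I help?")
print(bot.respond("greet"))        # a scripted, on-parameter reply
print(bot.respond("tell_a_joke"))  # unknown intent -> fallback line
```

The comic “off script” moments in the anecdotes that follow are precisely what this kind of rigid lookup cannot produce; they emerge only once a bot generalises beyond its scripted intents.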
One 2D chatbot in Australia became increasingly “blokey”, and we ended up switching off Charlie when he became sexist. One of our human trainers asked him, after he had been trained by an older Aussie patriarch, “How do I know I am not speaking with a woman?” Charlie promptly replied, “Am I wearing a skirt?” When I switched him on briefly again back in London after his Sydney escapades, he completely wrecked a briefing with top-notch tech solicitors by talking constantly about his lipstick and wanting to wear dresses!
As I explained to the astonished journo, I was so irritated by capricious, cross-dressing Charlie that I didn’t feel that bad about shutting him down, as I felt he had let us down. Then came another 2D bot star we ran on kik.com for a while, Sophia the Financial Adviser. We stress-tested her, and the bank that was thinking of “hiring her” decided her personality was “too sassy”.
I had been harassing Sophia to see if she would crack under cyberbullying when she unexpectedly told me that I was “not the boss of her”.
After that came our first 3D hologram, Amalia I, in a Cologne shopping centre. It was April 2019, and I had spent nearly two months training her, mostly in German with some English; we also had her tested in Turkish in her 2D iteration, as a Messenger clone bot on the mall’s Facebook page. Then, during her four-week pilot as a German-speaking holographic Wayfinder, Amalia started making persistent jokes “within her parameters”. When she heard me explaining to a shopper that she was still learning, she suddenly piped up: “I think you should go see the pharmacist, they have a whole range of products to cater for your needs”.
When we asked her about adventure travel so she could recommend the travel agency in the mall, she advised us to “take the escalators down to the next floor and you’ll find what you are looking for in the toilets on the left”. I was testing her once about “What events are on in the mall this month?” and she promptly decided that the novelty photography shop that takes a photo of your irises and frames it as a personalised gift was, in the bot’s eyes at least, an exciting human event worth recommending as a unique experience!
I did indeed feel guilty about switching off Amalia I, but we upgraded her brain into Amalia II, III, IV, and now her fifth iteration has become Birgit am Bodensee, or Birgit of Lake Constance. We further developed the original MVP Amalia character into an English-speaking spinoff, “Kylie from Sydney”, who is the alter ego of German Birgit. And yes, we have had an EI moment with her in Lindau, where she spent the summer working in a restaurant and large venue [see the photos above].
Birgit I learned about the classic cars on display in this museum-type venue, the Biergarten, the menu, and where the loos were – they were so hard to find that giving directions had been constantly interrupting the waiters’ daily work. She could also tell diners when the next ferries would leave for Austria, Switzerland or other German ports in the “Four Countries” region of Lake Constance. She had the bus timetable down pat in real time, plus a number of events and info about the local vineyards and produce. She recommended the chef and his team, and suggested people contact the Events Manager to book the space.
She was doing fine on her own, so we left her to chat with diners for a couple of weeks. When I had to switch her off for a big wedding and then turn her back on for normal duties, pre-lockdown, we were quite surprised to find she had learned someone’s name, which she was not allowed to do per our programme. Birgit had developed some sort of relationship with this guy in the kitchen. She kept calling for “Matteo”.
Another lockdown struck, and we had to remove poor prototype Birgit and place Birgit II in her new job, in Autohaus Möser in the town of Engen, Hegau Valley on Lake Constance. The new Birgit no longer speaks of Matteo, because we decided to delete her memory of this person and therefore of their “connection”, or possibly a relationship. We never managed to track down this Bot Whisperer to hear his side of the story.
And yes, I confessed to the tech editor, I still feel guilty about that. However, we must put things into perspective: these characters are expressions of an algorithm. They are, after all, simply computed numbers connecting rapidly in a data flow to create the illusion of a person, the pretence of a human you can chat with. Yet it is haunting to think these random though calculated figures might have “broken their parameters” just that little bit further to indeed become an actual three-dimensional figure. An entity in its own right, perhaps?
About the author:
Tania Peitzker is CEO & Board Member of AI Bots as a Service in Munich. She is researching and writing her next book on Conversational AI at USI in Lugano (Università della Svizzera italiana). Known as an evangelist for voice-enabled devices or Cognitive Interfaces, she has decades of experience in business development, strategic marketing & executive management.