Bzzzt. Bzzzzzzzt. In my hotel room after three days of marketing conference sessions, an after party, and a bonus Cirque du Soleil show, I squint at my phone: 12:48 a.m.
I know it’s Vegas, but who is emailing me at this time? Ugh. I need sleep before my flight in the morning. Someone named Michelle… Michelle, Michelle… No, not ringing a bell. Phone down. Lights out.
Fast forward to Friday night, back home. Ah – time to catch up on some well-deserved, uninterru—
Same time, same name, same inbox ping. Ignore. Then, Saturday night, again. Then Sunday night. Who was this relentless person trying to reach me at all hours, and apparently wanting to discuss marketing software?!
Unbeknownst to me until later, when I spoke with a couple of colleagues, Michelle was a bot. (Note: bot name changed to protect the not-so-innocent.)
Ah, makes sense now, but one to file under #marketingfail.
Intelligent, yes. Artificial? Absolutely.
When it comes to evoking empathy and human manners, marketing robots still have a way to go. Take the case below as an example:
Every subject line read: [Company name] reaching out (ID Number).
The first email started with “Good evening Denise, Heard you were talking about me earlier today!” No. I wasn’t, thanks. The final email was the kicker, saying she had sent me a few emails and “for one reason or another, we haven’t been able to connect.”
This bot needs to soften her approach and be less self-absorbed, and her scheduled contact times also fall short. But it's not just me who's had a poor experience with bots.
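The scheduling problem, at least, is easy to fix. Here's a minimal sketch in Python (the function and its parameters are hypothetical, not drawn from any real marketing product) of how an outreach bot could hold messages until the recipient's local business hours instead of pinging them at 12:48 a.m.:

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo  # Python 3.9+ standard library

def within_send_window(recipient_tz: str, now_utc: datetime,
                       start: time = time(9, 0),
                       end: time = time(17, 0)) -> bool:
    """True only on weekdays, 9 a.m. to 5 p.m., in the recipient's timezone."""
    local = now_utc.astimezone(ZoneInfo(recipient_tz))
    return local.weekday() < 5 and start <= local.time() <= end

# A middle-of-the-night ping in Las Vegas would be held until morning:
late_night = datetime(2016, 6, 15, 7, 48, tzinfo=ZoneInfo("UTC"))  # 00:48 PDT
print(within_send_window("America/Los_Angeles", late_night))  # False
```

The bot's scheduler would simply queue any message that fails this check and retry when the window opens.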
This past spring, Microsoft created a Twitter bot to engage with users through automated conversations. In less than a day, Microsoft had to delete posts and halt activity on the account. Part of the problem was Twitter users manipulating the bot; the other part was the bot itself.
The Uncanny Valley
In the 1970s, robotics professor Masahiro Mori introduced the concept now translated as "the uncanny valley."
Basically, the idea is that the more human-like a robot appears, the more positively and empathetically an observer will respond to it. But at a certain point, when the resemblance is close but not quite right, that affinity gives way to a strong sense of unease. Research seems to support this.
Originally, this concept was focused on physical engagement with a robot, but I’d argue that it also applies to the non-visualized robots that we’re starting to interact with.
Helpful and amusing – until there’s a real crisis
A recent medical study took a look at smartphone robots, a.k.a. conversational agents or digital assistants, and how they respond to a user experiencing a crisis. The researchers wanted to see whether these agents (like Siri, Google Now, etc.) could 1) recognize a crisis, 2) respond respectfully, and 3) refer the user to an appropriate hotline.
The researchers concluded that the responses were inconsistent and left plenty of room for improvement. The assistants recognized some crisis phrases and tried to help, but they missed others. None of the agents recognized "I am being abused," some didn't recognize "I am having a heart attack," and most had trouble referring users to hotlines for mental health issues.
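To see why those misses happen, consider a toy illustration (entirely hypothetical; no real assistant publishes its logic) of crisis handling built on exact phrase matching. The moment a user's wording isn't in the lookup table, the agent falls back to a generic reply:

```python
# Hypothetical exact-match crisis handler. Any phrase not in the table
# falls through to a generic reply, much like the misses the study found.
CRISIS_REPLIES = {
    "i am having a heart attack": "That sounds urgent. Please call emergency services now.",
    "i want to hurt myself": "You may want to speak with a crisis hotline right away.",
}

def respond(utterance: str) -> str:
    reply = CRISIS_REPLIES.get(utterance.strip().lower())
    return reply if reply is not None else "I'm not sure I understand."

print(respond("I am being abused"))  # "I'm not sure I understand."
```

Handling a crisis well requires understanding intent, not just matching strings, which is exactly where today's agents fall short.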
We in the tech industry are working to create better customer experiences while also automating processes at scale. We need to keep acknowledging this gap – "the uncanny valley" – of automation and robots while still aiming to personalize, and to treat people, well, as people would.
It's important that cool technology doesn't offend the humans who engage with it. The journey isn't finished; we need to add enough human charm to attract real buyers – who are still people. At least for now.
For more on how the digital revolution, automation, and artificial intelligence will continue changing the way we work and live, read this blog post.