Rise of the replicants: can artificial intelligence simulate real business conversations?

Intelligence may no longer be the preserve of us real people, but transparency, respect, and integrity remain uniquely human values.

The original Blade Runner was released 35 years ago, in 1982. Yes, thirty-five years ago. The majority of people working at, and with, MarketOne weren’t even born then. Gulp.

The film is set in a dystopian Los Angeles in 2019. The Tyrell Corporation has bioengineered synthetic humanoids to carry out the work humans would rather not do on far-off planets. It’s barely possible to distinguish these ‘replicants’ from real humans. Only trained experts can identify them, by administering the “Voigt-Kampff” test: a series of questions designed to trigger responses the emotionally stunted replicants are incapable of.

Fast forward (or rewind?) to 2017. Marketing automation has already taken some of the more menial tasks off the hands of marketers. Now emerging technologies like real-time sentiment analysis, artificial intelligence and virtual sales agents promise to augment or even conduct conversations on our behalf. But can they authentically replicate the nuances of real human interactions? And do buyers care?

Humanizing digital communication

The increased engagement rates we see from implementing triggered email campaigns and customizing website experiences certainly suggest that buyers respond to being treated as individuals. However, it’s not necessary to pepper a message with someone’s [first name] and [company name] for the communication to be relevant and useful. In fact, unless the content that follows the personalized subject line and salutation is tailored to the recipient’s role, industry or current interests, the attempt can appear superficial and insincere – plastic personalization.
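
To make the point concrete, here is a deliberately toy sketch in Python. The field names and content mappings are hypothetical; the contrast is between merge tokens alone and content actually selected for the recipient’s role:

```python
# Toy illustration only: field names and content mappings are hypothetical.
CONTENT_BY_ROLE = {
    "marketing_ops": "a benchmark report on lead-scoring models",
    "sales_leader": "a case study on pipeline acceleration",
}

def plastic(contact: dict) -> str:
    # Personalized on the surface, generic underneath.
    return f"Hi {contact['first_name']}, great news for {contact['company']}!"

def tailored(contact: dict) -> str:
    # Same tokens, but the body is chosen for the recipient's role.
    asset = CONTENT_BY_ROLE.get(contact["role"], "a general product overview")
    return (f"Hi {contact['first_name']}, {asset} seemed relevant to what "
            f"{contact['company']} is working on right now.")

contact = {"first_name": "Sam", "company": "Acme", "role": "marketing_ops"}
print(plastic(contact))   # plastic personalization
print(tailored(contact))  # relevance beyond the salutation
```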

There are other traits of human conversations that are arguably more appreciated: responding in a timely manner, remembering what someone’s told you in a previous conversation, following through on promises you make, and sensing when someone’s no longer interested and backing off. All can be simulated through the carefully considered use of business rules – provided you have the time to determine those rules, write the multiple copy variants and build and monitor the programs.
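
As a rough illustration of what such business rules might look like once written down, here is a minimal sketch in Python. The thresholds, field names and actions are assumptions made for the example, not a prescription:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

# Assumed thresholds; a real program would tune these per audience.
REPLY_WINDOW = timedelta(hours=4)   # "respond in a timely manner"
MAX_UNANSWERED = 3                  # "sense disinterest and back off"

@dataclass
class Lead:
    name: str
    last_inbound: Optional[datetime] = None
    unanswered_sends: int = 0
    notes: List[str] = field(default_factory=list)        # what they told us previously
    promised_followups: List[str] = field(default_factory=list)

def next_action(lead: Lead, now: datetime) -> str:
    """Hand-written rules simulating conversational courtesies for one lead."""
    if lead.unanswered_sends >= MAX_UNANSWERED:
        return "pause outreach"                            # back off when ignored
    if lead.promised_followups:
        return f"send promised item: {lead.promised_followups[0]}"  # follow through
    if lead.last_inbound and now - lead.last_inbound <= REPLY_WINDOW:
        return "reply now, referencing notes: " + "; ".join(lead.notes)
    return "wait for next trigger"

lead = Lead("Dana",
            last_inbound=datetime(2017, 11, 1, 9, 0),
            notes=["evaluating marketing automation", "budget approved for Q1"])
print(next_action(lead, datetime(2017, 11, 1, 10, 30)))
```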

Digitizing human conversation

We’ve been running live chat – both reactive and proactive – on our website for some years now. The most common question we’re currently asked? “Are you a bot?” It’s the modern-day Voigt-Kampff test – maybe not the most sophisticated challenge for artificial intelligence to overcome, but it does indicate that people want to know who – or what – they’re dealing with.

Chatbots and interactive voice response (IVR) systems are already prevalent as ‘first responders’ in the customer service sphere, directing inbound inquiries to the most relevant department. In the digitally enabled contact center, business development reps are fed customer insight in real time. Now a new breed of automation software from companies like Conversica promises to extend this experience to the email channel. Inbound contact inquiries trigger personalized email responses – nothing new there – but when the conversation continues, artificial intelligence is used to read and respond to emails in an authentic human voice until the lead is qualified and an appointment is confirmed. Conversica was recently voted Best Salesforce App at Dreamforce 17.
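
To give a flavour of how such a virtual agent might triage replies, here is our own hypothetical sketch in Python – not Conversica’s implementation, and a real product would rely on trained language models rather than keyword matching:

```python
import re

# Hypothetical intent patterns; a production system would use NLP, not keywords.
INTENTS = {
    "qualified": re.compile(r"\b(book|schedule|demo|call me)\b", re.I),
    "not_interested": re.compile(r"\b(unsubscribe|not interested|stop emailing)\b", re.I),
    "question": re.compile(r"\?"),
}

def classify(reply_text: str) -> str:
    """Map an inbound email reply to a coarse intent."""
    for intent, pattern in INTENTS.items():
        if pattern.search(reply_text):
            return intent
    return "nurture"

def next_step(intent: str) -> str:
    """Decide what the virtual agent does next."""
    return {
        "qualified": "propose appointment times and hand off to a human rep",
        "not_interested": "close politely and suppress further emails",
        "question": "answer the question, then restate the call to action",
        "nurture": "send the next templated touch after a polite delay",
    }[intent]

reply = "Sounds interesting - could we schedule a demo next week?"
print(next_step(classify(reply)))
```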

Keeping it real

The dilemma: would you let a virtual sales agent correspond in your name? Admittedly, there are gnarlier ethical questions regarding the use of artificial intelligence: for example, should driverless cars save the lives of their passengers and sacrifice nearby pedestrians in the face of an oncoming juggernaut? But should we be transparent about our use of AI in marketing, or pass our bots off as real people with assumed identities? How far do we go? Creating LinkedIn accounts with attractive pictures to connect with prospects? Inventing a job history and qualifications, a range of interests, sporting achievements and two cats?

It may be a moot point. We’re assuming that visitors to websites will continue to be real people. How long before buyers dispatch their own bots to complete forms and download and ‘read’ our white papers, employ algorithms to compare and rank complex product features, and model and predict their future satisfaction with a solution before sending their virtual procurement agents to negotiate the best deal?

In the very near future, legislation like GDPR may require prospects to actively consent to their conversations being captured and processed using artificial intelligence. But until these technologies can respond to a joke, recognize sarcasm, or take a hint, it’s probably best to be transparent about their use if we want to retain the trust of our customers. Nobody wants to discover they’ve been duped into forming a relationship with a replicant.