People see chatbot screeners as less capable than live responders, but perceived ability and trust in the source matter most

A recent study suggests that individuals' perceptions of and reactions to COVID-19 screening hotlines are shaped by their trust in the service's provider and by how they believe the service is being delivered.
By Dave Muoio
03:16 pm

Satisfaction with COVID-19 screening chatbots and adherence to the information they provide are primarily driven by the user's perception of the tool's ability, according to research recently published in the Journal of the American Medical Informatics Association.

Conducted by researchers from Indiana University and Temple University's respective business schools, the investigation asked participants to provide their reactions to fictional screening encounters between patients and either a human operator or a screening chatbot. In reality, the responses of each screener were identical.

"Our results show a slight negative bias against chatbots’ ability, perhaps due to recent press reports," the researchers wrote.

With this in mind, the researchers advised healthcare organizations employing the technology to be proactive in communicating the high-quality service their chatbot can provide.

"To offset users’ biases, a necessary component in deploying chatbots for COVID-19 screening is a strong messaging campaign that emphasizes the chatbot’s ability," they wrote. "Because trust in the provider strongly influences perceptions of ability, building on the organization’s reputation may also prove useful."

TOPLINE DATA

Across the study sample of 371 participants, chatbots were perceived to have less ability, integrity and benevolence than a human operator. However, this difference often went hand in hand with, or was even secondary to, participants' trust in the screening service's provider.

Perceived ability was influenced primarily by trust in the provider rather than by responder type; perceived benevolence was driven more by responder type than by provider trust; and perceived integrity was shaped by both.

Perceived ability, rather than responder type, emerged as the dominant factor for a handful of secondary measures, such as satisfaction and willingness to follow the screener's advice. Of note, the severity of the patient's condition did not affect these secondary outcomes.

"When chatbots are perceived to provide the same service quality as human agents, users are more likely to see them as persuasive, be more satisfied, and be more likely to use them," the researchers wrote. "A user’s tech-savviness has only a small effect, so these results apply to both those with deep technology experience and those with little." 

HOW IT WAS DONE

In April, the researchers recruited and reviewed data from online participants tasked with viewing video vignettes of fictitious text chats between a COVID-19 screening hotline and a potentially symptomatic user. The researchers built these scenarios based on experiences with four COVID-19 chatbots and CDC-recommended screening questions, and designed different vignettes for a caller with mild or severe symptoms.

Participants were randomly told that they would be viewing screening responses from either a human agent or a screening chatbot, even though the vignette scripts were identical in both conditions. The participants then answered a series of questions on a seven-point scale to determine perceived ability, integrity, benevolence, persuasiveness, satisfaction, likelihood to follow advice, trust and willingness to use.

THE BACKGROUND

Chatbots were already seeing increased use across healthcare organizations to help coordinate care, deliver reminders or generally lighten the load on human responders. And as COVID-19 cases have mounted in the U.S., more and more individuals have sought guidance on their symptoms from trusted sources.

"COVID-19 screening is an ideal application for chatbots because it is a well-structured process that involves asking patients a series of clearly-defined questions and determining a risk score," the researchers wrote.

"Chatbots can help call centers triage patients and advise them on the most appropriate actions to take, which may be to do nothing because the patient does not present symptoms that warrant immediate medical care. Despite all the potential benefits, like any other technology-enabled services, chatbots will help only if people use them and follow their advice."
