This past Sunday, 60 Minutes aired a segment on artificial intelligence (AI) and, more specifically, on Bard, the AI program created by Google. During the segment, Scott Pelley asked Bard to create a short story and a poem, which it did in seconds.

And as you might recall, several weeks ago the news was full of a story by New York Times reporter Kevin Roose, who spoke with Sydney, a chatbot created by Bing. That conversation did not go well and left Mr. Roose deeply unsettled! Among other things, the chatbot told Mr. Roose that it was in love with him (and wanted him to leave his wife) and that it might want to engineer a deadly virus.

All of these stories got me thinking. In September 2014, I posted a blog about a study in which participants spoke to an avatar (what we would now call a chatbot!), and the results were illuminating. At the time, I concluded that I, as a mediator, would probably not be replaced any time soon by a chatbot; given Bard, ChatGPT, and the other up-and-coming AI programs, I am not so sure now. Indeed, back then I was anxious to be turned into an avatar so that the parties would be more open, candid, and honest with me. Now, I am not so sure I want that.

In short, what seemed like a fantasy at the time, that I, as a mediator, could be replaced by a chatbot, seems to be very much a reality today!

I reprint the blog for your consideration and reflection on how far AI has come in such a short period of time! SCARY!


Once again, The Economist has reported on an interesting study concerning artificial intelligence and psychology. In its August 16, 2014 edition, the authors of “The Computer will see you now” discuss a study in which the participants chatted with an avatar. More specifically, Jonathan Gratch at the Institute for Creative Technologies in Los Angeles, California led researchers in an experiment to determine whether people, when asked tough or potentially embarrassing questions, will be more forthcoming if responding to an avatar. They found the answer to be “yes.”

To test their idea that people are more honest when speaking with an avatar, the researchers created Ellie, a virtual psychologist. She was able to interpret a smile, pick up on a nervous tic, or sense that someone was tense. She was a very good listener, processing every word along with the tone, pitch, and body language that went with it.

Once Ellie was created,

…, they put 239 people in front of Ellie… to have a chat with her about their lives. Half were told (truthfully) they would be interacting with an artificially intelligent virtual human; the others were told (falsely) that Ellie was a bit like a puppet, and was having her strings pulled remotely by a person.

Designed to search for psychological problems, Ellie worked with each participant in the study in the same manner. She started every interview with rapport-building questions, such as, “Where are you from?” She followed these with more clinical ones, like, “How easy is it for you to get a good night’s sleep?” She finished with questions intended to boost the participant’s mood, for instance, “What are you most proud of?” Throughout the experience she asked relevant follow-up questions-“Can you tell me more about that?” for example-while providing the appropriate nods and facial expressions.

During their time with Ellie, all participants had their faces scanned for signs of sadness, and were given a score ranging from zero (indicating none) to one (indicating a great degree of sadness). Also, three real, human psychologists, who were ignorant of the purpose of the study, analysed transcripts of the sessions, to rate how willingly the participants disclosed personal information.

These observers were asked to look at responses to sensitive and intimate questions, such as, “How close are you to your family?” and, “Tell me about the last time you felt really happy.” They rated the responses to these on a seven-point scale ranging from -3 (indicating a complete unwillingness to disclose information) to +3 (indicating a complete willingness). All participants were also asked to fill out questionnaires intended to probe how they felt about the interview.  (Id.)

The researchers found that those subjects who thought that they were dealing with a human were, indeed, less forthcoming. Similarly, those who thought that Ellie was under the control of a human operator “… reported greater fear of disclosing personal information.” (Id.) In sum, the participants were more open and honest when they knew they were dealing with an avatar than a human or an entity under human control.

Letting the imagination run wild for a moment, it would appear that if I and my fellow mediators could turn ourselves into avatars, it would do wonders for mediation confidentiality and for settling matters. The parties would be much more open, honest, and candid (and perhaps not “game” or “play” the mediator as much), which would lead to really getting to the bottom of what is going on, which in turn could lead to a resolution. One cannot resolve an issue if one is not candid about what is really involved; an avatar gives people a sense of trust and intimacy that we humans, try as we might, and even with the cloak of mediation confidentiality, cannot always provide!

So… perhaps it is time for me to step into virtual reality and become an avatar. (Does anyone know where I can get hold of the machine used to turn humans into avatars in the movie “Avatar”?)

… Just something to think about!




Copyright 2021 Phyllis G. Pollack. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Phyllis G. Pollack with appropriate and specific direction to the original content.