Designing An Empathetic Chatbot 🗣 With the Use of CBT 🧠
Introduction
Chatbots are handy tools in everyday life and have, in a sense, become ubiquitous in this technology-rich age. Amazon Alexa, Siri, Google Home, and the Xfinity Voice Remote are just a few of the commercial chatbots that help humans expedite processes like scheduling an appointment or looking up specific information. With just a few words, you can tell these products to do such tasks for you instantaneously.
The use of chatbots has opened up a discussion about the artificial intelligence and empathy of this technology. This is where the question of design comes into view. The act of talking to any of the commercial chatbots listed above and accepting their response is a designed experience in itself. The syntactic and semantic details of a conversation can easily tip the judgment of whether it is considered humanistic or not.
My team and I worked on creating a chatbot that delved deep into this idea by designing our bot within a therapy framework.
We were essentially interested in creating a chatbot that helped the user navigate any uncomfortable feelings they may have had during the day.
From this point, we knew that our chatbot had to be empathetic to make this sensitive relationship run smoothly.
Identifying the Users
When approaching the desired therapy framework of our chatbot, we knew that tending to a specific pool of users would significantly help us determine what kinds of problems our chatbot should be prepared for.
As college students, we are very familiar with the difficulties that come with assimilating to college life, coursework, and more. This results in mind-boggling rates of mental illness among college students. In fact, according to Statista, three out of four students feel “overwhelming anxiety” at one time or another. And this makes sense. College can feel like a “pressure cooker” most of the time, testing students’ ability to juggle hard coursework, jobs on campus, career preparation, and social lives within a brand new environment. Anxiety is endemic among college students today. So, we wanted to create a counseling bot to tackle this problem.
We narrowed our desired targeted users to be first-gen first-year students. This decision stemmed from our desire to have a specific pool of users that could share common problems of getting used to college life.
Identifying the Chatbot’s Persona
We all knew that, as a chatbot, it would intrinsically give off a robotic vibe. Considering how sensitive therapy relationships are, we wanted to establish an empathetic persona for the chatbot right off the bat.
We decided on the persona of an O-team leader utilizing Cognitive Behavioral Therapy (CBT). O-team leaders are essential resources to students in their week of orientation, and since one of our group members was familiar with the communicative frameworks that O-team leaders are trained with, we gave it a go. We decided that our O-team leader chatbot’s dialogue would follow the nature of CBT sessions. CBT is different from the usual assumption of advice-heavy conversations in therapy. It instead promotes the model that:
“people’s emotions and behaviors are influenced by their perceptions of events.”
In other words, it follows the framework of: situation → perception/automatic thought → emotion and behavior.
By following this therapy framework, we had a clearer scope of how to construct appropriate dialogues from our chatbot.
Essentially, we wanted a relationship built between
An O-team leader and a first-gen first-year student through CBT.
This relationship would be fostered through dialogue that tends to the problem of helping the student navigate distressing thoughts they may have.
Now that we have the desired face of our users and our chatbot, let’s move on to our journey of designing this chatbot!
Design Research
Researching Appropriate Dialogue
Hold your horses! Did you think that I was going to instantly dive into how we started coding this chatbot? Nope. Beforehand, my group and I understood that we didn’t have a sufficient grasp of what a CBT session looked like from the perspective of the patient and the perspective of the therapist. This understanding is necessary for creating careful empathy within our chatbot’s dialogue.
Understanding the Therapist Perspective: By reading various professional journals and articles on Cognitive Behavioral Therapy, we developed a comprehension of what a CBT session should include.
For the sake of citing our sources, we principally used:
- Jeffrey A. Cully, Ph.D., and Andra L. Teten, Ph.D., A Therapist’s Guide to Brief Cognitive Behavioral Therapy.
- R. L. Leahy, Cognitive Therapy: Basic Principles and Applications. Jason Aronson Publishing Co., 1996.
- CBT Techniques
From this research, we understood that key factors of a CBT session included:
- “targeting emotions by changing thoughts and behaviors that are contributing to distressing emotions”
- Creating a strong therapeutic relationship through empathy (validating the patient’s experience), genuineness (being authentic), and positive regard (respect)
- Collaborative work of skill acquisition and homework to mold positive thought patterns
- Teaching skills rather than just talking
Most importantly, the therapist must be collaborative, assertive, and nonjudgmental.
Understanding the Student’s Perspective: In order to build a chatbot on this specific topic of helping first-gen students through distressing thoughts in their first year of college, we needed scenarios and emotions to build off of.
In order to tackle this, we sent out a Google Form to students who identified as first-generation students.
The questions we asked included:
- What were some prominent feelings you felt during your first year?
- What challenges have you experienced? (Framed within categories of “Academic”, “Social”, and “Personal”)
- As a first-gen student, were there specific disadvantages that played a role in these challenges?
- Could you explain a specific example of dealing with these challenges?
- Please list the emotions that you encountered when dealing with these challenges (e.g. anxious, empty, unprepared)
- If these feelings were overcome during your first year, what helped you the most? If not, why?
While creating this survey, we wanted to make sure that our questions were directed, but also open-ended enough that we could receive a wide range of responses.
We made it known that responses to this survey would remain anonymous and confidential due to the sensitivity of some experiences.
From these responses, we were able to construct a list of common emotions and scenarios felt by these students.
Here is a word cloud that we generated to visualize the most common emotions expressed in our 20+ survey responses:
Many of the participants in the survey admitted that they felt very new and unprepared for the notion of attending an American college and dealing with situations such as:
accepting financial aid loans, assimilating to new cultures, getting used to college-level courses, and adjusting to the social circles on campus.
This survey was very helpful in predicting what scenarios and emotions our bot should help with.
Understanding the Relationship of Chatbot and Student: As I have stated, we selected the persona of our chatbot to be an O-team leader, to bring peer familiarity with Oxy’s campus to the conversation. Therefore, our chatbot could supplement the sensitive exchange of experience with redirection to attainable, professional resources on campus such as the Emmons Health Center or SLICE. With knowledge of O-team leader training, we were able to assume the persona through communicative rules in the O-team leader handbook. The role of an O-team leader is that of a model peer who is expected to be mindful, open, and respectful in how they communicate. Some sentence and attitude framing in the handbook proved useful in constructing our chatbot’s appropriate dialogue.
Some of this framing of constructive dialogue included:
Understanding the CBT Approach: As I briefly explained above, Cognitive Behavioral Therapy focuses on the idea that the way we construct our thought patterns when a situation occurs has a great effect on how we emote and behave. CBT emphasizes the recognition of negative thought patterns and the modification of these thoughts through evidence and logic. The goal is to get at the unconscious beliefs, attitudes, and behavior of the person by reflecting on internal dialogue and emotion-filled thoughts. Thus, the person receiving CBT can reflect on and become aware of how healthily they respond to everyday situations. Building on that awareness, CBT prioritizes attaining skills to mold thought patterns into healthier ones, using evidence and logic to debunk demoralizing ones.
Now, how the heck are we going to implement all that we learned into our chatbot?
Let us officially dive into that journey.
Identifying the Need
If I’ve learned one thing from the practice of design, it is that identifying the need of your design is CRUCIAL. I hope you understand how crucial it is.
By staying true to the CBT approach of our bot, our need revealed itself quite clearly. Our need is to bring awareness to the distressing thought patterns of our user, so that they can learn to adopt different and more beneficial thought patterns.
Since our targeted users are first-gen first-year students, we focused the context of distressing thoughts around navigating the college environment. Our chatbot is focused on dealing with certain scenarios that arise from having trouble with roommates, understanding the new culture, and more on a college campus.
Design Architecture
Because chatbots are intangible, our “prototypes” came in the form of mocked-up flowcharts of how we wanted the conversation to carry between the chatbot and user.
First Mockup of Conversational Flow + Analysis
From here, you can see that we wanted the core states of our chatbot to be identifying beliefs and then challenging them through the CBT approach of debunking negative thoughts with evidence and logic.
We wanted to make sure that we introduced ourselves to the user, which is different from what is expected of a purely reactive chatbot. Upon researching cognitive behavioral therapy, it became obvious that patients are often told the outline of the treatment. Since it is a specific mode of treatment within the setting of counseling, it is helpful for the patient to be aware of how they will be navigating their emotions. We also noticed that the sensitive sharing of emotions can be difficult to start when the patient is met with silence. In order to foster the trusted relationship between O-team leader and student, we wanted to make sure that an introduction was made to initialize this trust in the chatbot’s persona.
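As a minimal sketch, that introduction is simply the bot’s opening line (quoted here from our final dialogue):

# The bot opens the conversation instead of waiting for input.
print("CBTBot: Hello! I'm an O-team leader and I will be helping you "
      "through this rough patch. How are you feeling?")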
Initially, we were thinking of tackling all thoughts of the user as indicated through the “bad”, “neutral”, and “good” pathways. We thought that this would be a decent idea because if a user is feeling particularly “good”, we could lead them to recognize that thought pattern and make note of how a positive thought pattern could benefit them.
However, after evaluating the possible pathways from this, we deduced that we would be going well beyond our scope. We brought ourselves back to how our need aligned more with bringing awareness to negative thoughts rather than all thoughts. Working within the bounds of our Slack bot, we wanted to focus on building an empathetic relationship through dialogue rather than covering every emotion of our user.
{Selected Conversational Flow of Chatbot}
Second Mockup of Conversational Flow + Analysis
After thinking through our first mockup, we narrowed down our expected user input to be within the negative emotions they may be feeling.
We also narrowed down the overall conversational flow of the chatbot to be of the nature of the flowcharts below:
The re-evaluated conversational flow embraced the techniques of CBT more smoothly. When creating this conversational flow, we had to make sure that we were keeping the progression of awareness in mind. In other words, we wanted our user to take a journey through what they were feeling and to be opened up to a new perspective on what they may have been feeling.
This goal depended on making sure that, while we maintained our persona of a reflective and open therapist, our users also did some reflecting of their own. This resulted in several states of introspecting on their feelings.

First, users would flat out say what they were feeling in the moment. Then, we would ask them to describe a time that they were feeling this way. This allows the user to hone in on when they were feeling that certain feeling. In other words, it is a way to conceptualize their emotions and realize the underlying motivations that may have made them feel this way, such as roommate problems, missing home, or cultural differences.

From there, we would delve into the harder questions of reflection, like: what core beliefs are instilled in this negative feeling? These core beliefs include not feeling enough and believing oneself to be unlovable. This is a big turning point in the CBT process, as it pushes the user to reflect on core beliefs that may be the source of the many negative thoughts they have. From there, the bot is to help the user recognize this thought pattern and tell them to challenge it. The core beliefs that we listed in our project, and commonly found in our research, were beliefs that contrasted with feeling loved within the scope of the person’s social, academic, and overall capabilities.

Finally, we wanted to provide our user with homework, another key part of CBT. Continually developing the skills to mold positive thought patterns takes a lot of practice and simple de-stressing. This homework came in light bits of suggestions such as going out for a walk or journaling their thoughts.
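To make this concrete, here is a minimal sketch of that flow driven as a simple state machine. The state names, ordering, and prompts below are illustrative stand-ins, not our exact code:

# Illustrative state machine for the conversational flow (hypothetical names).
STATES = {
    "feeling":     "How are you feeling?",
    "scenario":    "Can you give us an example of when you felt this way?",
    "core_belief": "What core beliefs are instilled in these times in your life?",
    "challenge":   "Describe a time where you felt the opposite. Start with 'I am'.",
    "homework":    "For now, I suggest going for a walk!",
}
ORDER = ["feeling", "scenario", "core_belief", "challenge", "homework"]

def run_session():
    for state in ORDER:
        print("CBTBot:", STATES[state])
        if state != "homework":       # the final state needs no reply
            reply = input("> ")       # each reply would be tag-matched here

run_session()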
One possible con of this conversational flow is the cost of our chatbot having to anticipate every possible response. For example, when responding to a specific scenario, it is tough to create responses for every unique occurrence that a person experiences.
Now that we’ve conceptualized how we want our chatbot to work, let’s get coding!
Chatbot Dialogue Prototype 1
Once we had coded up the conversational flow we decided on, we were able to extract our first dialogue between the user and the chatbot. What a beauty!:
Challenging Core Beliefs: When creating the state of challenging core beliefs, we implemented a different way of “priming” our user’s responses. When asked to challenge their negative beliefs, we asked them to start their response with “I am”. This wasn’t for the sake of matching any of our tags, but for psychologically framing the thoughts of the user. By saying “I am”, they can claim their positive beliefs as their own, and only their own.
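A minimal sketch of how that priming could be checked (hypothetical code, not our exact tag matching):

# Hypothetical check that the user framed their challenge with "I am".
reply = input("> ").strip()
if reply.lower().startswith("i am"):
    print("CBTBot: It is helpful to take in moments in our life in a positive light.")
else:
    print("CBTBot: Try starting your response with 'I am'.")  # re-prime the user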
Constructing Dialogue Sentence Structures: When creating the sentence structures of what we wanted our bot to say, we wanted to create a “realistic buffer”. In other words, instead of constantly peppering our user with questions, we wanted to offer a sentence of validation or introspection. This way, the user could get something out of what the bot says. It isn’t necessarily advice, but in dialogue like,
“It is helpful to take in moments in our life in a positive light. And to remember self-compassion. Do you feel better?”,
we wanted the chatbot to be direct in challenging the negative thoughts of the user through redirecting their thoughts towards an identifiably healthier thought pattern like self-compassion.
We liked how our dialogue flowed, providing transparent insight into CBT and positive thought patterns while challenging the user to reflect.
But, we wanted to be more purposeful in what energy we directed towards creating dialogue. Therefore, we switched up some pathways that I will explain in the next section.
Chatbot Iteration 2: Modifying Core Belief, Cognitive Distortion Pathway
When the user started reflecting on their core beliefs, the bot would provide feedback by identifying the belief within a known cognitive distortion. This process generalizes the user’s core belief into a thought pattern identified as a common phenomenon among many people. This way, the user can view their thought process from a perspective that frames their way of thinking as one that isn’t benefiting them.
Here is a list of cognitive distortions often identified in CBT; the commonly cited ones include:
- All-or-nothing thinking
- Overgeneralization
- Mental filtering
- Disqualifying the positive
- Jumping to conclusions (mind reading, fortune telling)
- Magnification and catastrophizing
- Emotional reasoning
- Should statements
- Labeling
- Personalization
Based on the Chatbot Dialogue Prototype 1, our pathway at the stage of asking the user to identify their core belief looked like this:
A core belief wasn’t an identifiable state. Instead, core beliefs were only identifiable through tags. For example, the core belief “not enough” was identifiable as the cognitive distortion of emotional reasoning.
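As a sketch, that tag-based mapping might look something like this. Only the “not enough” → emotional reasoning pairing comes from our actual design; the other entries are hypothetical:

# Illustrative mapping from core-belief tags to cognitive distortions.
DISTORTIONS = {
    "not enough": "emotional reasoning",         # from our design
    "unlovable":  "disqualifying the positive",  # hypothetical pairing
    "will fail":  "fortune telling",             # hypothetical pairing
}

belief = "not enough"
print(f"CBTBot: That sounds like {DISTORTIONS[belief]}.")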
We did this because we thought that categorizing the core beliefs would be easier to operate around when telling our bot to respond within our code. However, we found this dialogue exchange to be too quick and too robotic. From the flowchart, you can see that when an unknown cognitive distortion was entered, the chatbot would ask the user: “Identifying our core beliefs is helpful in recognizing our thought patterns. What was your core belief in that scenario?”
We wanted to elongate this process to put more focus on identifying the core beliefs through “priming”, a process that was highlighted when we met with a TA. In order to make our code run within “known states” more often, “priming”, or giving context when asking for an answer, would allow us to constrain the user into constructing a response that our bot can identify. We implemented this “priming” into our “unknown emotion” state as well.
For example, in this code, you can see that we added identifiable core beliefs that the user can use to guide their answer.
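That snippet boiled down to a primed prompt along these lines. The wording is approximate; “not enough” and “unlovable” come from this post, and “unprepared” from our survey responses:

# Priming the user with identifiable core beliefs (approximate wording).
PRIMING_PROMPT = (
    "What core beliefs are instilled in these times in your life? "
    "For example: 'not enough', 'unlovable', 'unprepared'."
)
print("CBTBot:", PRIMING_PROMPT)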
In addition, we wanted to add more reflective dialogue on the chatbotâs end. Instead of generating a response from connecting a core belief to a cognitive distortion, we wanted to generate a response directly recognizing the core belief entered by the user. Thus, our new flowchart for this part of our code was born:
We implemented this new dialogue by introducing a new sentence structure:
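Sketched in code, the structure reflects the user’s entered core belief back to them before naming the distortion (the variable name is illustrative; the phrasing matches our final dialogue):

# Reflect the user's own core belief back before naming the distortion.
core_belief = "not enough"   # the user's entered core belief
print(f"CBTBot: Feeling like you are {core_belief} seems like a negative "
      "core belief. Let's dive deeper.")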
As you can see, we added a fill-in-the-blank for the entered core belief to be reflected on by the chatbot. It then transitions into identifying the cognitive distortion associated with the core belief. This process proved to be truer to maintaining a reciprocal relationship between bot and user.
This part was crucial in creating the transition from reflection of core belief to challenging the negative thought pattern.
We viewed this change as crucial because restating the entered core belief within the sentence structure provided more humanistic personalization in the dialogue. Instead of connecting with a response by asking a consequent question, we are connecting with the user by recognizing their response within our own dialogue.
Chatbot Iteration 3: Adding More Tags
Another notable iteration of our chatbot was adding more tags. This meant becoming more inclusive of keywords that the user may use. This process forced us to look back at the responses that we collected during our survey. It was interesting, as we realized that several cognitive distortions overlapped. We implemented many negative keywords to mitigate occurrences of going through the “unknown” state. Why? We want our users to feel heard rather than having to repetitively explain themselves. It’s analogous to the frustration with Siri’s lack of comprehension when we ask a simple question.
Here is a snapshot of how we tried to implement many variations of possible core beliefs that may arise when the user is speaking within the context of school:
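A hypothetical reconstruction of what such tag lists looked like (these exact keywords are illustrative, drawn from the kinds of school-related worries our survey surfaced):

# Hypothetical keyword tags for school-related core beliefs.
SCHOOL_BELIEF_TAGS = [
    "not smart enough", "cant keep up", "behind everyone",  # no-apostrophe
    "dont belong here", "going to fail", "unprepared",      # variants match casual typing
]

def matches_school_belief(reply: str) -> bool:
    reply = reply.lower()
    return any(tag in reply for tag in SCHOOL_BELIEF_TAGS)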
Inclusivity of as many responses as possible is key!
Chatbot Iteration 4: Lessening Response to Specific Scenarios
In the initial flowchart, we wanted to address the responses of the user whenever a scenario was identifiable within our specific tags. However, after much thought, we realized that this wasn’t reflective of our need. Our need is to help the user become aware of their negative thought patterns and to challenge them. By providing comfort at every step of the way, we found that some of our dialogue seemed to enter the space of a “try-hard” bot trying to empathize like an overly empathetic human. In practicing CBT, it is important to maintain a demeanor that is direct but also respectful of the user. This does not mean padding every user response with a comment on our end; in a framework like this, a direct reply is enough to provide self-reflection. Since the utmost goal of using the user’s scenario response was to focus their thought process onto a core belief that they hold, we chose to go straight to core-belief dialogue within the core-belief state when a scenario was shared by the user.
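As a sketch, the transition simply skips any comforting interlude (the function and state names are illustrative; the prompt is quoted from our final dialogue):

# Illustrative transition: a shared scenario moves straight to core beliefs.
def next_state_after_scenario(scenario_reply: str):
    prompt = ("Ok. It is good to understand a specific time that you felt this "
              "way. Let's dive into what makes you feel this way. What core "
              "beliefs are instilled in these times in your life?")
    return "core_belief", prompt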
Chatbot Iteration 5: Modifying Finished States
The last iteration! Woo!
After asking users how they would want to end a conversation within this particular relationship of O-team leader and student, it was suggested that redirecting users to actual campus resources would be useful. Would this pertain to our need? Not really. But would it help maintain trust in the relationship between O-team leader and student? Yes. Not all users are expected to breeze through a conversation with our chatbot. Some users may not be satisfied at all with the reflections that they have on their feelings. So, in order to maintain a trustworthy persona for the chatbot, we wanted to provide help where help wasn’t found. If the user was not satisfied, we wanted to provide direction to campus professionals and resources that could tend to the user’s specific desires of figuring out their emotions.
Especially when considering the background of our user, we thought that directing them to campus resources would be very beneficial. Being first-generation students, our users could not be expected to have had pre-college exposure to the college experience or support from families when navigating collegiate resources. So, we provided the following dialogue:
In addition, we made sure our finishing dialogue was respectful in recognizing the vulnerability of the user. Being true to your feelings and sharing them (let alone with a bot!) takes courage!
So, we ended our dialogue with this statement,
("Reflecting on our feelings can help us understand unhealthy thought patterns and change them for the better. If you want to talk some more, O-team leaders, the Emmons center, and RAs are confidential sources of help. Your feelings are valid. For now, I suggest "
 + random.choice(tuple(self.ASSIGNMENTS)) + " I appreciate your vulnerability.")
The random.choice(tuple(self.ASSIGNMENTS)) is another touch of humanistic personalization in our dialogue. In order to lessen the robotic feel of our chatbot, making our responses different every time (in addition to differing across pathways) makes the bot more unpredictable, like conversations with humans are!
self.ASSIGNMENTS pertains to a list of assignments commonly given at the end of a CBT session. For example, journaling thoughts or talking with friends and family appropriately directs the user to activities that distract them from negative thought patterns and/or push for further reflection to progress the development of positive thought patterns.
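Assuming ASSIGNMENTS holds short suggestion strings, it could look like the sketch below. “going for a walk!” appears in our final dialogue; the others come from the examples just mentioned:

import random

# In the bot this set lives on self; shown standalone here for clarity.
ASSIGNMENTS = {
    "going for a walk!",
    "journaling your thoughts!",
    "talking with friends and family!",
}

closing = ("Your feelings are valid. For now, I suggest "
           + random.choice(tuple(ASSIGNMENTS))
           + " I appreciate your vulnerability.")
print("CBTBot:", closing)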
Final Chatbot Dialogue
If you’ve reached this part of the blog post, I appreciate the time you took to read my thoughts. It means a lot!
At this point, we should ask ourselves: was the need met? In case you’ve forgotten the need, it is for the user to be exposed to their negative thought patterns and to challenge them. Well, yes. I do think that our need was met. The dialogue of our bot completes the steps necessary to reach this need. This includes dissecting an emotion into a core belief, bringing awareness to the negativity of this belief by connecting it to a cognitive distortion, and then challenging the user to challenge this belief. However, whether the need is met can be contingent on how willing the user is to reply to the bot comprehensively.
I can say that some improvements could be made in extending the dialogue to dive deeper into core beliefs and other times the user felt this way. In a conversation as sensitive as the one this bot provokes, the stages of the conversation could be more engaging, providing more connections to how a negative belief affects the user’s daily life, a feature that would work well with semantic abilities.
After implementing the iterations above, we have reached the final dialogue:
Hello! I’m an O-team leader and I will be helping you through this rough patch. How are you feeling?
> anxious
CBTBot: I’m sorry that you feel this way. Can you give us an example of when you felt anxious?
> roommates
CBTBot: Ok. It is good to understand a specific time that you felt this way. Let’s dive into what makes you feel this way. What core beliefs are instilled in these times in your life?
> not enough
CBTBot: Feeling like you are not enough seems like a negative core belief. Let’s dive deeper. A key part of Cognitive Behavioral Therapy is shaping your thoughts to healthily affect your actions and behavior. Do you often feel like you negatively assume others’ intentions?
> yes
CBTBot: Think of a time where you felt the opposite. You felt loved, confident, and you felt others saw you in this light. Describe any time where you felt smart, capable, and loved. Or describe a time where you felt loved and respected by others. Start with “I am”.
> i am loved
CBTBot: It is helpful to take in moments in our life in a positive light. And to remember to practice self-compassion. Do you feel better?
> yes
CBTBot: That’s great to hear! Would you like to reach out to an Oxy resource about religious, cultural, personal, residential, academic, or financial matters? If so, type “yes” to continue.
> yes
CBTBot: Thank you for sharing your feelings. This is the first step towards molding more positive thought patterns for your day-to-day life. Resources on campus will monumentally help you work through these feelings. Would you like to talk to someone about religious, cultural, personal, residential, academic, or financial matters?
> cultural
CBTBot: If you head over to SLICE, Student Leadership, Involvement, & Community Engagement, they can redirect you to clubs on campus that foster communities within cultural groups. You can also reach out to Oxy students about an interest you don’t find represented. The Intercultural Community Center is also a great hangout place at 1501 Campus Place that facilitates community-oriented discussions among diverse groups on campus. Did you find what you’re looking for?
> yes
CBTBot: Reflecting on our feelings can help us understand unhealthy thought patterns and change them for the better. If you want to talk some more, O-team leaders, the Emmons center, and RAs are confidential sources of help. Your feelings are valid. For now, I suggest going for a walk! I appreciate your vulnerability.
In order to further understand this final flow of conversation, you can view the states that are recognized in our code:
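Those states roughly amount to the following (the identifiers here are descriptive labels, not necessarily the exact names in our code):

# Descriptive labels for the recognized conversation states (illustrative).
RECOGNIZED_STATES = [
    "greeting",           # intro + "How are you feeling?"
    "emotion",            # known vs. unknown emotion (with priming)
    "scenario",           # a specific time the user felt this way
    "core_belief",        # known vs. unknown core belief (with priming)
    "challenge",          # the "I am" reframing of the belief
    "resource_redirect",  # Oxy campus resources, if desired
    "finish",             # homework suggestion + closing
]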
Before implementing this on Slack, I would want the bot to open with an intro statement so the user knows it is there. However, due to the strictly reactive nature of Slackbots, this cannot happen.
Summary
And just like that, we created our very first chatbot!
In summary, we created a chatbot that fostered a relationship with the user through the empathetic means of reflection and validation: key parts of cognitive behavioral therapy.
On the surface, the dialogue of the bot may look extensive, but in order to create this kind of relationship, one that also teaches the user about the positive ways of thinking that exist, the dialogue needed to be responsive and informative.
Through our chatbot, you can also trace the workings of cognitive behavioral therapy, which begs the collaboration of user and bot in sharing the depths of negative thoughts. This thought process goes from
negative emotion → specific scenario of feeling this way → identifying core beliefs used in the scenario (which show a general, negative thought pattern) → reflecting on the associated cognitive distortion → challenging the thought pattern → homework that develops less room for negativity in one’s life
Takeaways
To conclude, this process of creating a chatbot was one of my favorite projects of this Human-Computer Interaction class. As a person who has been through therapy herself, diving deep into the intricacies of the empathy within the relationship between patient and therapist was interesting. It was like being a fly on the wall of my own sessions with my therapist. This project challenged how I thought of chatbots and how I thought of empathy.
If I were to further develop this chatbot, I would want to make it more intelligent. This chatbot doesn’t utilize advanced NLP by any means, so it doesn’t operate at the level of human conversations, which involve semantic AND syntactic evaluation. This chatbot only uses syntactic ways of talking.
While we were teaching our chatbot empathy, this project urged us to be empathetic as well! We needed to sit in the seats of O-team leader and first-gen first-year student to get to the nitty gritty of what emotions and scenarios could be discussed.
This project taught me the importance of open empathy. Don’t assume why a person may feel a certain way. There are infinite possibilities of what ~tags~ they may end up using in their response.
And with that, I leave you with a quote: