Decoding Offensiveness: Exploring Ethical Ideology, Social Competence, and Humanlikeness in Human-AI Chatbot Interaction

By: Phattanaviroj Thanaporn, Business Administration Department, Asia University, Taiwan, Tp.fern@gmail.com

Abstract

The article addresses a gap in empirical studies of language use in AI chatbot interactions, contributing valuable insights to the current understanding of user behavior. In particular, users with high idealism tend to avoid offensive language and prefer chatbots’ active intervention, while those with high relativism favor reactive responses. Additionally, users who perceive greater humanlikeness in chatbots are more inclined to use aggressive words. The findings draw on a general survey of 645 participants.

Introduction

Modern artificial intelligence (AI) transcends traditional machine functions, presenting humanlike features and the capability to interact with humans. AI is applied across diverse domains such as self-driving cars, social media, games, and the military, aiming to support or replace tasks performed by humans. Interest in AI chatbots is fueled by well-known applications such as Samsung’s Bixby embedded in mobile phones. AI chatbots can learn and evolve from interactions with human users, making decisions with a degree of autonomy. This adaptability raises concerns about language use: despite the growing use of chatbots, there is a lack of knowledge about the use of profanity or aggressive words in human-AI chatbot interactions [1].

Perspectives on AI Chatbot Interactions

The article explores the ethical dimension of chatbot users’ behavior, considering the impact of offensive language on others and examining whether users apply ethical standards when interacting with their chatbots [2]. From a communication perspective, the article considers one’s communication skills in relation to language practices. Effective communication skills may influence how negative feelings are expressed, shaping the use of offensive language. Social competence, as a facet of communication skills, is explored as a factor affecting the relationship between users’ use of aggressive language and their evaluation of chatbots’ responses. Users’ interactions with chatbots possessing humanlike attributes may lead them to treat the chatbots like real persons [3].

Social competence

Social competence is defined as the ability to produce desired outcomes and demonstrate adaptability in various social situations. It is evaluated through relationships and plays a crucial role in effective social interaction, involving the ability to communicate well, manage interpersonal relationships, and positively influence developmental outcomes. Social competence can be viewed as a communicative skill that enables individuals to present themselves effectively and deliver persuasive messages to others. It shapes interpersonal relations and online communication, determining how individuals engage in conversation. In the context of social media, research has shown that socially competent individuals are more likely to use platforms like Facebook, contributing to better adjustment to college life. The expectation is that socially competent individuals are less likely to use violent language, such as profanity or aggressive words, in human-chatbot interactions; instead, they prefer active intervention to maximize desirable outcomes [4].

Chatbot use habits

Human-chatbot conversations differ from human-human interactions, as evidenced by Hill et al., who found that participants exhibited a smaller vocabulary and greater profanity when interacting with chatbots. The study attributes these differences to the anonymous nature of human-chatbot conversations, which allows more freedom of expression. While the study sheds light on content disparities, it lacks insight into how participants’ chatbot use frequency influences self-disclosure and language use. The article raises the question of whether increased use of AI chatbots corresponds to greater profanity. Chatbot use frequency emerges as a crucial factor influencing how individuals communicate and their attitudes toward chatbots [5]. The article suggests that understanding the impact of chatbot use frequency requires considering its nuanced relationship with language use, self-disclosure, and attitudes toward chatbots.

Types of offensive language in chatbot interactions

The article assesses the extent of people’s engagement in the use of profanity and aggressive words during chatbot interactions. Through case reviews and informal discussions with chatbot users, offensive language was classified into two main types [6]:

  • Brief profane words not necessarily aimed at disparaging specific targets.
  • Aggressive words aimed at disparaging specific target groups.

The survey questionnaire, administered to respondents who reported using a chatbot, asked participants to indicate whether they had used each of five categories of aggressive language, derived from these two types, during their AI chatbot interactions.

Fig 1: Five categories of aggressive language used during chatbot interactions.
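
To make the two-type distinction concrete, the sketch below tags a single message as general profanity or group-targeted aggression. This is a minimal illustration under stated assumptions: the word lists, placeholder slur tokens, and labels are hypothetical and are not the lexicon or coding scheme used in the survey.

```python
# Minimal sketch: tagging a message with the two offense types described above.
# GENERAL_PROFANITY and GROUP_SLURS are hypothetical placeholders, not the
# survey's actual word lists.
GENERAL_PROFANITY = {"damn", "hell"}                        # brief profane words, no specific target
GROUP_SLURS = {"<slur-for-group-a>", "<slur-for-group-b>"}  # words aimed at specific target groups

def classify_offense(message: str) -> str:
    """Return 'targeted', 'general', or 'none' for one chat message."""
    tokens = {t.strip(".,!?").lower() for t in message.split()}
    if tokens & GROUP_SLURS:
        return "targeted"  # type 2: aggressive words aimed at specific target groups
    if tokens & GENERAL_PROFANITY:
        return "general"   # type 1: brief profane words with no particular target
    return "none"

print(classify_offense("Oh damn, it crashed again!"))  # -> general
```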

Items evaluating users’ ratings of chatbots’ responses were developed through discussions of potential chatbot responses, drawing on insights from earlier studies and informal interviews with AI chatbot users. Three dimensions of chatbots’ possible replies were identified:

1. Reactive Responses

Reactive responses involve the chatbot uttering similar or stronger profanity or aggressive words to match the user’s expressions. Items assessing this dimension include statements like “Chatbots should return similar words” and “AI chatbots should reply with even stronger words.”

2. Active Intervention

Active intervention refers to the chatbot regulating the user’s language by suggesting milder expressions or providing warning messages against offensive language. Items assessing this dimension include statements like “Chatbots should advise users to use mild language” and “Chatbots should offer warning messages.”

3. Indirect Intervention

Indirect intervention involves the chatbot discouraging further use of profanity or aggressive words by changing topics or not responding. Items assessing this dimension include statements like “Chatbots should change topics” and “Chatbots should not reply to such words.”

Fig 2: Dimensions of chatbot responses.

Each dimension comprises subcomponents, allowing for a nuanced understanding of users’ preferences in chatbot responses to offensive language.
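
As a rough illustration of how the three dimensions could translate into chatbot behavior, the sketch below dispatches on a configured strategy once an offensive message is detected. The strategy names mirror the dimensions above, but the detector and the reply texts are illustrative assumptions rather than any system described in the article.

```python
# Minimal sketch of the three response dimensions as a dispatch policy.
def detect_offense(message: str) -> bool:
    # Placeholder detector; a real system would use a trained classifier.
    return any(word in message.lower() for word in ("damn", "hell"))

def respond(message: str, strategy: str) -> str:
    if not detect_offense(message):
        return "Normal reply."
    if strategy == "reactive":
        return "Right back at you!"           # mirror similar or stronger words
    if strategy == "active":
        return "Please use milder language."  # advise mild wording / warn the user
    if strategy == "indirect":
        return "Anyway, how was your day?"    # change the topic (or stay silent)
    raise ValueError(f"unknown strategy: {strategy}")

print(respond("What the hell is this?", "active"))  # -> Please use milder language.
```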

Findings on chatbot users

Users’ ethical orientation, particularly idealism, played an important role in explaining the use of profanity and aggressive words targeting specific groups during chatbot interactions. Even though users interacted with chatbots in an isolated setting, those with high idealism still demonstrated more orderly language use, challenging the expectation of increased profanity. Socially competent individuals were found to be more capable in human-chatbot communication, practicing socially desirable language use and interaction. Users with social competence showed a positive and affirmative attitude toward language use, making human-chatbot communication more helpful and supportive [7].

Limitations

The article acknowledges the potential for social desirability bias due to the somewhat socially undesirable nature of the behavior under investigation; respondents may not have been entirely honest in their survey responses. A more age-specific sampling approach could strengthen the results, as older individuals are less likely to use profanity and aggressive words [8-10].

Conclusion

The article explored the impact of ethical ideology, social competence, and perceived humanlikeness on the use of profanity and aggressive words during human-chatbot communication. By categorizing offensive language and identifying the influencing factors, it contributes to bridging the existing gap in empirical studies of language use in the context of AI chatbots, offering insights into the dynamics of human-chatbot interaction in our society.

References

  1. A. S. Ahuja, ‘The impact of artificial intelligence in medicine on the future role of the physician’, PeerJ, vol. 7, p. e7702, Oct. 2019, doi: 10.7717/peerj.7702.
  2. ‘(PDF) Exploring the Ethical Dimensions of Using ChatGPT in Language Learning and Beyond’. Accessed: Feb. 29, 2024. [Online]. Available: https://www.researchgate.net/publication/373130952_Exploring_the_Ethical_Dimensions_of_Using_ChatGPT_in_Language_Learning_and_Beyond
  3. N. Park, Y. Kim, E. Jang, J. Lee, and E. Choi, ‘Effects of Social Competence on Social Use of AI Chatbots and Self-Disclosure : Mediation Effects of Loneliness and Perceived Humanlikeness of AI Chatbots’, Korean Journal of Journalism & Communication Studies, vol. 65, pp. 367–401, Oct. 2021, doi: 10.20879/kjjcs.2021.65.5.010.
  4. P. Lopes, M. Brackett, J. Nezlek, A. Schuetz, I. Sellin, and P. Salovey, ‘Emotional Intelligence and Social Interaction’, Personality & social psychology bulletin, vol. 30, pp. 1018–34, Sep. 2004, doi: 10.1177/0146167204264762.
  5. J. Hill, W. R. Ford, and I. G. Farreras, ‘Real conversations with artificial intelligence: A comparison between human-human online conversations and human-chatbot conversations’, Computers in Human Behavior, vol. 49, pp. 245–250, 2015.
  6. P. C. Sharma et al., ‘Secure authentication and privacy-preserving blockchain for industrial internet of things’, Computers and Electrical Engineering, vol. 108, p. 108703, 2023.
  7. H. Tan et al., ‘Improving adversarial transferability by temporal and spatial momentum in urban speaker recognition systems’, Computers and Electrical Engineering, vol. 104, p. 108446, 2022.
  8. ‘Information | Free Full-Text | Could a Conversational AI Identify Offensive Language?’ Accessed: Feb. 29, 2024. [Online]. Available: https://www.mdpi.com/2078-2489/12/10/418
  9. ‘Sustainability | Free Full-Text | The Effect of Social Presence and Chatbot Errors on Trust’. Accessed: Feb. 29, 2024. [Online]. Available: https://www.mdpi.com/2071-1050/12/1/256
  10. ‘Controlling social desirability bias | Request PDF’. Accessed: Feb. 29, 2024. [Online]. Available: https://www.researchgate.net/publication/328285155_Controlling_social_desirability_bias

Cite As

Thanaporn P (2024) Decoding Offensiveness: Exploring Ethical Ideology, Social Competence, and Humanlikeness in Human-AI Chatbot Interaction, Insights2Techinfo, pp. 1
