Trust in the Digital Age: Examining User Attitudes Towards Artificial Profiles on Online Social Networks

By: Arti Sachan, Kwok Tai Chui 

In the digital age, online social networks have become an integral part of our daily lives, shaping the way we connect, communicate, and share information. With the advent of advanced technologies, including artificial intelligence (AI), the creation and presence of artificially generated profiles on online social networks have become more prevalent. These profiles, created and managed by AI algorithms, raise important questions about user trust. In this article, we will explore user attitudes towards artificial profiles on online social networks, examining the factors that influence trust in these profiles and their implications for the digital landscape.

Understanding Artificial Profiles

Artificial profiles refer to user profiles created and managed by AI algorithms instead of real individuals. These profiles are designed to mimic human behavior, interactions, and preferences to varying degrees of authenticity. They can be used for various purposes, such as recommendation systems, content curation, and targeted advertising. While artificial profiles offer potential benefits, they also raise concerns regarding authenticity, privacy, and user trust.
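
As a purely illustrative example, the sketch below shows how a platform might represent such a profile internally, with an explicit provenance flag that can be surfaced to users. The ArtificialProfile class and all of its fields are assumptions made for illustration, not any platform's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ArtificialProfile:
    """Hypothetical record for an AI-managed profile on a social platform."""
    profile_id: str
    display_name: str
    is_ai_generated: bool            # explicit provenance flag that can be shown to users
    purpose: str                     # e.g. "recommendation", "content curation", "advertising"
    interests: List[str] = field(default_factory=list)

    def disclosure_label(self) -> str:
        """Text a platform might display next to the profile name."""
        return "Automated account (AI-managed)" if self.is_ai_generated else "Personal account"

# Example: a content-curation bot that openly identifies itself
bot = ArtificialProfile("ap-001", "TechDigestBot", True, "content curation", ["ai", "security"])
print(bot.disclosure_label())        # -> Automated account (AI-managed)
```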

Factors Influencing User Trust

  1. Transparency and Disclosure: Transparency plays a vital role in shaping user trust in artificial profiles. Users value clear and explicit disclosure when interacting with AI-generated profiles. Knowing upfront that they are engaging with an artificial entity helps establish trust and manage user expectations.
  2. Quality and Accuracy: The quality and accuracy of AI-generated profiles significantly influence user trust. Users expect profiles to provide relevant and reliable information. If artificial profiles consistently deliver accurate recommendations, personalized content, or valuable interactions, users are more likely to trust and engage with them.
  3. User Control and Customization: Granting users control and customization options fosters trust in artificial profiles. Allowing users to adjust privacy settings, personalize content preferences, and provide feedback can enhance the user experience and build trust by giving individuals a sense of agency in their interactions with AI-generated profiles.
  4. Ethical Use of Data: Users are increasingly concerned about the ethical use of their personal data. Artificial profiles that prioritize privacy, data protection, and responsible data handling practices are more likely to gain user trust. Ensuring transparency in data collection, processing, and storage can help alleviate user concerns (a brief sketch of such controls follows this list).
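
To make factors 3 and 4 more concrete, the minimal sketch below shows one way opt-in user controls and consent-gated, minimised data collection could look in code. The PrivacySettings class, the record_interaction helper, and all field names are hypothetical and shown only for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PrivacySettings:
    """Hypothetical per-user controls for interactions with AI-managed profiles."""
    allow_personalization: bool = False      # opt-in rather than opt-out
    share_interaction_data: bool = False
    show_ai_disclosure_badges: bool = True

def record_interaction(event: dict, settings: PrivacySettings) -> Optional[dict]:
    """Store an interaction only when the user has consented, and keep only
    the fields needed for the stated purpose (data minimisation)."""
    if not settings.share_interaction_data:
        return None                          # nothing is retained without consent
    return {k: event[k] for k in ("profile_id", "action", "timestamp") if k in event}

settings = PrivacySettings(allow_personalization=True)
stored = record_interaction(
    {"profile_id": "ap-001", "action": "follow",
     "timestamp": "2023-06-01T10:00:00Z", "ip": "203.0.113.7"},
    settings,
)
print(stored)                                # -> None: interaction sharing was never enabled
```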

Implications for the Digital Landscape

  1. Personalization and User Experience: Artificial profiles have the potential to enhance personalization and improve the user experience on online social networks. By leveraging AI algorithms, these profiles can provide tailored recommendations, relevant content, and more meaningful interactions. However, maintaining user trust in the midst of personalization efforts is essential to avoid the perception of manipulation or intrusiveness.
  2. Privacy and Data Protection: As artificial profiles interact with users and collect data, it is crucial to prioritize privacy and data protection. Respecting user privacy preferences, implementing robust security measures, and adhering to data protection regulations are vital to establish and maintain user trust in artificial profiles.
  3. Responsible AI Development: Developers and platform providers have a responsibility to develop and deploy AI-generated profiles ethically. Transparent practices, accountability for algorithmic decisions, and ongoing monitoring of the impact of artificial profiles on user trust are necessary to ensure the responsible use of AI in the digital landscape (one way to support such accountability is sketched after this list).
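
As a small, hypothetical illustration of the accountability point in item 3, the sketch below shows one way a platform might keep an auditable record of the decisions an AI-managed profile makes. The DecisionLog class and its fields are assumptions, not an established API.

```python
import json
import time
from typing import Any, Dict, List

class DecisionLog:
    """Hypothetical append-only record of decisions made by an AI-managed profile,
    kept so that recommendations can be reviewed and explained after the fact."""

    def __init__(self) -> None:
        self._entries: List[Dict[str, Any]] = []

    def record(self, profile_id: str, decision: str, rationale: str) -> None:
        """Append one decision together with a human-readable rationale."""
        self._entries.append({
            "timestamp": time.time(),
            "profile_id": profile_id,
            "decision": decision,          # e.g. "recommended article 42"
            "rationale": rationale,        # reason that can be surfaced to auditors or users
        })

    def export(self) -> str:
        """Serialise the log for external review or audit."""
        return json.dumps(self._entries, indent=2)

log = DecisionLog()
log.record("ap-001", "recommended article 42", "matches the user's declared interest in 'ai'")
print(log.export())
```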

Conclusion

Examining user attitudes towards artificial profiles on online social networks reveals the complex interplay between trust, transparency, quality, and user control. Establishing and maintaining user trust in artificial profiles requires transparency, accuracy, privacy protection, and ethical data practices. As the digital landscape continues to evolve, nurturing user trust in artificial profiles is essential to harness the potential benefits of AI while addressing user concerns and upholding ethical standards in the digital age.

Cite As

Sachan A., Chui K.T. (2023) Trust in the Digital Age: Examining User Attitudes Towards Artificial Profiles on Online Social Networks, Insights2Techinfo, pp.1
