Resourceful Regulation: Development of Accountable AI in Corporate Boardrooms

By: Ankita Walawalkar, Business Administration Department, Asia University Taiwan. ankitamw@ieee.com

Abstract

AI in the boardroom refers to the use of computers to assist, support, collaborate with, or even replicate directors’ behaviour. There is a rising demand for boards of directors to exercise oversight accountability at the intersection of AI, regulatory implications, and corporate governance. The article highlights the legal risks associated with AI in corporate decision-making and underlines the significance of accountability in addressing these hazards. Currently, there is no consensus on the most suitable regulatory framework for attaining accountable AI in corporate environments. The article suggests that multi-disciplinary teams, each with designated roles, actively participate in the regulation of AI, concentrating on safe and effective deployment.

Introduction

Artificial intelligence (AI), a term coined by AI pioneer John McCarthy in 1956, is defined as the science and engineering of creating intelligent machines, particularly through the development of intelligent computer programs. In the corporate arena, AI refers to the use of computers to assist, support, collaborate with, or even mimic directors’ behaviours. AI adoption in the boardroom is a multifaceted matter, with both benefits and challenges: while AI can reduce costs, improve efficiency, and drive innovation, it can also undermine human rights and intensify calls for board oversight of sustainability and accountability [1] [2].

Accountable AI

The European Commission’s High-Level Expert Group on Artificial Intelligence describes AI systems as human-designed systems that act in the physical or digital dimension by perceiving their environment, interpreting data, reasoning, and deciding the best action to achieve a complex goal, using symbolic rules or numeric models [3]. Accountability spans numerous concepts in political science, public policy, corporate governance, law, and financial accounting. In corporate settings that use AI, process accountability is crucial for ethical AI application and usage; it comprises data collection, algorithm design, and strategic choices. Adequate redress is also vital, as accountability allocates risks and responsibilities among corporate actors [4].

Fig 1: Accountability of AI
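To make process accountability concrete, the following minimal sketch (in Python, not drawn from the cited sources) shows how each AI-assisted board decision could be logged together with the model version, a digest of the input data, the accountable reviewer, and the recorded rationale, so that redress remains possible after the fact. All names here (AuditRecord, log_decision, the field names) are hypothetical illustrations rather than an established standard.

```python
import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One entry in a hypothetical board-level AI accountability log."""
    timestamp: str       # when the recommendation was produced
    model_version: str   # which model/algorithm version was used
    input_digest: str    # hash of the input data, not the data itself
    recommendation: str  # what the AI system suggested
    reviewer: str        # director or officer accountable for the decision
    accepted: bool       # whether the board followed the recommendation
    rationale: str       # human-recorded reason, enabling later redress

def log_decision(records: list, model_version: str, input_data: dict,
                 recommendation: str, reviewer: str, accepted: bool,
                 rationale: str) -> AuditRecord:
    """Append an audit record for one AI-assisted decision."""
    record = AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        input_digest=hashlib.sha256(
            json.dumps(input_data, sort_keys=True).encode()).hexdigest(),
        recommendation=recommendation,
        reviewer=reviewer,
        accepted=accepted,
        rationale=rationale,
    )
    records.append(record)
    return record

if __name__ == "__main__":
    audit_log = []
    log_decision(audit_log, "credit-risk-v2.1",
                 {"applicant": "A-1042", "score_inputs": [0.7, 0.2]},
                 "decline", reviewer="Chief Risk Officer",
                 accepted=False, rationale="Overridden pending bias review")
    print(json.dumps([asdict(r) for r in audit_log], indent=2))
```

Keeping such a log per decision, rather than per system, is one way to tie accountability to the concrete choices directors actually make.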

Smart Regulation in Boardroom AI

The rapid growth of AI contrasts with the slower development of regulatory policies and governance frameworks, underscoring the need for alignment to harness AI effectively. The article underlines the importance of smart regulation in enhancing the functionality of AI, acknowledging its central role in bridging the gap between technological progress and regulatory frameworks [1].

a. Implementation of Smart Regulation in the Boardroom

Innovative regulatory approaches are needed to keep pace with the dynamic landscape of AI. Industry and stakeholder participation in setting standards and contributing to the regulatory framework is vital to ensuring accountable AI applications. Collaboration among governments, industries, and experts is highlighted as essential for developing detailed guidance on AI in the boardroom. Smart regulation, encompassing a range of policy instruments, recognizes the diverse threats and impacts of AI, fostering a participatory approach that involves policymakers, industry, civil society, academia, and stakeholders. The flexible and inclusive nature of smart regulation aims to address social challenges, promote transparency, and create instruments for algorithmic accountability that go beyond traditional regulatory models [5].
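As one illustration of what an “instrument for algorithmic accountability” might look like in practice, the sketch below implements a simple policy-as-code deployment gate: a proposed boardroom AI tool is approved only if the required governance artefacts are documented. The artefact names and the deployment_gate function are assumptions chosen for illustration, not requirements taken from any specific regulation.

```python
# Hypothetical "policy as code" gate: before an AI tool is used in board
# decision-making, verify that basic governance artefacts are present.
REQUIRED_ARTEFACTS = {
    "impact_assessment",    # documented risks and affected stakeholders
    "data_provenance",      # where the training data came from
    "accountable_officer",  # named human owner (e.g. a CRO)
    "appeal_channel",       # how affected parties can seek redress
    "review_date",          # scheduled re-assessment of the system
}

def deployment_gate(submission: dict) -> tuple[bool, list[str]]:
    """Return (approved, missing_artefacts) for a proposed AI deployment."""
    missing = sorted(REQUIRED_ARTEFACTS - set(submission))
    return (len(missing) == 0, missing)

if __name__ == "__main__":
    proposal = {
        "impact_assessment": "docs/ia-2024-03.pdf",
        "data_provenance": "internal CRM, 2019-2023",
        "accountable_officer": "Chief Risk Officer",
    }
    approved, missing = deployment_gate(proposal)
    print("Approved:", approved)          # False
    print("Missing artefacts:", missing)  # ['appeal_channel', 'review_date']
```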

b. Stakeholder Participation

The article highlights the vital role of diverse stakeholders, including shareholders, in shaping an effective regulatory environment that accommodates varied capacities and competences. Acknowledging the potential of AI to improve decision-making, the involvement of stakeholders, particularly those with technical expertise, is essential for contributing to effective regulatory mechanisms [6]. The envisaged smart regulation, developed through stakeholder participation, aims to establish a community-owned and dynamic framework, fostering long-term success in governance policies. It also views stakeholder participation as a means to address social challenges, promoting inclusiveness and accountability. The introduction of stakeholder directors and independent sub-committees is suggested to improve board effectiveness and incorporate a wider range of perspectives [7].

c. Whistleblowers as AI Substitute Regulators

The article emphasizes the importance of comprehensive governance in AI development. Previous literature has suggested empowering whistleblowers, including independent directors and NGOs, as substitute regulators to prevent AI malpractice and corporate scandals. Mandatory insurance against algorithmic failures has also been proposed. The participation of stakeholders, including whistleblowers, is crucial for ethical AI in the boardroom. Gatekeepers, such as regulatory agencies, AI committees, and professional associations, play a vital role in enforcing compliance. The engagement of the AI community and expert groups is highlighted as important for shaping trustworthy AI regulation [1].

d. Corporate Governance, Board Diversity and AI Regulation

The article examines the evolving influence of AI on corporate governance and board diversity. It highlights the boards’ dual roles of leadership and control, emphasizing the need to mitigate ethical risks associated with AI, such as transparency, accountability, and fairness. The appointment of Chief Risk Officers (CROs) is suggested to manage risks related to AI deployment, safeguarding transparency and accountability [8]. Moreover, the presence of a Chief Sustainability Officer (CSO) and a sustainability committee is suggested to support CSR practices and accountable decision-making. Legal perspectives on directors’ duties are explored, suggesting existing duties as a model for implementing AI ethics in the absence of robust AI governance principles. The role of AI in promoting accountability, transparency, and strategic agility in the boardroom is underlined, emphasizing the importance of aligning AI capabilities with directors’ competencies for informed decision-making [9].
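As a hedged illustration of how a CRO might operationalize such oversight, the sketch below maintains a simple AI risk register, scores each entry by likelihood times impact, and escalates high-scoring items to the board agenda. The AIRisk structure, the scoring scale, and the escalation threshold are hypothetical choices, not prescriptions drawn from the cited works.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    system: str       # the AI system concerned
    description: str  # e.g. opaque recommendations, biased training data
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)
    owner: str        # accountable officer

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def board_attention(register: list, threshold: int = 12) -> list:
    """Risks scoring at or above the threshold are escalated to the board agenda."""
    return sorted((r for r in register if r.score >= threshold),
                  key=lambda r: r.score, reverse=True)

if __name__ == "__main__":
    register = [
        AIRisk("hiring screener", "possible gender bias", 4, 4, "CRO"),
        AIRisk("meeting summariser", "minor transcription errors", 3, 1, "CIO"),
    ]
    for risk in board_attention(register):
        print(f"{risk.system}: score {risk.score} -> escalate ({risk.description})")
```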

Fig 2: Smart Regulation in Boardroom AI

Need for Accountable AI

Accountability in the use of Artificial Intelligence (AI) is crucial for reasons spanning ethical, legal, and social dimensions. The ethical challenges posed by AI necessitate frameworks that regulate its applications for the benefit of society and the protection of human rights, privacy, and autonomy [10]. Ensuring accountability in AI systems is essential for building trust, although the lack of a universally agreed definition of accountability makes this a challenging task [11]. In the legal realm, the use of AI in public administration, such as in asylum adjudications, underscores the importance of ethical frameworks and accountability to maintain fairness and transparency [12]. Moreover, addressing the reliability and responsible usage of AI is critical to ensuring its ethical implementation [13].

From a social perspective, the risks of discriminatory outcomes and the perpetuation of existing socioeconomic disparities due to AI underscore the importance of accountability and transparency in AI systems [14]. Instances such as the legal debates surrounding COMPAS, a system that assesses offenders’ criminogenic needs, highlight the necessity for transparency and accountability in AI systems [15]. Additionally, the rapid proliferation of AI-assisted technologies raises ethical concerns, particularly regarding privacy, necessitating accountability mechanisms to address these issues [16].

In the context of AI ethics, the EU’s Ethics Guidelines for Trustworthy AI lists explicability as one of the key ethical principles, emphasizing the importance of accountability in AI systems. Integrating fairness into AI design processes is crucial for creating fair and ethical AI systems, further underlining the significance of accountability in AI applications. Furthermore, explainable AI techniques play a vital role in ensuring accountability and transparency in AI systems, especially in high-stakes decision scenarios such as healthcare and legal applications.
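To ground the idea of integrating fairness into AI design, the short sketch below computes one common fairness check, the demographic parity difference, i.e. the gap in favourable-decision rates between groups. The function name and the toy data are illustrative assumptions; real audits would use richer metrics and production data.

```python
from collections import defaultdict

def demographic_parity_difference(outcomes: list) -> float:
    """outcomes: (group, decision) pairs, where decision 1 = favourable.
    Returns the max difference in favourable-decision rates across groups."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        favourable[group] += decision
    rates = {g: favourable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    gap = demographic_parity_difference(decisions)
    print(f"Demographic parity difference: {gap:.2f}")  # 0.33
```

A board-level accountability report could track such a gap over time for each deployed system, alongside explanations of individual high-stakes decisions.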

Conclusion

The article underlines the transformative influence of AI on board decision-making, highlighting its potential benefits for efficiency while acknowledging concerns related to privacy, bias, and socioeconomic inequality. It advocates a balanced regulatory framework, including smart regulation, to address accountability challenges in the boardroom without stifling AI innovation. The necessity of keeping humans in the loop for effective risk governance is highlighted, emphasizing the need for diverse stakeholder involvement in shaping regulatory responsiveness. The article also suggests recognizing AI as a legal person before the law to ensure accountability and liability, calling for further research in this area. Overall, it proposes a theoretical model for AI regulation through smart regulation, aiming to inform legislative deliberations and promote trustworthy AI.

References

  1. J. Zhao, “Promoting more accountable AI in the boardroom through smart regulation,” Comput. Law Secur. Rev., vol. 52, p. 105939, Apr. 2024, doi: 10.1016/j.clsr.2024.105939.
  2. “Engineering Applications of Artificial Intelligence | Journal | ScienceDirect.com by Elsevier.” Accessed: Mar. 02, 2024. [Online]. Available: https://www.sciencedirect.com/journal/engineering-applications-of-artificial-intelligence
  3. “Futurium | European AI Alliance – AI HLEG – Definition of AI.” Accessed: Mar. 02, 2024. [Online]. Available: https://futurium.ec.europa.eu/en/european-ai-alliance/open-library/ai-hleg-definition-ai
  4. Pérez-Durán, I. (2023). Twenty-five years of accountability research in public administration: Authorship, themes, methods, and future trends. International Review of Administrative Sciences, 00208523231211751.
  5. O. J. Erdélyi and J. Goldsmith, “Regulating artificial intelligence: Proposal for a global solution,” Gov. Inf. Q., vol. 39, no. 4, p. 101748, Oct. 2022, doi: 10.1016/j.giq.2022.101748.
  6. “Stakeholder Engagement for Sustainable Communities | SpringerLink.” Accessed: Mar. 02, 2024. [Online]. Available: https://link.springer.com/referenceworkentry/10.1007/978-3-030-38948-2_10-1
  7. Kujala, J., Sachs, S., Leinonen, H., Heikkinen, A., & Laude, D. (2022). Stakeholder Engagement: Past, Present, and Future. Business & Society, 61(5), 1136-1196. https://doi.org/10.1177/00076503211066595
  8. L. Xue and Z. Pang, “Ethical governance of artificial intelligence: An integrated analytical framework,” J. Digit. Econ., vol. 1, no. 1, pp. 44–52, Jun. 2022, doi: 10.1016/j.jdec.2022.08.003.
  9. M. Hilb, “Toward artificial governance? The role of artificial intelligence in shaping the future of corporate governance,” J. Manag. Gov., vol. 24, no. 4, pp. 851–870, 2020.
  10. Milossi, M., Alexandropoulou-Egyptiadou, E., & Psannis, K. (2021). AI ethics: Algorithmic determinism or self-determination? The GPDR approach. IEEE Access, 9, 58455-58466. https://doi.org/10.1109/access.2021.3072782
  11. Naja, I., Marković, M., Edwards, P., Pang, W., Cottrill, C., & Williams, R. (2022). Using knowledge graphs to unlock practical collection, integration, and audit of AI accountability information. IEEE Access, 10, 74383-74411. https://doi.org/10.1109/access.2022.3188967
  12. Katsikouli, P., Byrne, W., Gammeltoft‐Hansen, T., Hogenhaug, A., Møller, N., Nielsen, T., … & Slaats, T. (2022). Machine learning and asylum adjudications: From analysis of variations to outcome predictions. IEEE Access, 10, 130955-130967. https://doi.org/10.1109/access.2022.3229053
  13. Tsumura, T. and Yamada, S. (2023). Influence of anthropomorphic agent on human empathy through games. IEEE Access, 11, 40412-40429. https://doi.org/10.1109/access.2023.3269301
  14. Vajrobol V. (2024) Factors Influencing the Acceptance of Artificial Intelligence, Insights2Techinfo, pp.1 https://insights2techinfo.com/factors-influencing-the-acceptance-of-artificial-intelligence/
  15. Hasan A. (2023) Machine Learning and Artificial Intelligence in Cybersecurity, Insights2Techinfo, pp.1 https://insights2techinfo.com/machine-learning-and-artificial-intelligence-in-cybersecurity-2/
  16. Casillo, M., Colace, F., Gupta, B. B., Lorusso, A., Santaniello, D., & Valentino, C. (2024). The Role of AI in Improving Interaction With Cultural Heritage: An Overview. Handbook of Research on AI and ML for Intelligent Machines and Systems, 107-136.

Cite As

Walawalkar A (2024) Resourceful Regulation: Development of Accountable AI in Corporate Boardrooms, Insights2Techinfo, pp.1
