Ethical AI: Can We Trust Our Intelligent Machines to Do the Right Thing?

Many of you reading this will recall an animated television series from the mid-1960s called The Jetsons. The family Jetson — George, his wife Jane, daughter Judy, their boy Elroy, an overly affectionate giant-sized dog, Astro (full disclosure here, we once adopted a rambunctious rescue and named him Astro), and their robotic, live-in housekeeper, Rosie — became the de facto fictional version of a future just now becoming a reality.

In one episode, Rosie’s “microchip master cylinder” malfunctions and she begins to behave erratically. Rosie is so distraught by her misdeeds that, rather than risk further harm to the family, she decides to leave her long-time employers. In other words, Rosie had a conscience and ethics.

AI has everyone excited and everyone worried

As AI begins to infiltrate and influence virtually every aspect of our lives, many people are asking themselves the same questions about ethics. What should happen when AI goes awry? Government regulators, educators, attorneys, artists around the globe, and even the World Health Organization (WHO) are grappling with the role AI can or can’t, should or shouldn’t, play in their work.

In the U.S., National Public Radio’s (NPR’s) Morning Edition spoke with Andrew Miller, the director of the Private Law Clinic at Yale University. In the Middle East, Israel’s Ministry of Innovation, Science and Technology further shaped the nation’s policies on artificial intelligence, regulation, and ethics.

In Australia, a coalition of artists is debating the ethics of using AI in creating art — who gets the credit, who gets the rights, and who gets the financial gain and glory when a machine creates something as exclusively, intrinsically human as art?

How many lawyers does it take to…

“When you have a legal brief that’s written by AI,” Miller explained, “you’ve really delegated that lawyerly duty to someone else,” in this case, a computer. That’s OK, says Miller. “But the key is that the buck stops with the lawyer who files the brief. It’s OK to use AI, but only as a tool, not as a substitute for lawyering.”

Miller likens a lawyer’s use of AI to a doctor’s use of an MRI. The physician doesn’t know the inner workings of the MRI machine, but he knows what it should be doing and what it can’t do. “With ChatGPT we don’t have – at least not yet – a particularly well-developed understanding of how our inputs relate to the outputs,” Miller explained to NPR’s Inskeep.

Israel weighs in on governmental use of AI

In Jerusalem in late 2023, the Israeli government unveiled its approach in a statement from the Ministry of Innovation, Science and Technology. The policy outlines four key highlights and seven main recommendations, published as follows:

“Key Highlights of Israel’s AI Policy:
1. Comprehensive approach: The AI Policy identifies seven main challenges arising from private sector AI use: discrimination, human oversight, explainability, disclosure of AI interactions, safety, accountability, and privacy.
2. Collaborative development: The AI Policy is formulated through collaboration with various stakeholders, including the Ministry of Innovation, Science and Technology, the Ministry of Justice, the Israel Innovation Authority, the Privacy Protection Authority, the Israeli National Cyber Directorate, the Israel National Digital Agency, leading AI companies in Israel and academia.
3. Policy principles: Aligned with the OECD AI Recommendations, Israel’s AI Policy outlines common policy principles and practical recommendations to address challenges and foster responsible AI innovation.
4. Responsible innovation concept: Emphasizing the concept of “Responsible Innovation,” the AI Policy aims to balance innovation and ethics, viewing them as synergistic rather than conflicting goals.
Main Policy Recommendations:
– Government Policy Framework: The AI Policy recommends empowering sector-specific regulators, fostering international interoperability, adopting a risk-based approach, encouraging incremental development, using “soft” regulation, and promoting multistakeholder cooperation.
– Ethical AI Principles: The AI Policy suggests adopting a common set of ethical AI principles based on the OECD AI Principles, focusing on human-centric innovation, equality, transparency, reliability, accountability, and promoting sustainable development.
– AI Policy Coordination Center: It proposes to establish an expert-based inter-agency body to advise sectoral regulators, promote coordination, update government AI policy, advise on AI regulation, and represent Israel in international forums.
– Mapping AI Uses and Challenges: The AI Policy urges government entities and regulators to identify and map the uses of AI systems and associated challenges in regulated sectors.
– Forums for Regulators and Public Participation: It recommends establishing forums for regulators and multistakeholder participation to promote coordination, coherence, and open discussions.
– International Collaboration: It encourages active involvement in developing international regulation and standards to promote global interoperability.
– Tools for Responsible AI: The policy calls for the development of tools, including a risk management tool, to enable responsible AI development and use, with the AI Policy Coordination Center leading this effort.
Israel’s AI Policy marks a significant step towards ensuring responsible and ethical AI innovation, setting a precedent for other nations grappling with similar challenges. The government’s commitment to collaboration, transparency, and adaptability reflects its dedication to fostering a thriving AI ecosystem while safeguarding societal values.”

WHO cares

“Generative AI technologies have the potential to improve health care but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks,” said Dr Jeremy Farrar, WHO Chief Scientist, in a January 2024 statement. “We need transparent information and policies to manage the design, development, and use of large multimodal models (LMMs) to achieve better health outcomes and overcome persisting health inequities.”

An agency of the United Nations, WHO identified five broad applications of LMMs that healthcare providers should consider:

  • “Diagnosis and clinical care, such as responding to patients’ written queries;
  • Patient-guided use, such as for investigating symptoms and treatment;
  • Clerical and administrative tasks, such as documenting and summarizing patient visits within electronic health records;
  • Medical and nursing education, including providing trainees with simulated patient encounters; and
  • Scientific research and drug development, including to identify new compounds.”

WHO also cautioned the global healthcare community that AI has been known to produce inaccurate, biased, and incomplete diagnoses, concerns that healthcare professionals should bear in mind in their practice.

Artists of the World, Unite — Ethical AI = Ethical Art

The Department of Local Government, Sport and Cultural Industries in Western Australia also found AI to be concerning enough that it invested in developing protocols to help artists use AI responsibly.

Key to its recommendations was distinguishing between “art made by humans” and “art generated using AI.” The agency encouraged artists to view AI as simply another tool for self-expression. Artists will remain the visionaries holding the digital paintbrush or chisel, but transparency will be central to preserving the value of the products of their labors.

The agency summed up AI in the creative arts this way: “As the industry navigates this uncharted territory, it’s important that we are guided by our own ethical considerations and hold ourselves accountable when working with this new, exciting, and yes – daunting – medium.”
