Psychomatics: What We Don’t Know About AI (and What AI Doesn’t Know About Us)

Donald Rumsfeld, who served as U.S. Secretary of Defense under two presidents, responded to a reporter’s question at a February 2002 news briefing:

“…there are known knowns; there are things we know we know,” Rumsfeld said. “We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.”

Those unknown unknowns about AI may pose the greatest risk to humanity as the world plunges headfirst into deploying AI in virtually every aspect of our lives. Simply put, we don’t fully understand how this enormous technological leap forward actually works.

A mashup of ‘psychology’ and ‘informatics,’ Psychomatics was introduced by a cohort of European and American researchers working under the auspices of the Human Technology Lab at the Catholic University of the Sacred Heart in Milan. The project seeks to bridge the gap in our knowledge of human versus artificial intelligence by comparing how each acquires and uses information. That comparison might then be used to better understand how Large Language Models (LLMs) absorb, process, and retain knowledge to produce their outputs.

How we do what we do and how AI does what it does

For humans, learning and processing information is a synthesis of social, emotional, and linguistic exchanges that begins in infancy and continues throughout our lives. LLMs, by comparison, are fed gargantuan datasets and internalize them quickly. That distinction may offer insight into how the two systems, human and machine, comprehend and use language. LLMs lack bodies, senses, and emotions; they use the data they’ve been given to literally compute meaning.
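
To make “computing meaning” concrete, here is a deliberately tiny sketch: a toy bigram model whose entire “knowledge” of a word is a table of probabilities estimated from the text it was fed. Real LLMs are incomparably larger and subtler, but the underlying principle, statistics over training text rather than lived experience, is the same.

```python
# Toy sketch: a language model's "knowledge" as nothing but statistics.
# A real LLM has billions of parameters, but the principle is similar:
# estimate, from training text, how likely each word is to follow another.
from collections import Counter, defaultdict

corpus = ("the sky is blue . the sea is blue . "
          "the sun is yellow . the grass is green .").split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev][word] += 1

# "Meaning" here is nothing more than a probability distribution.
def next_word_probs(prev):
    counts = follows[prev]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

print(next_word_probs("is"))
# -> {'blue': 0.5, 'yellow': 0.25, 'green': 0.25}
# The model "knows" what tends to follow "is" only because of how often it
# saw it in the corpus, never because it has seen a sky or felt the sun.
```

The asymmetry is the point: our sense of “blue” is built from sight and experience, while the model’s is built entirely from co-occurrence counts.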

Our emotions, our experiences, our language, and even our ability to conceptualize new ideas in order to understand a situation differentiate us from a world filled with HAL 9000s. AI can only synthesize the data it’s given, and that doesn’t always work out well. That constraint can lead the machine to “hallucinate,” a fancy way of saying it just makes stuff up. More concerning still, it does so in all seriousness, unaware of the nonsense it has just produced.
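
To see why, extend the same toy setup from above (again a minimal bigram sketch, not a real system): generation is just “repeatedly pick a likely next word,” and nothing in that loop ever checks the output against the world.

```python
# Toy sketch: generation with no truth check. Decoding is just "pick a
# likely next word", over and over; no step compares the output to reality.
from collections import Counter, defaultdict

# Every sentence in this tiny training set is true.
corpus = ("the sun is a star . the north star is a star . "
          "the moon is a rock .").split()

follows = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev][word] += 1

def generate(word, max_words=10):
    out = [word]
    for _ in range(max_words):
        if not follows[word]:
            break
        # Greedy decoding: always take the most frequent successor.
        word = follows[word].most_common(1)[0][0]
        out.append(word)
        if word == ".":  # stop at the end-of-sentence marker
            break
    return " ".join(out)

print(generate("moon"))
# -> "moon is a star ."
# Fluent, delivered without a flicker of doubt, and false: the model has
# blended common fragments into a claim no training sentence ever made.
```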

Imagine that – AI can’t

LLMs can combine the information they’ve been fed in a seemingly infinite number of ways. What they can’t do is produce an original thought. Think of it this way: given an artist’s palette (a virtual one, of course) that holds blue and yellow, an LLM can’t ponder what might happen if it blended them to create green.

People rely on subtlety and nuance to understand complex human experiences. An LLM doesn’t have that tool in its tool belt. Sarcasm, for example, still largely eludes contemporary LLMs, as the sketch below suggests.
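
As a rough illustration, here is a sketch using Hugging Face’s transformers library. The default sentiment classifier it loads is a small model, not a full LLM, and stands in here for surface-level language processing; the example sentences are ours, and the exact labels and scores will vary by model.

```python
# Rough illustration: a surface-level sentiment model meets sarcasm.
# Requires: pip install transformers torch
from transformers import pipeline

# pipeline() with no model argument loads a default small sentiment
# classifier; it stands in here for surface-level text processing.
classifier = pipeline("sentiment-analysis")

literal = "This two-hour traffic jam is terrible."
sarcastic = "Oh, wonderful, a two-hour traffic jam. Just what I needed today."

for text in (literal, sarcastic):
    result = classifier(text)[0]
    print(f"{result['label']:>8} ({result['score']:.2f})  {text}")

# Both sentences express the same frustration to a human reader, but a
# model keying on surface cues like "wonderful" may well label the
# sarcastic one POSITIVE. Exact results depend on the model used.
```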

People use the whole of their experiences, knowledge, and senses to sort fact from fiction. An LLM only has the probabilities it calculates to reach a conclusion.

People + Machines might produce better medical outcomes

By understanding how people and computers each process information, AI could potentially produce more trustworthy outcomes in the situations it’s asked to evaluate. Surgery, and the myriad decisions it requires, might attain a whole new level of success. Applying Psychomatics might also help guide the people using AI to ensure ethical applications of the technology. Ultimately, this sort of recombinant human/machine DNA might make AI better equipped to produce outcomes more in tune with human values. Then we might truly know what we don’t know.

#AI Today #AI #Psychomatics #LLMs #machine learning
