AI is not sentient, but we should treat it as such

When Google engineer Blake Lemoine’s claims that the company’s AI had become sentient made headlines, the reaction was predictable: hand-wringing about AI bots and their rights, responses from the AI community explaining why AI can’t be sentient, and, of course, philosophizing about what it means to be sentient at all. No one got to the critical point: that non-sentient mathematical formulas already carry as much, if not more, weight than humans when it comes to making decisions.

Setting aside the question of AI sentience, there is something more fundamental to consider: what does it mean to hand so much decision-making power to something that, by design, is largely intangible, unexplainable, and uninterpretable? AI sentience isn’t coming anytime soon, but that doesn’t mean we should treat AI as infallible, especially as it begins to dominate decision-making in large enterprises.

Today, some AI systems already have enormous power over key outcomes in people’s lives, such as credit-scoring models that can determine where people raise their families, or healthcare settings in which AI presides over life-or-death situations, like predicting sepsis. These are not low-stakes suggestions, like a Netflix recommendation, or processes that merely speed up operations, like managing data faster. These AI applications directly affect lives, and most of us have no visibility or recourse when the AI makes an unintentionally inaccurate, unfair, or even harmful decision.

This issue has prompted calls for a “human-in-the-loop” approach to AI, which means humans should be more closely involved in developing and testing models that could be unfairly discriminatory.

But what if we didn’t think of human interaction with AI systems in such a one-dimensional way? Thomas Malone, a professor at MIT’s Sloan School of Management, argues for a new way of working with AI and technology in his 2018 book Superminds, which holds that collective intelligence comes from a “supermind” that should include both humans and AI systems. Malone calls this a move from the “human in the loop” to the “computer in the group,” in which the AI is part of a larger decision-making body and, importantly, is not the only decision maker at the table.

This concept reminds me of a story a colleague told from his days selling analytics. His client explained that when executives sat down to make a decision, they took a printed stack of AI-generated analytics and insights and set it on a seat in the conference room. Those insights counted for one voice, one vote, in a larger group of humans, and never had the last word. The rest of the group knew the insights brought specific intelligence to the table, but they would not be the only deciding factor.

So how did AI seize the role of unilateral decision maker? And why hasn’t “AI in the group” become the de facto practice? Many of us assume that AI, and the math behind it, is objectively true. The reasons for this are varied: our societal reverence for technology, the market shift toward data-driven insights, the drive to move faster and more efficiently, and, most importantly, the belief that humans are often wrong and that computers generally are not.

However, it isn’t hard to find concrete examples of how data, and the models it powers, are flawed; the numbers are a direct reflection of the skewed world we live in. For too long we have treated AI as somehow sitting above those flaws.

AI should be subject to the same scrutiny we apply to our colleagues. Think of it as an imperfect being, the product of other imperfect beings, fully capable of making mistakes. By treating AI as if it were sentient, we can approach it with a level of critical inspection that minimizes unintended consequences and sets a higher standard for fair and sound outcomes.

In other words: if a doctor denied you critical care, or a lender rejected your loan application, wouldn’t you want an explanation and a way to change the outcome? To hold AI to the same standard, we must assume that algorithms are just as error-prone as the humans who built them.

AI is already reshaping our world. We must prepare for its rapid spread, long before it comes anywhere near sentience, by closely monitoring its impact, asking tough questions, and treating AI as a partner, not the final decision maker, in any conversation.

Triveni Gandhi is Head of AI at Dataiku.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
