AI should be less perfect, more human

Artificial intelligence is fragile. Faced with the ambiguity of the world, it breaks. And when it breaks, our untenable solution is to erase the ambiguity. But that means erasing our humanity, which in turn shatters us.

This is the problem Angus Fletcher and Erik J. Larson address in the piece they published this week in Wired.

AI can malfunction at the slightest hint of data slippage, so its architects do everything they can to dampen ambiguity and volatility. And since the main source of ambiguity and volatility in the world is humans, we have found ourselves aggressively smothered. We've been forced into metric assessments at school, standard flow patterns at work, and regularized sets in hospitals, gyms, and social media hangouts. In the process, we've lost much of the independence, creativity, and boldness that our biology evolved to keep us resilient, making us more anxious, angry, and exhausted.

Angus Fletcher and Erik J. Larson, "Optimizing Machines Is Perilous. Consider 'Creatively Adequate' AI." at Wired

Fletcher is a professor at Project Narrative at Ohio State, with a Ph.D. in literature from Yale and earlier training in neuroscience, backgrounds he now uniquely blends to understand the science of stories. He also works with artificial intelligence, including his ongoing project to design an AI "smart enough to know it's stupid."

Larson is a technology entrepreneur, computer scientist, and the author of The Myth of Artificial Intelligence: Why Computers Can’t Think Like Us, a book that William Dembski called “by far the best refutation of Kurzweil’s over-promising…”.


In their Wired article, Fletcher and Larson argue that we need to rethink artificial intelligence (AI) and change our approach to programming and using it. Since ambiguities cannot be removed from reality, AI should be designed to handle them better. And AI could handle ambiguity better if it were a little less perfectionist and a little more… human.

Or, in their own words: "Instead of remaking ourselves in the fragile image of AI, we should do the opposite. We should remake AI in the image of our antifragility."

To do that would be to reprogram AI away from one of its greatest strengths: optimization. But optimization also turns out to be one of AI's biggest weaknesses. Fletcher and Larson go even further: not only is optimization a weakness for the system itself, it is "undemocratic." Optimization requires the AI to collect as much data as possible, which is why we have found ourselves in a world where we are constantly watched and our activities tracked.

Optimization is the push to make AI as accurate as possible. In the abstract world of logic, this push is unquestionably good. Yet in the real world where AI operates, every benefit comes at a cost. In the case of optimization, the cost is data. More data is needed to improve the accuracy of machine learning's statistical calculations, and better data is needed to ensure that the calculations are true. To optimize AI performance, its handlers must collect information on a massive scale, sucking cookies from apps and online spaces, spying on us when we're too oblivious or exhausted to resist, and paying top dollar for insider information and behind-the-scenes spreadsheets.

Angus Fletcher and Erik J. Larson, "Optimizing Machines Is Perilous. Consider 'Creatively Adequate' AI." at Wired

Optimization, they say, needs to be dialed back to restore the right balance between humans and machines.

But how would we go about it?

Fletcher and Larson provide three prescriptions that can be implemented immediately in our treatment of AI:

  1. Program the AI to hold multiple possible interpretations at once, like "a human brain that keeps reading a poem with multiple potential interpretations simultaneously in mind" (see the first sketch after this list).
  2. Use data as a source of falsification instead of inspiration. Fletcher and Larson explain that AI currently acts as "a mass generator of trivially new ideas." What if its skills were geared toward discovering the "unrecognized van Goghs of today" instead of toward creating (a feat AI is incapable of at the human level)? It could mine the countless works published online to find those that are "wildly unprecedented" and bring them to light (see the second sketch after this list).
  3. In their final suggestion, Fletcher and Larson boldly propose that we merge with AI. But they don't mean that in the science fiction sense of creating cyborgs. Instead, they mean improving the interaction between AI and humans so that humans are not subordinate to AI processes.
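To make the first prescription a bit more concrete, here is a minimal sketch, in Python, of what "holding multiple interpretations at once" could look like in practice: instead of committing to the single highest-scoring answer, the system keeps every reading whose probability is close to the best one. This is our own illustration, not Fletcher and Larson's implementation; the labels, scores, and margin are hypothetical.

```python
import numpy as np

def plausible_interpretations(scores, labels, margin=0.15):
    """Return every label whose probability is within `margin`
    of the best one, instead of committing to a single argmax.

    This keeps ambiguous inputs ambiguous: an input that genuinely
    supports several readings yields several answers.
    """
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                      # softmax over raw scores
    best = probs.max()
    return [(label, float(p))                 # all near-best readings
            for label, p in zip(labels, probs)
            if best - p <= margin]

# Hypothetical example: sentiment scores for an ironic movie review.
labels = ["positive", "negative", "ironic"]
raw_scores = np.array([2.1, 1.9, 2.0])        # nearly tied: real ambiguity
print(plausible_interpretations(raw_scores, labels))
# -> keeps all three readings rather than forcing a single verdict
```

The design point is the poem analogy: when the evidence is nearly tied, a margin-based reader refuses to collapse the ambiguity, which is exactly the behavior an argmax-optimized system is trained out of.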
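And for the second prescription, here is a sketch of using data to surface the unprecedented rather than to imitate the typical: embed each work, then flag the ones farthest from everything else in the corpus. Again, this is an assumed illustration, not the authors' method; the random "corpus" stands in for real embeddings of published works.

```python
import numpy as np

def novelty_scores(embeddings, k=5):
    """Score each work by its mean distance to its k nearest
    neighbors: high scores mark works unlike anything around
    them, i.e., candidates for the "unrecognized van Goghs."
    """
    dists = np.linalg.norm(
        embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)           # ignore self-distance
    nearest = np.sort(dists, axis=1)[:, :k]   # k closest other works
    return nearest.mean(axis=1)

# Hypothetical corpus: 200 conventional works clustered together,
# plus one genuinely unprecedented outlier.
rng = np.random.default_rng(0)
corpus = rng.normal(0.0, 1.0, size=(200, 64))
outlier = rng.normal(8.0, 1.0, size=(1, 64))
works = np.vstack([corpus, outlier])

scores = novelty_scores(works)
print("most unprecedented work:", int(scores.argmax()))  # -> index 200
```

Here the data is used to falsify sameness: the model never generates anything, it only tells us which existing works fail to resemble the rest.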

You can read the full article, a brilliant piece to ponder and digest, here.

In all of these suggestions, Fletcher and Larson lay out a case for making AI and humans partners, where the strengths of one compensate for the weaknesses of the other: a truly complementary team. AI will never be as smart as humans. We should not look to AI to replace us, but to help us, and that means recognizing where AI will never live up to human potential.
