I was rereading the “Coffeehouse Conversation” in Hofstadter and Dennett’s The Mind’s I when an idea struck me.

The conversation probes the limits of the Turing test, and by extension, the nature of machine intelligence.

And you can feel that same limit with today’s large language models. Even though they are far more powerful, they still operate in a closed digital loop. They are too perfect to be creative.

What if you were to break that loop? What if you deliberately introduced a bug?

Here’s my idea: instead of relying on an algorithm for randomness, hook the AI up to a real-world, biological process. You could, for instance, place a tardigrade in a sensor-laden environment and feed its unpredictable life signals directly into the AI’s temperature parameter.

The AI’s creative output would then be modulated not by a deterministic formula, but by the chaotic, messy, and fundamentally unknowable state of a living organism.
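To make the idea concrete, here is a minimal sketch of the loop. Everything about the sensor is hypothetical: `read_tardigrade_signal` stands in for whatever real biosensor you might wire up, and here it just simulates a noisy reading. The temperature bounds are illustrative choices, not canonical values.

```python
import math
import random

def read_tardigrade_signal():
    """Stand-in for a real biosensor (hypothetical).
    Simulates a noisy vital signal normalized to [0, 1]."""
    return random.random()

def signal_to_temperature(signal, t_min=0.3, t_max=1.5):
    """Map a raw sensor value in [0, 1] to a sampling temperature.
    The bounds are illustrative, not canonical."""
    signal = max(0.0, min(1.0, signal))  # clamp defensively
    return t_min + signal * (t_max - t_min)

def sample_token(logits, temperature):
    """Standard temperature-scaled softmax sampling over raw logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# At each generation step, the organism, not a formula, sets the "mood":
temp = signal_to_temperature(read_tardigrade_signal())
token = sample_token([2.0, 1.0, 0.5], temp)
```

The design point is that the sensor is read fresh at every step, so the model’s sampling distribution drifts with the organism’s state rather than following any seedable, reproducible schedule.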

This isn’t about fixing a flaw. It’s about installing one. By tethering the AI to an imperfect and noisy biological component, you might be giving the system precisely what it lacks: a source of genuine surprise. An AI driven by tardigrade A would develop a different “character” than one driven by tardigrade B. You would get individuality. You would get a machine that is less predictable, and in that way, more like us.

We think of progress as a process of refinement, of eliminating bugs. But perhaps for AI, the next great leap will come from adding one. Intentionally.