The world feels like it’s spinning off its axis lately. Stock markets dive without warning, climates shift unpredictably, and trivial disagreements explode into major conflicts overnight. How can anyone make sense of a world that’s always throwing more at you than you can handle?
Enter abstraction. It’s not just a fancy intellectual exercise—it’s your brain’s natural defense mechanism against chaos. Abstraction helps us step back, zoom out, and filter out the noise. Instead of drowning in details, we focus on patterns, themes, and core principles. But here’s the tricky part: does this zoomed-out view really clarify the chaos? Or does it oversimplify things, fooling us into thinking we’ve understood when we’ve only scratched the surface?
In many cases, as I have argued in my book, The End of Wisdom, abstraction is oversimplification. This is especially true of prescriptions and proverbial wisdom, which tend to ignore the context underlying any decision. And context matters enormously – as they say, the devil is in the details, and so is the truth.
But abstraction remains very useful. And in some cases, abstraction can allow us to understand something important, despite being context-free.
Cybernetics, a field born in the mid-20th century during the infancy of modern technology, bet heavily on the idea that abstraction was key to solving complex problems. But cybernetics wasn’t about wires, gears, or fancy gadgets—it was about ideas.
The genius and danger of cybernetics lie in its core concept: abstraction itself. Let’s dive deeper into how this approach works, why it’s remarkably powerful, and where it tends to stumble, using cybernetics as our lens. Especially today, as technologies like AI and biotech reshape the very fabric of society, understanding this delicate balance between abstraction and reality is more critical than ever. It’s no longer just intellectual—it’s about survival.
Why Abstraction is Essential
Reality, left unfiltered, is overwhelming. Too many variables, too much clutter. You don’t need every detail to understand something. If you want to track a storm, for example, knowing each raindrop’s path is pointless. What matters is the system driving the storm: wind patterns, temperature differences, pressure systems. This simplification focuses your vision on what’s genuinely meaningful.
Cybernetics embraced this logic fully. Its pioneers didn’t care what a system was made of—metal, flesh, or finances. They focused on how the system functioned: feedback, control, and balance. A thermostat regulating a room’s temperature uses the same principle as your body fighting a fever. The beauty of cybernetics was its universality. One powerful concept could be applied almost everywhere. This universality is why cybernetics foresaw artificial intelligence before anyone knew how to build it.
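The thermostat analogy above can be sketched as a minimal negative-feedback loop. Everything here is an illustrative assumption (the setpoint, heating rate, and heat loss are made-up numbers), but the structure is the cybernetic one: sense, compare, act, repeat.

```python
def thermostat_step(temp, setpoint, heater_on):
    """One tick of a simple on/off feedback loop: compare, act, drift."""
    # Sense the error between the desired and actual state.
    error = setpoint - temp
    # Control decision: switch the heater based on the sign of the error.
    heater_on = error > 0
    # The environment responds: heating raises the temperature,
    # while the room constantly leaks heat (assumed rates).
    temp += (1.5 if heater_on else 0.0) - 0.5
    return temp, heater_on

temp, heater = 15.0, False
for _ in range(50):
    temp, heater = thermostat_step(temp, setpoint=20.0, heater_on=heater)
# After enough ticks, the temperature oscillates in a narrow band
# around the setpoint rather than running away in either direction.
```

Note that nothing in the loop cares whether "temp" is a room's temperature, a body's, or a market's; that indifference to the substrate is exactly the universality the pioneers prized.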
Then there’s abstraction’s ability to imagine the future. Cybernetics wasn’t stuck thinking about what existed—it asked “what if?” Cyberneticians in the 1950s imagined machines that could learn, adapt, and evolve, long before such machines were possible. Today, we see their dreams in action through smartphones and voice assistants. Abstraction can act like a time machine, sending ideas forward into a future that hasn’t arrived yet.
But perhaps the biggest strength of abstraction is clarity. The chaos of reality clouds our judgment. Abstraction sharpens it. Why does a drone remain steady in the sky? Feedback loops. Why does a promising startup suddenly fail? Often the same reason: failure in feedback mechanisms. Cybernetics stripped away fluff to reveal core truths, transforming complex problems into manageable insights. This made it useful everywhere—from economics to robotics.
Cybernetics: A Revolution in Thinking
In the middle of the 20th century, computers were bulky and painfully slow. But cyberneticians weren’t limited by current technology—they were guided by their vision of tomorrow’s possibilities. Cybernetics was like practical science fiction. Norbert Wiener, the field’s founding father, didn’t just tweak existing technology; he modeled how systems—whether tanks, brains, or stock markets—adapt and adjust. His wartime research aimed to predict enemy airplane movements. The mathematics he developed now guides everything from automotive braking systems to algorithms predicting online behavior.
Another thinker, W. Ross Ashby, emphasized cybernetics as a science of possibilities. It wasn’t about what currently existed but what could exist. These pioneers were not merely working with circuits—they were mapping the underlying principles that guide all adaptive systems. Their insights still resonate deeply today, underpinning fields like neural networks and intelligent energy grids.
Yet Wiener was among the first to see a potential pitfall: smart systems could outrun human oversight. Abstraction could create technologies that evolved faster than we could control them. Today’s headlines—AI stock traders making mysterious decisions, autonomous drones selecting targets—prove Wiener right. Abstraction built our modern world, but it never guaranteed control.
The Hidden Pitfalls
Abstraction isn’t flawless—it carries serious risks. It can disconnect us from reality. Take a highly efficient factory: cybernetic principles streamline production, boosting profits enormously. But what about the workers laid off by the automation? That detail isn’t captured by abstraction. Wiener himself warned about this. Technology is only as good as the ethics behind it. Ignore those ethics, and your shiny new system can automate not just production, but suffering too.
Moreover, abstraction is alluringly simple. Clean theories feel elegant, seductive, almost perfect. Cybernetics promoted grand unified theories of everything—complex systems packaged into tidy ideas. Yet sometimes, these theories were more style than substance. Big, abstract ideas often lack the grit and dirt necessary for real-world application. Without rigorous testing and adjusting, abstraction remains a compelling TED Talk that never truly lands.
The solution isn’t abandoning abstraction, but grounding it. It must be treated as a tool, not a deity. Cybernetics succeeded most when it faced real-world tests head-on—building, breaking, repairing, and refining. That friction between theory and practice creates genuine innovation.
Today’s High Stakes
This isn’t merely historical curiosity—it’s urgent right now. The world faces unprecedented challenges. Artificial intelligence is reshaping jobs, gene-editing technologies are redefining biology, and quantum computing threatens to revolutionize everything we know about data and security. Each new leap is abstraction transformed into real-world impact. Cybernetics anticipated these possibilities. It recognized the potential—and the risks.
But now, abstraction’s flaws feel more immediate. Who’s responsible when an AI algorithm makes a catastrophic error? When an autonomous system inadvertently causes harm? These questions are no longer speculative—they’re today’s headlines.
We remain addicted to abstraction’s promise—its efficiency, scalability, and predictive power. Yet Wiener’s wisdom remains relevant: abstraction alone isn’t enough. It must be balanced with conscience and ethics. It must face the harsh test of real-world outcomes. Progress is not just about smartness—it must be about humanity. Abstraction can illuminate the path forward, but reality ultimately determines whether we walk smoothly or stumble disastrously.
In short, abstraction is neither good nor bad—it’s essential but insufficient. Cybernetics showed us how powerful abstraction could be, and also how easily it could lead us astray. The lesson isn’t to abandon abstraction but to master it, pairing vision with vigilance, theory with practice, dreams with accountability. If we manage that, we don’t just understand the chaos—we learn how to live with it, thrive despite it, and maybe even guide it toward something better.