Learning to love your inner Cyborg!

Why Collective Intelligence is the real payoff of AI

The organisations that will realise the true benefits of AI aren't the ones deploying the most tools—they're the ones learning how to think together differently.

The massive competitive advantage AI will confer on organisations that get their AI strategy right is now becoming clear. In 2026, the competitive edge in knowledge work won’t accrue to those with the flashiest AI tools, but to those who have figured out how to design and operate Collective Intelligence at scale.

As Thomas Malone wrote in his landmark book Superminds, we’ve always worked within Collective Intelligences—companies, committees, communities. But now, with generative AI, we have the chance to consciously design a new type: one where humans and machines co-think in real time, weaving together human judgment and machine scale into a new cognitive capability.

Collective Intelligence truly is a new thinking “paradigm” - and we are not over-egging the use of the ‘P’ word. This concept ushers in an era of new levels of creative thinking and more advanced, better-informed decision-making. Organisations that master the Collective Intelligence challenge will benefit exponentially over those that get left behind in the foothills, merely playing with AI tools and apps.

Today, AI’s impact at the enterprise level has been modest. But in an ever faster-changing world, characterised by volatility, uncertainty, complexity and ambiguity, it is the organisations that effectively leverage the complementarity of biological and artificial intelligence that will turn AI into strategic advantage. So our message is clear… the future belongs to organisations that think differently.

A helpful lens for understanding this shift comes from the Harvard Business School article Navigating the Jagged Technological Frontier (Dell'Acqua et al., 2023). The authors distinguish between two types of AI augmentation: centaurs and cyborgs. Centaurs are humans using AI as an assistant—separate, parallel processing. The human remains in charge but relies on the AI for speed or scale. Cyborgs, on the other hand, are something different. They represent a fusion, a system in which human and machine reasoning are blended, co-evolving, and hard to disentangle. Collective Intelligence, as it is beginning to take shape in advanced organisations, is fundamentally a cyborg model.

We are moving into a phase where organisations are not just using AI tools—they are thinking differently because of them, forming more diverse and capable collectives. The shift isn’t just operational; it represents a cognitive change (and challenge).

This is not speculative. A growing body of research is beginning to map out what this evolution might look like. The 2024 paper AI-Enhanced Collective Intelligence (Cui and Yasseri, 2024) introduces a multilayer network framework to describe how hybrid cognition emerges not from any one node—human or machine—but from the structured interactions among them. It emphasises the importance of the physical, informational, and cognitive layers that underpin meaningful collaboration. When aligned well, these layers form a new kind of intelligence system—distributed, contextual, and more than the sum of its parts.

This idea of structured, system-level intelligence is taken further by Kehler et al. in their 2025 paper on Generative Collective Intelligence (GCI). Here, the authors frame AI not just as an assistant, but as an interactive agent and a long-term knowledge system. It amplifies the distinct capabilities and abilities of humans and computers. GCI systems scaffold group cognition by helping humans frame problems, compare alternatives, and build shared memory. The goal is not to reduce human input, but to make that input more visible, cumulative, and comparative. GCI enables groups to preserve multiple perspectives and explore them systematically, supporting both creativity and convergence.

“GCI makes the need for human cognition explicit and embraces it in the design of the architecture, radically expanding the potential benefits.”

Kehler et al., 2025

To understand what makes this so powerful, we need to look at how humans think when supported by technology. The concept of cognitive offloading has long been used to describe how people use external tools to reduce mental load. We write things down, set reminders, use calculators—not because we can’t think, but because we want to think better. In the AI era, cognitive offloading becomes more strategic. We’re not just offloading memory or grunt work—we’re sharing the burden of reasoning. This is not about abdication. It's about deliberate and efficient collaboration with machines to achieve better results.

In their extended mind theory, Andy Clark and David Chalmers argue that the mind does not stop at the skull – it is more than ‘just’ a biological construct. It extends into the tools, environments, and social systems we use to think. In the age of Collective Intelligence, this theory becomes practical reality. The workplace itself becomes an extended mind—part human, part machine, part cultural process.

This extension must be designed with social intelligence in mind. A 2025 paper on Socially-Minded Intelligence argues that the next major leap in AI won't be technical but relational. It makes the case that systems must learn to reason not just logically, but socially: understanding intent, norms, and group dynamics. If AI is now part of our thinking environment, then it must also learn to participate in human sensemaking. It must reason with us, not just for us.

The cultural implications of this shift are profound. Reid Hoffman, in his 2025 essay in The Times, calls this “superagency”: the expanded human capacity to make better decisions through partnership with well-designed AI systems. Drawing on Isaiah Berlin’s concept of positive liberty, Hoffman argues that AI, when used well, doesn’t restrict choice—it amplifies it. It provides scaffolding for foresight, introspection, and synthesis. This is not freedom from thinking. It is freedom through better thinking.

In organisations that embrace Collective Intelligence, AI tools become more than productivity enhancers. They become co-thinkers. The systems remember, prompt, simulate, and challenge. They help surface dissenting views. They structure creative tension. They reflect back assumptions and pattern-match across silos. In this context, what matters most is not the sophistication of the AI models, but the quality of the human-machine relationship and the design of the collective process.

SIDEBAR

To bring this to life, let’s look ahead just a little. The following short narrative paints a picture of what a mid-sized consulting firm might look like in the very near future, once Collective Intelligence becomes part of the everyday operating system.

It’s Tuesday morning. Amira logs in from Nairobi, Andrés from São Paulo, and Sarah from Leeds. Their client has asked for a quick perspective on decarbonisation strategies in emerging markets — something useful for an upcoming board session.

Before the team meeting, Amira runs a few rough thoughts through their shared GPT workspace. The AI doesn’t just clean up the language — it spots where the team’s views differ, suggests a couple of parallels from past projects, and flags assumptions they haven’t discussed. Sarah notices one of the examples is drawn from a Southeast Asian transport initiative they hadn’t thought about.

When the call starts, they don’t walk through a slide deck. Instead, they look at a shared “thinking map” the system has pulled together. It shows three potential storylines, the assumptions behind each, and a rough sense of which stakeholders each one might resonate with. A fourth idea is listed too, but the AI has flagged it as weak — it’s built on shaky evidence and overlaps heavily with one of the other paths. They agree to drop it.

As they go through the options, Andrés questions one of the assumptions around social impact. Rather than argue in circles, they ask the system to model likely knock-on effects using similar past cases and public sentiment data. The results challenge their current timeline, so they adjust it.

After the call, the system sends out a short reflection note. It captures the main discussion points, highlights a couple of areas where the team didn’t reach agreement, and privately nudges Sarah — who was quieter than usual — to share a follow-up note if she has more to add.

She does. Her reflection shifts the tone of the piece slightly, putting more emphasis on equity and community outcomes. The system links her comments into the existing analysis, and the team builds the final version from there.

They send the synthesis back to the client 36 hours later. It’s not that they rushed. It’s that the process helped them focus on what mattered — and think better, together.

So how do organisations move toward this? There is no linear roadmap, but there is a trajectory. It requires a shift from isolated expertise to distributed reasoning, from fast answers to structured reflection.

The evolution to Collective Intelligence involves three interacting dimensions:

1.     Strategic Leadership and Vision

Everything starts when leaders stop seeing AI purely as a force for automation and start treating it as a cognitive partner. This requires strategic leadership from those who are prepared to invest time in identifying precisely where and how AI will play a role within their organisation in a way that will provide them with a strategic advantage.

This involves understanding where to deploy the power of AI, where to dial up the human advantage, and how to carry this forward into the creation of co-intelligence: a truly symbiotic human/AI relationship that amplifies the power of Collective Intelligence. The prize is a way of working that gives an organisation the superpower of thinking differently and more creatively than its competitors.

2.     Aligning people and systems and building novel interdisciplinary analytical frameworks

The second thread involves organising people into appropriate teams and aligning organisational systems to make them AI-friendly, which will involve specifying the most appropriate AI tools and applications. On the ground, this means building systems of shared memory - knowledge graphs, prompt logs, project traces - that allow insight to accumulate and evolve. It demands that divergent thinking is protected and that convergence is structured rather than imposed. And it only works when people trust the system, which means making reasoning visible and accountable.
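In practice, such a shared-memory layer can start very simply. The sketch below is purely illustrative - the `ProjectTrace` and `TraceEntry` names are our own invention, not an existing product or API - but it shows the core idea: an append-only log in which each prompt is stored alongside who raised it and why, so the group's reasoning stays visible, attributable, and able to accumulate over time.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class TraceEntry:
    """One step of a team's reasoning: who asked what, and why."""
    author: str
    prompt: str
    rationale: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class ProjectTrace:
    """A minimal append-only shared memory for a team's reasoning."""

    def __init__(self) -> None:
        self._entries: list[TraceEntry] = []

    def log(self, author: str, prompt: str, rationale: str) -> TraceEntry:
        # Append-only: past reasoning is never overwritten, so insight
        # accumulates rather than being lost in chat scrollback.
        entry = TraceEntry(author, prompt, rationale)
        self._entries.append(entry)
        return entry

    def by_author(self, author: str) -> list[TraceEntry]:
        # Makes each person's contribution retrievable and accountable.
        return [e for e in self._entries if e.author == author]


trace = ProjectTrace()
trace.log("Amira", "Compare decarbonisation pathways",
          "The board needs options, not a single answer")
trace.log("Sarah", "Stress-test the equity assumptions",
          "Community outcomes were under-weighted")
print(len(trace.by_author("Sarah")))  # 1
```

A real system would layer search, knowledge-graph links, and AI summarisation on top, but the design principle is the one the text describes: reasoning recorded as first-class data, not a by-product.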

Importantly, this element also includes what is currently an underestimated dimension of the Collective Intelligence process: the creation of imaginative, fresh, interdisciplinary analytical frameworks that maximise the synergy between human and artificial intelligence.

With AI, when confronted with a problem, we now have the power to access a vast and diverse range of mental models drawn from across disciplines. We no longer have to operate within the silos of one or two disciplines when solving a problem. Access to such a vast, eclectic array of knowledge is one of the immense advantages AI gives us, yet currently little thought seems to be given to how we actually create these new interdisciplinary ways of working.

This is not simply a matter of taking a problem and looking at it through the lens of, say, psychology, sociology and anthropology. It means the creation of new analytical constructs and concepts. Look at the way Herbert Simon, in fusing economics and psychology, gave us the concept of “satisficing” to complement the traditional notions of maximising and optimising.

So one illustration of the kind of interdisciplinary framework we will need going forward would be moving beyond the concept of “data analysis” to build powerful iterative frameworks for sensemaking - an approach that better enables us to unravel context and complexity.

And we will also need fresh mental models to enhance our critical thinking skills in the new era - models that fuse the best of deductive, inductive and abductive reasoning.

3.     Cultivating Polymathmind thinking skills

This then brings us to the notion of building new human capabilities. To thrive within a Collective Intelligence requires the human element within this system to up its game. We at Polymathmind.ai believe that this human thread requires us to become much more comfortable as diverse thinkers … developing skills akin to the modern-day polymath.

These are skills that don’t just operate in single-discipline silos. They are a mix of reinforcing, connective skills that together equip individuals to work in an interdisciplinary fashion and thrive in the new AI-shaped environment. The end result is a collaborative effort - a Collective Intelligence - that helps organisations think better and reach outcomes or deliverables that no single expert could land on alone. And they do this with a greater sense of agency, creativity and adaptability.

What are the key Powerskills that deliver this Polymathmind way of thinking?

As the ‘Human-AI Interface’, our modern polymath facilitates a fluid interaction between humans and AI across the creativity and thinking process, creating a powerful intelligence system. As the ‘Sensemaker’, he or she extracts meaning from complexity, working alongside his or her ‘Critical Thinker’ persona, which brings a range of skills to challenge assumptions and identify nuance. As the ‘Audacious Creator’, he or she is able to make that influential leap by thinking differently, and in ‘Inspiring Communicator’ mode he or she creates narratives that bring ideas alive and help build a shared team movement in the execution of ideas. Finally, our modern polymath will operate as a ‘Galvanising Leader’ - the person who can pull together all these foundational skills of Collective Intelligence and make things happen: turning ideas into action with an adaptive and sensitive leadership style.

In embracing Collective Intelligence, learning how to think differently and more creatively will be top of the agenda for visionary CEOs. Our work shows that when these power skills are developed in tandem, teams become more than the sum of their parts. They don’t just collaborate better—they think better. Decisions improve—not because they’re faster, but because they’re more multidimensional. Strategy becomes more resilient—because it’s been shaped through multiple lenses, including those surfaced by machines. People become more engaged—because they can see their contributions shaping the group’s direction. And learning accelerates—because the system remembers and builds.

And Collective Intelligence is not science fiction. It is here now. The technology already exists. The question is whether we will design these systems with the right values and the right skills. Because the true promise of AI is not intelligence in isolation. It is what happens when intelligence becomes a shared resource—distributed, dynamic, and deeply human.

Adam Riley
Founding Director at Decision Architects
David Smith
Director at DVL Smith Ltd