
I once made a decision based on what I thought was perfect information. The data was clear, the logic was sound, and all available reports pointed in the same direction. Everything suggested it was the right move. But I was wrong. And not just slightly wrong—entirely off the mark.
The mistake wasn’t the data itself, nor was it a failure of analysis. The mistake was in how I interpreted it. I saw what I expected to see. I looked for what I wanted to be there. I took the confidence of my own reasoning as evidence that I had reached the right conclusion. It was only months later, when things began to unravel, that I realised I had been blind to the very signals that should have made me hesitate.
Richard Feynman said it best: “The first principle is that you must not fool yourself—and you are the easiest person to fool.”
This is the danger of decision-making at the executive level. It is not just about information. It is about understanding the weight of a decision, the hidden dynamics behind it, and the real-world consequences of being wrong. You can have all the data in the world, but if you are looking at it through the wrong lens, it will only serve to reinforce your mistakes.
Artificial intelligence promises to help with this. It can analyse vast amounts of information in seconds. It can cross-check reports, find hidden patterns, and raise red flags before we see them ourselves. Used well, it is an invaluable tool for executives. But used poorly, it becomes another source of misplaced confidence, a machine that tells you exactly what you want to hear.
So the real question is not whether AI should be used in decision-making. The real question is: how do we use it without losing the ability to think critically?
I have found three distinct ways that AI adds value at the executive level. The first is in information gathering, where it acts as a filter against overload, making sense of conflicting narratives and giving a single, coherent picture. The second is as a thought partner, one that can challenge assumptions, test scenarios, and highlight biases before they take hold. The third is in automation, not in decision-making itself but in eliminating the low-level operational clutter that drains time and energy from real leadership.
It begins with information. One of the biggest challenges in senior leadership is not a lack of data but rather too much of it. Every department has its own reporting structure, every executive brings their own set of numbers, and every meeting generates a new set of action points that may or may not align with what was agreed upon the week before. In the end, much of decision-making becomes an exercise in cutting through competing versions of reality.
There was a time when this meant manually reviewing spreadsheets, cross-checking reports, and pulling data from multiple systems just to answer a single question. Now, I use a personal GPT trained on company data, reports, and historical performance, allowing me to ask direct questions and get clear, structured answers in seconds. I no longer have to dig through ten different dashboards. I simply ask, “Summarise the latest revenue trends and cross-check them against marketing performance over the past six months. Where are the gaps?”
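For readers who want to see what that kind of query looks like under the hood, here is a minimal sketch of one way it could be wired up programmatically. It assumes the OpenAI Python client; the model name, the ask_company_gpt function, and the retrieve_reports helper are illustrative placeholders, not a description of my actual setup (which is a custom GPT rather than bespoke code).

```python
# Minimal sketch: asking a "personal GPT" a question grounded in company reports.
# Assumptions: the OpenAI Python client is installed and OPENAI_API_KEY is set;
# retrieve_reports() is a hypothetical helper that returns relevant excerpts
# from internal revenue and marketing reports.

from openai import OpenAI

client = OpenAI()

def ask_company_gpt(question: str, context_docs: list[str]) -> str:
    """Send an executive question plus retrieved report excerpts to the model."""
    context = "\n\n".join(context_docs)
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You answer executive questions strictly from the company "
                    "reports provided. Flag gaps, conflicts, and missing data."
                ),
            },
            {
                "role": "user",
                "content": f"Reports:\n{context}\n\nQuestion: {question}",
            },
        ],
    )
    return response.choices[0].message.content

# Example usage with hypothetical retrieved excerpts:
# docs = retrieve_reports(["revenue", "marketing"], months=6)
# print(ask_company_gpt(
#     "Summarise the latest revenue trends and cross-check them against "
#     "marketing performance over the past six months. Where are the gaps?",
#     docs,
# ))
```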
The key is not just gathering information faster but seeing connections that would otherwise be missed. A few weeks ago, I ran a query asking for common themes in product delays over the past year. The AI flagged an issue I had not considered—every delay was linked, in some way, to the same approval bottleneck in compliance. The data had been there all along, but because it was buried across multiple reports, no one had put the pieces together.
That is what AI does well. It surfaces patterns that would take humans days or weeks to identify. But it does not think. It does not know what matters. It does not understand nuance. That is why the second, and perhaps most important, use of AI is not as an information source but as a way of challenging assumptions.
The most dangerous thing in leadership is believing you are right simply because everything appears to confirm your position. I have seen this mistake play out many times, and I have made it myself. We trust our instincts. We gather supporting evidence. We find reasons to dismiss inconvenient counterpoints. The smarter we are, the better we become at convincing ourselves we are correct.
Now, before making any major decision, I ask AI to argue against it. I ask it to pull data that contradicts my position. I ask it to simulate alternative scenarios. I ask it, quite simply, “Where might I be wrong?”
There was a time when I approved a strategic shift in product development based on what seemed like overwhelming evidence in its favour. The numbers looked good. The projected market fit was strong. The engineering team had already built half the necessary infrastructure. It felt like a calculated risk. It was not. I had overestimated internal capability. I had underestimated the cost of pivoting mid-cycle. I had not thought deeply enough about what would happen if things went wrong.
If I had forced myself to step back, to challenge the decision, I would have seen the risks earlier. Now, I make that process part of every major strategic review. Before approving a direction, I have AI test its weaknesses.
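To make the exercise concrete, here is a hedged sketch of the kind of “argue against me” prompt I have in mind, reusing the hypothetical ask_company_gpt and retrieve_reports helpers from the earlier example. The prompt wording and the placeholder decision are illustrations of the technique, not a transcript of my actual review process.

```python
# Sketch of a "challenge my decision" prompt. The helper functions and the
# decision text are hypothetical placeholders from the earlier example.

RED_TEAM_PROMPT = """You are arguing AGAINST the following decision.
Decision: {decision}

1. List the strongest reasons this could fail.
2. Cite any data in the provided reports that contradicts the decision.
3. Describe two plausible scenarios in which it goes wrong, and the early
   warning signs for each.
Finally, answer plainly: where might I be wrong?"""

# decision = "Shift product development towards the new platform in Q3."
# docs = retrieve_reports(["capacity", "costs", "delivery"], months=12)
# print(ask_company_gpt(RED_TEAM_PROMPT.format(decision=decision), docs))
```

The point of the structure is not the tooling but the discipline: the model is forced to take the opposing side before the decision is signed off.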
But for all its strengths, AI has one critical flaw. It cannot read people. It can analyse sentiment, extract meaning from text, and summarise reports, but it cannot judge conviction, loyalty, or resilience.
I have worked with people who produce impeccable data, yet when things turn difficult, they disappear. I have also worked with people whose analysis may be less structured but whose commitment is absolute, people who will hold the line when it matters.
This is something AI will never see.
I have sat in meetings where someone presents an idea, and everything on paper looks sound, yet I know—simply by watching them—that they do not believe in what they are saying. I have also seen the reverse: a proposal that seems weak on the surface but is backed by someone with the sheer force of will to make it succeed.
These are the moments where leadership remains fundamentally human. Data will never tell you who will stay when things get difficult. AI will never measure the kind of determination that turns a failing project into a success. And that is why, no matter how powerful AI becomes, it will always be a tool—not a decision-maker.
The best executives will not be the ones who let AI guide them. They will be the ones who know how to use AI as a tool while keeping their instincts sharp.
The future of leadership is not artificial intelligence. The future of leadership is human intelligence, augmented by AI but never replaced by it.
If you are still making decisions the same way you did five years ago, you are already behind. The only real question is: where is your blind spot today?
Jörn Green
Technology & Gaming Executive | Platform & AI Strategy | Former Ubisoft, Play’n GO, Sony