Why resilience, not prediction, is now the most valuable skill.
Ernest Hemingway once described bankruptcy in a way that feels uncomfortably relevant to markets and careers today. “Two ways,” he wrote. “Gradually, then suddenly.”
The last four days in U.S. markets were a reminder of how quickly the mood can swing when a trade gets crowded and the story starts to wobble. The AI-driven bull market has been built on a clean narrative about productivity, platform dominance, and a new cycle of technology spending. But when markets are priced for perfection, the margin for error collapses. A slightly weaker data point, a slightly ambiguous earnings call, or a subtle shift in rates can turn a rally into a rout. The selling then becomes less about fundamentals and more about psychology. Panic is contagious, and in modern markets, it spreads at the speed of an algorithm. The facts did not change this week. The music just got quieter.
What made this week more interesting is what came back to the surface underneath the noise. The real repricing was not about one company or one quarter. It was about a fear investors have been trying to postpone: the possibility that large models will not just improve the software industry, but swallow a meaningful part of it. What changed is not the technology. What changed is the market’s willingness to believe the incumbents are safe.
Investors have been happy to price AI as a tailwind to enterprise software. What they have been less eager to price is AI as a substitute. That is the difference between a new feature and a new layer of the stack.
For decades, enterprise software was built on modern interfaces, proprietary workflows, and switching costs. Large models threaten that structure by turning software into an interchangeable layer on top of natural language. If the user can simply ask for outcomes, the value shifts away from the application and toward the intelligence substrate. In a world of natural language interfaces, SaaS starts to look like a tax on friction. The moat shifts. The economics move. And a few incumbents, for the first time in a generation, look exposed.
The uncomfortable question is not even that complicated. If a company can generate a custom workflow in minutes with vibe coding, why would it pay thousands of dollars per seat for the privilege of clicking through someone else's menus?
This should not have been surprising. It has been visible for a while. But markets, like people, have a habit of delaying obvious conclusions when the trend is working in their favor. In a bull market, the crowd does not want the truth. It wants confirmation.
This is also why running an AI startup in 2025 feels like building a company on constantly shifting ground. Every time OpenAI, Google, or Anthropic releases a new model, the entire industry recalibrates its expectations almost instantly. Founders feel it first, but investors, customers, and teams feel it too. The line that “hundreds of startups die with every new model release” now feels less like a joke and more like a description of how the market actually behaves.
The consolidation has a simple explanation. Tens of billions of dollars have flowed into foundation models, and by 2026, performance gains are concentrated among a handful of players. Even their relative rankings shift quarter by quarter. One week, Gemini looks ahead. Another week, Anthropic looks ahead. The rankings may flip again. The deeper takeaway remains stable. The horizontal LLM layer is saturated. The opportunity has moved up the stack, into vertical AI, proprietary workflows, and domain-constrained precision. Strategy in AI now looks less like pure innovation and more like positioning.
For a while, vertical AI looked like the safer bet. Early winners validated the thesis. Harvey became the standout example in legal tech, with adoption that proved serious firms were willing to change behavior. But the terrain shifted again. Demos began circulating showing how generalist systems such as Claude could recreate meaningful parts of vertical products with nothing more than prompting, documentation, and a subscription plan. Those demos were not production-grade, but they did not need to be. The signal was clear. Code is becoming a commodity. Workflow is becoming a commodity. Vertical differentiation is more fragile than many founders want to admit.
And then comes the second uncomfortable truth, the one that founders often whisper but rarely say out loud. It is not only that traditional software companies may be disrupted. It is that many AI companies will not survive either. The early-stage AI ecosystem is full of firms that have impressive demos but limited defensibility, dependent on rented infrastructure and undifferentiated data. In a world where model capabilities improve rapidly and distribution consolidates, a significant share of these firms will be squeezed between the hyperscalers above them and the incumbents beside them.
This is not just a software story. It is simply the most visible case study of a broader shift. What investors are experiencing in software is what leaders are experiencing everywhere: a system where the rules change faster than strategy can adapt.
We are living in an era where the most important forces shaping business, politics, finance, and personal lives are increasingly wicked. The best way to explain what that means is to contrast wicked environments with kind ones.
Kind environments are the ones most of us were trained for. The rules are stable. Feedback is relatively fast. Skill translates into outcomes. The future resembles the past enough for planning to work. Most corporate careers were designed as kind environments. In kind environments, competence compounds.
Wicked environments are the opposite. The rules change mid-game. Feedback arrives late, distorted, or not at all. Outcomes are driven as much by hidden forces as by skill. Past patterns stop being reliable guides. Markets are wicked. Geopolitics is wicked. Venture capital is wicked.
Chess is a kind environment. Backgammon is a wicked one. And increasingly, so is the modern professional economy. Some careers still reward stability and repetition, but the arenas that shape power, wealth, and security—technology, finance, geopolitics—are becoming less predictable and more reflexive. In those environments, adaptability isn't a nice-to-have. It's the compounding edge.
This is why the old distinction remains so useful: we know what we know, we know what we do not know, and we do not know what we do not know. The first category is comforting. The second category is manageable. The third category is where the real surprises live.
Those surprises are not risk. Risk is measurable. It is the kind of uncertainty you can price, hedge, insure, or model. The surprises are the events you cannot model because you do not even know they exist. They are the shocks that arrive without permission and then rewrite the context in which all your previous decisions were made.
This is the trap for modern leaders and professionals. There are so many things we do not know that it is tempting to treat the entire future as unknowable, and to surrender to either fatalism or constant anxiety. But unstable systems do not require omniscience. They require prioritization. We cannot know the next shock, but we can know what kind of world produces shocks. Leaders are not immune to uncertainty. They are simply expected to metabolize it for everyone else.
From markets and AI, we can extract three leadership realities.
First, regime shifts are happening faster than institutions can adapt. Second, many business moats are thinning, including those that looked unbreakable only a few years ago. Third, the range of outcomes is widening, which means resilience matters more than prediction.
The leadership failure of unstable environments is not ignorance. It is overconfidence. Leaders tend to behave as if the future will be a slightly modified version of the past. They over-optimize. They build strategies that require one world to arrive on schedule. They build organizations that cannot absorb shocks. They build careers that depend on one identity remaining valuable.
This is why the biggest strategic error in these environments is over-optimization. Most people respond to uncertainty by doubling down on prediction. They try to pick the single best profession, the single best investment, the single best corporate strategy. They confuse decisiveness with robustness. But optimization is a strategy for kind environments. In wicked ones, it is how you become brittle.
The correct response is not better prediction. It is better preparation.
Resilience is not conservatism. It is the ability to absorb shocks, recover quickly, and redeploy resources without losing strategic coherence. It is not about avoiding risk. It is about avoiding fragility. And resilience is expensive. It feels inefficient right up until the moment it saves you.
When I think about how to operate in this environment, I keep coming back to four layers. Not a philosophy, not a vibe, but a practical stack.
The layers are redundancy, optionality, judgment, and empathy. Redundancy keeps you alive. Optionality gives you choices. Judgment gives you direction. Empathy keeps people moving with you.
Redundancy is the ability to withstand shocks without collapsing. In finance, it means not being dependent on one asset class. In business, it means not being dependent on one supplier, one growth channel, or one narrative. In a career, it means not being dependent on one identity.
Optionality is the ability to pivot when conditions change. For individuals, this means building at least two economic selves. Not hobbies. Not side hustles for social media. Real capabilities that can generate value in different scenarios. In wicked environments, geographic myopia is a form of fragility. A career anchored in one market, one network, or one regulatory system is more exposed than it looks. Think of the Renaissance man of the past, but more hybrid: a centaur.
A simple example is already visible across the professional economy. A lawyer who understands prompting, workflow automation, and client psychology will outlast the lawyer who only knows case law. A marketer who can use AI tools and also understands human motivation will outperform the one who only knows tools. The future belongs to people who combine machine leverage with human judgment.
For companies, optionality means modular strategies, product portfolios that can be reweighted, and talent systems that can redeploy people rather than discard them. In unstable systems, internal mobility is not a perk. It is an operating requirement.
Judgment is the premium skill of the AI era. AI can produce endless information, endless summaries, and endless plausible strategies. But judgment is the ability to decide what matters, to interpret signals amid noise, and to make decisions under uncertainty with incomplete information. In practice, judgment is what separates strategy from content.
This is also where the scope-certainty trade-off becomes unavoidable. Luciano Floridi, a leading scholar of digital ethics, has argued that you cannot maximize both the scope of an AI system and its certainty. Broad systems inevitably make more errors. Narrow systems can be precise, but only within their limited domain. The same tradeoff is now showing up in leadership. Broad strategies look elegant in PowerPoint, but they collapse under real-world friction. Narrow strategies can execute, but risk irrelevance if conditions change. Wicked environments force leaders to manage this tension explicitly rather than pretending it does not exist.
Empathy, finally, is not a soft skill. It is an operating system for leading humans under stress. In a stable world, empathy is often treated as a cultural bonus. In an unstable one, it becomes a strategic asset.
In practice, empathy looks like overcommunicating, naming uncertainty out loud, and making people feel the ground is stable even when it is not. People do not behave like spreadsheets when uncertainty rises. They become fearful. They become reactive. They become tribal. They seek simple narratives. They resist change. Leaders who cannot read the emotional temperature of their teams and customers will misinterpret reality. And in wicked environments, misinterpretation is not a minor error. It is a fatal one.
So what should leaders, founders, and professionals do now, beyond vague advice about agility?
Start by stopping a few things.
Stop treating five-year plans as commitments rather than hypotheses. Stop building strategy around a single narrative that requires the world to cooperate. Stop confusing cost cutting with resilience, because cutting muscle and calling it fitness is not a strategy. Stop outsourcing judgment to models simply because the output looks polished. Stop treating talent as a cost center in an environment where learning speed is the only durable advantage.
Then start designing for resilience.
For individuals, that means building at least two economic identities, treating AI as literacy rather than novelty, investing in portable reputation, and staying global in your thinking. In wicked environments, the most dangerous person is the one with no room to move.
For organizations, it means shifting from long range planning to scenario based operating models, increasing modularity in org design, and building internal mobility systems that treat talent as a portfolio of capabilities rather than a fixed set of roles. For the same reason, resilient firms build multiple supply chain options, multiple go-to-market motions, and multiple leadership bench candidates. They are not betting on one world. They are building for several.
For founders, the recommendation is even more brutal. Accept the trade-off between scope, certainty, and defensibility, and be honest about it. In order to survive in today's AI market, you need at least two, and ideally three, of the following: proprietary data, a unique workflow deeply embedded in a customer's day, and a genuine algorithmic edge. Five years ago, workflow alone could be enough. Today, if you cannot credibly say you have at least two, you are not building a company. You are building a feature that will be absorbed.
The comforting story of the last generation was that if you did everything right, you would be safe. The emerging story of this generation is harsher but more honest: safety is not guaranteed, but resilience can be built.
And if you want the simplest summary of how collapse, disruption, and professional irrelevance tend to happen, Hemingway already gave it to us. Gradually, then suddenly.
Cenk Sidar
