March 09, 2026

Your SaaS Is Dead. Your Expertise Isn't.


What a crypto analytics CEO and a venture capitalist agree on about surviving the AI economy.

By Tanwa Arpornthip, Senior Advisor, SCB 10X  •  AI-VOLUTION The Series 2026

I sat on stage with Alex Svanevik, CEO & Co-founder of Nansen, at AI-VOLUTION The Series 2026 in Bangkok. We were supposed to talk about how AI intersects with crypto. What actually happened was a 45-minute dissection of why most software companies are sleepwalking into irrelevance, why firing your people is the dumbest move a CEO can make right now, and why the most underrated use of AI is strategy.

Here is what I think everyone in that room needed to hear. And what most business leaders are still getting wrong.

The $3.50 question

Alex put it bluntly: “What is stopping your users from just rebuilding their own version of your software for $3.50 with Claude Code in maybe one hour?”

That question should terrify every SaaS founder reading this. Because the answer for most of them is “nothing.” Nothing is stopping your customer from recreating your product over a lunch break. And if the customer can do it, so can a competitor who will undercut you on price while they’re at it.

The comfortable assumption of the SaaS era was that building software is hard. It required teams of engineers, months of development cycles, rounds of QA. That assumption is collapsing. The cost of producing software is trending toward zero. And if your business model depends on the difficulty of building software, you are standing on ground that is actively liquefying under your feet.

This isn’t hypothetical. I watch this happen at SCB 10X every day. Three years ago, when we used Nansen’s product, a user had to click through multiple dashboards to find what they wanted. It was powerful, but it was built for power users who could navigate the complexity. Today, you type a question in natural language and the AI agent handles it. Discovery, decision-making, execution. All in one conversation. The interface layer that used to be the product is dissolving into a conversation.

SaaS doesn’t die. It evolves. Or it doesn’t.

Alex’s framework here is precise, and I think it is correct. He sees three survival paths for software companies.

The first is to become a vertical agent. Not a generic chatbot with a skin on top, but a deeply specialized AI that does a specific job better than any general-purpose model ever could. Nansen is doing this for on-chain investing. Cursor is doing this for coding. The key word is “vertical.” You pick a niche. You go deep. You accumulate proprietary data, specialized evaluations, and a purpose-built interface that cannot be recreated by someone typing “build me a trading app” into Claude Code.

The second path is to become a component for agents. You don’t serve the end user directly anymore. You become infrastructure that other AI agents rely on. Think payment rails for agents, data feeds, wallet integrations, APIs. Nansen plays in this space too with their API and CLI that other models can call.

The third is infrastructure. But that game is dominated by the foundation model companies and the tech giants. Google, OpenAI, Anthropic. Unless you have billions in capital and access to massive compute, this path is probably not for you.

Everything else? Everything that doesn’t fit into one of these three? That’s the kill zone.

The expertise moat

Here is where I think the conversation gets genuinely important, and where most commentary about AI misses the point entirely.

If you go to ChatGPT right now and type “What exercise should I do today?” you will get a generic answer. It will be technically correct and functionally useless. It doesn’t know your age, your injury history, your training schedule, whether you’re preparing for competition or just trying to stay healthy, whether you train mornings or evenings, how many days a week you can commit. An expert coach would ask all of those questions before prescribing a single rep.

That gap between generic capability and expert application is the moat. And it’s the only moat that matters now.

I frame it this way: AI is an expert scaler. You take your expertise, the thing you know how to do better than almost anyone else, and you dump it into the AI. Now that expertise operates at scale. Nansen dumped years of blockchain analytics knowledge into their AI agent, and suddenly every user has access to what used to require a dedicated crypto analyst. That’s not AI replacing expertise. That’s expertise being amplified to an absurd degree.

The question for every business is no longer “How do I reach the customer?” It is “How do I distinguish myself from a generic chatbot?” And the answer is always the same: your expertise. The things a general-purpose model simply does not know because it hasn’t lived your experience, worked your deals, served your clients, navigated your market.

If you are in marketing, what do you know about marketing antiques that ChatGPT doesn’t? If you are in sales, what closing techniques have you developed through thousands of conversations that no prompt can replicate? If you are in design, what principles of taste and judgment have you refined over a career?

That is your competitive advantage. Not your codebase. Not your dashboard. Your knowledge.

Firing people is a 10x productivity reduction

One of the most pernicious narratives in the AI discourse is the one about headcount reduction. “AI will replace X million jobs.” “We can cut our team by 40%.” Every time I hear this, I want to ask: have you actually done the math?

Here is the math. Each person on your team, equipped with AI tools, can potentially produce 10x, 50x, even 100x their previous output. Alex said it clearly: “Which makes them even more valuable. And so the rational thing is not to have less of them, but to have more of them.”

I experience this firsthand. As we sat on stage having that conversation, I had AI agents running research on 115 companies simultaneously. Each company’s research takes about 10 minutes of compute time. Last year, the same research took four hours per company when done by humans. That’s not four hours of active human labor being eliminated. That’s four hours being compressed into 10 minutes of autonomous work while I do something else entirely.
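The back-of-envelope math behind that claim is worth writing out. A minimal sketch, using only the figures quoted above (115 companies, four human-hours each last year, roughly ten minutes of compute each today):

```python
# Back-of-envelope math for the research pipeline described above.
# All numbers come from the talk; nothing here is measured independently.

companies = 115
human_hours_each = 4          # last year: per-company research time by humans
agent_minutes_each = 10       # today: per-company compute time for the AI agents

human_total_hours = companies * human_hours_each         # 460 human-hours
agent_total_hours = companies * agent_minutes_each / 60  # ~19.2 hours of compute
speedup_per_company = human_hours_each * 60 / agent_minutes_each  # 24x

print(f"{human_total_hours} human-hours -> {agent_total_hours:.1f} compute-hours "
      f"({speedup_per_company:.0f}x per company)")
```

And because the agents run autonomously and in parallel, even those ~19 compute-hours cost close to zero hours of the operator's attention, which is the real point.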

The person who set up that research pipeline, who knew what questions to ask, what data to prioritize, what signals to look for in the outputs, that person is now wildly more valuable. Not less. You fire that person and you lose the expertise that makes the AI useful in the first place.

Alex added something important here. When we talk about increased productivity, people imagine someone chained to their screen for 80 hours a week, grinding through Claude Code like a machine. That’s not what this looks like. At Nansen, they call it “A New Way to Work.” You go to the gym while agents work in the background. You attend a conference while your AI does research. You take a walk and brainstorm with your AirPods. The output goes up and the screen time can go down. That’s a healthier equation, not a dystopian one.

The taste problem (and why it’s temporary)

I told the audience something I believe strongly: AI went from giving me zero design capability to making me an average designer. The problem is that average is no longer good enough when your competitors can dump their expert taste into the same tools and immediately produce something superior.

People associate AI with “slop.” Generic outputs, that unmistakable “AI-made” aesthetic. And for the past two to three years, that association has been empirically accurate. But Alex made a prediction I agree with: this is the year that changes. When someone with genuine taste and expertise wields these tools, they don’t produce average work. They produce exceptional work at a velocity that was previously impossible. He described his Head of Design getting what amounts to an Iron Man suit. The same creative judgment, the same refined taste, but with capabilities that let him go deeper into details and cover more ground than any human could alone.

The insight here is subtle but crucial. AI doesn’t replace taste. It amplifies it. A person with no design sensibility plus AI equals average output. A person with refined design sensibility plus AI equals extraordinary output at extraordinary speed. The expert doesn’t become obsolete. The expert becomes superhuman.

The cost of knowledge work is trending toward zero

I said this on stage and I want to elaborate because I think it’s the most important macroeconomic observation about AI that nobody is talking about enough.

Knowledge work, the kind of work where the output is information, analysis, or decisions, is becoming nearly free to produce. The marginal cost of generating one more research report, one more market analysis, one more strategy document is approaching zero.

I gave a live example. That morning, I was trying to explain a storytelling technique to my team at SCB 10X. A side-by-side comparison format where Platform A speaks in full color while Platform B is grayed out, then they swap. Standard presentation design. Some team members didn’t get it from my verbal description. So I went to AI and said: “Create a rough animation, storyboard it down to the second, show me what happens and when.” I went to lunch. When I came back, there was a working prototype of the video.

The time from concept to communicable prototype collapsed from potentially days of back-and-forth with a designer to one prompt and a lunch break. That’s what it means for knowledge work to approach zero cost. The bottleneck isn’t production anymore. The bottleneck is knowing what to produce.

AGI is already here (for work)

We spent time on stage debating AGI, and I want to be direct about where I land on this. From a strictly academic perspective, we can argue endlessly about whether current systems truly constitute Artificial General Intelligence. The goalposts keep moving. First it was the Turing Test. Then it was creative writing. Then it was scientific reasoning. Every time AI clears a bar, we set a new one.

From a practical perspective, the debate is irrelevant. Think of the most capable executive assistant you could hire. An Ivy League graduate who speaks multiple languages, can program, do mathematics and statistics, conduct research, write analyses, and communicate on your behalf. Think about what you would pay for that person. Now consider that you can get functionally equivalent capability for under $250 a month.

For the level of work most of us do day-to-day, AGI is here. Not in some theoretical sense. In the practical sense that for knowledge work, the AI can do what you need it to do with a level of competence that meets or exceeds a highly capable human.

Alex made a good point about bottlenecks. The physical world is a bottleneck. You can’t iterate on lab experiments at the speed of token generation. Regulation is a bottleneck. Self-driving cars are technically feasible in many environments but legally prohibited. Security is a bottleneck. Giving an autonomous AI agent access to your company’s systems creates attack surfaces that most security teams aren’t prepared for. But within the domain of pure knowledge work? The constraints are disappearing faster than most organizations can adapt to.

The spirituality of matrix multiplication

I closed the panel with something that gets more true the longer I think about it. GPUs were not made for AI. They were made for gaming. The gaming community’s insatiable demand for faster graphics drove Nvidia to optimize matrix multiplication to an extraordinary degree. And it turns out the underlying mechanism for training neural networks is the exact same mathematical operation as rotating your cartoon character on a screen.
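That claim is checkable in a few lines. The sketch below (plain Python, no graphics or ML library) implements one matrix-vector multiply and uses it twice: once as a 2-D rotation, once as a toy dense neural-network layer with made-up weights. Same function, different matrix.

```python
import math

def matmul(A, x):
    """Multiply matrix A (a list of rows) by vector x."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

# Graphics: rotate the point (1, 0) by 90 degrees.
theta = math.pi / 2
rotation = [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]
rotated = matmul(rotation, [1.0, 0.0])  # lands at approximately (0, 1)

# Neural networks: a dense layer is the same operation with learned weights.
# These weights are arbitrary placeholders, not a trained model.
weights = [[0.5, -0.2],
           [0.1,  0.9],
           [0.3,  0.3]]
layer_output = matmul(weights, [1.0, 0.0])
```

GPUs are fast at exactly this operation, which is why hardware optimized for rotating game characters turned out to be hardware optimized for training neural networks.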

What are the chances that gamers inadvertently funded one of the most consequential technological breakthroughs in human history? The same hardware that renders Call of Duty created the substrate for systems that can reason, write, code, and strategize.

I’m not usually one for spiritual language, but there’s something deeply strange about the coincidence. And as someone who welcomed his first child into the world a year ago, wondering how to balance work and parenthood, and then watching AI make that balance achievable in ways I couldn’t have imagined, the strangeness feels less like chance and more like design.

What to do Monday morning

I’ll leave you with the framework I shared with the audience. I call it the Delegation Framework, and it works because it mirrors how you’d manage a brilliant but new employee.

Give the AI a task. Let it produce a first draft. It will look polished and be wrong because it doesn’t have your business context. That’s expected. Now point it to the relevant documents: your SOPs, your brand guidelines, your historical data, your strategic priorities. Have it revise. Now give it feedback. “This section doesn’t account for our supplier relationships. Go read the procurement SOP and incorporate it.” It revises again.

After three or four cycles, the output is genuinely good. More importantly, make the AI summarize what it learned from the feedback. Inject that summary into your prompt the next time. The system gets better with each iteration.

This is how you manage humans too. You don’t expect a new hire to nail it on day one. You give context, provide feedback, and build institutional knowledge over time. AI is no different except the cycle time is minutes instead of months.
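The loop above can be sketched in code. This is a structural illustration only: `ask_model` is a placeholder, not a real library call, and you would swap in whatever LLM client your stack uses. The shape of the loop — draft, ground in documents, feedback cycles, then a lessons summary to reuse next time — is the framework itself.

```python
def ask_model(prompt: str) -> str:
    # Placeholder: replace with a call to your actual LLM provider's API.
    return f"[model output for: {prompt[:40]}...]"

def delegate(task: str, context_docs: list[str], feedback_rounds: list[str]) -> tuple[str, str]:
    """Run the draft -> context -> feedback loop, then extract a lessons summary."""
    # 1. First draft: polished but wrong, because the model lacks your context.
    draft = ask_model(f"Task: {task}\nProduce a first draft.")

    # 2. Ground it in your own documents: SOPs, brand guidelines, historical data.
    context = "\n\n".join(context_docs)
    draft = ask_model(f"Task: {task}\nRelevant documents:\n{context}\nRevise:\n{draft}")

    # 3. Three or four feedback cycles, exactly as you would coach a new hire.
    for note in feedback_rounds:
        draft = ask_model(f"Feedback: {note}\nRevise:\n{draft}")

    # 4. Have the model summarize what it learned; inject this summary into
    #    your next prompt so the system improves with each iteration.
    lessons = ask_model(f"Summarize what you learned from the feedback on:\n{draft}")
    return draft, lessons
```

Persisting that `lessons` string and prepending it to the next task's prompt is what turns a one-off session into accumulating institutional knowledge.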

The companies and individuals who will thrive in this era are not the ones with the best AI tools. Everyone has access to the same tools. The ones who will thrive are the ones with the deepest expertise, the clearest processes for encoding that expertise into AI systems, and the discipline to iterate until the output matches their standards.

Your SaaS might be dead. Your expertise never will be.

Watch the full discussion at https://www.youtube.com/watch?v=zzoVO_1A3fE&t=4s
