If you hadn’t heard, there’s an AI race sweeping through Silicon Valley.
While stalwarts like Google, Meta, and Microsoft, and newcomers like OpenAI and Anthropic, are grabbing the headlines as they unveil model after model, there are plenty of startups trying to find a foothold.
For investors, it can be hard to tell which of these upstarts have true technical potential and which are likely to fade into tech oblivion.
That’s part of why Jenny Xiao, the cofounder of Leonis Capital, said it’s important for venture capitalists to get serious about understanding the AI technology they’re investing in.
Leonis Capital, founded in 2021, is now deploying its second $40 million fund. The venture capital firm is focused on next-generation AI companies and has invested in over a dozen since its launch, including MaintainX, Motion, and SpectroCloud.
Business Insider asked Xiao and her cofounder, Jay Zhao, to share the top five questions they ask founders when evaluating their tech. They responded over email; their answers have been condensed and edited for clarity.
Xiao’s Top 5:
What becomes possible if models improve by 10-20%?
The best founders don’t think in terms of incremental feature improvements — they think in capability thresholds. We’re looking for whether they understand that progress in AI is often non-linear, and whether they can anticipate which future capabilities might fundamentally change or even break their product.
One of the most interesting takeaways from our Leonis AI 100 — where we benchmarked the most important AI startups — is that the strongest AI founders build just ahead of the next technical breakthrough.
A good answer talks about entirely new workflows that get unlocked, not just marginal efficiency gains, and clearly explains how the company would adapt or pivot as the technology evolves.
Is what you’re building already on the road map of a foundation model lab?
Most AI startups don’t fail because they’re bad, but because they’re building something that OpenAI, Anthropic, or Google can eventually ship “for free” as a feature.
That’s why we don’t accept “they’re not focused on this” as an answer. We push founders to explain what internal constraint would actually prevent a foundation model lab from building the same thing. If the answer comes down to focus or culture, that’s not a real moat.
Strong answers acknowledge that the big labs technically could build it, but doing so would break their incentive structure, pricing, or distribution model; require operational complexity that doesn’t scale for them; or shift value to downstream execution rather than model capability.
Some founders can also credibly argue they have a 12- to 18-month head start. Weak answers, by contrast, lean on claims like being “more vertical,” having “more niche data,” or understanding customers better — responses we hear from about 95% of founders.
We’ve noticed a consistent pattern: Most founders underestimate foundation models, and most VCs underestimate them even more.
What data exists only because your product exists?
Asking “How much proprietary data do you have?” is usually the wrong question, since no early-stage company has truly meaningful data. And if a founder’s pitch boils down to “we have more data, therefore our models are better,” that’s a red flag: foundation models often improve faster than proprietary ones, and the companies building them have enough capital to buy data and quickly close any gaps.
What matters is not how much data exists today, but whether the product naturally generates better data over time. Examples include industrial systems where data only emerges once software is embedded in workflows, or products where switching costs come from accumulated context.
What would happen if a competitor cloned your product in 30 days?
We ask this question to understand where defensibility actually lives once code becomes commoditized. If a founder can’t answer it clearly, they likely don’t understand their own moat.
Cosmetic advantages like code, UI, or models are easy to copy, while structural advantages — such as being a system of record or being embedded into compliance, audits, or standard operating procedures — are much harder to replicate.
The question also reveals founder temperament: Strong founders are candid about their vulnerabilities (“Here’s what we’re most worried about”), while weaker ones tend to get defensive and insist a threat “would never happen.”
What part of your system gets harder to change as you scale?
This question forces founders to explain their earliest technical decisions and the trade-offs they intentionally made, revealing whether they understand system dynamics rather than just iterating on features.
Strong answers sound like, “We chose to execute actions, not just make suggestions, which increased liability but created real lock-in,” or “We became the system of record instead of a thin layer, which slowed integrations and sales early but made switching organizationally expensive later.” By contrast, many founders default to saying they want to “stay flexible,” which often signals they haven’t yet designed anything fundamental.
The best AI systems are opinionated by design, deliberately removing degrees of freedom and hard-coding assumptions about workflows, authority, or data flow.
Zhao’s Top 5:
What did you have to unlearn to see this opportunity?
Real insight usually comes from abandoning old assumptions. This question helps distinguish founders who simply stumbled into AI from those who experienced a genuine shift in how they see the world.
We’re looking for intellectual flexibility, not credentials. Strong answers point to a specific belief the founder once held, why they believed it, and what changed their mind; weak answers stay vague, like saying they “realized AI was going to be big.”
What would a well-funded team get wrong if they tried to copy you?
This question assumes copying is possible and pushes founders to explain what isn’t obvious. Shallow answers fall back on claims like “they couldn’t move as fast” or “they don’t have our data,” while stronger ones point to hard-to-see advantages such as deep operational knowledge, customer-specific integrations, or accumulated context that isn’t visible from the outside.
When was the last time you changed your mind about something important?
Founders who can’t point to moments where they changed their mind usually aren’t learning — they’re executing a fixed plan in a world that won’t stay fixed.
This question reveals epistemic humility and whether a founder is genuinely truth-seeking or just looking for confirmation. AI moves too fast for people who can’t update, and we’re especially wary of founders who treat every pivot as vindication rather than correction.
What has to go right in the world — not just inside your company — for this to work?
Market timing, regulatory shifts, and platform dependencies kill far more startups than bad management. This question tests whether founders think probabilistically about external forces and whether they’ve designed for resilience instead of relying on luck.
Founders who say “nothing” haven’t thought hard enough. The ones we want to back can name several concrete external risks and explain how they’re hedging against them.
Tell me about a decision you made that the people around you disagreed with. How did you sit with that?
The best founders aren’t contrarian for its own sake, but they can hold an unpopular position long enough for the world to catch up.