Kintsugi CEO Warns Building Healthcare AI Is Financially Unsustainable


Building artificial intelligence (AI) for healthcare has never been more technically feasible — or more financially punishing. As generative AI innovation and adoption accelerate, even startups with clinically validated models, peer-reviewed trials, and enterprise demand are discovering that scientific proof no longer guarantees commercial survival. Regulatory clearance does.

Kintsugi, a mental health AI startup that developed clinically validated voice biomarkers capable of detecting depression and anxiety from short clips of free-form speech, is the latest example. After raising roughly $30 million, completing years of clinical research, and conducting a pivotal study of about 1,600 participants over four years, the California-based company recently shut down its operations.

“When you’re building at the intersection of AI and a highly regulated healthcare space, venture investors still expect you to be at $100 million in ARR by year ten—and now, with the acceleration driven by AI and LLMs, they expect that by year three or five. But in healthcare, you can’t even sell your commercial product until you are FDA cleared,” Grace Chang, founder and CEO of Kintsugi, told me. “That fundamental mismatch between venture timelines and regulatory timelines creates enormous structural tension for startups like ours.”

However, instead of selling its technology or letting it disappear inside an acquisition, Kintsugi is releasing its core AI models and research publicly so other developers can continue building systems for global mental health innovation.

“We realized that giving it away, releasing everything under Apache 2.0, the most liberal license, might actually create the greatest impact. There would be no proprietary IP barrier. Any company can use it commercially without paying us,” said Chang. “After seven years and roughly $30 million invested, handing it to the world felt like burning the boats. But it also felt like staying true to our mission of making mental healthcare access more visible and scalable.”

The decision reflects both a mission-driven exit and a structural warning for the AI healthcare sector. Recent industry data shows healthcare AI shutdowns rose more than 25% between 2024 and 2025 amid a broader funding contraction, with mental health AI companies facing particularly intense scrutiny around clinical efficacy and regulatory compliance. New legislation has further tightened oversight. In August 2025, Illinois Gov. J.B. Pritzker signed the Wellness and Oversight for Psychological Resources Act (HB 1806), which prohibits the use of AI to provide therapy and psychotherapy services in the state. Industry analysts estimate that only about 16% of mental health AI tools have undergone rigorous clinical testing, raising concerns among regulators about real-world effectiveness.

Kintsugi’s team considered pivoting toward security applications after validating the technology’s relevance across telecommunications and financial institutions, Chang said. However, the shift created investor confusion over the company’s identity. That ambiguity complicated fundraising, weakened investment rounds, and led to increasingly difficult terms from venture firms, even as it prompted intense acquisition overtures from large technology companies and well-funded startups.

“We tried to raise an additional $10 million around the security dimension of our work, and investors kept asking, ‘Are you security or are you healthcare?’ Even though the surface area was voice and human insight detection. That ambiguity became contentious,” she explained. “The terms we were offered were almost untenable. It felt like the market didn’t quite know where to place us.”

In the company’s final weeks, acquisition conversations grew increasingly predatory. Some were framed as attractive opportunities but largely amounted to short-term payroll extensions — roughly $25,000 to $50,000 a week — in exchange for as much as $1 million in equity. “At that point, you have to ask what the company would even walk away with,” Chang said. “You’re negotiating on a compressed timeline, and everyone knows it.”

The Capital Burden Of Regulated AI

Kintsugi spent four years running seven clinical trials. Chang has a background spanning product management, entrepreneurship and machine learning. She holds a bachelor’s degree in computer science and economics from the University of Southern California and an MBA from UCLA Anderson, and later attended Stanford University. Under her leadership, Kintsugi set out to expand mental health access by embedding voice-based AI into clinical workflows for insurers, health systems, pharmaceutical companies, and employers.

The company was on the verge of securing FDA De Novo clearance and launched as Japan’s default mental health screening tool before announcing the shutdown of commercial operations in Feb. 2026. The platform’s science worked; its economics, however, proved far more challenging.

“If what you’re building has never been done before, the FDA’s de novo pathway is almost prohibitively expensive for a startup. You have to plan for millions more than you think. Consultants charge $300 to $400 per hour just to coordinate trials and clean data,” said Chang. “And when you’re paying hourly, the timeline always stretches. It’s designed in a way where delays benefit the system, but for a startup, every extra month can be existential.”

Health systems and insurance companies require clearance from the U.S. Food and Drug Administration (FDA) before they can widely use AI tools, especially those that are related to clinical decision support or reimbursement.

Clearance for clinical AI tools, however, takes years and involves significant validation studies. Moreover, specialized machine learning engineers command premium salaries, while clinical trials and infrastructure add millions of dollars in expenses before any revenue is generated. Startups have to bear these costs before commercialization is even feasible. Likewise, macroeconomic instability and reduced budgets across healthcare systems have slowed pilot programs and extended procurement cycles. Enterprise customers are typically hesitant to adopt new clinical solutions, especially in times of policy uncertainty or constrained government spending.

“Working with large enterprises is very illusory. You see big potential revenue numbers—the kind your investors pressure you to chase—but the sales cycles are long. When an enterprise says, ‘Let’s revisit this next quarter,’ that’s nothing to them. But three months to a startup is a lifetime. You can burn through capital waiting for decisions that feel minor on their side,” said Chang.

This creates a mismatch between the rate of technological innovation and the economics of regulated deployment. Even firms that have proven technology and a clinical need can find it difficult to close the gap. “For us, the barrier wasn’t a lack of evidence but the cost structure required to bring that evidence into production. For an independent startup, sustaining that asymmetry proved untenable,” Chang told me.

Kintsugi has open-sourced its clinically validated voice biomarker models under the permissive Apache 2.0 license, which allows unrestricted commercial use, along with model cards documenting validation metrics and dataset structure, documentation of its signal-processing and training pipeline, and aggregated, non-identifiable dataset metadata built over seven years. The company is not releasing raw patient audio or any personally identifiable data.

Chang said it is striking that it can take seven years to bring to market a tool designed to improve mental health screening, even as fewer than 4% of patients complete the paper-based assessments still widely used today. “The need is obvious, but the friction is systemic. When you’re building something that delivers objective signals about human health, you step into expanded regulatory oversight, and that slows everything.”

Kintsugi is not alone in its struggles with the regulatory and clinical challenges in the mental health AI industry. Mindstrong, a former prominent digital mental health startup, originally focused on using AI to create biomarkers based on smartphone usage patterns to diagnose and track neuropsychiatric disorders. After running into regulatory and clinical adoption challenges, the company shifted its focus to virtual therapy and coaching offerings but ultimately shut down in early 2023, laying off more than 130 employees despite raising over $100 million in funding.

A Structural Warning For AI Startups

Kintsugi’s corporate life is coming to an end, but its research will live on. The shutdown points to a larger structural issue in the development of AI in regulated spaces: Who can afford to bring validated technology to market? As regulatory expectations and development costs continue to rise, more early-stage companies will likely be forced to prove both clinical and financial viability before scale is even a possibility.

Chang said that if there is one lesson she would offer founders, it is to “lean into discomfort.” Regulatory ambiguity, enterprise uncertainty, and funding volatility are unavoidable when building in highly regulated markets, she noted, but confronting those realities directly leads to clearer decisions.

“The more you shine a light on what’s actually happening underneath, the better your decisions will be,” she said. “Sometimes that truth forces hard questions. Is our differentiation real? Are we willing to keep pushing this rock uphill without knowing when it will finally roll down? That’s the startup journey.”
