Key Takeaways
- In the broader digital ecosystem, how telehealth platforms market their services and engage patients can affect patient safety and outcomes, not just the care they deliver.
- AI-driven patient acquisition and marketing is not subject to the same oversight as clinical tools, creating potential compliance and patient safety risks.
A rapidly growing artificial intelligence (AI)-powered telehealth startup is drawing scrutiny from regulators and industry observers, raising broader questions about oversight, advertising practices, and patient safety in the digital health sector.
MEDVi, a telehealth company focused on weight-loss treatments, has reported explosive growth, generating hundreds of millions in revenue with a minimal workforce and heavy reliance on AI tools for operations, marketing, and customer interaction.
Recent investigations and regulatory actions, however, suggest that the company’s rise may also expose gaps in the governance of AI-enabled telehealth platforms.
FDA Warning Letter Highlights Compliance Issues
On Feb. 20, 2026, the U.S. Food and Drug Administration (FDA) issued a warning letter to MEDVi, along with 29 other companies, after reviewing its website and marketing practices. The FDA said the company promoted compounded versions of semaglutide and tirzepatide and made claims that were “false or misleading.” Those claims included assertions that its products were equivalent to FDA-approved drugs and that the company compounded the medications itself.
The warning letter reflects a broader enforcement trend, as the FDA has targeted telehealth companies marketing compounded weight-loss drugs, particularly as supply shortages for branded medications have eased. The warning letters are not formal enforcement actions, but they do signal regulatory concern and require companies to address the identified issues.
AI-Driven Marketing Practices Under Scrutiny
Separate reporting has raised concerns about MEDVi’s advertising ecosystem, which relies heavily on affiliate marketing and AI-generated content. Business Insider investigations found that some advertisements promoting MEDVi services featured what appeared to be fabricated physician identities, including profiles using AI-generated images and misleading credentials.
The company’s founder has said that a portion of its marketing is driven by affiliates and that policies are in place to remove non-compliant advertising. However, oversight of those affiliates remains a central concern.
The use of AI-generated marketing, including personas, in health-related advertising has drawn attention, particularly when disclosures are unclear or absent. MEDVi’s business model relies on partnerships with third-party clinical infrastructure providers, such as OpenLoop Health, to supply licensed clinicians, process prescriptions, and fulfill pharmacy orders.
This approach highlights a growing trend in digital health: separating front-end patient acquisition and engagement from backend clinical services. While the model may enable rapid scaling and lower operational costs, it also raises questions about accountability for clinical decisions, oversight of marketing practices, and the role of AI in patient-facing interactions.
The MEDVi case comes at a time when policymakers are grappling with how to regulate AI in health care, particularly as telehealth platforms expand. Several policy challenges are relevant in this case.
Fragmented Oversight
Responsibility for telehealth platforms spans multiple federal and state entities, including the FDA, which oversees drug safety, labeling, and marketing claims; the Federal Trade Commission (FTC), which regulates deceptive advertising and consumer protection; and state medical boards, which govern licensure and clinical practice standards. This multi-agency framework can create regulatory gaps or delay enforcement, particularly when companies operate across jurisdictions or blur the lines between clinical care, marketing, and technology.
AI in Patient Acquisition vs. Care Delivery
While regulatory focus has centered on clinical AI tools as medical devices, MEDVi’s model underscores the risks of AI use in marketing, patient engagement, and intake processes. These functions are less tightly regulated, governed primarily by general consumer protection laws rather than health care-specific regulations. As a result, AI-driven marketing and patient interactions can influence patient behavior without facing the same level of scrutiny applied to clinical tools.
Recent FTC actions against digital health companies for misleading health claims suggest that regulators are beginning to expand scrutiny into these non-clinical AI uses, particularly when they intersect with health outcomes. For example, the FTC took action against NextMed, a telehealth weight-loss platform, and its operators after the company marketed access to specific GLP-1 drugs through its subscription model using unsubstantiated claims about average weight-loss outcomes.
Compounded Medication Oversight
Telehealth companies offering compounded medications operate within a complex regulatory environment, particularly when marketing claims intersect with FDA approval standards. Compounded drugs are not FDA-approved, and companies must avoid making claims suggesting equivalence with approved medications. The FDA has increased enforcement in this area, particularly around compounded GLP-1 drugs, emphasizing that marketing claims must not be false or misleading and must clearly distinguish compounded products from approved drugs.
As shortages of branded medications have resolved, FDA scrutiny has intensified, particularly of telehealth companies, such as MEDVi, that combine prescribing with aggressive marketing strategies.
Affiliate and Third-Party Risks
Some digital health companies, such as Hims & Hers, rely on affiliate marketing networks or third-party advertisers to drive patient acquisition, a practice that has drawn criticism from the clinical community. These decentralized marketing networks introduce compliance challenges, especially when companies depend on third parties to generate patient demand.
The FTC has repeatedly emphasized that companies are responsible for the claims made by their affiliates and must ensure that marketing is truthful, substantiated, and properly disclosed. Failure to monitor third-party marketing can result in enforcement actions, even if the company did not directly create the content. In practice, however, such issues have been documented for years, with social media influencers and pharmaceutical companies continuing to violate drug-advertising requirements with few visible repercussions.
A Test Case for AI-Enabled Telehealth
The MEDVi situation reflects a broader tension in digital health: innovation is outpacing regulatory frameworks. The company’s rapid growth, combined with regulatory scrutiny, highlights how AI-driven models may test existing policy boundaries in telehealth, particularly around transparency, clinical accountability, and consumer protection. As federal and state regulators continue to evaluate AI’s role in health care, cases like MEDVi may help shape future guidance on how digital health companies deploy AI across both clinical and non-clinical functions.
For clinicians, the case underscores the importance of evaluating not only the clinical tools used in telehealth but also the broader digital ecosystem, including marketing practices, data use, and third-party partnerships.
Disclosures:
- This article was developed with AI-assisted research tools and edited by the Telehealth News editorial team for accuracy and clarity.