

A Sudbury-based health technology company has been selected for a national accelerator program run by Google, as it looks to expand its use of artificial intelligence in medical clinics.
Waive Medical is one of 14 startups chosen for the program, which supports companies building AI-driven products with mentorship and technical guidance, among other supports.
The company is the only participant from northern Ontario.
Founded in 2020, Waive develops software designed to reduce administrative workloads in healthcare settings. Its tools integrate with electronic medical record systems to process documents, manage tasks and follow up with patients.
Shreyansh Anand, co-founder and CEO of Waive, says clinic staff are often pulled away from patient care to handle basic administrative duties, and the company's software aims to eliminate some of that routine work.
“Staff are right now being underutilized. Instead of supporting the patients and working with higher value-added work with the doctors, such as preventative care, they’re right now being forced to focus on work like doing basic data entry on a document that comes in. That’s really reducing the clinic’s efficiencies,” he explained.

Anand gave an example of using the app to book patients for follow-up care.
“Right now, instead of the staff reaching out, our system will read that task and send out a message to the patient saying, ‘Hey, the doctor wants to book a follow-up, please book by going to this booking link or calling the office,’ or whatever form the clinic wants it to be. Once the patient books, it will stop the automation and update the doctor and the staff, saying a task has been completed as the patient has booked their appointment.”
The system is designed to operate in the background, with minimal disruption to clinical workflows.
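The loop Anand describes (read a pending follow-up task, message the patient, halt the automation once the booking lands, then notify staff) could be sketched roughly as follows. This is a minimal illustrative sketch only; every name, message, and signature here is hypothetical, since Waive has not published its implementation.

```python
from dataclasses import dataclass


@dataclass
class FollowUpTask:
    """A hypothetical follow-up task read from the clinic's EMR inbox."""
    patient_contact: str
    booked: bool = False


def send_message(contact: str, body: str) -> None:
    # Stand-in for whatever channel the clinic prefers (SMS, email, call).
    print(f"To {contact}: {body}")


def run_follow_up(task: FollowUpTask, booking_link: str) -> str:
    """Message the patient, unless the booking is already confirmed."""
    if task.booked:
        return "task already complete"
    send_message(
        task.patient_contact,
        f"Your doctor wants to book a follow-up. Please book here: {booking_link}",
    )
    return "awaiting booking"


def on_patient_booked(task: FollowUpTask) -> str:
    # A confirmed booking halts the automation and flags staff and doctor.
    task.booked = True
    return "task completed; staff notified"
```

The key design point in the quote is that the automation is event-driven: it stops itself on the booking confirmation rather than requiring staff to close the loop manually.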
“Most doctors don’t realize our system is live within their clinics,” he said, pointing to the hundreds of clinics currently using the program across six provinces.
Anand says the company applied to the Google accelerator as demand for its services continues to grow.
“We’re at a point where our team is really experiencing a lack of capacity because of how many clinics are looking to use our systems. We’re really looking for technical and business mentorship from Google to help our product get to the next level so we can support more clinics just as fast,” he said.
He says the team learned it had been accepted earlier this month and had its first call with the tech giant on Friday.
Experts point to ‘massive advantages’ with medical AI
David Eliot, author of Artificially Intelligent: The Very Human Story of AI and a PhD candidate at the University of Ottawa, says AI has significant potential in healthcare, particularly when used to handle routine tasks. One of those benefits, he says, is automation to address ongoing staffing shortages.
“People don’t realize that doctors, especially primary care physicians, spend a lot of their day doing that work, doing that administration, taking the notes, transcribing the meetings,” he said. “We’re giving those doctors an opportunity to do more tasks, to reach out to new people, to do more research, to take on more patients.”
At the same time, Eliot said there are unresolved questions, especially when AI is used beyond administrative functions.
“One of the biggest ones that we’re dealing with is liability,” he said. “When it’s a machine in there, you get this interesting problem where the machine is making the subjective call. If it’s looking at it and saying, ‘I don’t think that’s cancer,’ making the same decision a human might have made, and then it was cancer and it doesn’t get treated in time, who do we hold liable for that?”
He also points to privacy concerns around the use of sensitive health information.

“This is extremely sensitive data that we’re talking about here. Stuff that people hate having released for good reasons,” he said.
He said healthcare, unlike some other sectors, is governed by federal and provincial privacy laws that regulate how biometric and personal health data are stored, such as the federal Personal Information Protection and Electronic Documents Act and Ontario's Personal Health Information Protection Act.
Regardless, Eliot said patients should feel comfortable asking how AI is being used in their care.
“Good healthcare needs trust. And I think patients need to inform themselves about how to ask these questions, make sure they’re asking the right questions and interpreting the response of the medical doctors or the medical institution properly,” he said. “It’s something about being informed, understanding what’s being used, how it’s being used, and why they’re making that decision to use it.”
Company draws the line at decision-making
Anand says Waive’s technology is designed to support clinic operations, not replace clinical judgment.
“We also share those concerns that AI is creeping into medical decisions,” he said. “I’m not sure if it’s dangerous; it’s just that we don’t know yet.”
The company also says it is mindful of privacy, noting that patient data remains with the clinic and is only processed temporarily.
“The clinic data is always going to be with the clinic. It’s never going to be with us,” he said.