Healthcare is often cited as a field with great potential for AI, both to assist with medical treatment and to reduce the time-consuming administrative duties that come with providing clinical care. Paris-based digital health startup Nabla, co-founded by AI expert Alexandre Lebrun, says it is the first to build a GPT-3-powered tool to assist physicians in their work, particularly with paperwork.
Nabla has unveiled its new product, called Copilot, a digital assistant for medical professionals. The service launches as a Chrome extension that helps transcribe and use information from video meetings; an in-person consultation tool is expected in a few weeks.
As medical professionals talk with patients, Copilot automatically converts what is said into the documents that would typically result from such a meeting, such as prescriptions, follow-up appointment letters, and consultation summaries. The technology is based on GPT-3, a language model built by OpenAI that can produce human-like text and that is being used to create numerous applications, including OpenAI’s own ChatGPT.
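The article doesn’t describe how Nabla frames its requests to the model. As a purely hypothetical sketch of the general pattern, a transcript of a consultation might be assembled into a single prompt for a text-completion model, with the target document type named in the instruction (the function name, instruction wording, and document types here are all assumptions for illustration):

```python
def build_note_prompt(transcript: list[tuple[str, str]], document: str) -> str:
    """Turn a (speaker, utterance) transcript into one prompt asking a
    completion model such as GPT-3 to draft a single clinical document.
    `document` might be 'consultation summary' or 'follow-up letter'."""
    dialogue = "\n".join(f"{speaker}: {utterance}" for speaker, utterance in transcript)
    return (
        "Below is a transcript of a doctor-patient consultation.\n\n"
        f"{dialogue}\n\n"
        f"Write a {document} based only on the transcript above:\n"
    )

transcript = [
    ("Doctor", "What brings you in today?"),
    ("Patient", "I've had a persistent cough for two weeks."),
]
prompt = build_note_prompt(transcript, "consultation summary")
# The resulting string would then be sent to a completion endpoint;
# a real system would generate one prompt per document type.
```

The same transcript can be reused with a different `document` argument to draft each of the outputs the article mentions, which is one plausible way a single recording becomes several documents.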
Nabla was one of the first organisations to try out GPT-3 when it arrived in 2020, and it currently uses GPT-3 (as a paying customer) as the basis of Copilot. According to Lebrun, the near-term aim is to build its own large language model, tailored to the specialised language and requirements of medicine and healthcare, to power Copilot, any other services Nabla develops, and perhaps even applications for other companies.
The startup says its early version has already gained traction, with users among medical professionals in the US and France, along with more than 20 physical and digital medical centers with sizeable medical staffs.
It remains to be seen what broad, long-term applications generative AI technologies will have, whether the large language models that drive them will be a positive or negative influence on society, and whether they will be profitable.
At present, healthcare is one of the primary areas of focus for people examining the effects of these developments, split along two lines of growth. The first is applying the technology for clinical support, as in the Harvard Medical School paper discussing the use of ChatGPT for patient diagnosis. The second is automating more mundane tasks, as highlighted in the Lancet report on the future of medical discharge summaries.
Much of this work is still in its early phases, primarily because healthcare is a particularly sensitive field.
Lebrun stressed in an interview that although language models can be very useful, they are also unreliable: by his estimate, they can be wrong around 5% of the time. That, he noted, is not acceptable in healthcare, where mistakes can have dire consequences.
Healthcare looks like an ideal area for AI assistance because clinicians are overwhelmed and worn out. The profession worldwide suffers a persistent shortage of doctors, with some leaving the field, while those who remain must handle a great deal of specialised paperwork on top of treating patients. Completing the documentation and scheduling required by laws, regulations, and patients’ needs is laborious, and human error still creeps in.
Despite this, many aspects of medical care have already been digitized, making patients and doctors more willing to use digital tools for the rest.
Lebrun was driven by this idea when he founded Nabla, aiming primarily to assist doctors with administrative tasks rather than medical procedures or patient consultations.
Lebrun is well known for building language-based products. In 2013 he sold his enterprise-focused startup, VirtuOz, to Nuance, where he went on to lead development of digital assistant technology for businesses. He then founded Wit.ai and sold it to Facebook, where he and his team worked on chatbots for Messenger, before moving to FAIR, Facebook’s AI research lab in Paris.
Lebrun believed the tools businesses use to communicate with customers could serve more than marketing and customer retention; they could be applied in other, less ambiguous settings as well.
Lebrun said that as early as 2018 it was evident that physicians were spending considerable time updating patient records, which prompted him to consider how AI and improved machine learning could ease that process within the healthcare system.
Lebrun didn’t say so, but he likely made this observation as robotic process automation (RPA) was gaining popularity and making automation in the business world more visible. Assisting physicians in real-time conversations is far more complicated than automating repetitive tasks, but because doctor-patient conversations involve a limited set of language and topics, they presented a great opportunity for an AI-assisted helper.
Lebrun pitched the concept to Yann LeCun, his supervisor at the time and still Facebook’s chief AI research scientist. LeCun backed the idea, so Lebrun departed, and LeCun became one of Nabla’s first investors.
Nabla took a few years to reveal its financial backing (almost $23 million), which it had kept private until it released its first product. That product was a “super app” for women that let them track various health-related questions and blend the data with other information; it seemed designed mainly as a way for the company to learn what people wanted from remote health treatment, and what it could build from that.
Last year came a “health tech stack for patient engagement”, an interesting product built around engagement, the central metric of Lebrun’s earlier work.
You might have doubts about a startup tackling a healthcare problem when its founders lack medical experience. The two other founders are COO Delphine Groll, who previously ran business development and communications at various media companies, and CTO Martin Raison, who has worked with Lebrun since Wit.ai.
Lebrun said he had even considered pausing the venture in its early stages to pursue medical training himself.
He decided not to, and instead relied on feedback and data from medical professionals and other healthcare providers, employing them in his startup to guide the development of the product which is now known as Copilot.
Jay Parkinson, MD, MPH, Nabla’s Chief Medical Officer, said Nabla Copilot is intended for clinicians who wish to stay at the forefront of medicine. Aware that doctors are often short on time and would rather not be filling out digital health records, Nabla’s automated clinical notes let physicians spend more time looking their patients in the eye: Copilot listens during the appointment and sends a summary of the discussion so that nothing said is forgotten. Parkinson, who recently joined the startup, is an entrepreneur himself; his telehealth startup, Sherpaa Health, was acquired by Crossover.
To improve its AI, Nabla has built an opt-in data-sharing system for Copilot: by default no data is stored on its servers, and the service complies with HIPAA and GDPR. Data from users who agree to share it is run through specially developed pseudonymisation algorithms. For now, there are no plans to develop medical assistants: no suggested diagnoses or anything comparable.
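Nabla hasn’t published its pseudonymisation algorithms. As a rough illustration of the general technique, a minimal sketch might replace known direct identifiers with stable salted-hash tokens before any text is shared, so records can still be linked without exposing names (the function, token format, and salt are assumptions, not Nabla’s actual approach):

```python
import hashlib
import re

def pseudonymise(text: str, identifiers: list[str], salt: str) -> str:
    """Replace each known identifier (e.g. a patient name) with a stable
    salted-hash token. Hypothetical sketch: a production system would also
    detect identifiers automatically (NER, pattern rules) rather than
    relying on a supplied list."""
    for ident in identifiers:
        digest = hashlib.sha256((salt + ident.lower()).encode()).hexdigest()
        token = "PT-" + digest[:8]  # short, stable, non-reversible token
        text = re.sub(re.escape(ident), token, text, flags=re.IGNORECASE)
    return text

note = "Jane Doe reports mild chest pain. Jane Doe denies shortness of breath."
out = pseudonymise(note, ["Jane Doe"], salt="clinic-secret")
```

Because the token is derived from a salted hash, the same patient maps to the same token across documents, which preserves linkage for training while removing the name itself.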
Lebrun remarked that this was easier said than done. While Copilot was being built, Nabla’s AI kept trying to offer diagnoses unprompted, despite the engineers’ attempts to stop it. They were careful not to intrude into diagnostics, he noted, and trained the AI not to do so.
He said a different kind of product could potentially come much further down the line, but a great deal of work and testing would be needed first.
He was emphatic that they do not see chatbots as a viable option for medical care; the focus is on improving doctors’ lives by giving them back time.