Intellectual property can be a thorny issue in health tech and medtech. The intersection of healthcare data and AI is setting up complex patent showdowns and interesting ethical discussions. What are the implications for how we think about personal health data and innovation based on that data? In response to emailed questions, Munck Wilson Mandala Partner Greg Howison shared how attorneys are thinking about AI, IP, connected devices, and the data they generate.
You have said that most people neither understand the importance of data nor the role of data derived from connected medical devices when it comes to IP. What do you mean?
Any medical device with any type of sensor collects data. That data can be ephemeral—meaning it exists only for local evaluation by the device—it can be stored for later retrieval, or it can be offloaded via a wireless transmitter. Implanted devices have historically used a near-field communication link for this purpose. However, FDA approval of Bluetooth transmitters for use within the body has allowed for the advent of Bluetooth-equipped implanted devices that can communicate with smartphones. This ability to communicate between a smartphone and an implanted device will be at the center of all new medical device technology going forward, as any implanted device will be capable of real-time monitoring and data collection. This data is valuable.
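For illustration, here is a minimal sketch of what the smartphone side of such a link can look like, using the cross-platform Python library bleak; the device address and characteristic UUID are hypothetical placeholders, not a real device's values:

```python
# Minimal sketch of a companion app reading one measurement from a
# Bluetooth-equipped medical device. The address and characteristic
# UUID below are hypothetical placeholders.
import asyncio
from bleak import BleakClient

DEVICE_ADDRESS = "00:11:22:33:44:55"                       # hypothetical
MEASUREMENT_UUID = "0000beef-0000-1000-8000-00805f9b34fb"  # hypothetical

async def read_measurement() -> bytes:
    async with BleakClient(DEVICE_ADDRESS) as client:
        # Returns the characteristic's raw bytes; a real device defines
        # its own packet layout that the app would then decode.
        return await client.read_gatt_char(MEASUREMENT_UUID)

if __name__ == "__main__":
    payload = asyncio.run(read_measurement())
    print("raw measurement bytes:", payload.hex())
```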
Data from simple medical devices in clinical trials is being collected and stored by medical device companies for the purpose of training large diagnostic models. This collected data is a valuable asset for these companies, and the question is: who owns the data when a person is involved and associated with it?
What kinds of legal questions are raised by the way medtech companies, big and small, are sourcing data from their connected devices?
The legal issues relate to HIPAA and privacy. Is it possible to provide a document that transfers all rights in the personal data to a company, and if so, what is a company allowed to do with this data? The issue there is the HIPAA umbrella. Even though one has possession of data, that data may still be protected under HIPAA. Does the sale or distribution of that data for the purpose of training a model violate HIPAA? Does the sale or distribution of the trained model violate HIPAA?
There is currently a large copyright case dealing with what can be gleaned from a trained model, in which OpenAI, the maker of ChatGPT, is being sued by litigants claiming that the mere use of their copyrighted creative works to train a model makes the model a derivative work of their “original works.”
Such an approach requires a close association between what ChatGPT outputs and the author’s original work.
This line of thought can be extended to models used in the health sector for training and the like. For example, could a well-trained model somehow be used to back into the medical information of a person? Suppose there were an individual with a rare disease that uniquely identifies them, and a query is made to the model as to what other maladies are associated with that disease. In a litigation environment, the litigants would argue that the results of this query might uniquely identify that individual, but the model gurus do not agree. It is difficult to argue that a model is a derivative work of, or closely associated with, a personal profile. There will no doubt be litigation down the road on that. But a well-crafted release should be able to address this issue.
Are there competing legal theories regarding data ownership when a person/patient is involved?
You have to first think about data in and of itself. Data per se is like a handful of sand—it is just a bunch of numbers—but the person who gathers it (or uses a device to gather it) “creates” it. One can then store all of this data, keep it under wraps, and thus own it.
Now, suppose the data is created using a machine interfaced with a patient. The creator (the one operating/controlling the machine) still owns the data, but there is now a question as to the patient’s rights. This is the old issue of whether one owns their medical records—collecting data from a machine is no different than writing information in a file, but this does not necessarily revoke a patient’s right to access the information. Thus, one might own the data they created from a patient, but the patient should still have the right to access the data.
Furthermore, privacy concerns can place restrictions on data ownership. There is a restriction on the use of clinical information: it cannot be identifiably associated with a particular patient. The same applies to collected medical device data. Suppose one were to collect blood pressure data from a class of patients aged 65–70 over the course of a year in a very particular locale. This could be used to create a chart of trends and likewise to train a model. Since HIPAA concerns are always present, a patient is usually required to sign a release before their data can be used by other professionals.
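To make the restriction concrete, here is an illustrative first pass at de-identifying device records before they are pooled for training, in the spirit of HIPAA’s Safe Harbor method; the field names are assumptions for the sketch:

```python
# Illustrative de-identification pass over collected device records:
# drop direct identifiers and coarsen quasi-identifiers. The field
# names here are assumptions, not any particular device's schema.
DIRECT_IDENTIFIERS = {"name", "mrn", "address", "phone", "device_serial"}

def deidentify(record: dict) -> dict:
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Coarsen exact age into the study band
    if "age" in clean:
        age = clean.pop("age")
        clean["age_band"] = "65-70" if 65 <= age <= 70 else "other"
    # Reduce a precise locale to a broad region (3-digit ZIP prefix)
    if "zip" in clean:
        clean["region"] = clean.pop("zip")[:3]
    return clean

record = {"name": "J. Doe", "mrn": "12345", "age": 67,
          "zip": "75201", "systolic_bp": 131, "diastolic_bp": 84}
print(deidentify(record))
# {'systolic_bp': 131, 'diastolic_bp': 84, 'age_band': '65-70', 'region': '752'}
```

A real pipeline would go further (dates, rare-condition suppression), but the principle is the same: the measurements survive while the link to the individual is weakened.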
That is a long answer to the question, and the short answer is simply that privacy rules. The creator or collector of the data owns the data because they collected and assembled it into a structured database, but if it is collected from a patient there are some potential restrictions on the use of it.
What are you seeing as categories of IP use cases among your clients?
IP for these clients can be classified as patent, trademark (branding), and data. The value of any one of those depends on what business one is in. For example, any business that requires diagnostics in its business plan will likely use AI in its development efforts as a tool, such as for gene therapy research. Almost all drug research will make use of AI models trained on large (possibly proprietary) data sets as a tool in the research and development of drugs. These are businesses that will “use” their IP, in the form of data, as a tool in developing the products that are their source of value.
Then there are businesses that will create IP as a source of value. A business developing a new medical device will have patents at the heart of its IP portfolio, of course. When the device is sold, the person or company using it will create data, and this creation/collection of data will become an important aspect of their IP. Any company invested in clinical trials will have data collection as a central part of its IP portfolio. One expects, however, that any use of a medical device will more than likely be subject to a separate license as to who owns the data. Even though one owns a device, there is usually a software component that is only licensed. For example, when an individual puts on a wearable with a communication link, the data the device creates from the individual’s measurements usually goes to a cloud service, and the user license agreement likely stipulates that all of the data thus collected is owned by the service provider/licensor – look at the fine print!
What kinds of nuances are you seeing across your healthcare client categories in terms of legal and regulatory challenges for Class 1–3 medical devices? Diagnostic development?
From a pure investment strategy perspective, the time to proof of concept is key. For Class 1 devices, the time required to prove effectiveness is comparatively fast. So, an investor will know if that dog will hunt within a short period of time. Getting over the FDA hurdle is also not that onerous or expensive for Class 1 devices – it may take a couple of years. The time to ROI on these Class 1 devices is short and more in line with the expectations of most investors.
As one shifts to Class 3 devices the investment perspective changes dramatically. FDA approval for Class 3 devices is expensive and time consuming. Even if the device proves effective and viable for the market, it still has to go through the FDA safety evaluation process, and even if the product gets through the FDA, there remain the hurdles of getting a medical (CPT) code for reimbursement and then gaining acceptance in the marketplace. The investment risk primarily lies in whether the device will even gain traction in the market after clearing the lengthy regulatory processes.
One example of this is neurostimulators for pain treatment. Medtronic is the leader in this area. The new products on the market address specific pain issues at a lower price point and with arguably better safety than their predecessors. I have worked on a migraine neurostimulator, which was an interesting area. This is a head-implanted neurostimulator – a Class 3 device. I think it will be on the market next year. Once you identify a large market (such as that for chronic migraines) with a need that existing options may not adequately address, there is an opening for a new device to be introduced successfully. A lot of the existing migraine devices are external and thus not Class 3 devices, so they can be introduced into the market faster than implantable devices. To go the implant route requires one to be sure that it is the correct route. The device I worked on was based upon the well-known minimally invasive Reed procedure developed by Dr. Kenneth Reed, who also invented the device. Thus, the effectiveness of the stimulation process was already well proven, and only the safety question needed to clear the FDA, the device being an implant with a rechargeable battery. This was considered the path to take, as external leads are not as effective as implanted leads in applying precise stimulation to a target nerve.
What are some of the nuances of IP when it comes to the importance of the data derived from connected medical devices that AI/machine learning algorithm models are trained with?
I do not think there are any nuances as to data. It is just a by-product of the device. If a device is developed for an application, such as real-time monitoring of blood glucose, the main purpose is to allow a patient to sit there with their phone and monitor their glucose levels in real time. This could be used to, for example, set off an alarm, or to feed an AI engine that predicts what dosage levels are required for an insulin pump. But a lot of data is collected, and when it is combined with other information on the individual, value exists. There is no sense in throwing that away. It is used on one hand to make the device useful in real time and on the other to augment these large training databases.
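A rough sketch of that dual use, acting on a reading in real time while retaining it for the training database; the thresholds here are illustrative, not clinical guidance:

```python
# Sketch of the dual use described above: alert on a reading in real
# time while also queuing it for the training database.
# Threshold values are illustrative, not clinical guidance.
LOW_MG_DL, HIGH_MG_DL = 70, 180

training_buffer: list[dict] = []

def handle_reading(mg_dl: float, timestamp: str) -> str:
    # Real-time path: immediate alert logic on the phone
    if mg_dl < LOW_MG_DL:
        status = "ALARM: low glucose"
    elif mg_dl > HIGH_MG_DL:
        status = "ALARM: high glucose"
    else:
        status = "ok"
    # Secondary path: retain the reading for later model training
    training_buffer.append({"t": timestamp, "mg_dl": mg_dl})
    return status

print(handle_reading(64.0, "2024-01-05T08:30:00Z"))  # ALARM: low glucose
```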
What’s your outlook for 2024? What are some of your predictions, and what will you be watching for, regarding IP in the area of machine learning algorithms derived from, or trained on, connected device data?
I think there will be an increase in use of such things as ChatGPT for user interfaces in the medical industry across the board. The main use that may be most disruptive is in self-diagnosis. One will be able to take a picture of a mole, for example, or a rash associated with shingles, and send it to an AI engine that will provide a surprisingly accurate diagnosis. The use of AI with all these new test strips out there will change telehealth quite a bit. For example, a test strip for a UTI or Strep coupled with a patient’s medical history will be used to trigger a prescription event in a cost-effective manner.
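From the app side, the self-diagnosis flow he describes could look something like the following sketch; the endpoint URL and response fields are hypothetical:

```python
# Sketch of a self-diagnosis flow: a phone app posts a photo to an
# AI inference service and shows the returned assessment.
# The endpoint URL and response fields are hypothetical.
import requests

def classify_skin_photo(path: str) -> dict:
    with open(path, "rb") as f:
        resp = requests.post(
            "https://api.example-triage.com/v1/skin-lesion",  # hypothetical
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()  # e.g. {"finding": "...", "confidence": 0.93}

result = classify_skin_photo("mole.jpg")
print(result["finding"], result["confidence"])
```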
Then there will be the analysis of a patient in a physician’s office prior to the physician even seeing the patient based on data collected at check-in that will provide a preliminary diagnosis and a recommendation for drugs and such. Think of a patient in a dermatologist’s office walking into a booth and doing a complete body scan—total body photography—prior to meeting with the dermatologist. The scan can be run through an AI engine and the dermatologist provided the results in a fraction of the time normally required and with arguably higher accuracy. Hospitals will also use AI for matching patients to health care providers to make patient check-in more efficient.
Most of these uses of AI do not require FDA approval. There will be some ethics issues raised by physicians using this to assist in diagnoses, but that will be at a different level. For devices, the trained AI model will usually be a fixed model that just has to be shown to be effective for that medical device, as is the case for a programmed CPU used in current medical devices. As long as the program is fixed in the CPU, it is FDA approved; once changed, it has to go back to the FDA, as would be the case with an AI model whose training is altered. There is a concern with adaptive models that retrain themselves as they are used, and that will raise FDA issues.
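One plausible way a device maker could demonstrate that its model is “fixed” in this sense is to pin a cryptographic digest of the cleared model file and refuse to load anything else; the digest below is a placeholder:

```python
# Sketch of enforcing a "fixed" model: pin the SHA-256 digest of the
# cleared model file and refuse to load anything else. The pinned
# digest here is a placeholder, not a real cleared artifact.
import hashlib

CLEARED_MODEL_SHA256 = "0" * 64  # digest recorded at clearance (placeholder)

def load_model_bytes(path: str) -> bytes:
    with open(path, "rb") as f:
        blob = f.read()
    digest = hashlib.sha256(blob).hexdigest()
    if digest != CLEARED_MODEL_SHA256:
        # Any retraining changes the weights, hence the digest, so an
        # altered model fails this check and flags a regulatory review.
        raise RuntimeError(f"model digest {digest} does not match cleared version")
    return blob
```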
Again, IP in the form of the data in a trained AI model used for the operation of a device is valuable in the sense that AI is important in realizing such a device. But the data collected from the operation of the device is a separately valuable byproduct that can be used to create large training databases, which are in turn used to create new AI models. All wearables that collect data can provide a lot of benefits when the collected data is deposited in a patient’s stored profile and used for making decisions regarding that patient, not to mention that their data can be used for making decisions as to other patients in the collective. Wearables will be the primary area to impact the medical device arena in the near future. They are Class 1 devices at best, and most can be processed through an FDA 510(k) procedure.
Have you been encouraged or discouraged by work done by the FDA and other organizations in attempting to create standards in machine learning regulation and related areas and why?
For the most part, the FDA does not (or should not) have to be all that concerned with AI. In the area of therapeutics, if AI is used to develop a drug, that should not be an issue. However, if AI is used to “evaluate” the effectiveness of the drug, that is where the FDA has an issue to address. Any time AI is used to analyze data to predict a result that is in turn relied upon, or that provides predicted information needed, to approve the drug, there will always be a validation issue associated with the AI engine.
For a medical device, on the other hand, the incorporation of a trained AI model into the device is a one-time event that results in a fixed, programmed device able to receive inputs from sensors and the like and “predict” some result. The device utilizes a specific, fixed AI engine that has been trained on some databases. If one were to use a wearable whose sensors capture various biological inputs and then use those inputs to predict a result such as, for example, blood pressure, the question is whether that prediction is valid. A standard for measurement validation was put forth back in 2018 for devices used to provide blood pressure information on a patient.
Although we now look at purpose-built wrist blood pressure monitors as being accurate, they had to be validated at some point. I am sure there exists some accuracy issue with respect thereto, but the designers do utilize some type of algorithm to account for errors in sensors and the like. As the power of AI engines increases, one can imagine it will be possible to take smart watches and utilize their various sensor inputs to somehow approximate the wearer’s blood pressure with a trained machine learning system. The result, however, will still be nothing more than a prediction. The question remains whether that predicted result is a valid result.
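As a toy illustration of that kind of prediction, here is a regression fit from wearable-style sensor features to systolic blood pressure on synthetic data; the feature names and numbers are invented for the sketch:

```python
# Illustrative only: fit a regression from wearable-style features
# (heart rate, pulse transit time, pulse amplitude) to systolic blood
# pressure on synthetic data. All values are invented for the sketch.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(72, 10, n),      # heart_rate_bpm
    rng.normal(0.25, 0.03, n),  # pulse_transit_time_s
    rng.normal(1.0, 0.2, n),    # pulse_amplitude
])
# Synthetic "truth" with noise, standing in for cuff reference readings
y = 90 + 0.4 * X[:, 0] - 80 * (X[:, 1] - 0.25) + rng.normal(0, 6, n)

model = Ridge(alpha=1.0).fit(X[:400], y[:400])
pred = model.predict(X[400:])
print("example predictions (mmHg):", np.round(pred[:3], 1))
```

The output is, as he says, nothing more than a prediction; whether it is a valid one is exactly the question validation has to answer.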
Therefore, any time you use an AI engine there will be a question as to the validity of the results—an issue that will be of concern to the FDA. As another example, there are systems out there that purport to provide real-time blood glucose measurements. They provide this based upon a measurement of parameters associated with such things as sweat. If these measured parameters are run through an AI engine that provides the blood glucose measurement, it is important to know how valid these results are. There are just a lot of variables that go into the prediction provided by any AI engine, such that any AI-generated prediction presents a concern to the FDA.
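The validation question ultimately reduces to arithmetic like the following, here tested against the classic AAMI-style acceptance criteria for blood pressure devices (mean error within ±5 mmHg, standard deviation at or below 8 mmHg); a real validation would follow the full standardized clinical protocol rather than this sketch, and the sample readings are illustrative:

```python
# Sketch of validation arithmetic: compare AI predictions against
# reference measurements and test them against AAMI-style criteria
# (mean error within +/-5 mmHg, standard deviation <= 8 mmHg).
# Real validation requires the full standardized clinical protocol.
import numpy as np

def bp_validation_report(pred: np.ndarray, ref: np.ndarray) -> dict:
    err = pred - ref
    report = {
        "mean_error_mmHg": float(err.mean()),
        "sd_error_mmHg": float(err.std(ddof=1)),
    }
    report["passes_aami_style_criteria"] = (
        abs(report["mean_error_mmHg"]) <= 5.0 and report["sd_error_mmHg"] <= 8.0
    )
    return report

pred = np.array([118.0, 131.0, 124.0, 142.0])  # model output (illustrative)
ref = np.array([120.0, 128.0, 127.0, 139.0])   # cuff reference (illustrative)
print(bp_validation_report(pred, ref))
```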
The FDA may require some more proof that the model is sound, but I do not think that will be a big issue. Updating a model will require a bit more work, though. When a model used in the operation of a medical device is adaptive and changes based upon feedback, that will have to be addressed separately. In general, the FDA is just beginning to get its arms around all of this. It is only recently that these trained models have been implemented in chips capable of being incorporated into a medical device.
(Based on policy questions from WIPO)
Should IP policy create new rights in data?
This goes to the question of whether there are intangible rights in the data. A very complicated issue. Data is something that is collected from a device or source that creates it. This is what we refer to as a product of the device or process. We can protect the device and the process in most cases, but the issue is whether the product of that device or process is protectable. Right now, the data generated is not protectable as intangible property. Sometimes one can argue that the final data structure (these large training datasets are structured datasets) is protectable, but not the data itself. These large training datasets will probably remain proprietary at best.
Should AI algorithms be patentable?
An algorithm per se is unpatentable. This all goes to the premise that abstract ideas are not patentable. If the algorithm changes the operation of the machine in a significant way, that can rise to the level of patentability. But the algorithm by itself is currently not patentable subject matter. This is something Congress would have to deal with.
Photo: metamorworks, Getty Images