By Caitlin McCormack
With the advent of at-home genetic testing kits and more research discussing genetic underpinnings for a variety of disorders and diseases, it appears we’re entering the era of personalized healthcare.
However, with genetics there’s often more than meets the eye – direct-to-consumer testing kits aren’t necessarily as reliable as many people might think, and they don’t always give a complete picture.
Similarly, artificial intelligence (AI) – especially machine learning – has the potential to revolutionize health care diagnosis and decision-making, but just how far away are patients and health care providers from handing over the reins to machines?
In the genes
“In the last five to 10 years we’ve seen genetic testing increase substantially, and in every space. It used to just be within a genetic counselling or geneticist office that we were doing and seeing genetic testing,” says Laura Smith, Vice President of Commercial Operations at Ambry Genetics. “Over the last decade we’ve seen surgeons and oncologists and cardiologists, even OB-GYNs, and all sorts of physician practices offering genetic testing.”
Genetic testing can identify a gene linked to age-related degeneration of the intervertebral discs in the spine – a common cause of lower back pain. With this knowledge, providers can encourage patients to take proactive steps to change lifestyle habits earlier to help minimize any effects.
Smith stresses that it’s important consumers and health care providers understand the different types of genetic testing available. Recreational genetic testing, or direct-to-consumer (DTC) testing, gives information about health risks, ancestry, and other traits, and is an entertaining, inexpensive option. Smith does not recommend using those tests to make health care decisions, however.
“Often they only look at a very small subset of risk, and they don’t really look at the entire gene or set of genes that could cause disease. There could be misleading information, false positive information, or false negative information.”
In fact, a study by Ambry Genetics, published in Genetics in Medicine, revealed that DTC genetic tests carried a 40 percent false-positive rate – incorrectly indicating a disease is present. This is why it’s so important to work with a health care provider educated on these tests to help decipher the results.
Smith recommends people pursue clinical-grade testing from a laboratory that has CLIA certification and CAP accreditation, and that looks at a specific disease subset. This way they can get a definitive diagnosis one way or the other.
“Genetic testing is not so black and white, unfortunately,” she says. “There’s times where we find variants of unknown significance that don’t require action, but are findings that might require action in the future.”
Despite current limitations, Smith expects to see continued growth in direct-to-consumer testing, which she says will require health care providers of all scopes to be educated in the field so they can make sense of the different offerings.
The advent of AI
While genetic testing is often seen as fairly run-of-the-mill, AI is a tool patients might balk at in the health care setting. Yet health care providers have already been using AI for years to create patient medical records: with natural language processing, a provider can dictate patient notes into a recording, and a computer program transcribes the audio into a written document.
Seok-Bum Ko, professor and grad chair in the Department of Electrical and Computer Engineering at the University of Saskatchewan, has recently been working with a group of emergency room doctors on diagnosing rib fractures.
ER doctors don’t always have access to a radiologist in the emergency room setting, and they lack the depth of musculoskeletal knowledge – held by manual therapy practitioners – that allows for a quick and accurate diagnosis. Ko’s software can identify six different types of rib fractures and pinpoint their location, allowing the doctor to diagnose a patient quickly and effectively.
“They can then send them on their way, or they can treat the patient appropriately in an efficient way. That’s a good impact.”
For subtleties and complicated scans, this type of software is a time and brain saver.
“Even a good, experienced doctor can sometimes make a mistake because they are tired or sometimes forget, but this does not happen to AI,” says Ko.
Bryn Williams-Jones, director of the bioethics program in the School of Public Health at the University of Montreal agrees, saying these sorts of mundane tasks are best left to a computer, which won’t be distracted by the fight they had with their spouse the night before, or become bored with the task in the way a human could.
“What you do is you allow your expert to focus on the rare incidents, the one-offs. That’s what decades of experience and medical training, of human intuition that allows you to identify the one outlier – that’s where you want that expertise.”
While Ko says we’re still a ways off from machines making the final call on disease diagnosis, he believes that day will come sooner than people expect. Computer programs can diagnose changes up to 10 times faster than specialists, he says, but health care providers are still essential for synthesizing other data such as a patient’s history, age, or smoking status.
“AI can’t handle those things at this point nicely, so all factors must be integrated perfectly before the AI machine makes a final decision. Because we’re dealing with a human life, we’ve got to be extremely careful and cautious.”
Williams-Jones says there is a key benefit of machine learning when it comes to diagnostics.
“Where AI becomes very interesting is helping with modelling, and going through reams and reams of data that’s very hard for humans to process, and looking for very low incidents or linkages and then looking at possibilities,” he says.
Just as a doctor is needed to help interpret results from genetic testing, so too must a doctor give final approval to an AI-made diagnosis. Williams-Jones, Smith, and Ko appreciate the strides of technology in health care, but still see a role for health care providers, especially those who are providing hands-on treatment.
“The machine doesn’t necessarily know whether it’s right or not – it’s looking for links, but some of them may just be garbage,” says Williams-Jones. “But if you generate that and then you bring in the human being to look at it, it becomes a very powerful tool to help identify linkages and low incidents or markers that you would never otherwise find.”
Tech not without issues
When it comes to limitations of AI in healthcare, Ko stresses it’s still very much a numbers game at this point, and the systems need more calibrated data in order to fine-tune their accuracy.
“We need a lot more labelled data to train the machine,” he says, “But medical imaging, because of confidentiality issues and privacy, we don’t have enough labelled data, which means we can’t train the machine appropriately.”
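Ko’s point about labelled data can be illustrated with a toy sketch – a hypothetical example, not his actual software. A bare-bones nearest-neighbour classifier is trained on a handful of labelled “scans” (reduced here to single numbers) versus several hundred, and its accuracy on held-out cases is compared:

```python
import random

def predict(train, x):
    # Classify x with the label of its nearest labelled example (1-NN).
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

def accuracy(train, test):
    # Fraction of held-out cases the classifier labels correctly.
    return sum(predict(train, x) == y for x, y in test) / len(test)

random.seed(0)  # reproducible toy data

def labelled_sample(n):
    # Toy "ground truth": readings below 0.5 are class 0, the rest class 1.
    return [(x, int(x >= 0.5)) for x in (random.random() for _ in range(n))]

small_train = labelled_sample(5)    # scarce labelled data
large_train = labelled_sample(500)  # plentiful labelled data
test_set = labelled_sample(1000)    # held-out cases

acc_small = accuracy(small_train, test_set)
acc_large = accuracy(large_train, test_set)
```

With only five labelled examples, the classifier’s decision boundary lands well away from the true cut-off and it misclassifies the borderline cases; with 500 examples the boundary tightens and accuracy climbs – the same dynamic, at toy scale, behind the hunger for labelled medical images.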
Another major concern, Williams-Jones says, is ensuring there is no bias built into any software used in the health care setting.
“If you’ve got poor quality data going in and you’re not aware that the data is poor quality, the results that come out of it are poor quality, and you don’t realize that,” he says.
For example, a program trained on imaging from a sports clinic will skew toward younger, athletic patients, making it less reliable for chiropractors and RMTs working in a rehabilitation role in a retirement home setting, and vice versa. Being aware of biases built into software programs is the first step in ensuring they’re removed, says Williams-Jones.
While job losses in the transcription field have already happened, technological advances could knock even more providers out of jobs. In areas such as diagnostics, fewer radiologists can confirm more diagnoses if they’re simply double-checking a software’s results.
If we want to move towards a knowledge economy, Williams-Jones says we need to skill-up the entire population, otherwise these tools contribute to social inequality and injustice.
“We want to ensure that these innovations in AI, in big data, and learning systems don’t lead to reinforcing social inequalities where people are uneducated and unemployable because the robots are taking all the jobs,” he says. “Knowing that it’s a possibility, we can plan for it, but that means we have to do things like invest in our primary and secondary schools.”
It’s not all bad news though. Ko’s research has shown that radiologists at all levels of expertise are more accurate in their diagnoses after reviewing AI results. And, he says, these time-savings will allow for further medical advancements.
With these very advanced, very personal technologies come fears that information could be hacked or misused. Smith notes some people fear losing out on a job or being denied health benefits or life insurance due to the results of their genetic testing. Ko adds that cyber security and the infrastructure that goes along with it must be implemented properly in order to ensure patient data is kept confidential, while still allowing for the growing of databases and machine learning.
Williams-Jones notes that while people often have a deep-seated fear of killer robots and surveillance societies, those fears, though not entirely unfounded, are not as extreme as many believe. Still, public discourse and input are important to ensuring responsible systems are put in place.
As we continue moving towards an online health system, Williams-Jones says it’s important to ask some key questions. Who has access to the data? What about privacy rights? Is this data being sold? Is the data being housed on commercial platforms? Are those commercial platforms secure? If the company that owns the commercial platform gets sold, is that data now the property of the new company? What about the person who contributed that data – do they need to be notified? Who is responsible in the event of a misdiagnosis?
Responsible data management needs to be a cornerstone of these programs, which is why Williams-Jones finds it promising to see policies and procedures being developed by all stakeholders as advancements are made.
“There’s the Montreal Declaration on Responsible Innovation in AI, there are European declarations — and all of this is being driven not by ethicists, but by the scientists and the engineers saying ‘we need to think about this.’ They are actively talking about ethics, and responsibility, and integrity, and that’s leading to very vibrant discussions.”
The future for HCPs
Williams-Jones says that professionals in rehabilitation – physios, MTs, and chiros – need to be active players in the uptake of these technologies.
“They have an obligation to pay attention to this and to ask the right questions. ‘Is this a technology that I actually need to do my job better, or is it just the new toy of the day with lots of bells and whistles?’ That’s a real professional ethics issue.
“The professional has an obligation to invest the time needed to learn about these technologies and to work with the developers and help shape the technology, because they’re ultimately the expert on how to help patients deal with a range of health conditions.”
While all of these technologies are advancing at lightning speed, it’s important to remember that it takes several years for software to be developed, tested, and implemented – not to mention approved by government regulators. We’re still a ways off from handing over the controls to a computer for our health care decisions. Ultimately, machines will never replace hands-on practitioners, but their roles will certainly shift as a result of these advances.
CAITLIN McCORMACK is a Toronto-based freelance writer, specializing in health & wellness and technology content. You can see more of her work at Caitlin Writes (caitlinwrites.ca).