On October 6, 2020, as part of ICTC’s Technology and Human Rights Series, Rosina Hamoni, Research Analyst with ICTC, interviewed Dr. Frank Rudzicz, who holds multiple roles including Director of AI at Surgical Safety Technologies, Associate Professor of Computer Science at the University of Toronto, Co-Founder of WinterLight Labs, Faculty Member at the Vector Institute for Artificial Intelligence, and CIFAR Chair in Artificial Intelligence. Rosina interviewed Dr. Rudzicz about AI in the field of cognitive impairment, AI ethics in surgery, and privacy and data protection.

Rosina: Thank you for joining me today, Dr. Rudzicz! It’s a pleasure to speak with you. You’re widely known as an expert in speech and language processing, brain-computer interfaces, and AI applications in healthcare, and you hold various roles in the industry. For our audience, could you please explain a bit about your work?

Dr. Rudzicz: I work at the intersection of a few different disciplines — basically machine learning and natural language processing applied to healthcare. One side is academic research — at the University of Toronto, the Vector Institute, and St. Michael’s Hospital we do foundational academic research. The other side of the Venn diagram is trying to put that research into practice. Two of my students started a company a few years ago called WinterLight Labs, and I’m also Director of AI at Surgical Safety Technologies. Overall, our work is about using technology to improve people’s lives. That’s really the motivating factor for the research we do.


Photo by National Cancer Institute on Unsplash

Rosina: One of your many roles is Co-Founder and Scientific Advisor of WinterLight Labs. WinterLight Labs has created a tablet-based speech analyzer that uses AI to detect and monitor cognitive impairment through speech. This could significantly change how cognitive impairment is assessed, as Alzheimer’s disease and other types of dementia typically manifest in the brain long before symptoms appear. Could you please speak a bit about this project and how this novel approach will complement the field?

Dr. Rudzicz: We came to this project as computer scientists. The students and I were all used to using natural language processing (NLP) in various kinds of software.

We see NLP all the time in our daily lives. When you’re typing on your phone and word predictions come up, that’s actually a probabilistic model at work. Google Translate is another example. It’s everywhere. We’re used to applying NLP to these kinds of tasks.
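
To make that concrete, here is a minimal sketch of a probabilistic next-word predictor (a toy bigram model over a made-up corpus; real phone keyboards use far larger models, but the underlying idea is the same):

```python
# A minimal bigram language model: predict the most likely next word
# from counts of word pairs seen in a small corpus.
from collections import Counter, defaultdict

corpus = "the patient was seen by the doctor and the patient was discharged".split()

# Count how often each word follows each other word
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word, k=2):
    """Return the k most probable next words and their probabilities."""
    counts = bigrams[word]
    total = sum(counts.values())
    return [(w, c / total) for w, c in counts.most_common(k)]

print(predict_next("the"))  # e.g. [('patient', 0.67), ('doctor', 0.33)]
```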

As for WinterLight Labs specifically, we first thought about which healthcare challenges related to language we could solve with NLP. We talked to speech-language pathologists and clinicians who deal with speech, Alzheimer’s, and dementia. Clinicians can’t assess symptoms of Alzheimer’s and dementia appropriately with old-fashioned paper-and-pen tools. With an aging population and limited infrastructure, there is this “silver tsunami” — [aging population] wave — coming. We realized we could develop tools to measure voice: listen to pitch, count words, analyze grammar and meaning. Suddenly we had a big list of tools we could apply to the problem. Everything snowballed. We started working with some open research data from a site called AphasiaBank, and we got some positive results in predicting dementia of various kinds. Things have been moving ever since.
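
As an illustration of the kind of measurements Dr. Rudzicz describes, the sketch below computes a few simple lexical features from a transcript (word count, vocabulary richness, and filled-pause rate). This is not WinterLight’s actual feature set, which also includes acoustic measures such as pitch; it only shows how such features can be derived from text:

```python
# Toy lexical features from a speech transcript: word count, vocabulary
# richness (type-token ratio), and rate of filled pauses such as "um"/"uh".
# These are illustrative stand-ins for the much richer acoustic and
# linguistic features a real assessment tool would compute.
import re

def transcript_features(transcript: str) -> dict:
    words = re.findall(r"[a-z']+", transcript.lower())
    fillers = [w for w in words if w in {"um", "uh", "er"}]
    unique = set(words)
    return {
        "word_count": len(words),
        "type_token_ratio": len(unique) / len(words) if words else 0.0,
        "filler_rate": len(fillers) / len(words) if words else 0.0,
    }

print(transcript_features("um I went to the ... um the store yesterday"))
```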

Rosina: Have you encountered any challenges during its development or adoption?

Dr. Rudzicz: We encountered a ton of challenges of different types. There were technical challenges and then more of what we could call “cultural challenges.” A lot of the earlier work was in English. We relied on getting transcripts of what was said into text form. Speech recognition works reasonably well for English, but it doesn’t necessarily work as well for other languages. In English, people say “um” when they can’t remember something. But when we look at French, “um” is much more common in healthy speakers: it’s simply how people tend to speak in France. In that case, it doesn’t necessarily mean they’re having trouble remembering a word. The exact models — statistical models or ML [machine learning] models — we were building didn’t directly apply to different languages. Separately, many challenges were cultural, in terms of professional culture. With AI in healthcare, the first question people had was often “Are you trying to replace me?” Cultural issues like that were hard to break down. As computer scientists, we tend to focus on the raw accuracy of our predictions, but sometimes those models can be difficult to interpret, or it’s not clear how their results can actually change clinical workflow or clinical practice. We’ve therefore been focusing more on machine learning in context.

Rosina: You recently co-authored an article called “Ethics of Artificial Intelligence in Surgery.” I think a lot of our readers might be familiar with some principles in medical ethics, such as beneficence, and with principles in AI ethics, such as avoiding algorithmic bias. Could you talk a little bit about how ethics in these two fields intersect? What is different or new about AI ethics in surgery?

Dr. Rudzicz: To a large extent, the challenges are exactly the same. When you focus specifically on training models to achieve the highest accuracy on some task, those models can end up with biases embedded in them. For example, in NLP we build models that try to understand language to some extent, but if we use text data from a few decades ago, that data might have now-outdated gender biases embedded in it. There’s a famous paper, “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings,” that explores these gender biases and provides a methodology to, I guess you could say, “surgically remove” them. Similarly, another example came out a few years ago about detecting lesions on the skin and whether they are benign or malignant. But the question was then asked, “Does it only work on white skin?” The data came from a mostly German population, if I remember correctly.
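
The “surgical removal” Dr. Rudzicz alludes to can be sketched with the neutralize step from that paper: estimate a bias direction from a definitional word pair and project it out of a word vector. The tiny vectors below are invented purely for illustration:

```python
# Sketch of the "neutralize" step from Bolukbasi et al.: remove the
# component of a word vector that lies along an estimated gender direction.
# The vectors here are made up purely for illustration.
import numpy as np

embeddings = {
    "he":         np.array([ 0.8, 0.1, 0.3]),
    "she":        np.array([-0.8, 0.1, 0.3]),
    "programmer": np.array([ 0.4, 0.6, 0.2]),
}

# Estimate the gender direction from a definitional pair
gender_dir = embeddings["he"] - embeddings["she"]
gender_dir /= np.linalg.norm(gender_dir)

def neutralize(vec, direction):
    """Remove the projection of vec onto the bias direction."""
    return vec - np.dot(vec, direction) * direction

debiased = neutralize(embeddings["programmer"], gender_dir)
print(np.dot(debiased, gender_dir))  # ~0: no remaining gender component
```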
 

The ML people will say, “If we’re able to correct for negative bias but still keep useful information about, say, demographic differences, then that’s the best scenario.” But in practice, privacy is one of the biggest umbrella topics in medicine and governance. It doesn’t make sense to hand over all the information to computer scientists who are looking to use it somehow in these new devices. While researchers want to improve models and reduce bias, they can’t test whether their algorithms are successful unless there is a “gold standard.” If someone asks, “Is my model biased?” well, I can’t tell you if you don’t tell me what is included (or excluded) in the dataset.

One of the things we’ve built at Surgical Safety Technologies is a product called the Black Box, which is basically cameras and recordings in the operating room. The cameras can see all the nurses, surgeons, anesthetists, and patients. Our technology can blur various kinds of identifying information. So we blur faces, information about the room, and other details that could be used to identify a person. We can filter out a lot of information and yet retain enough that our team can still build tools that improve accuracy. There hasn’t been a conflict; we’ve actually worked very well together.
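
As a rough illustration of the blurring idea (not the OR Black Box’s actual pipeline), the sketch below detects faces in a single video frame with OpenCV’s stock face detector and replaces each detected region with a heavy blur; the file names are placeholders:

```python
# Generic illustration of blurring faces in a video frame with OpenCV:
# detect face regions, then replace each region with a heavy blur.
import cv2

# OpenCV ships a pretrained frontal-face Haar cascade
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

frame = cv2.imread("frame.png")  # placeholder path for one video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    roi = frame[y:y + h, x:x + w]
    frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)

cv2.imwrite("frame_deidentified.png", frame)
```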

Rosina: In the same article, you talk about black-box AI, explainable AI, and healthcare policies. Could you expand on the challenges that black-box AI might raise in healthcare ethics?

Dr. Rudzicz: The OR Black Box is similar to the black box in an airplane: if something goes wrong (or very right!), then there is a complete recording of what happened. That’s not in the same sense as when people talk about “black box AI.” “Black box AI” is a generic term that basically means a system that is opaque: you can’t see how it works inside. And that’s kind of what neural networks and modern machine learning models have become. They’re these relatively complex networks with sometimes billions of connections. They’re very complicated things. Give these networks a piece of data, like an image or a video or some text, and then out on the other side of the network comes a decision like “This person has cancer,” or “This is an example of a bleeding event,” but not an explanation of why that decision was made.

So a lot of what we’re trying to do with “explainable AI” is to develop tools that allow us to open up the black box and make it transparent. This field really only started a few years ago, and I think there’s a lot of work still ahead of us. We need to evaluate complicated modern neural networks that can make mistakes, yet we often evaluate them using other complicated neural networks that can also make mistakes. There’s some controversy there, but I think it’s a very important direction.
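
One common family of explainability tools is gradient-based saliency: asking which input pixels most influence the model’s decision. Below is a minimal sketch in PyTorch, with a randomly initialized network and a random input standing in for a real trained model and a real surgical video frame:

```python
# Minimal gradient-based saliency: which input pixels most influence the
# model's score for its predicted class? Model weights and the input are
# random here, standing in for a real trained network and a real image.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # two hypothetical classes, e.g. "bleeding" vs "no bleeding"
)
model.eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)

scores = model(image)
predicted = scores.argmax(dim=1).item()
scores[0, predicted].backward()  # gradient of the winning score w.r.t. the input

saliency = image.grad.abs().max(dim=1).values  # per-pixel importance map
print(saliency.shape)                          # torch.Size([1, 64, 64])
```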

Rosina: How does your organization ensure that participants’ data are secured and confidential in terms of their rights to privacy and data protection?

Dr. Rudzicz: I’m not a lawyer, so I will answer this from a technologist’s point of view. Different jurisdictions often have different obligations. Like I mentioned earlier, we can blur faces and remove tattoos in video, for example, and apply extra steps of deidentification, not just with AI but with humans in the loop as a quality check. There is an extensive process of informed consent as well, the specifics of which are typically decided upon by the hospital or site.

In academic circles, we restrict access to the data to a very limited number of people. Typically, researchers who see the data go through TCPS 2 training: the Tri-Council Policy Statement on ethical conduct for research involving humans. It was actually kind of cool that computer scientists got to go through this extra training in ethics, which we don’t normally get in computer science degree programs.

Rosina: From your perspective as a scientist, what do you think are the most interesting issues we’re currently facing with regard to AI and human rights? For this question, please feel welcome to go beyond your own work; it can be about anything you find most engaging.

Dr. Rudzicz: That’s an amazing question. It usually comes down to equity. There was inequity in healthcare before AI, and it may be accelerating as we adopt more AI. Modern AI needs to run on powerful computers, but who can afford these supercomputers? Individuals usually can’t, so in Canada it’s often large universities — perhaps with some hospital affiliation — but community clinics typically haven’t been able to take advantage of these advances yet.

Along similar lines, the COVID Alert app, while not technically speaking AI, runs the risk of exacerbating inequity in that it doesn’t work on older phones. So suddenly you have people in these marginalized communities where outbreaks are happening or where the incidence is much higher — they’re the ones who need the app, but they can’t get it because they don’t have up-to-date iPhones, for example.

Overall, I think AI will be a transformative technology and a net benefit rather than a detriment, but I also think that we need a wide diversity of voices and organizations to guide its development.


Frank Rudzicz is a scientist at the Li Ka Shing Knowledge Institute at St Michael’s Hospital, Director of AI at Surgical Safety Technologies Inc., an associate professor of Computer Science at the University of Toronto, co-founder of WinterLight Labs Inc., faculty member at the Vector Institute for Artificial Intelligence, and CIFAR Chair in Artificial Intelligence. His work is in machine learning in healthcare, especially in natural language processing, speech recognition, and surgical safety. His research has appeared in popular media such as Scientific American, Wired, CBC, and the Globe and Mail, and in scientific press such as Nature.

ICTC’s Tech & Human Rights Series:

Our Tech & Human Rights Series dives into the intersections between emerging technologies, social impacts, and human rights. In this series, ICTC speaks with a range of experts about the implications of new technologies such as AI on a variety of issues like equality, privacy, and rights to freedom of expression, whether positive, neutral, or negative. The series also explores questions of governance, participation, and the use of technology for social good.