
MedTech and Ethical Artificial Intelligence

 

The convergence of traditional medical devices and Software as a Medical Device (SaMD) is set to have a major impact on the MedTech space. With advancing technologies delivering strong results, medical device manufacturers have begun to make far greater use of Artificial Intelligence (AI).

 

AI has the potential to revolutionise many industries, but its adoption brings controversy. Researchers and developers have raised concerns that the lack of an ethical framework for the adoption and implementation of AI poses a serious threat to end-users and consumers.

 

These concerns may be especially relevant to medical device manufacturers, who increasingly use AI in new medical devices like smart monitors and health wearables. New standards and regulations on ethical AI may provide essential guidance for medical device manufacturers interested in leveraging AI.

 

Challenges in MedTech and Artificial Intelligence

 

Common Challenges with AI Applications:

 

The expanding use of Artificial Intelligence poses a range of ethical challenges, and bias is among the most pressing.

AI-powered healthcare systems have already run into these problems. A 2020 report in Scientific American noted that AI-powered diagnostic algorithms often performed worse when analysing health data from under-represented groups. The article referenced a report describing AIs intended to support doctors in reading chest X-rays that ended up performing worse when presented with an X-ray from “an underrepresented gender.”

 


Scientific American also pointed to another article, “Machine Learning and Health Care Disparities in Dermatology,” which raised concerns about the potential consequences of skin cancer detection algorithms trained primarily on data from light-skinned individuals. Devices like smart health wearables have the potential to revolutionise healthcare, but if the algorithms they rely on are biased, their usefulness will be limited. The problem lies in the vast datasets on which AI relies.

 

AI algorithms typically perform worse when analysing new data from groups that were underrepresented in the training data. In practice, this means training often encodes existing biases into a new algorithm. At the same time, the use of AI can give those biases an objective veneer, allowing them to slip through undetected.
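To see how such a gap can surface in practice, consider the minimal sketch below. The dataset, column names, and figures are all invented for illustration; the point is simply that a single headline accuracy number can hide a large disparity between demographic groups.

```python
import pandas as pd
from sklearn.metrics import accuracy_score

# Hypothetical evaluation set: model predictions alongside true labels
# and a demographic attribute for each patient record.
results = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 1, 1, 0, 0, 1, 0],
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# The headline figure looks respectable on its own.
print("Overall accuracy:", accuracy_score(results["y_true"], results["y_pred"]))

# Disaggregating by group reveals whether the model underperforms on
# patients who were underrepresented in the training data.
for group, subset in results.groupby("group"):
    acc = accuracy_score(subset["y_true"], subset["y_pred"])
    print(f"Group {group}: accuracy = {acc:.2f}")
```

Disaggregated evaluation of this kind is one of the simplest ways to catch the problem before a device reaches patients.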

 

These challenges will keep evolving, but they should not stall innovation. The ecosystem needs to build ethical practices and principles to navigate them.

What Can AI Ethical Standards Include?

 

There is no single, settled solution to the challenges posed by AI, but many organisations are pioneering regulations and standards that can support companies in developing ethical AI products.

 

AI ethics is still a developing field, but considerable progress is already visible, as reflected in Deloitte’s summary of some key trends. These developments offer a starting point for device manufacturers focused on creating safe, AI-integrated devices. The Institute of Electrical and Electronics Engineers (IEEE) launched the Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS) in 2018. The programme aims to develop specifications and a certification framework for developers of AI systems working to mitigate issues of transparency, accountability, and algorithmic bias.

 

Ultimately, the IEEE hopes ECPAIS certification will assure end-users and individual consumers that a given AI product is safe and that its developers are taking active steps to manage the ethical challenges AI can pose. Bias in AI, in particular, may require further elaboration of developer best practices and new guidelines.

Major AI organisations are already making progress in understanding how bias in datasets translates into bias in AI algorithms.

 

This understanding has helped them develop new frameworks for preventing discrimination in new AI algorithms. Google’s AI division has published a set of recommendations for responsible AI use, and IBM’s AI Fairness 360 framework offers a “comprehensive open-source toolkit of metrics” to help developers uncover unwanted bias in new AI algorithms.
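For illustration, here is a minimal sketch of how such a toolkit might be applied. It assumes the open-source aif360 package is installed; the patient records, column names, and group definitions are invented for the example.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical screening outcomes: 'label' marks the favourable outcome
# (1 = flagged for follow-up care) and 'sex' is a protected attribute.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 1, 1, 1, 1, 0],
    "age":   [34, 51, 47, 62, 58, 41, 39, 55],
    "label": [1, 1, 0, 0, 0, 1, 0, 1],
})

# Wrap the records in AIF360's dataset abstraction.
dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

# Compare favourable-outcome rates between the two groups.
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 0}],
    unprivileged_groups=[{"sex": 1}],
)
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())
```

A statistical parity difference far from zero, or a disparate impact ratio far from one, is a signal that the dataset or model deserves closer scrutiny.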

 

In any case, better training and data-gathering methodologies will likely be necessary for medical device manufacturers to minimise bias in new healthcare algorithms.
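What those methodologies look like will vary from device to device, but one simple, widely used training-side technique is to reweight samples so that underrepresented groups are not drowned out. The sketch below uses synthetic data and scikit-learn’s standard sample_weight parameter to illustrate the idea; it is a starting point, not a complete mitigation strategy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic training data: features, labels, and a group attribute,
# with group "B" heavily underrepresented.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)
group = np.array(["A"] * 170 + ["B"] * 30)

# Weight each sample inversely to its group's frequency so the minority
# group contributes proportionally to the training objective.
counts = {g: np.sum(group == g) for g in np.unique(group)}
weights = np.array([len(group) / (len(counts) * counts[g]) for g in group])

model = LogisticRegression()
model.fit(X, y, sample_weight=weights)
```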

 

The growing use of AI means businesses have started to consider how they will manage the ethical issues that AI algorithms can pose.

 

The development of new frameworks, guidelines, and standards can help businesses develop new AI systems and products. 

 

Don’t let regulations hold back your innovations!

Connect with us for a Free Consultation

Call Now: +353 (0)91-704804

Send An Email: mdd@mddltd.com


Don’t fall behind!

 

Get access to Med-Di-Dia’s newsletter, where industry experts help you stay on top of shifting global markets.

Stay updated with the latest trends in the world of medical devices!

 
