
AIDA blog: A milestone for ethics in clinical data sharing

Thoughts from AIDA Director Claes Lundström
When someone brings up the topic of AI ethics, the backdrop is usually a feeling of grave concern. The discussion revolves around potential adverse effects, often in the context of philosophical reasoning about what AI may become in the distant future. While this is worth considering, I'd argue that we should spend more time talking about the ethics of AI here and now. What could be sound guidelines for healthcare, academia and industry in their current day-to-day work in AI?
Data sharing is a key issue for AI, and at the core of the ethical discussion. Yesterday, a paper was published that is likely to become a key reference point for practical ethics of data sharing – a special report in Radiology by David Larson and Stanford colleagues including Curt Langlotz, called “Ethics of Using and Sharing Clinical Imaging Data for Artificial Intelligence: A Proposed Framework” (https://pubs.rsna.org/doi/10.1148/radiol.2020192536).
First of all: Read it! I’ll discuss some central points below, but you’ll need the full paper to appreciate the rigorous analysis behind their proposed framework, such as the backing found in recognized standards for biomedical ethics.
A fundamental question is: Who owns the data? One could argue that the patient is the owner, or the care provider – the latter is de facto the interpretation behind data sales that have occurred. The conclusion by Larson et al. is a third alternative. Once the clinical data has been used to provide care, the primary purpose is fulfilled for both the patient and the care provider. Beyond that point, the moral priority is to improve healthcare for future patients, which is best pursued by viewing the data as a public good rather than an asset under ownership.
With this ownership view, it becomes evident what the appropriate moral standpoint is. Sharing is good. Sharing is the ethical and responsible course of action. Selling data for profit is unethical. Providing data under exclusive arrangements is unethical, since the limited spread limits the benefits that can be gained from the data.
Remember that sharing is not just one aspect among many of AI in healthcare; it is the absolute key. High-quality AI comes from massive amounts of representative data, i.e., from many places. And the main obstacle for clinical use at this point is the domain shift challenge – we know we can get great precision for data from the sources we trained on, but also that performance will drop when we go outside of those sources. Broad sharing is the remedy.
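The domain shift effect can be made concrete with a toy sketch (my own illustration, not from the paper): a threshold classifier is "trained" on measurements from one hypothetical site and then evaluated on a second site whose instruments have a different calibration offset. All names (make_site, the shift parameter) are invented for the example.

```python
import random

random.seed(0)

def make_site(n, shift):
    # Each sample has a true underlying value v in [0, 1); the clinical
    # label is 1 if v > 0.5. The site's measurement adds a calibration
    # offset, so two sites report different numbers for the same patient.
    data = []
    for _ in range(n):
        v = random.random()
        label = 1 if v > 0.5 else 0
        data.append((v + shift, label))
    return data

site_a = make_site(1000, shift=0.0)   # the site we trained on
site_b = make_site(1000, shift=0.3)   # a site with different calibration

# "Training": pick the decision threshold as the mean measurement at site A.
threshold = sum(x for x, _ in site_a) / len(site_a)

def accuracy(data):
    # Fraction of samples where (measurement > threshold) matches the label.
    return sum((x > threshold) == (y == 1) for x, y in data) / len(data)

acc_a = accuracy(site_a)  # near-perfect on the training distribution
acc_b = accuracy(site_b)  # degrades under the calibration shift
```

The classifier is essentially perfect on data from its own site but loses a large chunk of accuracy on the shifted site, even though the underlying disease signal is identical. Training on data pooled from many sources is what makes a model robust to such shifts.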
But how does this affect industry? Healthcare needs good tools, tools need to be provided by companies, and companies need revenue to provide those tools. By stopping data from becoming proprietary assets, do we remove essential business incentives? No. While having access to good data is necessary for AI development, it is far from sufficient. Huge efforts are needed to turn data into knowledge and tools, with everything from data readiness preparation to training scheme design. These activities are value-adding, and companies doing them need to get returns on such investments. Indeed, the Stanford authors acknowledge the value of turning data into knowledge and conclude that it is fair to make a profit from it. I would add that it's not only fair but necessary for AI advances to ever reach patients.
Healthcare providers have struggled to deal with the fact that data under their control is valuable. The proposed ethical framework makes it clear that their role is not asset monetization, but effective data stewardship for the greater good. Besides being the right thing to do, I think this role fits care provider organizations much better, and that it will be a relief for them to have clear ethical grounds for choosing that path.
I should also mention that a crucial part of the ethical framework is to have strong safeguards preventing privacy problems in sharing. Importantly, the authors propose a pragmatic handling of de-identification, allowing for a reasonable balance between the risk of errors and the effort needed on a case-by-case basis.
How do AIDA's operations relate? I'm very happy to conclude that the ethical framework is very much in line with the Data sharing policy we have developed for the Swedish context at AIDA (https://datasets.aida.medtech4health.se/sharing/). Interestingly, however, the scope of the Stanford paper is not only research, as in the AIDA policy; it also covers development efforts and purely commercial activities.
While ethical arguments are often used to question potential AI efforts, the Stanford framework highlights the opposite aspect of AI ethics: We should acknowledge the ethical obligation to make advances valuable for society, without letting misdirected ethical concerns stop us in that endeavor.