Innovation Projects

AIDA's development projects produce AI-based decision support and are run by research groups from industry and academia across the country, in collaboration with healthcare providers. Below are summaries of AIDA's current projects.

Evaluation of deep learning to predict the diagnosis of LBD using 18F-FDG PET

Kobra Etminani, PhD
Halmstad University

The main purpose of this project is to compare deep machine learning methods with human visual interpretation in clinical applications. We want to evaluate whether artificial intelligence (AI) models (including shallow and deep learning algorithms) can be trained to predict the final clinical diagnoses of patients who underwent 18F-FDG PET scans of the brain and, once trained, how these algorithms compare with current standard clinical reading methods in differentiating patients with a final diagnosis of Lewy body dementia (LBD) from those with no evidence of dementia. We hypothesize that an AI model could detect features or patterns that are not evident on standard clinical review of the images (both visual and quantitative, using the available commercial programs for brain quantification), enabling earlier detection of pathology and improving the final diagnostic classification of individuals.

There are several challenges in this domain, including observer variability and the limited number of nuclear medicine specialists with experience in 18F-FDG PET brain scans. We believe we can contribute an AI algorithm that is less dependent on the individual nuclear medicine specialist and helps reach a diagnosis from the images faster, thus improving healthcare for these patients.
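
For illustration of the "shallow" end of the model spectrum mentioned above, the sketch below shows a simple baseline classifier operating on precomputed regional uptake features; the feature extraction, data and labels are hypothetical placeholders, and the project's actual models are not specified here.

```python
# Minimal sketch of a "shallow" baseline: logistic regression on regional
# 18F-FDG uptake values (e.g. mean uptake per atlas region). The feature
# extraction step is assumed to exist elsewhere and is purely illustrative;
# the project's actual models are not specified here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: one row per patient, one column per brain region,
# label 1 = LBD, 0 = no evidence of dementia.
X = rng.normal(size=(200, 90))    # placeholder regional uptake features
y = rng.integers(0, 2, size=200)  # placeholder diagnostic labels

clf = LogisticRegression(max_iter=1000)
print("Cross-validated AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```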

Trustworthy AI-based decision support in cancer diagnostics

Joakim Lindblad
Uppsala University

To reach successful implementation of AI-based decision support in healthcare, it is of the highest priority to enhance trust in the system outputs. One reason for the lack of trust is the limited interpretability of the complex, non-linear decision-making process; a way to build trust is thus to improve humans’ understanding of that process, which drives research within the field of Explainable AI. Another reason for reduced trust is the typically poor handling of new and unseen data by today’s AI systems. An important path toward increased trust is therefore to enable AI systems to assess their own hesitation: understanding what a model “knows” and what it “does not know” is a critical part of a machine learning system. For successful implementation of AI in healthcare and the life sciences, it is imperative to acknowledge the need for cooperation between human experts and AI-based decision-making systems: deep learning methods and AI systems should not replace, but rather augment, clinicians and researchers.

This project aims to facilitate understandable, reliable and trustworthy utilization of AI in healthcare, empowering the human medical professionals to interpret and interact with the AI-based decision support system.
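
One widely used way to let a network express its own hesitation (not necessarily the approach chosen in this project) is Monte Carlo dropout: keep dropout active at inference time and read the spread of repeated stochastic predictions as a measure of uncertainty. A minimal PyTorch sketch with a hypothetical classifier:

```python
# Monte Carlo dropout: keep dropout active at inference and treat the spread
# of repeated stochastic forward passes as a proxy for model uncertainty.
# The tiny classifier below is hypothetical and only for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(256, 128), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(128, 2),
)

def predict_with_uncertainty(model, x, n_samples=50):
    model.train()  # keeps dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(0), probs.std(0)  # mean prediction and its spread

x = torch.randn(1, 256)  # placeholder input features
mean_p, std_p = predict_with_uncertainty(model, x)
print("prediction:", mean_p, "uncertainty:", std_p)
```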

Segmentation of the waist of the nerve fiber layer by AI for clinical evaluation of progression of glaucoma

Per Söderberg
Uppsala University

Glaucoma is a major cause of vision loss. Objective evaluation of glaucoma with visual field testing is insufficient for early detection and for early assessment of disease progression. In the optic nerve head, the nerve fibers conducting visual information from the eye to the brain merge into the optic nerve. Optical coherence tomography images the structure of the optic nerve head with close to micrometer resolution. We have developed a semi-automatic strategy that allows angularly resolved measurement of the waist of the nerve fibers at the optic nerve head, PIMD(angle). We have demonstrated the feasibility of the strategy and shown that, compared with visual field testing, PIMD(angle) measurement provides much better resolution and can be obtained much faster and with far less strain on the patient.

Now, a fully automatic software implementation is required for use in routine clinical work. We have demonstrated that it is feasible to use artificial intelligence (AI) for fully automatic detection of PIMD(angle). In this project, a fully automatic method will be developed, evaluated against a large dataset of semi-automatically classified optic nerve head volumes from glaucoma patients, and applied to new measurements of the optic nerve head in subjects without pathology.

AI-based evaluation of coronary artery disease using myocardial perfusion scintigraphy based on deep learning

Miguel Ochoa Figueroa, PhD
Region Östergötland

The project aims to develop an AI algorithm for the evaluation of coronary artery disease using myocardial perfusion scintigraphy (MPS), based on deep learning convolutional neural networks. Analysis of perfusion acquisitions will be performed based on the American Heart Association's (AHA) standard 17-segment model (Figure 1). To create a robust AI algorithm, patient selection will be strict, including only patients who have undergone invasive coronary angiography (ICA). The AI algorithm will also include patients with low or very low probability of ischemia according to the pre-test probability by Gibbons et al [1]. We will include around 750 patients who fulfill the above criteria. Finally, we will add some heart conditions that present with typical patterns in MPS, such as left bundle branch block, to also train the AI algorithm on these patterns.

The development of the AI algorithm will reduce inter-observer differences between doctors. Another problem is the limited number of nuclear medicine specialists with experience in MPS, which is expected to decrease further in the foreseeable future. An automatic or even semi-automatic system will drastically improve productivity and thus also improve healthcare.
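
For illustration only, the sketch below shows the kind of convolutional network that could map perfusion polar maps to one abnormality score per segment of the 17-segment model; the input format (a stress/rest polar-map pair) and the architecture are assumptions, not the project's actual design.

```python
# Illustrative sketch: a small CNN that takes a stress/rest pair of perfusion
# polar maps (2 input channels) and outputs one abnormality score per AHA
# segment (17 outputs). Architecture and input format are assumptions, not
# the project's actual design.
import torch
import torch.nn as nn

class SegmentScorer(nn.Module):
    def __init__(self, n_segments=17):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_segments)
        )

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)))  # per-segment scores in [0, 1]

polar_maps = torch.randn(1, 2, 64, 64)    # placeholder stress/rest polar maps
print(SegmentScorer()(polar_maps).shape)  # -> torch.Size([1, 17])
```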

Radiological Crowdsourcing for Annotation of Emphysema in SCAPIS CT Images

Mats Lidén, PhD
Örebro University

Chronic obstructive pulmonary disease (COPD) is a common and progressive lung disease generally caused by smoking. Emphysema, an irreversible destruction of lung tissue that can be assessed with CT imaging, is a key feature of COPD.

In the national Swedish CArdioPulmonary bioImage Study (SCAPIS), 30,000 Swedish men and women aged 50 to 64 years have been investigated with detailed imaging and functional analyses, including a thoracic CT scan. One of the aims of SCAPIS is to predict COPD and provide a better understanding of the disease. Automated emphysema quantification in the acquired CT images is therefore expected to be valuable for future research and may strengthen the clinical role of imaging in COPD.

The purpose of the present AI project is to develop a method for quantifying the extent of emphysema in CT images acquired in the SCAPIS pilot project in Gothenburg in 2012. This includes evaluation of an alternative approach for obtaining radiological annotations: radiological crowdsourcing. Instead of recruiting a small number of thoracic radiologists for the time-consuming annotation task, the annotation is split into a very large number of small annotation tasks that can be performed by a large number of participating radiologists.
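
For context, a common quantitative proxy for emphysema extent on CT is the low-attenuation area percentage: the fraction of lung voxels below a density threshold, often −950 HU. The sketch below computes that measure under the assumption that a lung mask is already available; it illustrates the metric, not the annotation-driven method the project will develop.

```python
# Low-attenuation area percentage (LAA%): fraction of lung voxels below a
# HU threshold, a standard proxy for emphysema extent. A lung segmentation
# mask is assumed to be available; the arrays below are placeholders.
import numpy as np

def laa_percent(ct_hu: np.ndarray, lung_mask: np.ndarray, threshold: float = -950.0) -> float:
    """Percentage of lung voxels with attenuation below `threshold` (in HU)."""
    lung_voxels = ct_hu[lung_mask > 0]
    return 100.0 * np.mean(lung_voxels < threshold)

ct = np.random.normal(-800, 150, size=(64, 64, 64))  # placeholder CT volume (HU)
mask = np.ones_like(ct, dtype=np.uint8)              # placeholder lung mask
print(f"LAA-950: {laa_percent(ct, mask):.1f} %")
```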

AI-based Decision Support System for Burn Patient Care

Tuan Pham, PhD
Linköping University

Burns are among the most life-threatening of all injuries and a major global public health problem, with considerable health-economic impact. In the European Union, burns are the most common fatal injuries after transport accidents, falls, and suicide.

Burns are classified into several types by depth: superficial dermal, deep dermal, and full thickness burns. The burn extent of a patient is quantified as the percentage of total body surface area (%TBSA) affected by partial thickness or full thickness burns. The initial assessment of %TBSA is crucial for continued care.

To achieve precision burn assessment for optimal clinical decision making, our aims are to use digital color images and develop AI tools for 1) complex burn depth prediction, 2) measurement of body surface area, and 3) calculation of %TBSA in a precise, easy-to-use, and cost-effective way. The proposed AI-based decision support system will be extended for the management of chronic wounds, including diabetic ulcers, which our team members are planning to work on.
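
To make aim 3 concrete: once burn depth has been predicted per pixel (aim 1) and a skin surface area has been assigned to each pixel (aim 2), %TBSA reduces to a ratio of areas. A minimal sketch under those assumptions, with placeholder values:

```python
# %TBSA as a ratio of areas: sum the surface area of pixels predicted as
# partial- or full-thickness burn and divide by the total body surface area.
# Per-pixel depth labels and per-pixel area estimates are assumed to come
# from the AI tools in aims 1 and 2; the values below are placeholders.
import numpy as np

# Hypothetical label codes: 0 = unburned, 1 = superficial dermal,
# 2 = deep dermal, 3 = full thickness.
depth_labels = np.random.randint(0, 4, size=(512, 512))
pixel_area_cm2 = np.full((512, 512), 0.05)  # estimated skin area per pixel
total_bsa_cm2 = 18000.0                     # estimated total body surface area

counted = np.isin(depth_labels, [1, 2, 3])  # partial and full thickness burns count toward %TBSA
tbsa_percent = 100.0 * pixel_area_cm2[counted].sum() / total_bsa_cm2
print(f"%TBSA: {tbsa_percent:.1f}")
```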

AI for a Healthy Eye

Christoffer Levandowski, PhD
QRTECH

Retinal image from Porwal et al. (2019)

The project will deliver an automated algorithm to make the analysis process of retinal images from diabetes patients more time-efficient, benefitting both health care professionals and patients.

The Swedish National Diabetes Register (NDR) covers 95% of diabetes patients and had approximately 455,000 registered patients in 2017. Diabetes patients are regularly offered fundus screenings as a preventative measure. This results in large amounts of image data being collected and studied manually every year, and the need for a more time-efficient alternative is evident. We propose a solution based on machine learning, enabled by the progress made in the field of artificial intelligence. Using machine learning methods, an automated algorithm can be created and used by hospital staff as a support system for diagnosis. Instead of the images being analysed manually, the algorithm will analyse them and provide filtered results as a foundation for diagnostic decisions. Thereby, patients in need of treatment would be identified earlier, reducing distress and uncertainty. Furthermore, the reduction in tedious manual work would allow the ophthalmic nurses usually responsible for analysing retinal images to redirect their focus to other tasks, such as treatment and patient contact.
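
The filtering step can be sketched as a simple triage rule: a model (here a hypothetical stand-in) assigns each fundus image a probability of referable disease, and only images above a threshold are queued for human review. The threshold and the prediction function below are illustrative assumptions.

```python
# Triage sketch: rank fundus images by a model's predicted probability of
# referable disease and forward only those above a threshold to a human
# grader. `predict_referable_prob` is a hypothetical stand-in for the
# trained model; the threshold would need clinical validation.
from typing import Callable, List, Tuple
import random

def triage(image_ids: List[str],
           predict_referable_prob: Callable[[str], float],
           threshold: float = 0.3) -> Tuple[List[str], List[str]]:
    needs_review, likely_healthy = [], []
    for image_id in image_ids:
        p = predict_referable_prob(image_id)
        (needs_review if p >= threshold else likely_healthy).append(image_id)
    return needs_review, likely_healthy

# Usage with a dummy model that returns random scores:
ids = [f"fundus_{i:04d}" for i in range(10)]
review, healthy = triage(ids, lambda _id: random.random())
print(len(review), "images queued for human review")
```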

Brain Tumor Segmentation Using Multi-Modal MRI

Anders Eklund, PhD
Linköping University

Brain tumors severely affect the quality of life for a large number of people. In this project, the goal is to improve the segmentation of brain tumors by combining novel diffusion imaging with deep learning. Existing approaches for segmenting brain tumors use 2D or 3D deep learning (convolutional neural networks) applied to different types of structural magnetic resonance imaging (MRI), for example T1-weighted images, T2-weighted images, and T1-weighted images with gadolinium contrast agent.

In this project, we will take advantage of advanced diffusion imaging techniques featuring general gradient waveforms to gain further information about the underlying microstructure. Using a multi-channel 3D CNN, the network will learn to combine the different MRI modalities to improve segmentation.
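
The channel-stacking idea can be illustrated with a minimal PyTorch sketch: each co-registered MRI modality becomes one input channel of a 3D convolutional network that predicts a per-voxel tumor probability. The toy network and the choice of four modalities are assumptions, not the project's actual architecture.

```python
# Illustration of multi-channel 3D input: co-registered MRI modalities are
# stacked along the channel dimension and fed to a 3D CNN that predicts a
# per-voxel tumor probability. The tiny network is a placeholder.
import torch
import torch.nn as nn

n_modalities = 4  # e.g. T1, T1+Gd, T2, a diffusion-derived map (assumed)

model = nn.Sequential(
    nn.Conv3d(n_modalities, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, kernel_size=1),  # per-voxel tumor logit
)

volumes = torch.randn(1, n_modalities, 64, 64, 64)  # placeholder co-registered volumes
tumor_prob = torch.sigmoid(model(volumes))
print(tumor_prob.shape)  # -> torch.Size([1, 1, 64, 64, 64])
```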

The project will lead to automatic segmentation and volume estimation of brain tumors, which could save time for neuroradiologists. This could be useful for comparing tumor size before and after treatment. By combining several types of MR images, the segmentation can also become more accurate, which can lead to a better foundation for treatment decisions.

Machine Learning for Reconstructive Surgery

Helen Fransson, PhD
Medviso

3D printed skull

3D printing has the potential to revolutionise reconstructive surgery. Corrections of skeletal deformities are often very complicated and time-consuming. Today, most corrections of skeletal deformities are done freehand, and thus depend heavily on the expertise of the surgeon; some deformities are so complicated that it is not possible to correct them.

In this project we will develop machine learning techniques for creating cutting/drilling guides and patient-specific implants for 3D printing. The techniques will be developed together with surgeons and integrated into existing CE-marked software available for clinical use. The developed technique has great potential to improve patient care by making it possible to correct very advanced deformities with high precision and lower complication rates. Pre-manufactured 3D printed implants will shorten operation times. It has been shown that shortened operation times translate to a reduced risk of infections and complications; this can therefore lead to better outcomes for the patient and reduced costs.

Artificial Intelligence for Human Brain Structural Connectomics

Rodrigo Moreno, PhD
KTH Royal Institute of Technology

Connectomics aims at studying connectivity among different areas of the brain. Structural connectivity, which is the topic of this project, can be inferred by analyzing tractograms computed from diffusion MRI (dMRI). Structural connectomics is a promising tool for detecting connectivity impairments due to diseases. Unfortunately, the current state-of-the-art tractography methods have high sensitivity but very low specificity, which makes it difficult to perform structural connectivity analyses. Despite its potential, the current use of artificial intelligence (AI) in the field is relatively scarce, partially due to the orientation-dependent nature of dMRI data and the difficulty of having reliable ground truths.

In this project, we aim to develop an AI solution for structural connectivity analysis. These methods will be used to assess impairments in the structural connectivity of patients with monoaural canal atresia (MCA), that is, patients who are born with one ear canal closed but with all the internal hearing organs intact. More specifically, the project includes: a) development of a bundle-specific deep learning-based tractography method; b) testing the method for structural connectivity analysis in MCA patients. Detecting the time point at which early-stage impairments in brain connectivity appear in MCA patients is crucial for giving them the best possible treatment.
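
To illustrate the connectivity-analysis step in b): given a tractogram and a brain parcellation, a structural connectivity matrix is commonly built by counting the streamlines whose endpoints fall in each pair of regions. In the sketch below, the streamlines and the endpoint-to-region lookup are placeholders for the outputs of the tractography and parcellation steps.

```python
# Structural connectivity matrix from a tractogram: count streamlines whose
# two endpoints land in each pair of parcellation regions. Streamlines and
# the endpoint-to-region lookup are placeholders.
import numpy as np

def connectivity_matrix(streamlines, region_of, n_regions):
    """streamlines: list of (N_i, 3) point arrays; region_of: point -> region id (0 = none)."""
    C = np.zeros((n_regions, n_regions), dtype=int)
    for sl in streamlines:
        a, b = region_of(sl[0]), region_of(sl[-1])
        if a > 0 and b > 0:
            C[a - 1, b - 1] += 1
            if a != b:
                C[b - 1, a - 1] += 1
    return C

# Dummy data: random streamlines whose endpoints map to 10 regions.
rng = np.random.default_rng(0)
streamlines = [rng.random((50, 3)) for _ in range(1000)]
region_of = lambda p: int(rng.integers(0, 11))  # placeholder endpoint-to-region lookup
print(connectivity_matrix(streamlines, region_of, n_regions=10))
```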

Improving the Quality of Cardiovascular MRI Using Deep Learning

Tino Ebbers, PhD
Linköping University

Cardiovascular MRI

In conventional MR angiography, contrast agents are crucial for obtaining contrast between the blood pool and the surrounding tissue. However, the most common types of contrast agents are gadolinium based, which is contraindicated in patients with renal impairment. Moreover, recent studies have suggested that gadolinium can remain deposited in the brain after the examination. Phase-contrast angiography allows the acquisition of angiographic images without the use of contrast agents, but the quality and spatial resolution of these data are not always sufficient.

The purpose of the proposed project is to develop and evaluate deep learning techniques to enhance image quality of cardiovascular MR imaging methods. The main focus of the work will be on improving important quality characteristics of cardiovascular MRI, such as blood-tissue contrast and image resolution, thus improving clinical applicability of these images for diagnostic purposes.
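
One family of techniques that fits this description is CNN-based image restoration, trained to map lower-quality acquisitions to higher-quality targets. The residual-learning sketch below is purely illustrative of the idea; the project's actual models, training data and losses are not specified here.

```python
# Residual image-enhancement sketch: a small CNN predicts a correction that
# is added back to the low-quality input image. Architecture, training data
# and loss are assumptions for illustration only.
import torch
import torch.nn as nn

class EnhanceNet(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # predict only the residual correction

low_quality = torch.randn(1, 1, 128, 128)        # placeholder MR image
high_quality_target = torch.randn(1, 1, 128, 128)
loss = nn.functional.l1_loss(EnhanceNet()(low_quality), high_quality_target)
print(loss.item())
```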

Successful implementation of these tools will result in less need for studies requiring contrast agents, contributing to a reduced risk for contrast-agent related complications and lower costs for clinical healthcare. Moreover, obtaining MR data with higher resolutions will improve the diagnostic quality of the images and facilitate shorter examination times, resulting in reduced patient discomfort and examination costs.

Image- and AI-Based Cytological Cancer Screening

Joakim Lindblad, PhD
Uppsala University

Oral cancer incidence is rapidly increasing worldwide, with over 450,000 new cases found each year. The most effective way of decreasing cancer mortality is early detection, which makes routine screening of patient risk groups highly desirable. However, screening for oral cancer is not feasible with today’s methods, which rely on painful tissue sampling and laborious manual examination by a medical expert. As a consequence, oral cancer is often not discovered until it has metastasized to another location. The prognosis at this stage is significantly worse than when the cancer is caught while still localized to the oral cavity.

We will develop a system that uses artificial intelligence (AI) to automatically detect oral cancer in microscopy images of brush samples, which can quickly and without pain be routinely taken at ordinary dental clinics. We expect that the proposed approach will be crucial for introducing a screening program for oral cancer at dental clinics, in Sweden and the world. The project, which involves researchers from Uppsala University, Karolinska University Hospital, Folktandvården Stockholms län AB, and the Regional Cancer Center in Kerala, India, will greatly benefit from AIDA to turn developed methods into clinically useful tools.

Platform for Efficient Processing of Large Study Cohorts

Einar Heiberg, PhD
Medviso AB

New medical guidelines require large clinical studies to improve and change medical treatment and patient management. Currently, there is an unmet need for efficient tools that can analyze imaging data from large-scale study cohorts of 1,000-100,000 patients. Machine learning has a critical role in the analysis of large study cohorts, and may provide a paradigm shift in how support tools for clinical decisions are developed.

Existing clinical analysis software packages are not adequate for analyzing large patient cohorts. The main reasons are that 1) the workflow is not streamlined enough, 2) manual interactions are required to load/save data and batch processing is lacking, and 3) existing tools do not save the results in an open format that allows re-processing to extract new data or for use with machine learning approaches.

Medviso has, in close collaboration with the Department of Clinical Physiology, developed the software Segment for medical image analysis. This software platform is freely available for research purposes.

The purpose of this project is to improve the existing software platform with tools adapted to processing large study cohorts, overcoming the obstacles above.
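
As an illustration of what such a batch-oriented workflow could look like, the sketch below loops over a cohort without manual interaction and writes results in an open, re-processable format (CSV). The analysis function is a hypothetical stand-in for the Segment-based tools.

```python
# Batch-processing sketch: analyze every study in a cohort without manual
# interaction and store results in an open format (CSV) so they can be
# re-processed or used for machine learning later. `analyze_study` is a
# hypothetical placeholder for the actual analysis.
import csv
from pathlib import Path

def analyze_study(study_dir: Path) -> dict:
    # Placeholder: in reality this would load images and run the analysis.
    return {"study": study_dir.name, "lv_volume_ml": None, "status": "ok"}

def run_cohort(cohort_dir: str, output_csv: str) -> None:
    studies = sorted(p for p in Path(cohort_dir).iterdir() if p.is_dir())
    with open(output_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["study", "lv_volume_ml", "status"])
        writer.writeheader()
        for study in studies:
            writer.writerow(analyze_study(study))

# Example call (paths are placeholders):
# run_cohort("/data/cohort", "results.csv")
```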

AI Based Tumor Definition for Improved and Milder Radiation Therapy

Carl Sieversson
Spectronic Medical AB

The result of our development project will be used by physicians for planning radiation therapy. In today’s radiotherapy, a physician manually delineates the exact contours of the tumor in a three-dimensional MR image. Based on this delineation, a set of intensity-modulated radiation beams is directed towards the tumor, subjecting the tumor to the intended radiation dose while avoiding excessive radiation to sensitive surrounding healthy tissue. However, manually delineating a tumor is time-consuming work characterized by a great deal of individual variation and uncertainty regarding the exact boundaries of the volume to be included. Recent scientific publications identify this arbitrariness as one of the largest sources of error in modern radiation therapy. The outcome of our development project will provide physicians with an automatically delineated target volume which can be used as a starting point for further manual refinement. These delineations will be generated by deep learning-based software, which adaptively improves itself by studying how different physicians outline different types of tumors.

Decision Support for Classification of Microscopy Images in Digital Pathology Using Deep Learning Applied to Gleason Grading

Anders Heyden, PhD
Lund University, Lund

Prostate cancer (PCa) is the second most common malignancy in men worldwide. In Sweden, the incidence is now close to 10,000 new cases per year. Correct identification of the stage and severity of PCa on histological preparations can help healthcare specialists predict the outcome for the patient and choose the best treatment options.

The best method of evaluating the severity of prostate cancer in tissue samples is architectural assessment of a histologically prepared tissue specimen (from biopsy or surgery) by an experienced pathologist. The pathologist assigns a “Gleason grade” ranging from 1 (benign) to 5 (severe cancer). This type of cancer severity grading is highly correlated with prognostic outcomes and is the best biomarker in PCa for predicting outcome. However, inter-observer differences between pathologists are a major problem. This has costly repercussions: under-diagnosed patients may become more ill, while over-diagnosed patients receive unnecessary treatment, altogether decreasing quality of life and increasing costs for the healthcare system.

The project aims to develop decision support systems for Gleason scoring of microscopy images of prostate cancer based on deep convolutional neural networks (CNNs). We will evaluate the results and obtain feedback on difficult cases from pathologists. The purpose of such a system is to obtain a more reliable estimate of the Gleason score and thus the correct treatment for the patient. The project is based at the Centre for Mathematical Sciences, Lund University, in close cooperation with the Department of Urology at Lund University Hospital and SECTRA in Linköping.
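
One common way to apply CNNs to large histology images, offered here only as an illustration of the general approach rather than the project's specific design, is to classify small tissue patches and then aggregate the patch-level predictions into a slide-level estimate:

```python
# Patch-based grading sketch: a CNN classifies small tissue patches into
# Gleason grades, and patch predictions are aggregated into a slide-level
# summary. Network, patch size and aggregation rule are illustrative
# assumptions, not the project's actual design.
import torch
import torch.nn as nn

N_CLASSES = 5  # Gleason grades 1 (benign) to 5 (severe), following the grading scale above

patch_classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, N_CLASSES),
)

patches = torch.randn(64, 3, 128, 128)                 # placeholder patches from one slide
patch_grade = patch_classifier(patches).argmax(1) + 1  # per-patch grade 1-5

# Naive aggregation, for illustration only: the most frequent patch grade on the slide.
slide_grade = int(patch_grade.mode().values)
print("slide-level grade estimate:", slide_grade)
```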

Interactive Visualization Tools for Verification and Improvement of Deep Neural Network Predictions

Ida-Maria Sintorn, PhD
Vironova AB, Stockholm

Understanding, and being able to show, what leads to a decision by AI and machine learning methods is important for gaining acceptance for their use in clinical diagnostics. The purpose of this project is to adapt and develop interactive visualization tools to explain and test which regions and details in an image are important for the decision of the artificial neural network. Interacting with the visualizations and providing corrections and feedback will further train the neural network for improved performance. The interactive tools will, in addition, allow convenient hypothesis testing regarding the importance of image features, e.g. by blocking regions or scales of an image, or parts of the network, and observing the effect on the results.
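
The idea of blocking image regions and observing the effect corresponds to occlusion sensitivity analysis: slide a mask over the image, rerun the network, and record how much the score for the predicted class drops. A minimal sketch, with a placeholder model:

```python
# Occlusion sensitivity sketch: slide a gray patch over the image, rerun the
# model, and record how the score for the target class changes. Large drops
# indicate regions the network relies on. The model is a placeholder.
import torch
import torch.nn as nn

def occlusion_map(model, image, target_class, patch=16, stride=16, fill=0.0):
    _, h, w = image.shape
    heat = torch.zeros((h + stride - 1) // stride, (w + stride - 1) // stride)
    with torch.no_grad():
        base = torch.softmax(model(image[None]), dim=1)[0, target_class]
        for i, y in enumerate(range(0, h, stride)):
            for j, x in enumerate(range(0, w, stride)):
                occluded = image.clone()
                occluded[:, y:y + patch, x:x + patch] = fill
                score = torch.softmax(model(occluded[None]), dim=1)[0, target_class]
                heat[i, j] = base - score  # positive = region supported the decision
    return heat

# Dummy model and image for demonstration.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))
print(occlusion_map(model, torch.randn(3, 64, 64), target_class=1).shape)
```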

Verifying what information decision support systems base their decisions on is key for rapid conversion of research results to clinical use. The tools to be developed in this project will hence not be used directly in the clinic, but rather support and serve as a catalyst for deploying other decision support systems in the clinics. The tools will be evaluated on clinical applications using electron microscopy for kidney diagnostics and light microscopy for cervical cancer screening.

Machine Learning for Automated Measurements of Liver Fat

Magnus Borga, PhD
AMRA AB, Linköping

Non-alcoholic fatty liver disease (NAFLD), a range of diseases characterized by steatosis, is associated with the metabolic syndrome and can lead to advanced fibrosis, cirrhosis, and hepatocellular carcinoma. Non-alcoholic steatohepatitis, a more serious form of NAFLD, is now the single most common cause of liver disease in developed countries and is associated with high mortality. Diagnosis and grading of hepatocellular fat in patients with NAFLD usually requires a liver biopsy and histology. However, as liver biopsy is an expensive, invasive, and painful procedure that is sensitive to sampling variability, the use of MRI as a non-invasive biomarker of liver fat has shown tremendous progress in recent years. Automation of this technology would further reduce costs for clinical use. Measuring fat in the liver is, however, not trivial since larger blood vessels and bile ducts need to be avoided in order to get accurate estimates of the liver fat fraction. Therefore, the aim of this project is to develop an automated method, based on machine learning, for placement of regions of interest (ROI) in which the liver fat can be quantified.
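
For context, once an ROI has been placed the quantification itself is straightforward: average the proton density fat fraction (PDFF) over the ROI voxels. The sketch below shows only that final step; the automated ROI placement that is the actual subject of the project is assumed, and the arrays are placeholders.

```python
# Liver fat quantification given an ROI: mean proton density fat fraction
# (PDFF) over the ROI voxels. The automated ROI placement that the project
# targets is assumed here; the arrays are placeholders.
import numpy as np

def mean_liver_fat(pdff_map: np.ndarray, roi_mask: np.ndarray) -> float:
    """Mean fat fraction (%) within the ROI."""
    return float(pdff_map[roi_mask > 0].mean())

pdff = np.clip(np.random.normal(8.0, 3.0, size=(64, 64, 32)), 0, 100)  # placeholder PDFF map (%)
roi = np.zeros_like(pdff, dtype=np.uint8)
roi[20:30, 20:30, 10:15] = 1  # placeholder ROI avoiding vessels and bile ducts
print(f"Liver fat fraction: {mean_liver_fat(pdff, roi):.1f} %")
```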

Interactive Deep Learning Segmentation for Decision Support in Neuroradiology

Robin Strand, Uppsala University

Many brain diseases can damage brain cells (nerve cells), which can lead to loss of nerve cells and, secondarily, loss of brain volume. Even slight loss of nerve cells can give severe neurological and cognitive symptoms. Technical imaging advancements allow detection and quantification of very small tissue volumes in magnetic resonance (MR) neuroimaging. Due to the enormous amount of information in a typical MR brain volume scan, and difficulties such as partial volume effects, noise, artefacts, etc., interactive tools for computer aided analysis are absolutely essential for this task.

In this project, we will develop and evaluate interactive deep learning segmentation methods for quantification and treatment response analysis in neuroimaging. Interaction speed will be achieved by dividing the segmentation procedure into an offline pre-segmentation step and an online interactive loop in which the user adds constraints until a satisfactory result is obtained. See the conceptual illustration.
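
The two-step design can be sketched as follows: an offline step produces a pre-segmentation once, and an online loop folds user-supplied constraints (for example, clicks marking errors) back into the result until the user is satisfied. Both components below are placeholders that only illustrate the workflow described above.

```python
# Interactive segmentation sketch: offline pre-segmentation followed by an
# online loop that refines the result from user constraints until accepted.
# The networks, the constraint format and the refinement rule are all
# placeholders; only the overall workflow is illustrated.
import numpy as np

def presegment(volume: np.ndarray) -> np.ndarray:
    """Offline step (placeholder): would run the trained network once."""
    return (volume > volume.mean()).astype(np.uint8)

def refine(segmentation: np.ndarray, constraints) -> np.ndarray:
    """Online step (placeholder): fold user clicks back into the segmentation."""
    for (z, y, x), label in constraints:
        segmentation[z, y, x] = label
    return segmentation

volume = np.random.rand(32, 64, 64)                      # placeholder MR volume
seg = presegment(volume)
user_constraints = [((10, 20, 30), 1), ((5, 5, 5), 0)]   # simulated user corrections
while user_constraints:                                  # loop until the user adds no more corrections
    seg = refine(seg, user_constraints)
    user_constraints = []                                # a real UI would collect new corrections here
print("segmented voxels:", int(seg.sum()))
```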

A successful outcome of this project will allow detailed and correct diagnosis, as well as accurate and precise analysis of treatment response in neuroimaging, in particular in the quantification of intracranial aneurysm remnants and of brain tumor (glioma, WHO grades III and IV) growth.

Automatic Detection of Lung Emboli in CTPA Examinations

Tobias Sjöblom, Uppsala University

Pulmonary embolism (PE) is a serious condition in which blood clots travel to, and occlude, the pulmonary arteries. To diagnose or exclude PE, radiologists perform CT pulmonary angiographies (CTPA). Each CTPA consists of hundreds of images. Manual CTPA interpretation is, therefore, not only time-consuming, but is also dependent on human factors, especially in the stressful conditions of emergency medical care. Several automatic PE detection systems have been developed, but none with acceptable accuracy for clinical usage.

Our team will combine expertise in clinical radiology, medical image processing and analysis, and diagnostic technologies to develop a system for fast, precise and reliable automatic identification of pulmonary embolism in CTPA examinations. The availability of a large set of annotated CTPA examinations, assembled by our team, is a particular asset that enables the development and training of advanced deep learning methods to address the task. We therefore expect to reach the performance required for use in daily clinical practice.

The resulting system will save precious time for both patients suspected of having PE and expert radiologists in their daily clinical routines. This will, in turn, have a strong positive impact on the healthcare system in Sweden, considering that CTPA is today one of the most common emergency CT examinations in the country.

Simultaneous Landmark Detection and Organ Segmentation in Medical Images for Orthopedic Surgery Planning

Chunliang Wang, KTH, Stockholm