nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation

nnU-Net (v2) is a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task in the biomedical domain.

The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions, on 23 public datasets used in international biomedical segmentation competitions. Pretrained models for all datasets used in the model’s reference evaluation study are available for download on Zenodo. The nnU-Net code is publicly available on GitHub as an out-of-the-box tool, making state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training.

Description: Exemplary segmentation results generated by nnU-Net for a variety of datasets are shown in the figure below, obtained from the model’s reference publication:

nnU-Net handles a broad variety of datasets and target image properties: All examples originate from test sets of different international segmentation challenges to which nnU-Net was applied. Target structures for each dataset are shown in 2D projected onto the raw data (left) and in 3D together with a volume rendering of the raw data (right). All visualizations were created with the MITK Workbench. a, Heart (green), aorta (red), trachea (blue) and esophagus (yellow) in CT images (dataset (D)18). b, A549 lung cancer cells (purple) in FM (D22). c, Lung nodules (yellow) in CT images (D6). d, Liver (yellow), spleen (orange), left and right kidneys (blue and green, respectively) in T1 in-phase MRI (D16). e, Synaptic clefts (green) in EM scans (D19). f, Edema (yellow), enhancing tumor (purple), necrosis (green) in MRI (T1, T1 with contrast agent, T2, FLAIR) (D1). g, Kidneys (yellow) and kidney tumors (green) in CT images (D17). h, Thirteen abdominal organs in CT images (D11). i, Hepatic vessels (yellow) and liver tumors (green) in CT images (D8). j, Left ventricle (yellow) in MRI (D2). k, Right ventricle (yellow), left ventricular cavity (blue) and myocardium of left ventricle (green) in cine MRI (D13). l, HL60 cell nuclei (instance segmentation, one color per instance) in FM (D21). (figure obtained from model’s reference publication)

The nnU-Net model can automatically adapt to new datasets. The figure below shows how nnU-Net systematically addresses the configuration of entire segmentation pipelines and provides a visualization and description of the most relevant design choices.

Proposed automated method configuration for deep learning-based biomedical image segmentation: Given a new segmentation task, dataset properties are extracted in the form of a ‘dataset fingerprint’ (pink). A set of heuristic rules models parameter interdependencies (shown as thin arrows) and operates on this fingerprint to infer the data-dependent ‘rule-based parameters’ (green) of the pipeline. These are complemented by ‘fixed parameters’ (blue), which are predefined and do not require adaptation. Up to three configurations are trained in a five-fold cross-validation. Finally, nnU-Net automatically performs empirical selection of the optimal ensemble of these models and determines whether post-processing is required (‘empirical parameters’, yellow). The table on the bottom shows explicit values as well as summarized rule formulations of all configured parameters. Res., resolution. (figure obtained from model’s reference publication)
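To make the notion of a ‘dataset fingerprint’ more concrete, below is a minimal illustrative sketch of how such a fingerprint could be collected from a training set. This is not nnU-Net’s actual implementation: the function name, the assumed in-memory inputs and the exact set of recorded properties are placeholders chosen for illustration.

```python
import numpy as np


def extract_dataset_fingerprint(images, segmentations, spacings):
    """Collect dataset properties that rule-based heuristics can operate on.

    `images`, `segmentations` and `spacings` are assumed to already be loaded
    as lists of NumPy arrays / per-case spacing tuples (hypothetical inputs).
    """
    shapes = np.array([img.shape for img in images])
    spacings = np.asarray(spacings, dtype=float)
    median_spacing = np.median(spacings, axis=0)

    # Foreground intensity statistics, needed e.g. by CT normalization rules.
    foreground = np.concatenate(
        [img[seg > 0].ravel() for img, seg in zip(images, segmentations)]
    )

    return {
        "num_training_cases": len(images),
        "median_shape": np.median(shapes, axis=0),
        "median_spacing": median_spacing,
        "spacing_anisotropy": float(median_spacing.max() / median_spacing.min()),
        "intensity_mean": float(foreground.mean()),
        "intensity_std": float(foreground.std()),
        "intensity_percentiles_0_5_99_5": np.percentile(foreground, [0.5, 99.5]),
    }
```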

With this model, its developers aimed to outline a new path between the status quo of primarily expert-driven method configuration in biomedical segmentation on the one hand and primarily data-driven AutoML approaches on the other. Specifically, they defined a recipe that systematizes the configuration process on a task-agnostic level and drastically reduces the search space for empirical design choices when given a new task:

  1. Collect design decisions that do not require adaptation between datasets and identify a robust common configuration (‘fixed parameters’).
  2. For as many of the remaining decisions as possible, formulate explicit dependencies between specific dataset properties (‘dataset fingerprint’) and design choices (‘pipeline fingerprint’) in the form of heuristic rules to allow for almost-instant adaptation on application (‘rule-based parameters’).
  3. Learn only the remaining decisions empirically from the data (‘empirical parameters’).
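As an illustration of step 2, the sketch below shows how heuristic rules of this kind could map a dataset fingerprint (as in the earlier sketch) to rule-based parameters such as target spacing, patch size and normalization scheme. The thresholds, the memory budget and the rules themselves are simplified placeholders and do not reproduce nnU-Net’s published rules.

```python
import numpy as np


def infer_rule_based_parameters(fingerprint, memory_budget_voxels=128 ** 3):
    """Toy heuristic rules mapping a dataset fingerprint to pipeline parameters.

    The constants below are illustrative stand-ins, not nnU-Net's actual rules.
    """
    spacing = np.asarray(fingerprint["median_spacing"], dtype=float)
    shape = np.asarray(fingerprint["median_shape"], dtype=float)

    # Cap the anisotropy of the target spacing (placeholder rule: limit the
    # coarsest axis to 3x the finest spacing).
    target_spacing = spacing.copy()
    if spacing.max() / spacing.min() > 3:
        target_spacing[np.argmax(spacing)] = 3 * spacing.min()

    # Median image shape after resampling to the target spacing.
    resampled_shape = shape * spacing / target_spacing

    # Shrink the patch size until it fits the assumed GPU memory budget.
    patch_size = np.maximum(resampled_shape.astype(int), 1)
    while np.prod(patch_size) > memory_budget_voxels:
        largest_axis = np.argmax(patch_size)
        patch_size[largest_axis] = max(1, int(patch_size[largest_axis] * 0.9))

    # Clip-and-z-score normalization for CT, plain z-scoring otherwise.
    normalization = "clip_and_zscore" if fingerprint.get("modality") == "CT" else "zscore"

    return {
        "target_spacing": target_spacing,
        "patch_size": patch_size,
        "normalization": normalization,
    }
```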

This recipe, as implemented in the nnU-Net model, was validated on the ten datasets provided by the Medical Segmentation Decathlon. The resulting segmentation method (nnU-Net) is able to perform automated configuration for arbitrary new datasets. In contrast to existing research methods:

  • nnU-Net is holistic, that is, its automated configuration covers the entire segmentation pipeline (including essential topological parameters of the network architecture) without any manual decisions.
  • automated configuration in nnU-Net is fast, comprising a simple execution of rules and only a few empirical choices to be made (illustrated in the sketch after this list), thus requiring virtually no compute resources beyond standard model training.
  • nnU-Net is data efficient; encoding design choices based on a large and diverse data pool serves as a strong inductive bias for application to datasets with limited training data.
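The following minimal sketch illustrates what the ‘empirical parameters’ step could look like: given cross-validation Dice scores for each trained configuration, pick the single configuration or two-configuration ensemble with the best mean score. Ensembling is approximated here by averaging per-case scores, which is a simplification of averaging softmax predictions, and the selection logic is an illustrative stand-in rather than nnU-Net’s actual procedure.

```python
from itertools import combinations

import numpy as np


def select_best_configuration(cv_dice_per_config):
    """Pick the configuration or two-configuration ensemble with the highest
    mean cross-validation Dice.

    `cv_dice_per_config` maps a configuration name (e.g. '2d', '3d_fullres')
    to per-case Dice scores from the cross-validation.
    """
    candidates = {}

    # Single configurations.
    for name, dices in cv_dice_per_config.items():
        candidates[(name,)] = float(np.mean(dices))

    # Ensembles of two configurations (per-case score averaging as a proxy).
    for a, b in combinations(cv_dice_per_config, 2):
        ensemble = (np.asarray(cv_dice_per_config[a]) + np.asarray(cv_dice_per_config[b])) / 2
        candidates[(a, b)] = float(np.mean(ensemble))

    best = max(candidates, key=candidates.get)
    return best, candidates[best]


# Hypothetical usage with made-up cross-validation scores:
scores = {
    "2d": [0.82, 0.85, 0.80],
    "3d_fullres": [0.88, 0.90, 0.87],
    "3d_cascade": [0.89, 0.88, 0.86],
}
print(select_best_configuration(scores))
```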

The general applicability of nnU-Net’s automated configuration is demonstrated on 13 additional datasets in the model’s reference publication. Altogether, results are reported on 53 segmentation tasks, covering an unprecedented diversity of target structures, image types and image properties. As an open source tool, nnU-Net can simply be trained out of the box to generate state-of-the-art segmentations.
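As a rough guide to what ‘out of the box’ looks like in practice, the sketch below wraps the typical nnU-Net v2 three-step workflow (plan and preprocess, train, predict) in Python subprocess calls. The command names follow the v2 documentation, but the dataset ID, configuration, fold and folder paths are placeholders, the nnUNet_raw/nnUNet_preprocessed/nnUNet_results environment variables are assumed to be set, and the current flags should be verified against the repository documentation.

```python
import subprocess

# Placeholder dataset ID, configuration and fold; replace with your own.
dataset_id = "001"
configuration = "3d_fullres"
fold = "0"

# 1. Extract the dataset fingerprint, apply the heuristic rules and preprocess.
subprocess.run(
    ["nnUNetv2_plan_and_preprocess", "-d", dataset_id, "--verify_dataset_integrity"],
    check=True,
)

# 2. Train one configuration for one fold of the five-fold cross-validation.
subprocess.run(["nnUNetv2_train", dataset_id, configuration, fold], check=True)

# 3. Predict on new images with the trained fold.
subprocess.run(
    [
        "nnUNetv2_predict",
        "-i", "/path/to/input_images",
        "-o", "/path/to/output_segmentations",
        "-d", dataset_id,
        "-c", configuration,
        "-f", fold,
    ],
    check=True,
)
```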

Data Availability: All 23 datasets used in the model’s reference evaluation study are publicly available and can be accessed via their respective challenge websites.

Results: nnU-Net outperformed specialized pipelines in a range of diverse tasks. The figure below provides an overview of the quantitative results achieved by nnU-Net and the competing challenge teams across all 53 segmentation tasks. Despite its generic nature, nnU-Net outperforms most existing segmentation solutions, even though the latter were specifically optimized for the respective task. Overall, nnU-Net sets a new state of the art in 33 of 53 target structures and otherwise shows performance on par with or close to the top leaderboard entries.

Quantitative results from all international challenges that nnU-Net competed in. For each segmentation task, results achieved by nnU-Net are highlighted in red and competing teams are shown in blue; nnU-Net’s rank as well as the total number of competing algorithms is displayed in the bottom-right corner of each plot. Note that for the CHAOS challenge (D16), nnU-Net only participated in two of the five subtasks, for reasons outlined in section 9 of Supplementary Note 6 of the reference publication. The Cell Tracking Challenge leaderboard (D20–D23) was last accessed on 30 July 2020; all remaining leaderboards were last accessed on 12 December 2019. SIM, simulated. (figure obtained from model’s reference publication)

Release Notes for nnU-Net V2: The core of the old nnU-Net (nnU-Net v1) was developed in a short time period while participating in the Medical Segmentation Decathlon challenge in 2018. Consequently, code structure and quality were not the best. Many features were added later on and didn’t quite fit into the nnU-Net design principles.

nnU-Net V2, on the other hand, is a complete overhaul. The “delete everything and start again” kind. So everything is better (in the author’s opinion). While the segmentation performance remains the same, a lot of cool stuff has been added. It is now also much easier to use nnU-Net as a development framework and to manually fine-tune its configuration for new datasets. A big driver for the reimplementation was also the emergence of Helmholtz Imaging, prompting the developers to extend nnU-Net to more image formats and domains, as highlighted here.

Acknowledgments: nnU-Net is developed and maintained by the Applied Computer Vision Lab (ACVL) of Helmholtz Imaging and the Division of Medical Image Computing at the German Cancer Research Center (DKFZ).

Claim: The nnU-Net model sets a new state of the art in various semantic segmentation challenges and displays strong generalization characteristics, requiring neither expert knowledge nor compute resources beyond standard network training. As indicated by Litjens et al. and quantitatively confirmed by the results presented in the model’s reference publication, method configuration in biomedical imaging used to be considered a “highly empirical exercise”, for which “no clear recipe can be given”. Based on the recipe referenced above, nnU-Net is able to automate this often insufficiently systematic and cumbersome procedure and may thus help alleviate this burden. nnU-Net is meant to be leveraged as an out-of-the-box tool for state-of-the-art segmentation, as a standardized and dataset-agnostic baseline for comparison, and as a framework for the large-scale evaluation of novel ideas without manual effort.

GitHub Page

Zenodo Page

Documentation

Reference Publication
