
A Streamlined Workflow for Quantitative Assessment of Brain Tumor Burden Using FTB
Timothy D. Dondlinger1, Michael J. Schmainda1, Todd R. Jensen2, Kathleen M. Schmainda3, Melissa A. Prah3
1Imaging Biometrics (IB) LLC, Elm Grove, WI, 2Jensen Informatics, LLC, Brookfield, WI, 3Radiology, Medical College of Wisconsin (MCW), Milwaukee, WI.
Introduction: For a novel imaging technology to successfully translate into routine clinical practice, it must be 1) proven scientifically with validated clinical outcomes, 2) regulatory compliant, and 3) intuitive and efficient for end users. The overall goal of this U01 project (CA176110) is to develop a highly flexible and customizable FDA-cleared software workflow tool (SWT) that enables repeatable fractional tumor burden (FTB) calculations, which have been demonstrated to quantitatively predict overall survival (OS) and progression-free survival (PFS) in glioblastoma (GBM) patients [1-3].
Methods: FTB generation is a multi-step process that uniquely applies thresholds to the DSC-MRI perfusion parameter relative cerebral blood volume (rCBV), generated from proprietary algorithms contained within IB Neuro™ (Imaging Biometrics, LLC, Elm Grove, WI), to quantify tumor burden. Histologically validated through stereotactic biopsies at multiple sites [1, 4], these thresholds have been proven to reliably distinguish tumor from treatment effect with high accuracy (>95%). Along with IB Neuro, the initial FTB processing steps also require IB Delta Suite™ to perform image co-registration, subtraction, and rCBV threshold (class) map generation. FTB is the ratio of voxels above the pre-defined rCBV threshold to all voxels in an enhancing region. While deemed clinically valuable as a diagnostic and longitudinal assessment biomarker, the manually intensive FTB processing proved too time-consuming and cumbersome for high-volume institutions, and error-prone for smaller health care providers lacking technical support.
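A minimal illustration may help make the FTB definition concrete. The Python sketch below computes the voxel ratio described above from a standardized rCBV map and a binary enhancing-region mask; the array names and the example threshold value are assumptions for illustration only, and the upstream rCBV generation, standardization, and co-registration are performed by IB Neuro and IB Delta Suite as described.

    import numpy as np

    def fractional_tumor_burden(rcbv_map, enhancing_mask, threshold=1.0):
        """Fraction of enhancing-region voxels whose rCBV exceeds the threshold."""
        enhancing_voxels = rcbv_map[enhancing_mask > 0]
        if enhancing_voxels.size == 0:
            return float("nan")  # no enhancing region delineated
        return float(np.count_nonzero(enhancing_voxels > threshold)) / enhancing_voxels.size

    # Example with synthetic data: a small 3D rCBV volume and a matching binary mask.
    rcbv = np.random.rand(64, 64, 20) * 3.0
    mask = np.zeros(rcbv.shape, dtype=np.uint8)
    mask[20:40, 20:40, 5:15] = 1
    print("FTB = %.2f" % fractional_tumor_burden(rcbv, mask))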
A value-stream map was created with close input from the developers of the FTB process. This mapping technique identified the ideal SWT, automation opportunities, quality assurance outputs for clinical validation, and pre-defined steps for user acknowledgement and confirmation. The resulting map served as input for the development of IB Rad Tech™, a customizable workflow “wizard” that harnesses the underlying technologies of all IB products. Figures 1 and 2 show the image co-registration and Delta T1 mapping steps, respectively, of the overall FTB workflow.


Figure 1: IB Rad Tech dialogue box after co-registering pre- and post-contrast T1w images.



Figure 2: IB Rad Tech dialogue box and a [standardized] Delta T1 map for identifying ROIs of true enhancement.
Results: Prior to IB Rad Tech™, an experienced radiologist needed ~45 minutes and 63 unique operations (while simultaneously maneuvering two software applications) to perform a single FTB work-up. With IB Rad Tech™, a trained MR technologist can perform the same process in ~10 minutes; it requires only 3 unique operations and is contained in a single customizable application. Moreover, user-defined instructions can be added to any processing step to provide site-specific guidance or reminders (see Fig. 2), and the longitudinal assessment and reporting capabilities continue to be enhanced (Fig. 3).

Figure 3: Normalized rCBV counts, by threshold, across multiple time points (insert: one slice of a computed FTB map).


Conclusion: The development of an FTB SWT within IB Rad Tech has enabled consistent and efficient generation of a unique surrogate biomarker, one that has recently been correlated with OS and PFS in the treatment of GBM patients. This streamlined and customizable SWT is FDA cleared, requires minimal training, and can be readily integrated into large and small health care organizations alike.

References:



[1] Hu LS, et al. Neuro Oncol. 2012 Jul;14(7):919-930.

[2] Prah MA, et al. MRI-perfusion-derived fractional tumor burden (FTB) is predictive of overall and progression-free survival in newly diagnosed glioblastoma following concomitant chemoradiotherapy.

[3] Prah MA, et al. MRI-perfusion-derived fractional tumor burden (FTB) stratifies survival in recurrent glioblastoma following treatment with bevacizumab.

[4] Prah MA, et al. Comparison of diffusion and perfusion parameters in distinguishing radiation effect and necrosis from GBM. 2015; Toronto, Ontario, Canada. Mira Smart Conferencing.

Please visit www.imagingbiometrics.com for more information about IB’s capabilities and technologies.

Acknowledgements: NIH/NCI U01 CA176110

Quantitative Volume and Density Response Assessment: Sarcoma and HCC as a Model

Lawrence H Schwartz, M.D. and Binsheng Zhao, D.Sc.

Columbia University Medical Center, New York City

Demonstration Project

A Prototype Imaging Platform for CT-based Response Assessment of Solid Tumors

With the support of our U01 grant, we have developed an imaging platform to assess tumors and tumor changes with therapy. The platform is built on the open-source viewer Weasis and implements almost all of the basic image viewing and manipulation functions provided by a commercial PACS workstation (e.g., layout, synchronization, zoom in/out, window/level, and basic drawing and measurement tools). In addition, we have integrated our solid-tumor segmentation algorithms, as well as a boundary-correction tool, into the platform. For example, to segment a lung lesion, an operator presses the button labeled “Lung” in the Segmentation area of the user interface and then draws a region of interest enclosing the lesion on only one image slice. Seconds after the mouse button is released, computer-generated contours are superimposed on the lesion throughout all images containing it for review. If any part of the segmentation result is unsatisfactory, the operator can modify it using the editing tool. From the final segmentation, tumor volume (as well as the two maximal diameters defined by RECIST and WHO) is calculated automatically. The measurement results are stored in a database, and tumor size changes during the course of therapy can be quantified and graphically displayed. We are currently enhancing this response assessment system so that it can be used more efficiently.
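As a rough illustration of the automatic size measurements mentioned above, the sketch below computes tumor volume and a RECIST-style longest in-plane diameter from a binary segmentation mask. It assumes the mask is a NumPy array ordered (rows, columns, slices) with known voxel spacing; it is an illustration of the calculation, not the platform's own implementation.

    import numpy as np
    from scipy.spatial.distance import pdist

    def tumor_volume_ml(mask, spacing_mm):
        """Segmented volume in millilitres: voxel count times voxel volume."""
        voxel_volume_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
        return mask.sum() * voxel_volume_mm3 / 1000.0

    def longest_inplane_diameter_mm(mask, spacing_mm):
        """Longest in-plane diameter (mm) over all axial slices, as in RECIST."""
        longest = 0.0
        for z in range(mask.shape[2]):
            ys, xs = np.nonzero(mask[:, :, z])
            if ys.size < 2:
                continue
            # Physical in-plane coordinates; brute-force pairwise distances are
            # fine for an illustration on a modest number of voxels.
            pts = np.column_stack([ys * spacing_mm[0], xs * spacing_mm[1]])
            longest = max(longest, pdist(pts).max())
        return longest

    # Synthetic example: a 10 mm cube of tumor at 1 x 1 x 1 mm voxels.
    mask = np.zeros((64, 64, 32), dtype=np.uint8)
    mask[20:30, 20:30, 10:20] = 1
    print(tumor_volume_ml(mask, (1.0, 1.0, 1.0)))              # ~1.0 mL
    print(longest_inplane_diameter_mm(mask, (1.0, 1.0, 1.0)))  # ~12.7 mm (in-plane diagonal)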




Computational Radiomics System to Decode the Radiographic Phenotype
Joost JM van Griethuysen1,3,4, Andriy Fedorov2, Chintan Parmar1, Ahmed Hosny1, Nicole Aucoin2, Vivek Narayan1, Regina GH Beets-Tan3,4, Jean-Christophe Fillion-Robin5, Steve Pieper6, Hugo JWL Aerts1,2
1Department of Radiation Oncology and 2Radiology, Dana-Farber Cancer Institute, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, 3Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands, 4GROW-School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht, The Netherlands, 5Kitware, 6Isomics

Introduction: Medical imaging is able to noninvasively capture the radiographic phenotype of a tumor before, during, and after treatment. Recent studies have shown the promise of quantifying this radiographic phenotype using Radiomics: the comprehensive and automated extraction of a large panel of features. However, there is a need for standardization of feature definitions and image preprocessing, which have been shown to significantly impact the performance of the extracted data. To address this issue, we developed PyRadiomics, an open-source informatics platform implemented in Python for the streamlined extraction of a large panel of engineered features from medical images. Here, we discuss the workflow and architecture of PyRadiomics and demonstrate its application in characterizing lung lesions.
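A minimal usage sketch follows, broadly along the lines of the PyRadiomics documentation at www.radiomics.io; the file names are placeholders, and the extractor class name shown is the one used in recent PyRadiomics releases (older releases spell it RadiomicsFeaturesExtractor).

    from radiomics import featureextractor

    # Default extraction settings; preprocessing choices such as resampling and
    # intensity discretization can be fixed via a parameter file to keep
    # extraction standardized across studies.
    extractor = featureextractor.RadiomicsFeatureExtractor()
    extractor.enableAllFeatures()

    # Image and segmentation in any format readable by SimpleITK (e.g. NRRD);
    # the file names here are placeholders.
    features = extractor.execute("lung_ct.nrrd", "lesion_segmentation.nrrd")

    for name, value in features.items():
        if not name.startswith("diagnostics_"):  # skip provenance entries
            print(name, value)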

Methods: From the publicly available lung cancer diagnostic and screening CT cohort of the Lung Image Database Consortium (LIDC-IDRI), we included 429 distinct lung lesions from 302 patients. Each lesion was segmented and rated for malignancy by four expert radiologists. Using PyRadiomics, we extracted 1120 radiomic features from each nodule for every reader. Stability under segmentation variation was assessed using the intraclass correlation coefficient (ICC) for each feature extracted from all four segmentations. Features with ICC > 0.8 were considered stable and were assessed for their ability to distinguish malignant from benign lesions using unsupervised clustering. Finally, to assess the performance of a multivariate biomarker model, we divided the cohort into training and validation sets. We selected 25 features using Minimum Redundancy Maximum Relevance (mRMR) on the training cohort and fitted them in a random forest classifier. After training, the performance of the model was tested on the validation cohort using the Noether test.
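The modeling step can be sketched roughly as below, under stated assumptions: the ICC-based stability filtering is assumed to have been applied already, synthetic data stands in for the LIDC-IDRI features, and scikit-learn's mutual-information ranking is used as a readily available stand-in for mRMR, which is not part of scikit-learn.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # X: stable radiomic features (n_lesions x n_features), y: malignancy labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(429, 300))
    y = rng.integers(0, 2, size=429)

    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

    # Rank features on the training cohort only and keep the top 25.
    scores = mutual_info_classif(X_train, y_train, random_state=0)
    top25 = np.argsort(scores)[-25:]

    model = RandomForestClassifier(n_estimators=500, random_state=0)
    model.fit(X_train[:, top25], y_train)

    probs = model.predict_proba(X_val[:, top25])[:, 1]
    print("validation AUC: %.2f" % roc_auc_score(y_val, probs))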

Results: Using unsupervised clustering, we were able to distinguish 4 clusters, with significant differences in malignancy classification between the clusters (P<0.0001, χ2 test). 81/88 (92%) lesions in cluster 1 were malignant, whereas 38/40 (95%) lesions in cluster 2 were benign. The proportion of malignant lesions in clusters 3 and 4 was 78/143 (54%) and 38/158 (34%), respectively. The biomarker model trained on the training cohort was able to significantly predict the malignancy status of the lesions in the validation cohort, with an AUC of 0.79 (95% CI 0.73-0.85, Noether test p-value<0.0001).



Conclusion: PyRadiomics provides an easy-to-use, flexible platform for radiomic feature extraction. Features extracted using PyRadiomics from the lung CT images available in the LIDC-IDRI cohort were able to significantly predict the malignancy status of individual lesions. More information can be found at www.radiomics.io.

SOFTWARE DEMONSTRATION
ePAD: Enabling imaging assessment of imaging biomarkers in the workflow of clinical trials
Daniel Rubin, Cavit Altindag, Sheryl John, and Emel Alkim
Stanford University
Radiological imaging provides detailed information about cancer that could greatly assist in objectively and accurately assessing the response of cancer patients to treatment. Although quantitative image-based criteria such as the Response Evaluation Criteria in Solid Tumors (RECIST) are common practice in clinical trials, measurements on cancer lesions are not routinely performed in clinical practice. Radiologists currently report their observations about the status of tumor burden subjectively (i.e., “increasing” or “decreasing” size of lesions) without providing measurements, and this can be insufficient to assess cancer response. If tumor measurements could be made with minimal mouse clicks, most radiologists would make them. However, current tools provide no simple and systematic way to identify the tumor burden being tracked, nor to consistently capture the measurements of each lesion over time. Our exhibit will show a tool we are developing to enable quantitative image-based assessments of tumor burden in clinical research workflows and potentially in routine clinical practice. Our tool, called the electronic Physician Annotation Device (ePAD), assists radiologists in viewing and measuring cancer lesions. It is freely available from the authors, and though it is currently intended for use in clinical research environments, it could ultimately be adopted in routine clinical practice.
The ePAD tool displays images and collects image annotations (such as lesion measurements) in compliance with the Annotation and Image Markup (AIM) standard developed by the National Cancer Institute. ePAD is a rich Web client: a software program that runs in a Web browser on any computer platform, including commercial radiology workstations. ePAD captures lesion measurements from radiologists as they view the images and saves them in the AIM format. We have created several specific enhancements to ePAD to simplify and streamline lesion measurements in busy radiology practice settings. First, users can browse images and make measurements on lesions quickly using ePAD’s compact lesion summary display, which shows the lesions that have been measured previously and enables users to view prior measurements with a single click of the mouse. Second, ePAD ensures that the minimum information necessary to create a meaningful quantitative image report is collected, including a lesion name, measurements, a lesion type, and the anatomic location of the lesion. Text entered into ePAD is matched to RadLex to prevent spelling errors. Third, ePAD produces a quantitative imaging report summarizing the lesions and measurements, including a graphical display of tumor burden, helping to provide decision support to referring physicians about the response of cancer patients to treatment.
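For illustration only, the minimum information collected per lesion could be represented as a simple record like the one below; this is a hypothetical sketch of the content, not the AIM XML schema itself or ePAD's internal data format.

    # Hypothetical record illustrating the minimum information ePAD collects per
    # lesion; the field names and values are illustrative, not the AIM schema.
    lesion_annotation = {
        "lesion_name": "Target lesion 1",
        "lesion_type": "target",
        "anatomic_location": "right lower lobe of lung",  # entry matched against RadLex terms
        "measurements": [
            {"name": "longest diameter", "value": 23.4, "unit": "mm"},
        ],
        "study_date": "2016-05-12",
        "image_reference": {"series_uid": "1.2.3.4", "sop_instance_uid": "1.2.3.4.5"},  # placeholder UIDs
    }

    # Aggregating such records across follow-up studies is what drives the tumor
    # burden graphs and quantitative imaging reports described above.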

Our demonstration will include a computer running ePAD and a poster giving attendees an overview of the capabilities of ePAD and AIM in the workflow of image evaluation during clinical trials, and will specifically demonstrate the following aspects:


1) Workflows: We will show the current radiology workflow for measuring cancer lesions and the potential improvement using ePAD. Attendees will have hands-on experience with the current workflow of viewing images, identifying target lesions, and measuring them, and will see how this is streamlined using ePAD.
2) Architecture and deployment: We will explain how ePAD fits into the informatics infrastructure of institutions that may wish to adopt it.
3) Decision support: We will show how ePAD, by leveraging prior measurements in AIM, prompts the radiologist to annotate all target lesions (and to recognize missing measurements), and how it provides decision support to the oncologist by producing patient response graphs automatically from AIM-annotated images.
4) Auditing and quality assurance: We will show attendees how AIM enables linking the lesion measurements to the actual annotations on images, enabling rapid audit and quality assurance on quantitative assessments of images.
5) Lesion tracking: We will demonstrate the ability of ePAD to query historical annotations in a patient who has had several follow-up studies, automatically generating a quantitative imaging summary report.
6) Assessing quantitative imaging biomarkers: We will show how ePAD facilitates integrating the computation of novel quantitative imaging biomarkers into the clinical image interpretation of clinical trials. It accomplishes this via its plug-in architecture, and various algorithms written in different languages can be seamlessly executed during image viewing and annotation.
7) Incorporating imaging biomarkers into clinical trial workflows: We have recently integrated ePAD with the Quantitative Imaging Feature Pipeline (QIFP), another QIN project that is developing a library of quantitative imaging methods and workflows for executing quantitative imaging biomarker algorithms, enabling their introduction into clinical trials.

Open source tools for standardized communication of quantitative image analysis results
Andrey Fedorov, PhD

Brigham and Women’s Hospital and Harvard Medical School, Boston

Quantitative imaging holds tremendous but largely unrealized potential for objective characterization of disease and response to therapy. Certain quantitation techniques are gradually becoming available both in commercial products and in clinical research platforms. As new tools are introduced, tasks such as their integration into the clinical or research enterprise environment, comparison with similar existing tools, and reproducible validation become critically important. Such tasks require that the analysis tools be able to communicate their results using standardized mechanisms. The use of open standards is also of utmost importance for building aggregate community repositories, such as TCIA, and for data mining of the analysis results. The goal of this presentation is to introduce the attendees to some of the free open source tools available to the QIN community. Specifically, we will discuss the following tools:



  1. The DICOM for Quantitative Imaging (dcmqi) library provides tools for conversion between commonly used research formats and the standardized DICOM representation of results produced by quantitative image analysis tools, such as image segmentations, parametric maps, and quantitative measurements. The dcmqi tools are available for researchers' use as OS-specific packages or Docker images, and are accompanied by source code, test data, documentation, and tutorials: http://github.com/qiicr/dcmqi. A small illustration of reading the standardized output appears after this list.

  2. The Quantitative Reporting extension of 3D Slicer provides a visual user interface to the functionality of dcmqi and supports the workflow of loading DICOM image data, annotating it with semantically rich segmentations, calculating a variety of quantitative measures over the region of interest, and exporting all of the resulting data as a collection of DICOM objects linked together by a DICOM Structured Reporting object that follows DICOM SR template 1500. The resulting dataset can be reopened with all of the semantics of the findings populated and visualized as shown in the example below. The Quantitative Reporting extension is accompanied by test data and documentation: https://qiicr.gitbooks.io/quantitativereporting-guide.
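As a rough illustration of what the standardized output from dcmqi (item 1 above) enables downstream, and not part of dcmqi itself, a DICOM Segmentation object written with dcmqi can be inspected with generic DICOM tooling such as pydicom; the file name below is a placeholder.

    import pydicom

    ds = pydicom.dcmread("liver_segmentation_SEG.dcm")  # placeholder file name

    print("Modality:", ds.Modality)   # 'SEG' for a DICOM Segmentation object
    print("Series description:", ds.get("SeriesDescription", "n/a"))

    # Each segment in the object carries its own label and coded semantics.
    for segment in ds.SegmentSequence:
        print(segment.SegmentNumber, segment.SegmentLabel)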






DATA VISUALIZATION OF QIN PET/CT WORKING GROUP CHALLENGES

PET/CT Working Group
Abstract

The goal of this project is to develop data visualization tools to support collaborative analysis of QIN working group challenges. The QIN PET/CT working group conducted two challenges in 2015-2017 in the area of CT imaging of lung nodules: the “radiomics feature challenge” and the “interval challenge”.



  • Radiomics uses quantitative features to describe nodules during classification or prediction tasks. These mathematical descriptors characterize the size, shape, texture, intensity, margin, and other aspects of a nodule in different ways. In this challenge, we evaluated the repeatability of features between several runs of the same segmentation algorithm and their reproducibility across different segmentation algorithms. We also compared the agreement of features provided by the eight participating institutions.

  • The lung interval challenge used lung CT images from the National Lung Screening Trial (NLST) of patients at risk for lung malignancies, taken before and after a year-long interval. Participants were provided data from two visits for each of 100 patients (50 with benign lesions and 50 with malignant lesions). Using their segmentation algorithms, participants provided volumetric estimates for the identified nodules. The goal of the analyses was to explore the performance of volumetric change in predicting the status of the lesion (malignant or benign); a simple illustration of this analysis follows the list below.
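The sketch below is a minimal, synthetic-data illustration of the interval-challenge analysis described in the second bullet: percent volume change between the two visits used as a predictor of malignancy and summarized with an ROC curve. The actual challenge used NLST images and the participants' segmentation-derived volumes.

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(1)

    # Volumetric estimates (mm^3) for 100 lesions at two visits; malignant lesions
    # are simulated to grow more than benign ones.
    labels = np.array([0] * 50 + [1] * 50)   # 0 = benign, 1 = malignant
    visit1 = rng.uniform(200.0, 2000.0, size=100)
    growth = np.where(labels == 1, rng.normal(1.4, 0.3, 100), rng.normal(1.0, 0.1, 100))
    visit2 = visit1 * growth

    percent_change = 100.0 * (visit2 - visit1) / visit1

    auc = roc_auc_score(labels, percent_change)
    fpr, tpr, thresholds = roc_curve(labels, percent_change)
    print("AUC of percent volume change for predicting malignancy: %.2f" % auc)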

Analyses of these challenge results were performed collaboratively by sub-groups of the PET/CT WG. We developed a range of data visualization tools to aid the analyses. These tools are publicly available through a web interface.


Specifically, we will demonstrate the following tools:
Radiomics Correlation Explorer: We visualize the interdependence between several different institutions’ radiomics features using a graphical model.
Radiomics Statistical Explorer: Using scatter plots and interactively-generated significance statistics, we go more in-depth to visualize the expected and unexpected correlations between selected radiomics features from separate institutions.
Lung Segmentation Explorer: We visualize both the predictive ability and collective agreement between five institutions’ automatic segmentation algorithms. We present a fully-interactive app with rapidly-generated scatter plots, histograms, ROC curves, correlation matrices, confusion matrices, and diagnostic agreement charts. Results may be stratified by segmentation size and malignancy status for deeper dives.

QIN Tools involved in Prospective Clinical Trials






Appendix 1: Report on 2016 QIN – NCTN Planning Meeting

December 2016


Report on the 2016 QIN-NCTN Planning Meeting

Lawrence Schwartz, David Mankoff, Robert Nordstrom, Lori Henderson, Paul Kinahan, Susanna Lee, Andriy Fedorov, Charles Apgar, Mark Rosen



March 20, 2017
Summary

This one-day planning meeting was held at the Sonesta Hotel in Philadelphia, PA, USA on December 13, 2016. The purpose was to bring together thought leaders from the NCTN with QIN investigators and related groups for roundtable discussions on (1) what oncologists need from quantitative imaging in their oncology trials and (2) what imagers can offer to improve the efficacy of these trials. The specific goal of the meeting was to generate 4 to 6 ideas on how to develop prospective testing of quantitative imaging tools in national-level clinical trials. The morning session started with a series of short presentations by oncologists involved with national trials in the areas of systemic therapy, locally targeted therapy, immunotherapy, and precision oncology. This was rounded out by short presentations on the uses of imaging in clinical trials. The central part of the meeting consisted of 4 parallel breakout sessions intended to generate ideas for prospective testing of quantitative imaging tools in national-level clinical trials, as mentioned above. The meeting concluded with a review of the breakout groups, a first pass at a summary, and a list of potential next steps.


The breakout sessions and the summary discussion that followed led to a good exchange of ideas between imagers and NCTN therapy trial leaders. Specific discussion items are listed below. In addition, we provide a brief summary of: (1) the QIN tool listing and descriptions needed to promote the use of QIN tools in NCTN trials, (2) a time scale for moving forward with QIN tool integration into NCTN trials, and (3) next steps and a framework for greater collaboration between the QIN and the NCTN.
1. QIN tool description: NCTN leaders emphasized the need for high-level, brief descriptions of QIN tools suitable for NCTN groups. These should briefly describe the tool, the quantitative imaging (QI) tasks and/or disease sites to which the tool is applicable, and the requirements for using the tool (including imaging data and expertise requirements). There was an emphasis on distinguishing, in the tool description, methods suitable for use in a wide range of sites, i.e., community (NCORP) sites, from those suited to a select group of academic sites. There was also a desire to define the characteristics of the imaging data needed for each tool, including acquisition and device calibration/qualification needs, emphasizing that tools applicable to the type of images acquired in routine practice will have the greatest use. All parties emphasized the need for direct QIN investigator participation in NCTN meetings and trial development discussions. Specifically, while high-level descriptions of QIN tools suitable for NCTN groups are useful, the distribution of descriptions alone would not be sufficient to advance QIN tools into NCTN clinical trials. There is a wide range of tools and methods already available from QIN investigators, and thus potentially fertile ground for their use in NCTN trials, but the NCTN PIs will not know to look for them without vigorous participation by QIN investigators.

2. Time frame: Most NCTN discussants emphasized that discussion of QIN tool integration into clinical trials should start as early as possible in trial development. They noted that the time from early concept development to trial implementation is typically 1-1.5 years or more. There might be some opportunities for earlier integration of QIN tools into emerging NCTN trials, especially if image collection is already planned, the image data requirements for the tool are not too strict or challenging, and central image analysis is possible.
3. Next steps and framework for QIN-NCTN collaboration: In addition to the refinement of the QIN tool descriptions noted above, effective next steps will rely heavily on educating quantitative imaging and QIN investigators in the culture of NCTN trial development and execution and on increasing the participation of QIN investigators in the NCTN groups. This could include:

  1. Directing QIN members to specific disease sites and NCTN groups guided by examples from the breakout session and overall summaries.

  2. The QIN leadership might help direct and incentivize individual QIN U01s toward specific trials and groups to help increase their participation.

  3. At the same time, ongoing emphasis on clinical trials and the NCTN at the annual QIN face-to-face (F2F) meeting, as was done at the spring 2016 meeting, and ongoing discussions fostered by Larry Schwartz on QIN EC calls, are strongly encouraged.

Finally, there was a suggestion to repeat this QIN-NCTN planning meeting annually and/or to align it with an individual NCTN group's fall meeting on a rotating basis, likely the Alliance fall meeting in 2017.
