= Introduction =
The future development of ITK relies heavily on feedback from the community of users and developers, as well as on guidelines from the clinical and medical research community.

This page is intended to gather feedback regarding the future direction of ITK and, in particular, to list specific features and functionalities that would make the toolkit more useful for the medical imaging community.

We skipped this exercise for the period 2006-2007.

* You may want to take a look at the roadmap for 2007-2008:
** [[ITK Roadmap 2007 2008|Roadmap 2007-2008]]
* Or see the previous roadmap, for the period 2005-2006:
** [[ITK Roadmap 2005 2006|Roadmap 2005-2006]]

= Users/Developers Support =

This section relates to the services that will help beginners get started with ITK, as well as to material that will help experienced users take better advantage of the functionality available in the Insight Toolkit.
* Tutorials
* Mailing List Support
* Bug Tracking / Triage / Fixing
* Weekly telephone conferences
= Infrastructure =

== Maintaining Existing Infrastructure ==
* Dealing with new compilers
** Visual Studio Express 2008 is in Beta 2
* Using the Bullseye code coverage tool in addition to gcov
** Bullseye works on Windows and Linux
== Improving Infrastructure ==

=== Nightly Testing ===
* CMake
* Dart v3
** Includes integration of BatchMake with CMake/Dart.
*** BatchBoards, which plot results from multiple datasets and machines and over time, will be integrated with Dartboards.
=== Grand Challenges ===
In this section we would like to address the need for comparing different algorithms by running them on common collections of images.

This can be done by:

# making the collections publicly available,
# having the authors make their source code publicly available,
# or a combination of both.

The goal is to provide an infrastructure where algorithms can be compared effectively in a fair and reproducible context.
* RIRE (and other automated test tools)
** Complete the integration of BatchMake and the Insight Journal.
*** Provides automated scoring of segmentation and registration algorithms submitted to the IJ.
*** Scoring is accomplished using BatchMake and sequestered testing data.
** Gather additional data with ground truth for testing.
*** Data must have already been acquired by others and must have already been made available for public use.
*** Funding will go towards preparing the data and defining scoring metrics that work in the RIRE testing environment (a minimal example of such a metric is sketched below).
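To make the notion of a scoring metric concrete, here is a minimal sketch (not part of the actual RIRE/BatchMake infrastructure) that computes the Dice overlap between a submitted segmentation and a sequestered ground-truth labeling using standard ITK classes. It assumes both images are defined on the same grid, and the file names are placeholders.

<pre>
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageRegionConstIterator.h"
#include <iostream>

// Sketch: Dice overlap between a candidate segmentation and a ground truth.
// Usage: diceScore groundTruth.mha candidateSegmentation.mha
int main( int argc, char * argv[] )
{
  if ( argc < 3 )
    {
    std::cerr << "Usage: " << argv[0] << " truth test" << std::endl;
    return 1;
    }

  typedef itk::Image<unsigned char, 3>          LabelImageType;
  typedef itk::ImageFileReader<LabelImageType>  ReaderType;

  ReaderType::Pointer truthReader = ReaderType::New();
  ReaderType::Pointer testReader  = ReaderType::New();
  truthReader->SetFileName( argv[1] );   // sequestered ground truth
  testReader->SetFileName( argv[2] );    // submitted segmentation
  truthReader->Update();
  testReader->Update();

  typedef itk::ImageRegionConstIterator<LabelImageType> IteratorType;
  IteratorType itTruth( truthReader->GetOutput(),
                        truthReader->GetOutput()->GetBufferedRegion() );
  IteratorType itTest( testReader->GetOutput(),
                       testReader->GetOutput()->GetBufferedRegion() );

  // Count foreground voxels in each image and in their intersection.
  unsigned long truthCount = 0, testCount = 0, overlapCount = 0;
  for ( itTruth.GoToBegin(), itTest.GoToBegin(); !itTruth.IsAtEnd(); ++itTruth, ++itTest )
    {
    const bool inTruth = ( itTruth.Get() != 0 );
    const bool inTest  = ( itTest.Get()  != 0 );
    if ( inTruth )           { ++truthCount;   }
    if ( inTest )            { ++testCount;    }
    if ( inTruth && inTest ) { ++overlapCount; }
    }

  // Dice = 2 |A n B| / ( |A| + |B| )
  const double dice = ( truthCount + testCount > 0 )
                      ? 2.0 * overlapCount / ( truthCount + testCount ) : 0.0;
  std::cout << "Dice overlap = " << dice << std::endl;
  return 0;
}
</pre>

A scoring harness would wrap a program of this kind in a BatchMake script, run it over the sequestered collection, and post the aggregate scores to the dashboard.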
= Improving ITK =

== Improving Existing Code Base ==
* Completing the Statistics Refactoring
** Classifiers refactoring (hardly backward compatible / will use the deprecation policy)
** Multi-threading Expectation Maximization (EM)
*** The method of Dempster, Laird, and Rubin (1977); it is used by STAPLE, the classifiers, and reconstruction methods.
* Multi-Processors
** Work with Dan Blezek to integrate a new threading library (one that uses a thread pool and other advanced threading features) as an alternative to ITK's existing threading library.
** Thread pools, in particular, will reduce the overhead (both programmer and CPU) of using threads; a minimal sketch of the idea follows this list.
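To make the thread-pool idea concrete, the following is a minimal, self-contained sketch of a pool that reuses a fixed set of worker threads instead of creating one thread per request. It is written against C++11 std::thread purely for brevity; it is not the library mentioned above, and an ITK integration would go through the toolkit's own threading layer.

<pre>
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal thread pool: workers pull jobs from a shared queue, so the cost of
// creating and destroying threads is paid only once, not per job.
class ThreadPool
{
public:
  explicit ThreadPool( unsigned int numberOfThreads )
  {
    for ( unsigned int i = 0; i < numberOfThreads; ++i )
      {
      m_Workers.emplace_back( [this] { this->WorkerLoop(); } );
      }
  }

  ~ThreadPool()
  {
    {
      std::lock_guard<std::mutex> lock( m_Mutex );
      m_Stopping = true;
    }
    m_Condition.notify_all();
    for ( std::thread & worker : m_Workers ) { worker.join(); }
  }

  // Enqueue a job; a sleeping worker is woken up to run it.
  void Enqueue( std::function<void()> job )
  {
    {
      std::lock_guard<std::mutex> lock( m_Mutex );
      m_Jobs.push( std::move( job ) );
    }
    m_Condition.notify_one();
  }

private:
  void WorkerLoop()
  {
    for ( ;; )
      {
      std::function<void()> job;
      {
        std::unique_lock<std::mutex> lock( m_Mutex );
        m_Condition.wait( lock, [this] { return m_Stopping || !m_Jobs.empty(); } );
        if ( m_Stopping && m_Jobs.empty() ) { return; }  // drain queue, then exit
        job = std::move( m_Jobs.front() );
        m_Jobs.pop();
      }
      job();  // run the job outside the lock
      }
  }

  std::vector<std::thread>           m_Workers;
  std::queue<std::function<void()> > m_Jobs;
  std::mutex                         m_Mutex;
  std::condition_variable            m_Condition;
  bool                               m_Stopping = false;
};

int main()
{
  ThreadPool pool( 4 );
  // Each job could, for example, process one strip of an image region.
  for ( int strip = 0; strip < 8; ++strip )
    {
    pool.Enqueue( [strip] { std::cout << "processing strip " << strip << "\n"; } );
    }
  return 0;  // the destructor drains the queue and joins the workers
}
</pre>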
== Keeping up with the Field ==
* Adopting new methods from the literature
** Process the backlog of current contributions in the Insight Journal
*** Move methods to the Review directory
*** Perform code reviews and subsequent fixes (coding style / refactoring)
** Transition methods from the MICCAI 2007 Open Source Workshop
* GPU framework
** Implement a portion of a pipeline (perhaps a simplified one with no branching allowed and no prior state saved) on the GPU.
** This goes beyond using someone else's library to implement a single algorithm; instead, it would build a framework on which GPU algorithms could be developed.
* Data provenance
** The next step after image databases is data provenance, i.e., tracking images after they leave the database.
** Provenance refers to tracking the acquired data and the subsequent processing steps that lead to a particular image.
** Two components are essential:
*** tracking the processing steps while the image is in memory, and
*** tracking the processing steps while the image is on disk.
** Regarding tracking images in memory
*** ITK's MetaDataDictionary is ideal for this.
** Regarding tracking images on disk
*** While this could be done within some image formats, provenance information can be lost when an image is converted between formats.
*** To preserve provenance independent of the image format, we propose to use an adjunct file to store an image's provenance, i.e., an extra file that parallels (refers to) the actual data stored in an arbitrary image format.
** Modify ITK's filters and extend ITK's MetaDataDictionary to generate, store, and maintain data provenance information (a minimal sketch of the in-memory part follows this list).
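As a minimal sketch of the in-memory half of this proposal, the snippet below uses the existing itk::MetaDataDictionary together with itk::EncapsulateMetaData / itk::ExposeMetaData to append processing-history entries to an image. The key name "ProvenanceHistory" and the string encoding of each step are illustrative assumptions, not an established convention.

<pre>
#include "itkImage.h"
#include "itkMetaDataDictionary.h"
#include "itkMetaDataObject.h"
#include <iostream>
#include <string>

// Append one processing step to an image's provenance record, stored as a
// string entry in its MetaDataDictionary.  Key and encoding are assumptions.
template <class TImage>
void AppendProvenance( TImage * image, const std::string & step )
{
  const std::string key = "ProvenanceHistory";
  itk::MetaDataDictionary & dictionary = image->GetMetaDataDictionary();

  std::string history;
  itk::ExposeMetaData<std::string>( dictionary, key, history ); // stays empty if absent
  history += step + ";";
  itk::EncapsulateMetaData<std::string>( dictionary, key, history );
}

int main()
{
  typedef itk::Image<float, 3> ImageType;
  ImageType::Pointer image = ImageType::New();

  // Filters (or the application) would record each step that produced the image.
  AppendProvenance( image.GetPointer(), "ImageFileReader(input=chest.mha)" );
  AppendProvenance( image.GetPointer(), "MedianImageFilter(radius=2)" );

  std::string history;
  itk::ExposeMetaData<std::string>( image->GetMetaDataDictionary(),
                                    "ProvenanceHistory", history );
  std::cout << history << std::endl;  // prints the accumulated history
  return 0;
}
</pre>

Under the adjunct-file proposal, the on-disk half would amount to serializing this history string into a small companion file next to the image, whatever format the image itself is written in; the layout of that companion file is left unspecified here.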