Medical Imaging: Current status and future perspectives

Habib Zaidi

Division of Nuclear Medicine, Geneva University Hospital

CH-1211 Geneva, Switzerland

Abstract:
Medical imaging is often thought of as a way of viewing anatomical structures of the body. Indeed, x-ray computed tomography and magnetic resonance imaging yield exquisitely detailed images of such structures. It is often useful, however, to acquire images of physiologic function rather than of anatomy. Such images can be acquired by imaging the decay of radioisotopes bound to molecules with known biological properties. This class of imaging techniques is known as nuclear medicine imaging.

Keywords:
Medical Imaging; Image Processing; Multimodality; PACS.

1. Introduction

Recent advances in imaging technology allow evaluation of biologic processes and events as they occur in vivo. For example, new magnetic resonance and radioisotope imaging methods reflect anatomy and dynamic body functions heretofore discerned only from textbook illustrations. These new methods give functional images of blood flow and metabolism essential to diagnoses and to research on the brain, heart, liver, kidneys, bone, and other organs of the human body.

Although the mathematical sciences were used in a general way for image processing, they were of little importance in biomedical work until the development in the 1970s of computed tomography (CT) for the imaging of x-rays (leading to computer-assisted tomography, or CAT, scans) and isotope emission tomography (leading to positron emission tomography (PET) scans and single photon emission computed tomography (SPECT) scans). In the 1980s, MRI eclipsed the other modalities in many ways as the most informative medical imaging methodology [1]. Besides these well-established techniques, computer-based mathematical methods are being explored in applications to other well-known methods, such as ultrasound and electroencephalography, as well as newer techniques of optical imaging, impedance tomography, and magnetic source imaging. It is worth pointing out that, while the final images of many of these techniques bear many similarities to each other, the technologies involved in each are completely different, and the parameters represented in the images differ markedly in character as well as in medical usefulness. In each case, rather different mathematical or statistical models are used, with different equations. One common thread is the paradigm of reconstruction from indirect measurements; this is the unifying theme of this report.

The imaging methods used in biomedical applications span a broad and growing range of modalities.

Introductions to some of these modalities, and their frontiers, are briefly described. The physics, mathematical, and engineering challenges inherent in each of the methods are not included. While the emphasis is on research challenges, significant development opportunities are also pointed out where appropriate. In the following section, we step back to present a vision of the emerging world of biomedical imaging, wherein imaging plays an expanded role in diagnosis and therapy, and sophisticated image processing gives medical personnel access to much greater insight into their patients' conditions. To emphasize the mathematical underpinnings of biomedical imaging, Refs. [1,2] take a cross-cutting and more detailed look at mathematical models and algorithms and outline the related research opportunities.

Microwave imaging, infrared imaging, electron spin resonance, and interferometry are not discussed in this report because those methods at present are further from the forefront of development and applications. Also not covered in this report are the techniques of in vitro microscopy, such as confocal microscopy, atomic force microscopy, fluorescence microscopy, and methods of electron microscopy used to study biological systems. Although use of radioisotopes and contrast agents in x-ray computed tomography and MRI involves the chemical sciences, innovations in these areas are not critically evaluated in this report because they are too far afield of its focus.

Although this report emphasizes methodologies for visualizing internal body anatomy and function, some mention is warranted of the importance of improving techniques for the evaluation of human biology and disease processes through visualization of external features and functions. For example, sequential image-based descriptions of skin texture or color, gait, flexibility, and so on would require the development of convenient observation systems, perhaps with greater sensitivities than the human eye, and mathematical methods (e.g., artificial intelligence) for identifying significant changes.

This report discusses some of the modalities listed above in some detail to give the reader a picture of the current state of development. It also points to aspects of these biomedical imaging technologies where a deeper understanding is needed and to frontiers where future advances are most likely. The reader is encouraged to imagine the horizons for new developments and to critically examine the recommendations offered for the further development of each imaging modality.

2. Morphological imaging

Tomos is Greek for slice. The standard slice orientation in most brain imaging is transaxial or "axial". Left is shown at right. Note that, like the "lower organs", we look up to the brain. Other standard planes of view are coronal and sagittal (fig. 1). Non-tomographic images represent "projections" from a single point of view and include bolus contrast x-ray angiograms and MR angiograms. Tomographic images are made up of little squares called "pixels" (picture elements), each of which takes a grey-scale value from 0 (black) to 255 (white). Pixels are about 1 mm on each side. The thickness of the slice is often 3 or 5 mm, thus creating a three-dimensional volume element, or "voxel", which is shaped like a shoe box. Pixel intensity represents an average from tissue within the voxel.
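The voxel geometry described above amounts to a line of arithmetic; the values below are the typical 1 mm pixel size and 5 mm slice thickness quoted in the text:

```python
# Voxel geometry from the text: ~1 mm square pixels, 3-5 mm slice thickness.
pixel_mm = 1.0        # pixel side length in the slice plane
slice_mm = 5.0        # slice thickness
voxel_volume_mm3 = pixel_mm * pixel_mm * slice_mm
print(voxel_volume_mm3)  # 5.0 cubic mm: a "shoe box" shaped voxel
```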

Reconstruction is the abstract "rebuilding" of something that has been torn apart [3]. In the medical imaging context, it is often necessary to acquire data from methods that essentially "tear" the data apart (or acquire the data one piece at a time) in order to be able to view what is inside. A big part of reconstruction is then being able to view, or visualize, all the data once it has been put back together again.

CT (X-ray computed tomography).
A beam of x-rays passes through the brain and is attenuated according to the density of the tissue encountered. Detectors positioned around the circumference of the scanner collect attenuation readings (acquiring a number of projections) from multiple angles. These different views through the patient must then be combined using a computerized algorithm to reconstruct the three-dimensional object.
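The combination step can be illustrated with a toy sketch: forward-project a single point source at many angles, then smear each projection back across the image (simple unfiltered backprojection, using only NumPy). Real scanners use filtered backprojection or iterative algorithms, so this shows only the principle, not a production reconstructor:

```python
import numpy as np

N = 64
phantom = np.zeros((N, N))
phantom[40, 24] = 1.0  # a single point source

angles = np.deg2rad(np.arange(0, 180, 1))
ys, xs = np.mgrid[0:N, 0:N]
xc, yc = xs - N / 2, ys - N / 2  # coordinates centred on the image

def project(img, theta):
    """Forward projection: bin pixel values by signed distance to a detector at angle theta."""
    s = xc * np.cos(theta) + yc * np.sin(theta)
    bins = np.clip(np.round(s + N / 2).astype(int), 0, N - 1)
    proj = np.zeros(N)
    np.add.at(proj, bins.ravel(), img.ravel())
    return proj

sinogram = np.array([project(phantom, t) for t in angles])

# Unfiltered backprojection: smear each projection back across the image.
recon = np.zeros((N, N))
for proj, theta in zip(sinogram, angles):
    s = xc * np.cos(theta) + yc * np.sin(theta)
    bins = np.clip(np.round(s + N / 2).astype(int), 0, N - 1)
    recon += proj[bins]

peak = np.unravel_index(np.argmax(recon), recon.shape)
print(peak)  # the brightest pixel sits at the original point source, (40, 24)
```

Only the source pixel lies on every projected line through it, so it accumulates the most signal; surrounding pixels receive the characteristic 1/r blur that the ramp filter of filtered backprojection is designed to remove.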
MRI (magnetic resonance imaging).
When protons are placed in a magnetic field, they become capable of receiving and then transmitting electromagnetic energy. The strength of the transmitted energy is proportional to the number of protons in the tissue. Signal strength is modified by properties of each proton's microenvironment, such as its mobility and the local homogeneity of the magnetic field. The MR signal can be "weighted" to accentuate some properties and not others. In MRI, the imaging device acquires a number of cross-sectional planes of data through the tissue being studied. Since all of these planes must be stacked back together to obtain the complete picture of the tissue, MRI entails some amount of reconstruction and a great deal of visualization, too.
When an additional magnetic field is superimposed, one which is carefully varied in strength at different points in space, each point in space has a unique radio frequency at which the signal is received and transmitted. This makes constructing an image possible. It represents the spatial encoding of frequency, just as each key of a piano sounds at a unique pitch.
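In other words, the acquired MR data are samples of the image's spatial-frequency (Fourier) content, and reconstruction is essentially an inverse Fourier transform. A minimal sketch of that idea, ignoring coil sensitivities, noise, and realistic k-space sampling trajectories:

```python
import numpy as np

# A toy "tissue" image: a rectangular block of proton density.
img = np.zeros((32, 32))
img[8:24, 10:20] = 1.0

# The scanner effectively samples spatial frequencies (k-space),
# not the image directly.
kspace = np.fft.fft2(img)

# Reconstruction is an inverse 2D Fourier transform.
recon = np.fft.ifft2(kspace).real
print(np.allclose(recon, img))  # True: the image is recovered exactly
```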
Serial-Section Microscopy.
In Serial-Section Microscopy, the tissue being studied is sectioned into a number of slices and each slice is put into a microscope. Then, images are captured of each slice. To recreate how the tissue looked before we sectioned it, we must put all the images of all these slices back together again, just as if we were putting the real slices of tissue back together again.
Confocal Microscopy.
In confocal microscopy, the microscope can obtain a single plane of image data without having to slice the tissue. In this case, we don't need to realign the images of the slices much, but just stack them back together and then visualize the result.

So, since putting the slices back together is easier for confocal, MRI, or CT, why do serial-section reconstructions at all? Well, it turns out that the smaller the thing you are looking for, the more difficult it is to get an imaging technique like MRI or confocal to "see" it, because the information from surrounding areas blurs out what you are looking for (there are a lot of other technical reasons, but let's just stick with this for the moment). So the smallest thing that MRI can "see" is about 1 mm cubed. For confocal microscopes, the smallest object that's detectable is about 1/10 µm (1/10,000 mm). But once you slice the object up, you can use other forms of microscopy, such as electron microscopy, to see objects almost as small as 1/10 nm (1/10,000,000 mm). There are even newer "atomic force" microscopes that let you detect individual atoms, but not many people (as yet) have done 3D reconstructions at this minute level. The problem, though, is that the smaller you go, the more artefacts can be introduced, so the reconstruction process gets much more difficult.

Fig 1. MRI section of a human head showing different slice orientations of the reconstructed volume.

3. Functional imaging

The most common form of nuclear medicine scan uses a gamma-ray emitting radio-isotope bound to a chemical with known physiological properties. After it is administered, single photons emitted by the decaying isotope are detected with a gamma camera [4]. These cameras consist of a lead collimator to ensure that all detected photons propagate along parallel paths, a scintillation crystal to convert high-energy photons to visible light, and photo-multiplier tubes and associated electronics to determine the position of each incident photon from the light distribution in the crystal. A two-dimensional histogram of the detected events forms a projection image of the distribution of the radio-isotope and hence of the chemical compound. An example of such a procedure is a cardiac study using thallium-201. Image intensity is indicative of cardiac perfusion and can be used to diagnose defects in the blood supply. It is widely used to screen for bypass surgery.
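The "two-dimensional histogram of detected events" is easy to mimic: given the (x, y) position the camera electronics report for each photon, the planar image is simply a 2D histogram. The event positions below are simulated, not real camera output:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated detected-photon positions (cm) scattered around a hot spot at (2, -1).
x = rng.normal(2.0, 0.5, 10_000)
y = rng.normal(-1.0, 0.5, 10_000)

# The planar image is a 2D histogram of the detected events.
image, xedges, yedges = np.histogram2d(x, y, bins=64, range=[[-10, 10], [-10, 10]])

# The brightest bin lies near the source position.
ix, iy = np.unravel_index(np.argmax(image), image.shape)
print(xedges[ix], yedges[iy])  # approximately (2, -1)
```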

Planar imaging with gamma cameras has three major shortcomings. First, the images are projection images, so the organ of interest can be obscured by activity in front of or behind the organ of interest. Moreover, photons originating in the organ of interest can be attenuated by overlying tissue. This is a problem, for example, in scans of obese women, where attenuation in the breast can be misinterpreted as a cardiac defect. Second, the radio-pharmaceuticals must incorporate relatively heavy isotopes such as thallium-201 and technetium-99m. Since these elements do not occur naturally in biologically active molecules, the synthesis of physiologically useful tracers incorporating them is a challenging technical problem. This restricts the number of available radio-pharmaceuticals. Finally, the lead collimator absorbs many photons, thereby reducing the sensitivity of the camera.
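The attenuation problem can be quantified with the Beer-Lambert law, I = I0 · exp(-μd). The coefficient below (about 0.15 per cm for soft tissue at the 140 keV energy of technetium-99m) is an approximate, commonly quoted value, used here only to give a feel for the size of the effect:

```python
import math

mu = 0.15  # per cm: approximate attenuation coefficient of soft tissue at 140 keV (assumed value)
for depth_cm in (5, 10):
    surviving = math.exp(-mu * depth_cm)
    print(f"{depth_cm} cm of tissue: {surviving:.0%} of photons survive unattenuated")
```

Even a modest depth of overlying tissue removes half or more of the photons, which is why uncorrected attenuation (e.g. by the breast) can mimic a perfusion defect.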

These issues are being addressed. The problems with projection imaging can be overcome by acquiring tomographic data with a rotating gamma camera and then correcting for attenuation in a tomographic reconstruction. This method is called single-photon emission computed tomography (SPECT) [4,5]. Continuing research in radio-chemistry has made more radio-pharmaceuticals available. Finally, newer SPECT cameras with two or three rotating heads have improved the sensitivity. Nevertheless, single photon imaging still suffers from problems of poor sensitivity and poor quantitative accuracy [6]. SPECT images of the brain can be acquired using either multidetector or rotating gamma camera systems. Each imaging system has its own advantages; the choice of equipment depends on the level of utilization and on the purposes for which the technique will be applied [7]. SPECT images are generated using gamma cameras or ring-type imaging systems that record photons emitted by tracers trapped in the brain. SPECT results in better image quality than planar (2-D) imaging because focal sources of activity are not superimposed upon each other; hence the signal-to-noise ratio (i.e. the contrast between the target and the background activity) is greatly increased.

The high collection efficiency of the multidetector system makes rapid scanning of an entire slice possible. The primary advantage of this system is its high sensitivity, resulting in high spatial resolution and rapid imaging of the organ. As a result, SPECT perfusion images of the brain can be obtained with a spatial resolution of 10 mm FWHM (full width at half maximum) in the plane of the slice. The multidetector system would, therefore, be the preferred instrument for studies requiring high spatial resolution, regional quantification, or rapid sequential imaging. The rotating gamma camera approach is preferable for routine clinical imaging because of its availability and because it can be used for other types of tomographic and non-tomographic imaging. The major constraint on rotating gamma camera tomography is sensitivity. The low sensitivity on each tomographic slice is compensated for by the fact that the gamma camera collects volumetric information, in contrast with the single-slice information obtained with multidetector systems.

Improvements in camera design, in collimator design (e.g. the slant hole, long bore, or fan beam collimators) and in reconstruction algorithms have substantially improved the quality of SPECT perfusion images using the Anger-type gamma camera. Satisfactory tomographic imaging has been achieved with the rotating gamma camera using all brain perfusion agents. With the rotating gamma camera, data is collected from multiple views obtained as the sodium iodide detector rotates about the patient's head. Since spatial resolution and image quality are dependent on the total number of primary, unscattered photons recorded by the detector, gamma cameras have been designed with multiple detectors to improve instrument sensitivity. Three- and four-head cameras allow a marked improvement in spatial resolution (6 to 10 mm FWHM, compared with 14 to 17 mm for single-head systems) without increasing the examination time. Special purpose ring-type imaging systems are also designed to maximize the amount of detector area recording activity from the brain. They use multiple detectors or a single sodium iodide ring and collect activity simultaneously from either a single or multiple slices (multidetector systems) or from all regions of the brain (annular detectors). Special purpose systems produce high quality images with a spatial resolution of 5 to 6 mm FWHM.

The volume imaging capacity of most SPECT systems permits reconstruction at any angle, including the axial, coronal and sagittal planes, or at the same angle of imaging obtained with CT or MRI to facilitate image comparisons. SPECT images can be merged with MRI and CT, creating a single image that combines anatomy and physiology (morphological and functional correlation). Three-dimensional surface and volume rendered images add perspective and facilitate the localization and sizing of lesions.

Fig 2. Brain images acquired with different imaging modalities.

Fig 3. Multimodality 2D slice fusion (MRI-SPECT).

Positron emission tomography (PET) [8] has inherent advantages that avoid these shortcomings. Attenuation correction is easily accomplished; positron-emitting isotopes of carbon, nitrogen, oxygen, and fluorine (fig. 4) occur naturally in many compounds of biological interest and can therefore be readily incorporated into a wide variety of useful radio-pharmaceuticals; and collimation is done electronically, so no collimator is required, leading to relatively high sensitivity. The major problem with PET is its cost. The short half-life of most positron-emitting isotopes requires an on-site cyclotron, and the scanners themselves are significantly more expensive than single-photon cameras. Nevertheless, PET is widely used in research studies and is finding growing clinical acceptance, primarily for the diagnosis and staging of cancer.


Fig 4. 18F-labeled 2-deoxyglucose (FDG) is used in neurology, cardiology and oncology to study glucose metabolism. In cardiology, 18F-labeled FDG can be used to measure regional myocardial glucose metabolism. Although glucose is not the primary metabolic fuel of the myocardium, glucose utilization has been extensively studied as a metabolic marker in both diseased and normal myocardium. Because 18F-labeled FDG measures glucose metabolism it is also useful for tumor localization and quantitation. FDG is potentially useful in differentiating benign from malignant forms of stimulated osteoblastic activity because of the high metabolic activity of many types of aggressive tumors.

A photograph of a PET scanner is shown in fig. 5. The subject is surrounded by a cylindrical ring of detectors with a diameter of 80-100 cm and an axial extent of 10-20 cm. The detectors are shielded from radiation from outside the field of view by relatively thick lead end-shields. Most scanners can be operated in either a slice-collimated mode, where axial collimation is provided by thin annular rings of tungsten called septa, or in a fully three-dimensional mode where the septa are retracted and coincidences can be collected between all possible detector pairs.


Fig 5. Photograph of a commercial PET scanner. A sketch of positron-electron annihilation and subsequent coincidence detection is also shown. Coincidence events are acquired at all possible azimuthal angles.

The most critical components of a PET camera are the detectors [9]. In some cases these are similar to those used in single-photon imaging: large crystals of sodium-iodide coupled to many photo-multiplier tubes (PMTs) [10]. The more commonly used configuration is shown in fig. 5. In these detectors a rectangular bundle of crystals, a block, is optically coupled to several PMTs. When a photon interacts in the crystal, electrons are moved from the valence band to the conduction band. These electrons return to the valence band at impurities in the crystal, emitting light in the process. Since the impurities usually have metastable excited states, the light output decays exponentially at a rate characteristic of the crystal. The ideal crystal has high density so that a large fraction of incident photons scintillate, high light output for positioning accuracy, fast rise-time for accurate timing, and a short decay time so that high counting rates can be handled. Most current scanners use bismuth-germanate (BGO), which generates approximately 2500 light photons per 511 keV photon, and has a decay time (i.e., time-constant) of 300 ns. One such block, for example, couples a 7x8 array of BGO crystals to four PMTs where each crystal is 3.3 mm wide in the transverse plane, 6.25 mm wide in the axial dimension, and 30 mm deep. The block is fabricated in such a way that the amount of light collected by each PMT varies uniquely depending on the crystal in which the scintillation occurred [4]. Hence integrals of the PMT outputs can be decoded to yield the position of each scintillation. The sum of the integrated PMT outputs is proportional to the energy deposited in the crystal.
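The decoding of integrated PMT outputs into a scintillation position can be sketched with classic Anger logic: the position is a light-sharing ratio and the deposited energy is the summed signal. The four-PMT layout and the function below are a hypothetical illustration, not a vendor's actual decoding scheme:

```python
def decode_position(a, b, c, d):
    """Anger-logic decoding for a block read out by four PMTs.

    Assumed layout: a = top-left, b = top-right, c = bottom-left,
    d = bottom-right. Returns (x, y) in the range [-1, 1] and the
    summed signal, which is proportional to the deposited energy.
    """
    total = a + b + c + d            # proportional to energy deposited
    x = ((b + d) - (a + c)) / total  # left/right light sharing
    y = ((a + b) - (c + d)) / total  # top/bottom light sharing
    return x, y, total

# An event whose light falls mostly on the right-hand PMTs:
x, y, e = decode_position(10, 30, 10, 30)
print(x, y, e)  # 0.5 0.0 80 -> halfway to the right edge, vertically centred
```

In a real block detector these continuous (x, y) estimates are compared against a calibrated look-up table to assign the event to one of the 7x8 crystals.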

If the data are acquired in the slice-collimated (2D) mode, the lines-of-response connecting crystals can be binned into sets of parallel projections at evenly spaced angles as shown in fig. 6. Two characteristics are evident. First, samples are unevenly spaced, with finer sampling at the edges of the field-of-view than at the center. Second, the samples along the heavy solid line at angles one and three are offset by one-half of the detector spacing from samples at angle two. Therefore, adjacent parallel projections can be combined to yield one-half the number of projection angles with a sampling distance of one-half the detector width. A typical block might have 3.3 mm thick crystals, so the resulting sampling distance would be 1.65 mm (fig. 6).


Fig 6. Projection geometry (left) and sampling pattern in the transaxial plane for a PET scanner (right). Each segment in the detector ring represents one crystal. The solid lines show the parallel projections for the first angle, the dotted lines for the second angle, and the dashed lines for the third angle.

The Nyquist criterion is usually stated in medical imaging applications as requiring that the sampling distance be one-half the spatial resolution expressed as the full-width-at-half-maximum (FWHM). Hence, this block would support a spatial resolution of 3.3 mm. In fact, a scanner with this crystal size has a measured resolution that is somewhat worse, varying from 3.6 mm at the center of the field-of-view to 5.0 mm at 20 cm from the center. This occurs because scintillations usually consist of one or more Compton interactions followed by photoelectric absorption (assuming the photon is not scattered out of the crystal). Since a 511 keV photon travels on average 7.5 mm in BGO before interacting, the light output is spatially distributed, especially at large radial distances where it is often distributed across two crystals.

The best obtainable resolution is termed the intrinsic resolution. This resolution is rarely achieved in practice because unfiltered images are usually very noisy. Although current scanners have intrinsic resolutions of less than 5 mm, the final resolution of the image is usually greater than 8 mm because the reconstruction algorithms trade-off resolution for reduced image variance. This final resolution is called the reconstructed resolution. Therefore, the resolution of PET images as they are typically used is not determined by the detectors, but by the degree to which resolution must be degraded to achieve an acceptable image variance. Since the variance is determined by the numbers of counts that can be collected during the scan, the constraints that govern the clinically useful resolution of PET images are the dosage of the radio-pharmaceutical, the duration of the scan, the sensitivity of the scanner, and the count-rate capability of the scanner [11].
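The gap between intrinsic and reconstructed resolution can be approximated by treating both the detector response and the reconstruction filter as Gaussians, whose FWHMs add in quadrature. This is an idealization, but it shows how a modest smoothing filter pushes a sub-5 mm intrinsic resolution past 8 mm:

```python
import math

def final_fwhm(intrinsic_mm, filter_mm):
    """Combined FWHM when two Gaussian blurs are applied in sequence:
    Gaussian widths add in quadrature."""
    return math.sqrt(intrinsic_mm ** 2 + filter_mm ** 2)

# A 5 mm intrinsic resolution smoothed with a 7 mm reconstruction filter:
print(round(final_fwhm(5.0, 7.0), 1))  # 8.6 mm reconstructed resolution
```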

4. Image visualization

Volume visualization of medical images has become a standard tool, which can be applied for many different purposes. While in some fields, such as planning of craniofacial surgery on the basis of CT images, 3D visualization is frequently applied, other fields still suffer from difficulties that prevent wide use of these methods. For clinical applications the main difficulties arise from the segmentation needed for every single case to define the structures to be visualized. Although countless segmentation approaches exist, none of them solves the problem in general, and the time required for segmentation usually remains high. Thus, in clinical routine, many applications for which 3D visualization might be useful do not use these methods. The situation changes if we consider teaching applications in radiology or anatomy, for which segmentation has to be done only once for a few typical cases. Nevertheless, these systems suffer from a lack of realism and detail, because they are typically based on radiological cross-sectional images from CT or MRI, which have a limited spatial resolution of only about 1 mm and which do not yield any color values for realistic visualization. These difficulties can be overcome if the high-resolution color images of the Visible Human Project [12] are used. These are basically anatomic images, whose important link to radiology is established by corresponding stacks of CT and MRI slices.

Segmentation is a prerequisite for 3D visualization of image volumes and high-quality 3D rendering. It has turned out to be extremely difficult to formalize for automatic computation. Segmentation tools implemented so far are simple thresholding and morphological operations. During the past years, various visualization algorithms for image volume data have been developed that deliver very realistic images. None of the algorithms published so far is able to succeed automatically on any data set or object, although there are many methods suitable for special objects or for input data that have to be obtained with precisely prescribed parameters.

For simple objects, such as bone from CT data, thresholding might be sufficient. For other objects or modalities, there will be different objects within the same intensity range, so that pure thresholding fails. Region growing can be applied successfully in some cases, but different objects might be connected by small links, so that they would be segmented together. Morphological operators [13] can be useful for the refinement of object contours detected in a previous segmentation step, but usually manual removal of incorrectly linked boundaries will be necessary. Morphological operations can also be applied to regions generated from thresholding in order to split the regions into meaningful objects by application of fixed sequences of operators [14]. The difficulty here is to define a sequence that is valid for every data set. The same difficulty occurs in rule-based systems defining globally valid rules. The drawback of all the above methods is that the complete knowledge necessary for successful segmentation cannot be included.
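The difference between pure thresholding and region growing can be made concrete: two objects share the same intensity range, but growing from a seed selects only the connected component. Below is a minimal 4-connected region grower in NumPy, a sketch of the idea rather than a clinical-grade segmenter:

```python
import numpy as np

def region_grow(img, seed, lo, hi):
    """Grow a region from `seed`, accepting 4-connected pixels whose
    intensity lies in [lo, hi]. Minimal illustrative implementation."""
    mask = np.zeros(img.shape, dtype=bool)
    stack = [seed]
    while stack:
        y, x = stack.pop()
        if mask[y, x] or not (lo <= img[y, x] <= hi):
            continue
        mask[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]:
                stack.append((ny, nx))
    return mask

# Two "objects" in the same intensity range: pure thresholding would
# select both, but growing from a seed inside object A selects only A.
img = np.zeros((10, 10))
img[1:4, 1:4] = 100   # object A
img[6:9, 6:9] = 100   # object B, same intensity
mask = region_grow(img, (2, 2), 50, 150)
print(mask.sum())  # 9: only the 3x3 block of object A is selected
```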

Once initially reconstructed as slices (as it is now, most biomedical imaging machines are tomographs), a 3D representation of the volume seen through the sensor's physical principles can be created by stacking the slices together. As most modalities also have the capability of sampling acquisition data through time, they add the time dimension (the famous 4th one) to this 'cube'. The sensor dimension itself adds one more axis to this already complex space, and we finally end up with a five-dimensional space. In such a situation, there was no real choice left but to coin the even more generic name 'MultiDimensional' as the most likely to include all the application space within the scope of the related projects. Several segmentation techniques may be used which produce contours or surfaces (both from points) representing as well as possible the region of interest under scrutiny. These geometric components can then be visualized and manipulated in 3D, in (near) real-time depending on their complexity and the equipment used. Movies can also be created to record the interactive or programmed motion of the structures, or to simulate this functionality and export it to less powerful workstations. Here is a collection of examples of local clinical and research applications in this seemingly ever-expanding field of biomedical imaging, which took some time to take off but now seems impossible to leave aside [15].
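The multidimensional space described above maps naturally onto a five-axis array: time, slice, row, column, and sensor/modality. A hedged sketch of such a container (the axis ordering and sizes are arbitrary choices for illustration):

```python
import numpy as np

# (time, slice, row, column, sensor): the five axes discussed in the text.
n_time, n_slices, ny, nx, n_sensors = 4, 16, 128, 128, 2
volume = np.zeros((n_time, n_slices, ny, nx, n_sensors), dtype=np.float32)
print(volume.ndim)  # 5

# One tomographic slice from the first modality at the first time point:
slice0 = volume[0, 0, :, :, 0]
print(slice0.shape)  # (128, 128)
```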


Fig 7. Volume rendering from CT data. The color and transparency of every kind of tissue (density for CT) can be adjusted separately before projecting the data onto the visualization plane. The volume can be manipulated in 3D, interactively, on high-end workstations.


Fig 8. Segmentation of MRI data. The data acquired by any type of modality can be segmented by various methods. The techniques generally used range from simple editing or thresholding-windowing on the data values to more complex region growing from a seed and according to various criteria. There are also other techniques which use a-priori knowledge about the structures to segment (e.g. scene recognition).

Fig 9. Ictal SPECT and EMT data projected on the corresponding anatomy extracted from MRI.

Fig 10. Multiview simulated neurosurgical stereotaxy based on data from CT.

Fig 11. 3D reconstruction from gated MR data.

Fig 12. 3D reconstruction from CT data of a broken cotyloid cavity for the planning of an orthopedic surgery intervention.

Fig 13. 3D reconstruction from PET data of a breast tumor. The bust was reconstructed from transmission data, while the tumor was segmented from emission data.

The OSIRIS Imaging Software.

The OSIRIS software has been designed as a general medical image manipulation and analysis program [16]. The design is mainly based on the following criteria: portability, extendibility and suitability for any imaging modality. OSIRIS is designed to deal with images provided by any type of digital imaging modality, allowing physicians to easily display and manipulate images from different imaging sources using a single generic software program. Portability ensures the software can be implemented on different types of computers and workstations; thus, the user can work in the same way, with exactly the same graphical user interface, on different stations. Also, by supporting standard file formats, the OSIRIS software provides access to images from any imaging modality. The OSIRIS program was developed as part of the Geneva PACS project and is intended for physicians and non-computer-oriented users, allowing them to display and manipulate medical images. Its original version included only basic image manipulation tools accessible through a convenient and user-friendly graphic interface. In addition to being used at the University Hospital of Geneva, it was widely distributed around the world and was adjusted according to users' comments and suggestions. This program was also designed to serve as a development platform for more advanced image processing and analysis tools.

Portability.
The initial development of the OSIRIS program was undertaken simultaneously on the UNIX-based X window graphic environment as well as the native Apple Macintosh platform. The UNIX version was further tested on a variety of workstations, namely SUN Sparc series, DEC Alpha series, HP 7000 series and IBM RISC-6000 series machines. Recent evolution in the desktop computing environment led us to develop a new kernel of the OSIRIS software compatible with the Windows 95 graphic interface for PC-compatible computers. This new version is also directly compatible with Windows NT and may be used on Windows 3.11 with some restrictions in performance. Finally, we also ported our code to run natively on PowerPC RISC computers to fully benefit from the enhanced performance of this new generation of processors.
Enhancement of basic features of the program.
Through almost three years of clinical use in Geneva as well as at other institutions where the program was tested, we collected a large number of suggestions and wishes from the users that allowed us to significantly improve the basic features of the program. Display of image sets: In the original design we allowed sets of images to be displayed in a single window according to two modes: the stack mode and the tile mode. In the tile mode, images were laid out on a grid where the number of rows and columns was determined by the program. Several users requested the possibility to modify the predetermined layout by selecting the number of rows or columns. This allows sets of images to be displayed on a single row as a continuous strip, allowing multiple sequences to be displayed side by side on the screen. Regions of interest: The notion of ROI (Region Of Interest) is the basis for most processing and analysis tools provided by OSIRIS. An extension of this notion has been added: the notion of "volume" ROI. A volume can be described by a set of ROIs assigned on several images in a contiguous set of slices. ROIs are identified by their type and their name. A specific data window displays the results and a 3D representation of the outlined volume. Finally, for automatic outlining of regions of interest, a new tool has been provided based on automatic segmentation using a region growing algorithm.
Image display facilities.
The zoom coefficient, located at the bottom left of the main window, is now a popup menu that allows direct selection of the desired zoom factor. A more direct way of adjusting the contrast and intensity window has also been provided: a new button was added to the left panel of the main window to adjust the windowing directly with mouse motion. Horizontal movements modify the window width while vertical movements modify the window level.
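The width/level windowing adjusted by this mouse interaction amounts to a linear grey-level mapping; a minimal sketch, assuming an 8-bit display range:

```python
def apply_window(value, level, width):
    """Map a raw pixel value to an 8-bit display value using a window
    defined by its centre (level) and its width."""
    lo = level - width / 2.0
    hi = level + width / 2.0
    if value <= lo:
        return 0          # below the window: black
    if value >= hi:
        return 255        # above the window: white
    return int(round((value - lo) / (hi - lo) * 255))
```

Narrowing the width increases contrast within the window, while shifting the level moves the window up or down the intensity scale.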
Dynamic links to other programs.
In order to facilitate access to text documents such as interpretation reports associated with a set of images, a special feature was added to access word processing files associated with a PAPYRUS file. When this feature is activated, OSIRIS looks for an available text editor or word processing application and tries to open the document file having the same name as the PAPYRUS file currently displayed but identified by the extension ".doc". If no such file exists, OSIRIS asks whether it should be created.
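The lookup described above can be sketched in a few lines; the function name is hypothetical, and only the behaviour (same base name, ".doc" extension) follows the description:

```python
from pathlib import Path

def find_report(papyrus_path):
    """Return the path of the report file associated with a PAPYRUS image
    file (same base name, '.doc' extension), or None if it is absent."""
    doc = Path(papyrus_path).with_suffix(".doc")
    return doc if doc.exists() else None
```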
Extended processing and analysis tools.
OSIRIS has been designed as an extensible platform, meaning that new tools can be easily developed and integrated into the software. As these tools must be portable across the different workstations supported by OSIRIS, it was important to provide the potential developer with a library independent from the underlying system and, more specifically, from the windowing system; indeed, the most system-dependent part of a tool is usually its user interface. Two basic components allow the creation of system-independent tools: the dialog manager and the data window manager. The dialog manager creates dialog windows containing different types of elements to which callback operations can be attached. The data window manager displays results in separate windows in which mouse interaction is possible.
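The callback mechanism underlying such a windowing-system-independent dialog manager can be sketched as follows; the class and method names are hypothetical, not the OSIRIS API:

```python
class DialogManager:
    """Toy system-independent dialog: named elements with callbacks.
    A platform-specific layer would translate user events (clicks,
    key presses) into calls to trigger()."""

    def __init__(self):
        self._callbacks = {}

    def add_element(self, name, callback):
        # Register a dialog element (button, menu entry, ...) by name.
        self._callbacks[name] = callback

    def trigger(self, name):
        # Invoked by the windowing layer when the element is activated.
        return self._callbacks[name]()
```

A tool developer only registers named elements and callbacks; the platform-specific event loop, which differs between X Window, Macintosh and MS Windows, stays hidden behind this interface.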
Quantitative analysis tools.
ROIs can be used to develop analysis tools. This is what was done, for example, with the ejection fraction measurement tool, which computes the ejection fraction of the left ventricle of the heart according to five different methods. To this end, the left ventricle must be outlined by a polygonal ROI for the diastolic and systolic phases. According to the number of different views and their orientation, one of the methods is automatically applied.
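Whatever the method, all rest on the basic relation EF = (EDV - ESV) / EDV; a minimal volume-based sketch (not one of the five OSIRIS methods specifically):

```python
def ejection_fraction(edv, esv):
    """Left-ventricular ejection fraction (%) from the end-diastolic
    volume (edv) and end-systolic volume (esv), in the same units."""
    if edv <= 0:
        raise ValueError("end-diastolic volume must be positive")
    return (edv - esv) / edv * 100.0
```

The five methods differ in how EDV and ESV are estimated from the polygonal ROIs drawn on the available views (e.g. area-length or geometric models), not in this final relation.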

After six years of development and clinical evaluation, the OSIRIS software has gained maturity and stability and is now much closer to real user requirements than the initial prototype. It has entered a phase of refinement and clinical enhancement through the development of specific analysis and processing tools. Its portability among different hardware platforms ensures a constant evolution in performance through the best hardware available on the market.

6. Picture Archiving and Communication Systems

With the rapid development of digital imaging modalities in medicine, there is an increasing need for efficient management and archiving of medical images in digital form. Systems providing these services are often referred to as Picture Archiving and Communication Systems (PACS). At the HCUG, we currently use biomedical imaging equipment from various vendors: the spiral CT (PQ2000) and MR (Edge 1.5T) units are from Picker, our PET machine is a prototype RPT from Siemens, and our SPECT machines are principally a two-head system from Picker and a three-head system from Toshiba. We are also building our radiochemistry lab around a cyclotron from IBA. All machines are connected to a global hospital-wide network, and CT and MR images are accessible through our PACS developments.

Because of the availability of commercial medical imaging workstations with 3D imaging options, basic 3D visualization capabilities are now available directly from the acquisition machines. Despite the generalization of the DICOM standard for transferring and storing images, these machines still do not generally take proper care of multimodality concepts or of data coming from a different environment. For these reasons, projects belonging to the MD-MS class still have to be handled by more research-oriented teams and workstations. In any case, to distribute the MD capability closer to the acquisition and reviewing wards, we acquired several commercial 3D workstations attached to specific tomographs: one VistarXL from Picker for the MR, one VoxelQ from Picker for the CT, one Odyssey VP from Picker for the double-head Picker SPECT, and one GMS-5500 from Toshiba for the three-head GCA-9300 SPECT. Various Suns, Macs and PCs are also distributed on the network and may act as reviewing stations with the help of OSIRIS, our PACS display window.

The University Hospital of Geneva has initiated a hospital-wide PACS project with the aim of developing an integrated image management system for radiological as well as nonradiological medical images [17]. It is based on a multi-vendor open architecture and a set of widely available industry standards, namely UNIX as the operating system, TCP/IP as the network protocol, and an SQL-based distributed database (INGRES). The main characteristic of this PACS is that it is directly part of a large-scale Hospital Information System (HIS) called the DIOGENE system. The PACS is based on a distributed architecture of clusters of Archive Servers equipped with large optical disk libraries (jukeboxes) and Display Servers distributed over the hospital. A standard image storage format called the PAPYRUS format was developed based on the ACR-NEMA standard. In order to provide a more uniform user interface on a variety of different workstations, a common platform for image display and manipulation called OSIRIS was developed and is now available on computers running UNIX and X Window as well as on Macintosh and personal computers running MS Windows. The software is written in the object-oriented language C++ and is easily expandable and adaptable to different needs.

The current configuration comprises a first Archive Server that has been operational since January 1992 for the systematic storage of all images acquired from two CT scanners, an MRI unit and two ultrasound suites. A second archive unit was put into clinical operation in January 1994 and is collecting Computed Radiography (CR) images from the hospital's emergency rooms as well as images obtained from a digital fluoroscopy unit dedicated to GI and contrast-enhanced radiological investigations. Each archive unit has a storage capacity of over one terabyte on optical disks as well as a magnetic disk cache of several gigabytes for faster access to recent images. Other developments completed so far include standard interface units for image acquisition from the Nuclear Medicine department (6 imaging units including planar, SPECT and PET scintigraphic images) as well as from the cardiology division (ultrasound and angiography images). Future plans for the project include the implementation of additional archive units for the storage of these images, which are now accessible on the PACS network in our internal standard format.

The PACS developed at the University Hospital of Geneva (fig. 14) is based on a distributed architecture with hierarchical archiving of images and related data and with multiple archive and display servers. This type of architecture currently seems to be the only viable solution for a large-scale PACS where images from a variety of imaging modalities are accessed from a large number of consultation points. The choice of a very modular architecture with clusters of archive and display servers allows for progressive planning of the PACS implementation on a hospital-wide scale. Different PACS modules will be progressively implemented (for intensive care units, molecular biology images, nuclear medicine, cardiology, etc.). All these modules will be developed on a similar architecture even if the hardware choices may differ depending on the availability of newer and higher-performance equipment. A distributed database allows access to all the images from the different modules. Prefetch algorithms are being developed to anticipate needs and regulate the traffic of images between the archive servers and the display servers.
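The prefetching idea can be sketched as a simple heuristic; the data structures and the newest-first policy below are illustrative assumptions, not the actual Geneva algorithm:

```python
def prefetch_list(scheduled_patients, archive_index, max_items):
    """Select prior studies to move from an archive server to a display
    server ahead of the scheduled examinations: most recent studies
    first, up to `max_items` to respect cache and network limits."""
    candidates = []
    for patient_id in scheduled_patients:
        candidates.extend(archive_index.get(patient_id, []))
    # Newest prior studies are the most likely to be requested.
    candidates.sort(key=lambda study: study["date"], reverse=True)
    return candidates[:max_items]
```

Run overnight against the next day's examination schedule, such a selection moves likely-needed priors from the optical jukeboxes to the magnetic disk cache or display servers before they are requested.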


Fig 14. The Geneva Hospital Picture Archiving and Communication System.

The evolution of the DIOGENE HIS from a centralized system to a distributed architecture has promoted the concept of integrating the PACS directly as part of the HIS. In this architecture, images are viewed as another type of information to be handled by the HIS. This leads to a new concept of Ward Information System (WIS), where physicians and nurses can review and update their patients' medical records directly in digital form. After an initial phase of the project where images were only accessible in the Radiology Department, we are now progressively expanding image access to clinical units and wards. The PACS is intended as an added value to film-based routine operation. Convenient access to images in digital form allows simultaneous consultation of the same document in different locations and significantly reduces the amount of time wasted in retrieving radiological documents. Additionally, it offers new image manipulation and processing capabilities that are progressively being adopted by clinicians in a variety of instances. A fair amount of development effort is now being invested in the extension of the OSIRIS software for specific clinical needs by adding dedicated processing and analysis tools to the basic image manipulation functions. Furthermore, the progressive integration of images with other elements of the patient record, such as radiology reports, discharge letters and lab results, offers an attractive and efficient alternative to paper- and film-based records.

So far, access to images from the PACS is provided to physicians of the medical intensive care unit (MICU) as well as to the doctors' offices of the emergency rooms (ER). These two areas were selected because of their specific need for urgent access to radiological documents in the shortest possible time. It is in these clinical sections, with their heavy workload and rapid patient turnaround, that PACS can offer an attractive alternative to film-based operation by eliminating misplaced and inaccessible documents. Direct and reliable access to images minimizes the amount of time spent retrieving these documents and therefore allows more efficient patient management.

PACS workstations are also progressively being provided to physicians of other clinical units. In particular, the cardiology division and the orthopedic clinic have adopted the OSIRIS imaging platform for different specific uses. In cardiology, this platform makes it possible to integrate images from different imaging modalities in a common image processing environment where quantitative analysis and processing can be performed. Special tools were developed to allow routine calculation of functional parameters such as ejection fraction, wall motion analysis and flow quantification from a variety of imaging modalities (ultrasound, MRI, nuclear medicine and angiography). In orthopedics, similar applications are being explored: quantitative measurements obtained from digital CR images allow direct treatment planning and selection of appropriate prosthetic devices prior to surgical intervention.

Access to medical images through PACS workstations is progressively entering clinical routine and will soon become an essential part of physicians' diagnostic and therapeutic activities. An effort is now being geared toward more efficient and convenient image distribution to a large number of users with different needs and requirements. Recent studies demonstrated that the cost-benefit of PACS should not be evaluated as a resource of the radiology department alone but should extend to the entire hospital. The main benefit of PACS is expected outside the radiology department, where clinicians will benefit from more efficient access to the images. Combined access to the images and related data, such as the radiological report, annotations and analysis results, is a key feature of the beneficial effect of PACS on clinical routine. A substantial added value also comes from the image manipulation and processing features that can be provided to clinical users outside the radiology department. It is therefore important to allow for a flexible design of such tools, permitting easy customization of the workstation software to the different users' needs.

7. Discussion and Conclusion

Biomedical imaging has seen truly exciting advances in recent years. New imaging methods can now reflect internal anatomy and dynamic body functions heretofore only derived from textbook pictures, and applications to a wide range of diagnostic and therapeutic procedures can be envisioned. Not only can technological advances create new and better ways to extract information about our bodies, but they also offer the promise of making some existing imaging tools more convenient and economical.

While exponential improvements in computing power have contributed to the development of today's biomedical imaging capabilities, computing power alone does not account for the dramatic expansion of the field, nor will future improvements in computer hardware be a sufficient springboard to enable the development of the biomedical imaging tools described in this report. That development will require continued research in physics and the mathematical sciences, fields that have contributed greatly to biomedical imaging and will continue to do so.

The major topics of recent interest in the area of functional imaging involve the use of MRI and positron emission tomography (PET) to explore the activity of the brain when it is challenged with sensory stimulation or mental processing tasks, and the use of PET to investigate the physiological basis of societal health problems such as drug abuse and craving. As we begin to apply modern biology in gene therapy trials, dynamic and functional imaging methods are being called on to aid in evaluating the appropriateness and efficacy of therapies, as has been done for Parkinson's disease and is proposed for Alzheimer's disease. The emerging imaging methods have the potential to help unravel major medical and societal problems, including the mental disorders of depression, schizophrenia, and Alzheimer's disease and metabolic disorders such as osteoporosis and atherosclerosis.

An example of an entirely new development is the integration of real-time MRI as a means of monitoring interventional procedures ("interventional MRI"), a capability that would be particularly appealing in conjunction with emerging methods of minimally invasive surgery such as ablative procedures using lasers, cryoprobes, or focused ultrasound. Ultrasound, in fact, can be a completely non-invasive technique, since applying it does not involve surgery. While ultrasound has been used in the past as a source of heat for tissue destruction, its full potential cannot be realized without a capability for remote sensing of temperature. Fortunately, MRI is uniquely suited to on-line monitoring of focused ultrasound because of the temperature sensitivity of the signal. Interventional MRI systems incorporating new technology, such as superconducting magnets that allow physicians to have access to their patients during a scan, are already undergoing trials and are predicted to radically alter the way surgical procedures will be performed in the 21st century. Some of the research challenges that could contribute to the realization of this vision are described in [2].

Many of the envisioned innovations in medical imaging are fundamentally dependent on the mathematical sciences. Equations that link imaging measurements to quantities of interest must be sufficiently complex to be realistic and accurate, yet simple enough to be solvable, either by a direct "inversion formula" or by an appropriate iterative algorithm. In the early 1970s, computer methods and algorithms became powerful enough to allow some such equations to be solved in practical situations. But there is invariably noise in the measurements, and errors also arise because an exact inversion of the equations is impossible, either because the equations are only approximate or because the solution technique involves approximation. The development of mathematical methods for producing images from projections thus also requires a capability for overcoming errors or artefacts of the reconstruction method that arise from different sources, and much remains to be done. The result is a need for approximate reconstruction strategies or for the incorporation of prior or side information. In addition, computer simulation of imaging methods plays an essential role in separating errors due to noise from errors in the design of the mathematical methods, and simulation allows the mathematician and physicist to critically evaluate new ideas in the emerging field of dynamic biomedical imaging.

Since computers are continually increasing in speed and memory, it might seem at first that it is only a matter of time before iterative reconstruction methods are used routinely. However, the same advances in technology that lead to faster computers also lead to bigger and harder problems! For example, although computing speed has certainly reached the point where iterative methods are clinically feasible for 2D problems, the focus is now on 3D PET, where the size of the system matrix A is 11-15 times larger than in 2D (after exploiting symmetries). Similar considerations apply to cone-beam SPECT, or even to parallel-collimator SPECT with 3D compensation for detector response. Thus there is a continuing need for new ideas in image reconstruction algorithm development. Although some of those ideas will undoubtedly be borrowed from signal and image processing work, the algorithms must be based on accurate models of the physics and statistics of PET if they are to be fully effective. Convincingly demonstrating that new methods are truly more effective than previous ones requires careful matching of the resolution or noise properties of the methods compared. The medical imaging community is generally unconvinced by the type of anecdotal single-image comparisons often found in image processing papers. There is increasing emphasis on formal statistical evaluation of different image reconstruction methods, which is also being applied to image processing.
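The iterative methods discussed above can be illustrated with a minimal ML-EM (maximum-likelihood expectation-maximization) sketch, in which A is the system matrix linking image pixels to detector measurements; the toy problem size and iteration count below are illustrative assumptions, far from a clinical 3D implementation:

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """ML-EM reconstruction for the linear model y ~ Poisson(A @ x).

    A: system matrix (detector bins x image pixels)
    y: measured projection counts
    """
    x = np.ones(A.shape[1])          # uniform, strictly positive start
    sens = A.sum(axis=0)             # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                 # forward projection of the estimate
        ratio = np.where(proj > 0, y / proj, 0.0)
        x = x * (A.T @ ratio) / sens # multiplicative EM update
    return x
```

Each iteration requires one forward and one back projection through A; since in 3D PET the matrix is an order of magnitude larger than in 2D, the cost per iteration, rather than the update formula itself, is the practical bottleneck.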

Finally, it is worth pointing out the explosion in the use and utility of the Internet, including some resources of specific interest to the medical imaging community. The World Wide Web offers great potential for education and teaching and may become the major method of sharing and communicating medical information. The creation of "digital departments" [18,19] provides access to multimedia reporting (text, images, cines) from inexpensive client systems. Furthermore, the WWW will allow the use of Java applets to provide additional functionality, such as analysis, implemented on Java-compliant browsers.

REFERENCES

[1]
Webb S. The Physics of Medical Imaging. Medical Science Series, New York 1988.
[2]
Mathematics and Physics of Emerging Biomedical Imaging. National Academy Press, 1996.
[3]
Brooks RA and Di Chiro G. Principles of computer assisted tomography (CAT) in radiographic and radioisotopic imaging. Phys Med Biol 1976; 21: 689-732.
[4]
Mettler FA and Guiberteau MJ. Essentials of nuclear medicine imaging. Third edition, WB Saunders Company, 1991.
[5]
J. A. Sorenson and M. E. Phelps, Physics in Nuclear Medicine, 2nd ed. Orlando: Grune & Stratton Inc, 1987.
[6]
H. Zaidi. "Quantitative SPECT: Recent developments in detector response, attenuation and scatter correction techniques". Physica Medica, Vol. XII, No. 3 (1996) 101-117.
[7]
H. Zaidi. "Organ volume estimation using SPECT". IEEE Trans. Nucl. Sci., Vol. 43, No. 3, June 1996, 2174-2182.
[8]
Phelps, ME, Mazziotta, J. Schelbert, H (Eds): Positron Emission Tomography and Autoradiography. Raven, New York, 1986
[9]
Dahlbom M, Hoffman EJ, Hoh CK, Schiepers C, Rosenqvist G, Hawkins RA, Phelps ME. Evaluation of a positron emission tomography (PET) scanner for whole body imaging. J. Nucl. Med. 33:1191-1199, 1992.
[10]
Knoll G. Radiation Detection and Measurement. John Wiley & Sons, New York, 1979.
[11]
Phelps, ME. Positron emission tomography (PET). In: J. Mazziotta & S. Gilman, Eds., Clinical Brain Imaging: Principles and Applications., FA Davis, Philadelphia, 1992, pp. 71-107.
[12]
National Library of Medicine (U.S.) Board of Regents. Electronic imaging: Report of the Board of Regents. U.S. Department of Health and Human Services, Public Health Service, National Institutes of Health, 1990. NIH Publication 90-2197.
[13]
S. P. Raya and J. K. Udupa, "Low-level segmentation of 3-D magnetic resonance brain images: A rule-based system," IEEE Trans. Med. Imaging, vol. MI-9, no. 3, pp. 327-337, 1990.
[14]
K. H. Hohne and W. A. Hanson, "Interactive 3D-segmentation of MRI and CT volumes using morphological operations, "J. Comput. Assist. Tomogr., vol. 16, no. 2, pp. 285-294, 1992.
[15]
Bidaut L. Multidimensional, multisensor, biomedical imaging in clinical use. Physica Medica, Vol XII, Supplement 1, 1996: 94-102.
[16]
Ligier Y., Ratib O., Logean M., Girard C. OSIRIS: A Medical Image Manipulation System. M.D. Computing Journal. 1994, 4: 212-218.
[17]
Ligier Y, Ratib O, Funk M, Perrier R, Girard C, Logean M. Image manipulation software portable on different hardware platforms: what is the cost? Medical Imaging VI: PACS Design and Evaluation, Newport Beach, California, SPIE, 1992: 341-348.
[18]
Wallis JR, Miller MM, Miller TR, Vreeland TH. An Internet-based nuclear medicine teaching file. J Nucl Med 36: 1520-1527, 1995.
[19]
Parker JA, Wallis JW, Halama JR, Brown CV, Cradduck TD, Graham MM, Wu E, Wagenaar DJ, Mammone GL, Greenes RA, Holman BL. Collaboration using Internet for the development of case-based teaching files: report of the Computer and Instrumentation Council Internet Focus Group. J. Nucl. Med. 1996; 37: 178-184.