In otorhinolaryngological surgery, endoscopes have been in use for about two decades. This minimally invasive surgical technique has enabled considerable progress in therapy, but it demands highly experienced and skilled surgeons: the endoscopic view differs fundamentally from a direct view into the operating field, the lens system of the endoscope introduces distortions, and the space available for surgical instruments within body cavities, e.g. the sinuses, is very limited.
To minimize potential damage to sensitive anatomical structures, e.g. the optic nerve or the internal carotid artery, minimally invasive surgery stands to profit greatly from modern surgical navigation. Driven by the increasing quality and availability of 3D imaging modalities, the position of surgical instruments is monitored relative to a 3D scene by means of modern virtual reality techniques.
This project aims at the development of tools for virtual endoscopy based on multi-modal image volumes.
Algorithms employing modern surface-to-point matching methods are developed for the registration of MR and CT images.
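Such a surface-to-point matching can be illustrated with an ICP-style rigid alignment. The Python sketch below is purely illustrative: it assumes point clouds already extracted from segmented surfaces of the two volumes, and the closest-point correspondence plus SVD-based transform estimate is a generic choice, not necessarily the method implemented in this project.

import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(moving_pts, fixed_surface_pts, iters=50, tol=1e-6):
    """Iteratively match moving points to their closest fixed-surface points."""
    tree = cKDTree(fixed_surface_pts)
    R_total, t_total = np.eye(3), np.zeros(3)
    pts = moving_pts.copy()
    prev_err = np.inf
    for _ in range(iters):
        dists, idx = tree.query(pts)    # closest-point correspondences
        R, t = best_rigid_transform(pts, fixed_surface_pts[idx])
        pts = pts @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total

The resulting rotation and translation can then be used to resample one modality into the coordinate frame of the other.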
Endoscopy is simulated from these data volumes by computing 3D images for the viewing geometry of the endoscope. Perspective volume rendering, which implements a general camera model adaptable to the geometrical properties of various lens systems, is used to model the optical distortions.
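The perspective rendering step can be sketched as a ray caster whose ray directions are derived from a camera model with a distortion term. The following minimal example assumes a pinhole model with a single radial coefficient k1 and maximum-intensity compositing; both are placeholders standing in for the more general camera model and compositing used in the project.

import numpy as np
from scipy.ndimage import map_coordinates

def render_endoscopic_view(volume, cam_pos, cam_R, fov_deg=90.0, k1=-0.2,
                           size=(128, 128), n_steps=256, step=1.0):
    """Return a maximum-intensity image for the given endoscope pose."""
    h, w = size
    f = 0.5 * w / np.tan(np.radians(fov_deg) / 2.0)    # focal length in pixels
    ys, xs = np.mgrid[0:h, 0:w]
    u = (xs - w / 2.0) / f
    v = (ys - h / 2.0) / f
    r2 = u**2 + v**2
    u, v = u * (1 + k1 * r2), v * (1 + k1 * r2)        # simple radial distortion
    dirs = np.stack([u, v, np.ones_like(u)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    dirs = dirs @ cam_R.T                              # rotate rays into volume frame
    image = np.zeros(size)
    for i in range(n_steps):                           # march along each ray
        pts = cam_pos + dirs * (i * step)              # (h, w, 3) sample positions
        samples = map_coordinates(volume, pts.reshape(-1, 3).T,
                                  order=1, mode='constant', cval=0.0)
        image = np.maximum(image, samples.reshape(size))
    return image

In practice the distortion parameters would be calibrated for each endoscope optic so that the simulated view matches the real camera image.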
Additionally, a 3D visualization tool showing the endoscope's position relative to a rendered overall view of the operation field is developed to enable intuitive orientation for the surgeon.
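Conceptually, such a navigation overlay reduces to mapping the tracked endoscope position into the coordinate frame of the rendered image data via the patient registration. A minimal sketch, with a hypothetical 4x4 tracker-to-CT transform, follows.

import numpy as np

def endoscope_tip_in_ct(tip_in_tracker, T_tracker_to_ct):
    """Transform a 3D tip position from tracker coordinates to CT coordinates."""
    tip_h = np.append(tip_in_tracker, 1.0)             # homogeneous coordinates
    return (T_tracker_to_ct @ tip_h)[:3]

# Example: an identity registration leaves the tracked position unchanged.
print(endoscope_tip_in_ct(np.array([10.0, 5.0, 30.0]), np.eye(4)))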
This project combines complementary information from several medical imaging modalities for efficient navigation, meeting the demands of a modern environment for minimally invasive surgery.