© RIT, 2016
Building Standing Posture Panorama X-ray Images Using an Automated Image Processing Pipeline
Due to the digital X-ray imaging system's limited field of view, several individual sector images are required to capture an individual's posture in the standing position. These sector images are then stitched together to reconstruct the full standing posture. We have created an image processing application that automates the stitching, thereby minimizing user input, optimizing workflow, and reducing human error.
The process begins with pre-processing the input images: removing artifacts, filtering out extraneous pixels, and enhancing a continuous bone edge. The resulting binary images are then registered using a rigid-body, intensity-based mutual-information algorithm. As a result, our method relies primarily on the anatomical content of the images. We tested the robustness of our method by comparing the resulting automatic translations to the manually stitched ground-truth translations, along with the means and standard deviations of the translation differences across the dataset.
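The rigid-body, intensity-based mutual-information registration can be sketched, in highly simplified form, as an exhaustive search over candidate vertical overlaps between two sector images. This is a minimal NumPy illustration, not the actual pipeline: the function names, the histogram bin count, and the search range are our assumptions.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # Mutual information estimated from the joint intensity histogram
    # of the two candidate overlap regions.
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0  # avoid log(0) on empty histogram cells
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def best_vertical_shift(top, bottom, min_overlap=10, max_overlap=60):
    # Score each candidate overlap of k rows; keep the one that
    # maximizes mutual information between the overlapping strips.
    best_k, best_mi = min_overlap, -np.inf
    for k in range(min_overlap, max_overlap):
        mi = mutual_information(top[-k:, :], bottom[:k, :])
        if mi > best_mi:
            best_k, best_mi = k, mi
    return best_k
```

In practice a registration toolkit would also optimize rotation and horizontal shift and use a continuous optimizer rather than exhaustive search; the exhaustive 1-D search is only meant to make the mutual-information criterion concrete.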
Keywords: visualization, biomedical imaging, medical imaging and image processing, image rendering, detection, medical illustration, imaging systems, biomechanics.
Over the last decade, significant effort has been dedicated to the development of Computer-aided Diagnosis (CAD) systems that can help increase radiologists' workflow efficiency and reduce their workload. CAD systems are used in clinical practice to improve performance, enabling real-time operation for applications such as image-guided interventions and computer-assisted navigation and visualization. The ultimate goal is not only to increase workflow efficiency, but also to improve consistency, precision, and automation, and to reduce user bias.
Our project collected data from more than 200 different sites for screening patients based on the Hip-Knee-Ankle (HKA) angle. These sites attempted to place external markers, but the markers were often misplaced and the necessary protocol for obtaining the images could not be followed. There is extremely wide variation between sites in the X-ray energy used for acquisition, in the placement of the external landmarks (even within the bone or on the bone edge), and in the image intensities, to the point where even the leg edge was not reliable. We therefore had to rely only on the bone edges and ignore everything else. As a result, neither intensity-based registration nor active shape and appearance models for bone segmentation and registration could work across the collected data. Our chosen features were the medial tibial and femoral edges. Feature-based registration is employed once the known features are identified, and provides a smaller search area for the registration application.
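Extracting a medial bone edge as a feature could be sketched as a per-row gradient search. This is a simplified illustration only: the gradient threshold and the assumption that the medial edge is the first strong horizontal-gradient response in each row are ours, not the paper's.

```python
import numpy as np

def medial_edge_profile(img, grad_thresh=30.0):
    # Horizontal intensity differences highlight near-vertical bone
    # boundaries; bone is much brighter than soft tissue in X-ray.
    gx = np.abs(np.diff(img.astype(float), axis=1))
    edges = gx > grad_thresh
    # For each row, record the column of the first strong edge,
    # taken here as the medial bone edge (-1 if no edge is found).
    profile = np.full(img.shape[0], -1, dtype=int)
    for r in range(img.shape[0]):
        cols = np.flatnonzero(edges[r])
        if cols.size:
            profile[r] = cols[0]
    return profile
```

The resulting per-row edge profile is a compact feature: matching the profiles of two sector images restricts the registration search to a small neighborhood of translations, which is the benefit of feature-based registration noted above.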
These datasets were stitched manually by an expert and a trained student. For this work, we used only the lower stitch, between the knee and foot images. The imaging parameters varied between cases, providing a heterogeneous mix of scanner types and imaging parameters consistent with typical clinical cases. The population features a wide variety of image resolutions, localizations, imaging artifacts, image quality, and clinical conditions. The translations from the two manual stitchings were compared and showed no significant visual differences. Similarly, the automated stitching was compared to the manual stitching and was also found to be without significant visual differences (within ±15 mm for the vertical translation and ±5 mm for the horizontal translation). This difference is very reasonable, especially when the overlap lies only within the shaft of the tibia. The proposed method detects the medial edges and can easily be extended to detect a mask around the lateral edge for segmenting the bone.
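The evaluation against the manual ground truth can be illustrated as a per-case tolerance check plus summary statistics of the translation differences. The ±15 mm vertical and ±5 mm horizontal tolerances come from the text; the array layout and function names are our assumptions.

```python
import numpy as np

def within_tolerance(auto_t, manual_t, tol=(15.0, 5.0)):
    # auto_t, manual_t: (n, 2) arrays of (vertical, horizontal)
    # translations in mm; returns a boolean flag per case.
    diff = np.abs(np.asarray(auto_t) - np.asarray(manual_t))
    return (diff[:, 0] <= tol[0]) & (diff[:, 1] <= tol[1])

def summarize_differences(auto_t, manual_t):
    # Mean and standard deviation of the signed translation
    # differences, per axis, across all cases.
    diff = np.asarray(auto_t) - np.asarray(manual_t)
    return diff.mean(axis=0), diff.std(axis=0)
```

A looser vertical tolerance than horizontal is consistent with the geometry described above: when the overlap region is only the near-uniform tibial shaft, vertical position is harder to pin down than horizontal alignment of the bone edges.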
Many thanks to Dr. Cristian A. Linte and Kfir Yehuda Ben-Zikri for allowing me to assist them in this project.