Implementation of Image Fusion Algorithm using 2G Curvelet Transform
Abstract
Image fusion is an important research topic in many related areas such as computer vision, robotics, and medical imaging. Multi-sensor image fusion is the process of combining relevant information from several images into a single image; the fused output can provide more information than any of the individual input images. Image fusion, as opposed to strict data fusion, requires data representing every point on a surface or in space to be fused, rather than only selected points of interest. Numerous medical examples exist of image fusion for registering and combining Magnetic Resonance (MR) and Computed Tomography (CT) images into composites that aid surgery; in each of these examples, successful fusion supports decision-making and diagnostics. Fusion also has many potential applications in medical data collection and diagnosis, helping physicians extract features that may not normally be visible in images produced by individual modalities. In surveillance, image fusion is used to combine polarimetric Synthetic Aperture Radar (SAR) and Hyperspectral Imaging (HSI) data. A third field is industrial applications, including Non-Destructive Evaluation (NDE) techniques for inspecting parts. A variety of techniques have been developed to fuse images, broadly classified into spatial-domain and spectral (transform-domain) methods. Image fusion algorithms can also be categorized by level of abstraction: low, middle, and high, or equivalently pixel, feature, and symbolic levels.
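To make the pixel-level, spatial-domain category concrete, the sketch below shows two classic baseline fusion rules, simple averaging and maximum selection, applied to a pair of co-registered grayscale arrays. This is a minimal illustration of pixel-level fusion only; the function names are illustrative, the inputs are assumed to be pre-registered and of equal size, and this is not the curvelet-based method the paper itself develops.

```python
import numpy as np

def fuse_average(img_a, img_b):
    """Pixel-level fusion by simple averaging (spatial-domain baseline).

    Assumes both inputs are co-registered grayscale images of the same
    shape. Illustrative only; not the paper's curvelet-based algorithm.
    """
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    return (a + b) / 2.0

def fuse_max(img_a, img_b):
    """Pixel-level fusion by selecting the brighter pixel at each location."""
    return np.maximum(img_a, img_b)

if __name__ == "__main__":
    # Tiny synthetic stand-ins for, e.g., MR and CT slices.
    mr = np.array([[0, 100], [200, 50]], dtype=np.uint8)
    ct = np.array([[100, 0], [100, 150]], dtype=np.uint8)
    print(fuse_average(mr, ct))
    print(fuse_max(mr, ct))
```

Transform-domain (spectral) methods follow the same fuse-per-location idea but apply the selection or averaging rule to transform coefficients (e.g. wavelet or curvelet subbands) rather than raw pixels, then invert the transform.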