Image fusion combines complementary information from two or more imaging modalities, such as MRI and PET, to produce a single image that better supports diagnosis and treatment. Although standard spatial-domain methods, ranging from simple min/max late fusion to more elaborate content-aware pixel-wise mapping, are used successfully, they sometimes fail to preserve key features. Transform-domain approaches, especially wavelet-transform (WT)-based fusion, have brought significant improvements in the literature, primarily owing to their computational efficiency and their independence from the specific image content. However, the wavelet transform captures only limited directional information, so its representation of directional singularities such as edges and contours is inherently limited. To overcome this limitation, the present work uses the non-subsampled shearlet transform (NSST) for medical image fusion, as it provides an effective multi-directional and multiscale representation. The proposed method first applies the NSST to the source images to obtain their low-pass and high-pass subbands. A pulse-coupled neural network (PCNN) is then applied to these subbands to select fusion rules that preserve the most important structural and textural information. Finally, the inverse NSST reconstructs the fused image from the processed subbands. Entropy, standard deviation, and the structural similarity index (SSIM) are used to assess the performance of the proposed fusion scheme quantitatively. Experiments on brain MRI/PET image databases show that the proposed method outperforms existing fusion techniques, yielding higher image quality and better-preserved feature details.
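The decompose / fuse-per-subband / reconstruct pattern described above can be illustrated with a minimal sketch. This is not the paper's method: a separable box blur stands in for the NSST decomposition, and a max-absolute coefficient rule stands in for the PCNN-driven fusion rule; all function names and the toy inputs are hypothetical. The entropy metric matches one of the quality measures named in the abstract.

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box filter: a crude low-pass stand-in for the NSST
    low-pass subband (simplification, not the paper's transform)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    tmp = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, tmp)

def fuse(a, b, k=5):
    """Toy two-band fusion: average the low-pass bands, keep the
    max-absolute high-pass coefficient (a simple stand-in for the
    PCNN rule), then recombine as the toy inverse transform."""
    la, lb = box_blur(a, k), box_blur(b, k)
    ha, hb = a - la, b - lb
    low_fused = 0.5 * (la + lb)
    high_fused = np.where(np.abs(ha) >= np.abs(hb), ha, hb)
    return low_fused + high_fused

def entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram, one of the
    quality metrics named in the abstract; assumes values in [0, 1]."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy inputs standing in for co-registered MRI and PET slices.
rng = np.random.default_rng(0)
mri = rng.random((32, 32))
pet = rng.random((32, 32))
fused = fuse(mri, pet)
print(fused.shape, entropy(fused), float(fused.std()))
```

In a full implementation, the box blur would be replaced by the multi-directional NSST subbands, the max-absolute rule by PCNN firing maps per subband, and the evaluation would add standard deviation and SSIM against reference images.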