Floods are among the most dangerous natural disasters, and their frequency and severity are increasing rapidly due to climate change. Flood inundation mapping is critical for understanding which areas are prone to flooding and for identifying emerging floods. Recently, deep learning algorithms have been developed to improve the accuracy and timeliness of producing these maps. In this study, a new architecture named the Explainable Flood Inundation Mapping Network (XFIMNet) is presented: an explainable deep learning segmentation model for producing flood inundation maps from both synthetic aperture radar (SAR) and multi-spectral optical images. A comparative analysis with state-of-the-art (SOTA) models demonstrates the versatility of the proposed model, which achieves superior performance across both image types, whereas the SOTA models are less consistent. The models are further evaluated using only the red, green, and blue (RGB) bands of the Sentinel-2 images, demonstrating that the full 13-band image is the superior choice for training a deep learning model. XFIMNet achieves an Intersection over Union (IoU) of 0.59 with SAR images, 0.70 with optical images, and 0.47 with the RGB bands, outperforming each of the SOTA models and providing the most detailed and accurate visual segmentation masks, with minimal increase in computation time for training or inference. The findings of this study highlight that XFIMNet not only provides the most accurate segmentation masks but also prioritizes the most appropriate features when making decisions across both image types.
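The scores above are reported as Intersection over Union (IoU), the standard overlap metric for segmentation. As a minimal sketch (not the paper's evaluation code), IoU for a pair of binary flood masks can be computed as follows; the array names and toy masks here are illustrative assumptions:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # Convention: two empty masks agree perfectly.
    return float(intersection) / float(union) if union > 0 else 1.0

# Toy 4x4 masks: 1 = flooded pixel, 0 = dry.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 1, 0],
                   [1, 1, 1, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(round(iou(pred, target), 2))  # intersection = 4, union = 6 -> 0.67
```

An IoU of 1.0 indicates a predicted flood extent that matches the reference mask exactly, while 0.0 indicates no overlap at all.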