Among the various palmprint identification methods proposed in the literature, Sparse Representation for Classification (SRC) is particularly attractive because of its high accuracy. Although SRC has good discriminative ability, its performance depends strongly on the quality of the training data. Indeed, palmprint images contain not only identity information but also nuisance variations, such as illumination changes and distortions caused by the acquisition conditions. In this case, SRC may fail to classify palmprints correctly in the original space, since samples from the same class exhibit large variations. To overcome this problem, we propose in this work to exploit a sparse-and-dense hybrid representation (SDR) for palmprint identification. This type of representation, which is based on dictionary learning from the training data, has proven effective at overcoming the limitations of SRC. Extensive experiments are conducted on two publicly available palmprint datasets: multispectral and PolyU. The results clearly show that the proposed method outperforms both state-of-the-art holistic approaches and coding-based palmprint identification methods.