In this paper, a novel decentralized control method is proposed for nonlinear large-scale interconnected systems with mismatched interconnections and partially unknown dynamics by designing an auxiliary control problem for each subsystem. It is demonstrated that the control sequence consisting of the optimal policies of the auxiliary problems stabilizes the overall system asymptotically, thereby achieving decentralized control of the large-scale system. An integral reinforcement learning (IRL) method is first developed, replacing the traditional policy iteration algorithm, to solve the optimal control problem of each subsystem with partially unknown dynamics. A dynamic event-triggered control algorithm is then derived from the static event-triggered scheme by introducing an internal dynamic variable governed by a first-order filter. A single critic neural network (NN) is designed to adaptively learn the approximate optimal control policy under the dynamic event-triggered mechanism. Stability analysis demonstrates that the state of the event-triggered impulsive system is uniformly ultimately bounded (UUB) and that Zeno behavior is excluded. Finally, the effectiveness of the proposed algorithm in achieving decentralized control of mismatched large-scale systems is verified by two simulation examples.
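To make the dynamic event-triggered mechanism concrete, the following minimal sketch illustrates the general idea of an internal dynamic variable governed by a first-order filter that relaxes a static trigger condition of the form sigma*||x||^2 - ||e||^2 >= 0. The scalar plant, feedback gain, and all filter parameters below are illustrative assumptions for exposition only, not the design or values used in the paper.

```python
def simulate_dynamic_etc(T=10.0, dt=1e-3, lam=1.0, sigma=0.1, theta=1.0):
    """Simulate a scalar plant x' = -x + u with control u = -K*x_hat held
    between events (zero-order hold), under a dynamic event-trigger:
    sample whenever eta + theta*(sigma*x^2 - e^2) <= 0, where the internal
    variable eta obeys the first-order filter eta' = -lam*eta + (sigma*x^2 - e^2).
    All gains here are assumed for illustration."""
    K = 2.0                      # assumed feedback gain
    x = 1.0                      # initial state
    x_hat = x                    # last sampled state (held between events)
    eta = 1.0                    # internal dynamic variable, eta(0) > 0
    events = 0
    n = int(T / dt)
    for _ in range(n):
        e = x_hat - x            # measurement error since the last event
        g = sigma * x**2 - e**2  # static trigger function
        if eta + theta * g <= 0.0:   # dynamic trigger rule
            x_hat = x            # sample the state and update the control
            events += 1
            e = 0.0
            g = sigma * x**2
        u = -K * x_hat
        # forward-Euler integration of the plant and the filter
        x += dt * (-x + u)
        eta += dt * (-lam * eta + g)
        eta = max(eta, 0.0)      # keep the auxiliary variable nonnegative
    return x, events, eta
```

Because eta stays positive while the filter decays, the dynamic rule triggers less often than the static condition g <= 0 alone, which is the usual motivation for the dynamic variant; the sketch returns the final state, the event count, and the final value of eta so the reduction in sampling can be inspected directly.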