Learning calligraphy writing skills is regarded as a sophisticated task for robots, and researchers have proposed many methods to implement robotic calligraphy systems. However, limitations of these methods, such as high computational cost and the limited diversity of generated results, constrain the development of calligraphy robots. This article proposes a robotic writing framework based on a hand–eye coordination method to address these limitations. Inspired by the internal model control (IMC) system, a vision-motor network and a motor-vision network are built to simulate the inverse and forward models, respectively, in the IMC scheme of a robotic manipulator. The vision-motor network serves as an action generator that converts target stroke images into robotic actions, while the motor-vision network assists the training of the vision-motor network. To this end, the motor-vision network is pretrained on random writing movements executed by a robotic manipulator. Experimental results demonstrate that the proposed method successfully writes strokes of Chinese characters from input target stroke images. Although the proposed method is applied to robotic calligraphy, the underpinning research is readily applicable to many other applications, such as human–robot motion mimicking.
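The two-network scheme described above can be illustrated with a minimal, hypothetical toy sketch (not the authors' implementation): the robot-plus-camera plant is stood in for by an unknown linear map from 2-D actions to 4-D image features, the motor-vision (forward) model is pretrained on random writing movements, and that pretrained model then supplies the training pairs for the vision-motor (inverse) action generator. All names, dimensions, and the linear-model assumption are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the robot + camera: an unknown linear map W_true
# turns a 2-D writing action into a 4-D stroke-image feature vector.
W_true = rng.normal(size=(4, 2))

def robot_and_camera(action):
    """The real plant: execute an action, observe the resulting image features."""
    return action @ W_true.T

# 1) Pretrain the motor-vision network (forward model) from random writing
#    movements: execute random actions, observe images, fit action -> image.
A_rand = rng.normal(size=(200, 2))                     # random exploratory actions
F_obs = robot_and_camera(A_rand)                       # observed image features
W_mv, *_ = np.linalg.lstsq(A_rand, F_obs, rcond=None)  # forward model (2x4)

# 2) The pretrained forward model assists training of the vision-motor
#    network (the action generator): sample actions, predict their images
#    with W_mv, and fit the inverse mapping image -> action on those pairs.
A_train = rng.normal(size=(200, 2))
F_pred = A_train @ W_mv                                  # images predicted by the model
W_vm, *_ = np.linalg.lstsq(F_pred, A_train, rcond=None)  # inverse model (4x2)

# 3) Writing: given a target stroke image, the vision-motor network emits an
#    action; executing that action on the plant should reproduce the image.
a_true = np.array([0.5, -0.3])
f_target = robot_and_camera(a_true)  # target stroke image features
a_pred = f_target @ W_vm             # action generated from the image alone
print(np.allclose(a_pred, a_true, atol=1e-6))  # True in this noise-free toy
```

In this sketch the inverse model never sees the true plant during training; it learns entirely from images "imagined" by the pretrained forward model, which mirrors the role the abstract assigns to the motor-vision network.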