TY - CHAP
T1 - Language Modeling and Text Generation Using Hybrid Recurrent Neural Network
AU - Samreen,
AU - Iqbal, Muhammad Javed
AU - Ahmad, Iftikhar
AU - Khan, Suleman
AU - Khan, Rizwan
PY - 2021/10/2
Y1 - 2021/10/2
N2 - With the increasing development of machines capable of understanding complex behavior and solving problems that otherwise require human involvement, automatic text generation (ATG) has also received wide attention. Language modeling, or text generation, is the task of predicting the next character or word in a sequence by analyzing input data. ATG enables machines to write and helps reduce human effort. It is also useful for the understanding and analysis of languages, providing techniques that enable machines to exchange information in natural languages. Text data are created everywhere at large scale (WhatsApp, Facebook, tweets, etc.) and are freely available online; an effective system is therefore needed to automate the text generation process and to analyze text data in order to extract meaningful information from it. In this work, a case study is presented on how to develop a text generation model for the English language using a hybrid recurrent neural network (HRNN). The explored model learns the dependencies between characters and the conditional probabilities of characters in sequences from the available input text data and generates wholly new sequences of characters resembling human writing (correct in meaning, spelling, and sentence structure). A comprehensive comparison between the models LSTM, deep LSTM, GRU, and HRNN is also presented. Previously, RNN models were used for text prediction or automatic text generation, but these models suffer from the vanishing gradient problem (short memory) when processing long text; the GRU and LSTM models were created to solve this problem. However, the text generated by GRU and LSTM still contains many spelling errors and incorrect sentence structures; to fill this gap, the HRNN model is explored. The HRNN model is a combination of LSTM, GRU, and a dense layer. Experiments were performed on the Penn Treebank, Shakespeare, and Nietzsche datasets. 
The perplexity of the HRNN model is 3.27, its bits per character is 1.18, and its average word prediction accuracy is 0.63. Compared with the baseline work and previous models (LSTM, deep LSTM, and GRU), the perplexity and bits per character of our model (HRNN) are lower. The texts generated by HRNN have fewer spelling errors and sentence structure mistakes. A closer analysis of the explored models’ performance and efficiency is presented with the help of graph plots and texts generated from sample input strings. These graphs illustrate the performance of each model.
AB - With the increasing development of machines capable of understanding complex behavior and solving problems that otherwise require human involvement, automatic text generation (ATG) has also received wide attention. Language modeling, or text generation, is the task of predicting the next character or word in a sequence by analyzing input data. ATG enables machines to write and helps reduce human effort. It is also useful for the understanding and analysis of languages, providing techniques that enable machines to exchange information in natural languages. Text data are created everywhere at large scale (WhatsApp, Facebook, tweets, etc.) and are freely available online; an effective system is therefore needed to automate the text generation process and to analyze text data in order to extract meaningful information from it. In this work, a case study is presented on how to develop a text generation model for the English language using a hybrid recurrent neural network (HRNN). The explored model learns the dependencies between characters and the conditional probabilities of characters in sequences from the available input text data and generates wholly new sequences of characters resembling human writing (correct in meaning, spelling, and sentence structure). A comprehensive comparison between the models LSTM, deep LSTM, GRU, and HRNN is also presented. Previously, RNN models were used for text prediction or automatic text generation, but these models suffer from the vanishing gradient problem (short memory) when processing long text; the GRU and LSTM models were created to solve this problem. However, the text generated by GRU and LSTM still contains many spelling errors and incorrect sentence structures; to fill this gap, the HRNN model is explored. The HRNN model is a combination of LSTM, GRU, and a dense layer. Experiments were performed on the Penn Treebank, Shakespeare, and Nietzsche datasets. 
The perplexity of the HRNN model is 3.27, its bits per character is 1.18, and its average word prediction accuracy is 0.63. Compared with the baseline work and previous models (LSTM, deep LSTM, and GRU), the perplexity and bits per character of our model (HRNN) are lower. The texts generated by HRNN have fewer spelling errors and sentence structure mistakes. A closer analysis of the explored models’ performance and efficiency is presented with the help of graph plots and texts generated from sample input strings. These graphs illustrate the performance of each model.
KW - Deep neural network language modeling
KW - Hybrid RNN
KW - LSTM
KW - Text generation
UR - http://www.scopus.com/inward/record.url?scp=85116802589&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-77939-9_19
DO - 10.1007/978-3-030-77939-9_19
M3 - Chapter
AN - SCOPUS:85116802589
SN - 9783030779382
SN - 9783030779412
T3 - Studies in Computational Intelligence
SP - 669
EP - 687
BT - Deep Learning for Unmanned Systems
A2 - Koubaa, Anis
A2 - Azar, Ahmad Taher
PB - Springer
CY - Cham
ER -