Since its beginnings in the early 1940s, artificial intelligence (AI) has come a long way, and today it is a powerful research area with many possibilities. Deep neural networks (DNNs) are a part of AI and consist of several layers: an input layer, the so-called hidden layers, and an output layer. The input layer receives data; the data are converted into computable variables (i.e., vectors) and passed on to the hidden layers, where they are processed. Each unit (neuron) is connected to units in adjacent layers, and information is passed between them. By iteratively adjusting the weights and biases at each hidden layer over many passes through the network, such a network learns to map inputs to outputs, thereby generalizing its knowledge. After sufficient training, the deep neural network should be able to predict results for specific tasks successfully.

The history of DNNs, and of neural networks in general, is closely related to neuroscience, as a motivation of AI is to teach human intelligence to a machine. It is therefore natural to use knowledge of the human brain to develop algorithms that simulate it, which is what DNNs do. The brain can be viewed as an electrical network that fires electrical impulses; in this process, information is carried from one synapse to another, much as it is within artificial neural networks. However, AI systems should be used carefully: researchers should always be capable of understanding the systems they create, an issue discussed within explainable AI and DNNs.
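The layered mapping described above (input vector, weighted hidden layers with biases, output) can be sketched as a minimal forward pass. This is an illustrative toy example, not code from the chapter; the layer sizes, random weights, and the ReLU activation are assumptions chosen for demonstration.

```python
import numpy as np

def relu(x):
    # Non-linear activation applied at each hidden layer
    return np.maximum(0, x)

def forward(x, weights, biases):
    """Forward pass: each layer multiplies its input by a weight
    matrix, adds a bias vector, and applies a non-linearity."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(W @ a + b)
    # Output layer (kept linear here for simplicity)
    return weights[-1] @ a + biases[-1]

# Toy network: 3 inputs -> 4 hidden units -> 2 outputs (illustrative values)
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
biases = [np.zeros(4), np.zeros(2)]

x = np.array([1.0, 0.5, -0.2])   # input data as a computable vector
y = forward(x, weights, biases)
print(y.shape)  # (2,)
```

Training, which the text summarizes as "adjusting the weights and bias at each hidden layer", would then repeatedly nudge `weights` and `biases` to reduce the gap between `y` and the desired output.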
Title of host publication: Handbook on Computer Learning and Intelligence
Subtitle of host publication: Deep Learning, Intelligent Control and Evolutionary Computation
Editors: Plamen Parvanov Angelov
Publication status: Published - 1 Sep 2022