Cascade-Forward Neural Network for Volterra Integral Equation Solution

Solving the Volterra integral equation numerically is conceptually simple, but it requires considerable memory to compute and store the intermediate results. The importance of this equation has motivated new methods that avoid these obstacles; one of them employs neural networks to obtain the solution. This paper presents a proposed method that uses a cascade-forward neural network (CFNN) to simulate the solutions of Volterra integral equations. The method trains the CFNN on inputs that represent the means of Volterra integral equation solutions, with the target being the desired output of the network. The CFNN is trained multiple times to obtain the desired output, and training terminates when the results no longer improve. The model then combines all trained CFNNs to obtain the best result. The method proved successful in training and testing the CFNN to reproduce the numerical solution of the Volterra integral equation over multiple intervals. The CFNN model is evaluated by calculating the mean squared error (MSE) at each training attempt.


Introduction
The Volterra Integral Equation (VIE) represents a special case in mathematics, and various methods have been introduced to solve it; it underlies models in several scientific fields such as physics, engineering, and biology. VIEs are solved by many numerical methods, which convert the integral equation into a linear or nonlinear system admitting direct or iterative solutions. Many techniques for solving VIEs of the second kind exploit the linearity of the equation. The basic technique of the integral-equation field relies on numerical quadrature rules; among these, the trapezoidal rule is the most popular for numerical integration and gives validated solutions [1]. The development of parallel processing has affected all fields of knowledge, including mathematical methods; several parallel techniques have been employed to obtain typical solutions, for example genetic learning systems, artificial neural networks, simulated annealing systems, associative memories, and fuzzy learning systems [2].
Recent papers on solving integral equations have employed models of artificial neural networks (ANNs) to obtain the desired solutions. Comparing ANNs with numerical methods for solving VIEs reveals several advantages of ANNs: the ANN solution is differentiable and continuous [3]. The structure of an ANN is inspired by biological information processing and takes the form of a directed graph. Simple mathematical operations are carried out in the nodes, while numerical weights associated with the links between nodes encode the information. Nodes in one layer are connected to nodes in the next layer so that information flows through the structure, and an activation function shapes the outputs of the network [2]. The Cascade-Forward Neural Network (CFNN) is a model derived from the Feed-Forward Neural Network (FFNN). The major distinction of the CFNN from other FFNNs is that the CFNN contains a direct connection between the input layer and the output layer, in addition to the indirect connection through the hidden layer; that is, "in CFNN each neuron in the input layer is attached to each neuron in the hidden layer and each neuron in the output layer" [3]. These connections are useful because the CFNN can adapt to a nonlinear relationship between input and output without eliminating the linear relationship between them; this feature is not available in an FFNN, which models only the nonlinear relationship. This makes the CFNN well suited to building time-series prediction models with a high level of accuracy [4].
In this paper, the CFNN model focuses on the simulation procedure; it is used to predict the time-series data of the VIE from both generated data and real data. The remainder of this paper is organized as follows: Section 2 reviews the literature; Sections 3 and 4 cover the theoretical background; Section 5 presents the methodology, performance measures, and experimental results; and Section 6 concludes.

Literature Review
Several methods have been used to solve the VIE and extract the best results; each method has features that distinguish it from the rest, and the evolution of parallel processors has opened up new ways of solving the equation. Tahmasbi [5] utilized the power-series method to solve linear VIEs of the second kind; the method compares the exact Taylor-expansion solution of the integral equation with an acceptable approximate solution, and it proved effective compared with other methods. Isaacson et al. [6] developed collocation methods to solve linear, scalar VIEs of the second kind by employing partitioned quadrature within the qualocation framework, for smooth kernels containing sharp gradients; the many examples examined show the method's efficiency in obtaining the numerical solution. Mirzaee [7] presented the adaptive Simpson's quadrature method for solving linear VIEs of the second kind; it is considered a simple method and has proved its efficiency and accuracy. Rahman et al. [8] used the Galerkin weighted-residual method to solve VIEs of the first and second kind using very few Laguerre polynomials; the approximate results on the tested examples show convergence to the numerical solutions. Kolk et al. [9] utilized suitable smoothing methods and polynomial splines on mildly graded or uniform grids to solve linear VIEs of the second kind, obtaining appropriate numerical results. Aigo [1] applied the quadrature method to solve linear integral equations of the second kind using Simpson's and the trapezoidal quadrature rules; this method achieved a good degree of accuracy on the illustrative examples. Costarelli et al. [10] developed a numerical collocation method that approximates the exact solution by a superposition of sigmoidal functions to solve linear and nonlinear VIEs of the second kind; the algorithm's low cost allows a larger number of collocation points N, thereby increasing the accuracy of the obtained results.

Volterra Integral Equation Solution
The most standard form of the Volterra integral equation of the second kind is given in equation (1):

y(t) = f(t) + \int_{0}^{t} K(t, s)\, y(s)\, ds, \qquad 0 \le t \le T \qquad (1)

where the solution is required over a finite interval [0, T] to which t belongs. With these assumptions, the solution y(t) of (1) exists, is unique, and is continuous on [0, T] [6]. The trapezoidal rule is a distinguished case of the Newton-Cotes formulas and is considered the most economical way to compute a sequence of approximations. To solve the VIE by trapezoidal approximation, the interval [a, b] is divided into n equal subintervals and the trapezoidal rule is applied, giving equation (2):

y(t_i) = f(t_i) + h \left[ \tfrac{1}{2} K(t_i, t_0)\, y(t_0) + \sum_{j=1}^{i-1} K(t_i, t_j)\, y(t_j) + \tfrac{1}{2} K(t_i, t_i)\, y(t_i) \right] \qquad (2)

where h is the length of each subinterval of [a, b], calculated by h = (b - a)/n.
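The trapezoidal scheme above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: at each node t_i the implicit equation (2) is solved for y(t_i), since only the last term of the trapezoidal sum involves the unknown value.

```python
import numpy as np

def solve_vie_trapezoidal(f, K, a, b, n):
    """Solve y(t) = f(t) + integral_a^t K(t,s) y(s) ds on [a, b]
    with the trapezoidal rule on n equal subintervals."""
    h = (b - a) / n
    t = a + h * np.arange(n + 1)
    y = np.zeros(n + 1)
    y[0] = f(t[0])                      # the integral vanishes at t_0
    for i in range(1, n + 1):
        # trapezoidal sum over the already-known values y[0..i-1]
        s = 0.5 * K(t[i], t[0]) * y[0]
        s += sum(K(t[i], t[j]) * y[j] for j in range(1, i))
        # solve the implicit equation for y[i]
        y[i] = (f(t[i]) + h * s) / (1.0 - 0.5 * h * K(t[i], t[i]))
    return t, y

# example with known exact solution y(t) = e^t:
# y(t) = 1 + integral_0^t y(s) ds, i.e. f = 1 and K = 1
t, y = solve_vie_trapezoidal(lambda t: 1.0, lambda t, s: 1.0, 0.0, 1.0, 200)
```

As expected for a second-order rule, halving h reduces the error roughly fourfold; at t = 1 the computed value is close to e.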

Cascade-Forward Neural Network (CFNN)
A CFNN structure consists of an input layer, an output layer, and one or more hidden layers; there is a connection from the input to each layer in the network, and a connection from each layer to the successive layers, as shown in Figure (1) [11]. This allows the inputs to directly influence the output nodes by embedding additional information and features [12]. CFNNs are similar in structure to multilayer feed-forward neural networks, except that a CFNN has an additional direct weighted connection from its input layer to its output layer. CFNNs with more layers can learn complex relationships, while in an FFNN the connection between input and output is only indirect [11]. The CFNN model can be written as equation (3):

y = \sum_{i=1}^{n} A_i(w_i x_i) + A_o\left( \sum_{j=1}^{k} w_j\, A_h\left( \sum_{i=1}^{n} w_{ji} x_i \right) \right) \qquad (3)

where A_i is the activation function from the input layer to the output layer, x_i is an input sample, w_i is the weight from the input layer to the output layer, A_o is the activation function of the output layer, and A_h is the activation function of the hidden layer. When the bias w_b is added to the input of each neuron in the hidden layer, the model takes the form of equation (4) [11]:

y = \sum_{i=1}^{n} A_i(w_i x_i) + A_o\left( \sum_{j=1}^{k} w_j\, A_h\left( \sum_{i=1}^{n} w_{ji} x_i + w_b \right) \right) \qquad (4)

After constructing the CFNN, the training process begins: in the training state, samples are input to the network to obtain the target output. In this state the weights and biases are used; the CFNN needs inputs, target results, weights, and biases to begin the training process.
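The defining feature of the CFNN, the direct input-to-output path added on top of the usual hidden-layer path, can be illustrated with a forward pass in Python. This is a sketch under assumed choices (tanh hidden activation, linear output activation, random weights); the names W_ih, W_ho, and W_io are illustrative, not from the paper.

```python
import numpy as np

def cfnn_forward(x, W_ih, W_ho, W_io, b_h, b_o):
    """Forward pass of a one-hidden-layer cascade-forward network.
    W_ih: input->hidden weights; W_ho: hidden->output weights;
    W_io: the extra direct input->output weights that distinguish
    a CFNN from a plain feed-forward network."""
    A_h = np.tanh                        # hidden activation (assumed)
    A_o = lambda z: z                    # linear output activation (assumed)
    h = A_h(W_ih @ x + b_h)              # hidden-layer response
    # output combines the hidden path and the direct input path
    return A_o(W_ho @ h + W_io @ x + b_o)

# tiny usage example with random weights: 3 inputs, 5 hidden, 1 output
rng = np.random.default_rng(0)
x = rng.standard_normal(3)
y = cfnn_forward(x,
                 rng.standard_normal((5, 3)),   # W_ih
                 rng.standard_normal((1, 5)),   # W_ho
                 rng.standard_normal((1, 3)),   # W_io
                 rng.standard_normal(5),        # b_h
                 rng.standard_normal(1))        # b_o
```

Setting W_io to zero recovers an ordinary feed-forward network, which makes the structural difference between the two models explicit.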
Many training functions are used in the training process of neural networks; the training function "trainlm" is considered the fastest backpropagation algorithm for network training, updating the weight and bias values according to Levenberg-Marquardt optimization. The number of hidden layers depends on the size of the input samples, and the number of neurons in each hidden layer depends on the number of target outputs [13]. CFNN performance, like that of any neural network, can be measured; the mean squared error (MSE) is very commonly used as a general-purpose error metric for numerical predictions [12].

Methodology
The mechanism of the proposed method consists of three steps. The first step extracts the exact numerical solution of the VIE using the trapezoidal quadrature rule and then computes the mean of each solution; these means are the features input to the CFNN and are saved in a fixed database for use in the second step. In the second step, the CFNN model is created and trained multiple times to reproduce the numerical solutions of the quadrature method. In the third step, the simulation CFNN model is tested on the values to obtain the VIE solutions. The method is illustrated in the block diagram of Figure (2).

5.1.Design and Implementation
In the proposed method, values from 1 to 20 were selected to extract numerical solutions of the VIE as a working example. The VIE solutions for the 20 values are calculated, and the mean of each solution is computed to form a solution vector for each value. These vectors are saved as the inputs for the CFNN; this process is explained in algorithms (1) and (3). The method trains the CFNN multiple times and then combines the trained CFNNs to overcome the errors in individual training results and obtain the optimum performance; the training process terminates when there is no further enhancement in performance. In this work, the performance of the CFNN stopped improving after four training attempts, so the four trained CFNNs were combined and saved for simulating the results in the test state. The training progress of the CFNN at each attempt is shown in Figure (4). If the result of net-4 equals the input value, the solution of the VIE is returned and the procedure goes to Step 5. Step 5: get the solution of the VIE for this value.
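The train-until-no-improvement loop described above can be sketched as follows. This is only an illustration of the control flow, not the paper's implementation: the paper trains a CFNN with MATLAB's Levenberg-Marquardt (`trainlm`), whereas this stand-in uses a tiny cascade-forward network trained by plain gradient descent, and the "combine" step is assumed here to average the retained models' predictions.

```python
import numpy as np

def train_once(X, y, seed, epochs=2000, lr=0.05, hidden=8):
    """Train a tiny cascade-forward net (hidden path + direct path)
    by gradient descent; returns (predict_fn, final training MSE)."""
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((hidden, X.shape[1])) * 0.5
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal(hidden) * 0.5
    Wd = rng.standard_normal(X.shape[1]) * 0.5   # direct input->output path
    b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1.T + b1)
        err = H @ W2 + X @ Wd + b2 - y
        # backpropagate the squared-error gradient through both paths
        gH = np.outer(err, W2) * (1 - H ** 2)
        W1 -= lr * (gH.T @ X) / len(y)
        b1 -= lr * gH.mean(axis=0)
        W2 -= lr * (H.T @ err) / len(y)
        Wd -= lr * (X.T @ err) / len(y)
        b2 -= lr * err.mean()
    H = np.tanh(X @ W1.T + b1)
    mse = float(np.mean((H @ W2 + X @ Wd + b2 - y) ** 2))
    predict = lambda Z: np.tanh(Z @ W1.T + b1) @ W2 + Z @ Wd + b2
    return predict, mse

# train repeatedly; stop when the MSE no longer improves (no enhancement)
X = np.linspace(0, 1, 21).reshape(-1, 1)
y_target = np.exp(X[:, 0])               # placeholder target values
models, best = [], np.inf
for attempt in range(10):
    predict, mse = train_once(X, y_target, seed=attempt)
    if mse >= best:
        break                            # no enhancement: terminate training
    best = mse
    models.append(predict)

# combine the retained models (here: by averaging their predictions)
combined = lambda Z: np.mean([m(Z) for m in models], axis=0)
```

The essential logic mirrors the paper's procedure: repeat training, keep each attempt only while performance improves, then merge the retained networks into a single simulation model.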

5.2.Performance Measures
This paper demonstrates the efficiency of the CFNN in obtaining the target solution of the VIE by measuring the performance of the CFNN model. The MSE is used to measure the quality of the neural network; the MSE of the CFNN for the four training attempts appears in Table (1).
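For completeness, the MSE metric used in Table (1) is simply the mean of the squared differences between the network output and the target. A minimal Python definition (the numeric values below are illustrative only, not the paper's Table 1 figures):

```python
import numpy as np

def mse(target, output):
    """Mean squared error between target values and network output."""
    target = np.asarray(target, dtype=float)
    output = np.asarray(output, dtype=float)
    return float(np.mean((target - output) ** 2))

# illustrative example: three targets vs. three network outputs
error = mse([1.0, 2.0, 3.0], [1.1, 1.9, 3.2])
```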

Conclusions
This paper proposed a technique for obtaining Volterra integral equation (VIE) solutions using a cascade-forward neural network model. The CFNN was built to simulate VIE solutions; it is an appropriate network for solving integral equations because its structure is nonlinear and nonparametric, which makes it flexible for time-series prediction. In this work, the CFNN was trained four times to reach the target results with the best performance, and the trained models were then combined into one CFNN simulation model for simulating the values of the VIE solutions. This method overcomes the training errors in the output, so the simulation of VIE values by the CFNN is more reliable. The proposed CFNN model can simulate the VIE values it was trained on with no error, because the CFNN is trained multiple times until the target results are obtained and the trained CFNNs are then combined into one simulation model. The results showed that the proposed CFNN model simulated the values of the VIE solutions with 100% accuracy.

7.Acknowledgment
I would like to thank Middle Technical University (www.mtu.edu.iq), the university to which the researcher is affiliated, for facilitating the preparation of this research.