Artificial Neural Network for TIFF Image Compression

The main aim of image compression is to reduce image size so that it can be transmitted and stored; many methods have therefore been developed, one of which is the multilayer perceptron. The multilayer perceptron (MLP) is an artificial neural network that uses the back-propagation algorithm to compress an image. If the algorithm depends only on the number of neurons in the hidden layer, that alone is not enough to reach the desired results; the standard criteria on which the compression process depends must also be taken into account. In this research we trained a group of TIFF images of size 256 × 256 and compressed them using an MLP, changing the number of neurons in the hidden layer for each compression run and calculating the compression ratio, mean square error and peak signal-to-noise ratio in order to compare the reconstructed image with the original. The desired results were obtained when the compression ratio was less than five and the mean square error was small; correspondingly, a large peak signal-to-noise ratio was recorded.


Introduction
Using and exchanging images has become an ordinary part of working with computers and the Internet; because of the wide use of computers, Facebook and other social media, images need to be compressed according to the requirements of the work.
The goal of compression is to reduce the amount of data that must be stored on a storage medium or transferred over a network, in order to speed up transmission. The measures used to compress data differ according to the kind of compression and the method of application, and compression is classified into two types. The first type is called lossy data compression: when the compressed file is decompressed, we do not get a copy that is completely identical to the original file, but we can recover, for example, 80% or 90% of it, keeping only the important information. The result is a file similar to the original but of lower quality. This type of compression is well suited to multimedia files such as audio, image and video files, and is used when a very high compression ratio is wanted and there is no essential need for the output of the compression process to be completely identical to the original file [1].
The second type is lossless data compression, in which the decompressed file must be identical to the original file; that is, no information is lost, hence the name. This type must be used with files such as executable files (EXE) and text files (TXT, DOC, etc.).
Image compression is used to reduce the irrelevance and redundancy of image data so that it can be stored or transferred in an effective form, for use in many important fields including communications and data storage [2].
Many techniques are available for image compression, including LZW, ZIP and DCT, which encode the information to reduce the image data for storage and transmission and rely on coder and decoder algorithms. In this paper we discuss another type of compression, one that uses a neural network, the multilayer perceptron, which does not need coder and decoder algorithms: it uses the back-propagation algorithm in their place for image compression (see Figure 1).
The multilayer perceptron (MLP) is an artificial neural network (ANN) that uses the back-propagation algorithm for the compression of an image [3]. If the network depends only on the number of neurons in the hidden layer, can it reach the desired output? How many neurons, at a minimum, are needed to be capable of compression? And when the number of neurons is not enough to compress the image, how do we avoid this?
We begin by describing the structure of the method used, as well as the standard criteria by which the competence of the method is measured.
We selected a group of TIFF images (256 × 256) and compressed them using the MLP. The TIFF format is not well known to ordinary users, but it is familiar to graphic designers, photographers and workers in desktop publishing, and it is popular among Apple users; it is a very flexible format and supports compression.

Multilayer perceptron model with back propagation learning for image compression
The human brain consists of about ten billion neurons with complex functions and a highly structured organization. These densely interconnected units produce a very complex structure and a level of intelligence that has not yet been achieved by any artificial system. In this direction, many mathematical models have emerged to represent neurons and their communication with each other, including artificial neural networks, which try to reproduce the capabilities of the human brain, particularly the capability to learn [4].
An artificial neural network is a powerful modeling tool used to solve optimization problems. It is generally composed of a large number of non-linear processing elements called neurons, extracted from and modeled on the human nervous system to capture its computational strength [5]. The multilayer perceptron (MLP) is a kind of artificial neural network. This network has three layers: input, hidden and output. The input and output layers must contain the same number of neurons, N, and both must be fully connected to the hidden layer. Compression is achieved by making the number of neurons in the hidden layer, K, smaller than the number of neurons in the input and output layers (K ≤ N) [6]. If there are N_i, N_h and N_o neurons in the input, hidden and output layers respectively, the total number of connections is given by the equation:

Network size = N_i × N_h + N_h × N_o    (2)

The compressed data are fed into the network and propagated from the input layer to the first hidden layer and then to the output layer using the sigmoid function, one of the transfer functions, which introduces non-linearity into the network and whose derivative is very fast to compute [8]; the error is then propagated backward starting from the output (see Figure 2).
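The connection count and the sigmoid transfer function can be illustrated with a minimal sketch (bias connections are omitted here, on the assumption that equation (2) counts only the inter-layer weights):

```python
import math

def network_size(n_input, n_hidden, n_output):
    """Total number of connections in a fully connected
    input-hidden-output MLP (weights only, no bias terms)."""
    return n_input * n_hidden + n_hidden * n_output

def sigmoid(x):
    """Sigmoid transfer function used to propagate activations."""
    return 1.0 / (1.0 + math.exp(-x))

# A 64-13-64 network, the smallest configuration that later
# reaches the desired output in the experiments:
print(network_size(64, 13, 64))  # 1664
print(sigmoid(0.0))              # 0.5
```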

Training and application
The most popular supervised training algorithm is the back-propagation algorithm, which is used to train artificial neural networks, to help decrease the convergence time of training, and to increase system performance [9, 10]. To compress an image, the input image is first divided into square blocks of 8 × 8 pixels; each block is changed into a column vector and fed into the network input layer [11]. The network is trained on the amount of information available in each block until it reaches the required number of neurons in the hidden layer. When the training process is finished, we obtain the compressed image and the standard criteria for it:

1. Compression ratio (CR):

CR = (N × BI) / (K × BH)    (3)

where N and K are the neurons/pixels in the input and hidden layers, and BI and BH are the numbers of bits needed to encode the input and hidden layers. When BI and BH are the same, the compression ratio equals the ratio of the number of neurons in the input layer to that in the hidden layer.

2. Mean square error (MSE): the MSE between the reconstructed image and the target image must be as small as possible, maintaining the quality of the reconstructed image so that it is near to the target image; in the ideal case MSE equals zero for decompression. Given an n × m monochrome image X and its noisy approximation Y, it is defined as follows:

MSE = (1 / (n m)) Σ_{i=0}^{n−1} Σ_{j=0}^{m−1} [X(i, j) − Y(i, j)]²    (4)

3. Peak signal-to-noise ratio (PSNR): the ratio between the maximum possible power of a signal and the power of the corrupting noise that impacts the accuracy of its representation. It is defined simply through the mean squared error (MSE) as follows [14]:

PSNR = 20 · log10(MAX_X) − 10 · log10(MSE)    (5)

where MAX_X is the maximum possible pixel value of the image. Here its value is 255, resulting from the equation 2^B − 1 with B = 8 bits.
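The three criteria above can be sketched directly from their definitions (a minimal illustration; images are represented here as plain 2-D lists rather than Matlab matrices):

```python
import math

def compression_ratio(n_input, n_hidden, bits_in=8, bits_hidden=8):
    """CR = (N * BI) / (K * BH); reduces to N / K when BI == BH."""
    return (n_input * bits_in) / (n_hidden * bits_hidden)

def mse(x, y):
    """Mean square error between an n x m original X and a
    reconstruction Y, both given as 2-D lists of pixel values."""
    n, m = len(x), len(x[0])
    return sum((x[i][j] - y[i][j]) ** 2
               for i in range(n) for j in range(m)) / (n * m)

def psnr(x, y, max_value=255):
    """Peak signal-to-noise ratio in dB, with MAX = 2^B - 1, B = 8."""
    e = mse(x, y)
    return float('inf') if e == 0 else \
        20 * math.log10(max_value) - 10 * math.log10(e)

print(compression_ratio(64, 16))   # 4.0
orig  = [[10, 20], [30, 40]]
recon = [[11, 19], [31, 41]]
print(mse(orig, recon))            # 1.0
print(round(psnr(orig, recon), 2)) # 48.13
```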

Experimental Results
We selected a group of TIFF images (256 × 256) (see Figure 4) and trained them using Matlab. Each TIFF image is divided into square blocks of 8 × 8 pixels and transformed into a group of column vectors of size 64. The network includes 64 input neurons and 64 output neurons. The middle layer has N neurons, where N, the number of neurons in the hidden layer, is less than 64. The multilayer perceptron (MLP) network is thus represented by a 64-N-64 configuration.
After obtaining the output results (Figure 5 and Table 1) we analyzed them as follows: 1. When we used only one neuron in the hidden layer (N = 1), the CR became very large, as did the MSE, and consequently the PSNR became small. That is why the images do not appear after the compression process.
2. When we increased the number of neurons in the hidden layer, the images appeared; the CR decreased, as did the MSE, while the PSNR started to increase.
3. The images showed clearly when the number of neurons (N) was from 5 to 12, but they did not reach the desired output.
4. More details of the images were clearly observable when we used 13 neurons or more.
A good-quality reconstructed image is obtained when the value of the compression ratio is less than five; therefore 13 or more neurons in the hidden layer reach the desired output. To improve the algorithm and avoid the problems described in the three points above, we put the condition CR < 5 into the algorithm (see Figure 6).
When the compression ratio is greater than 5, the number of neurons in the hidden layer is improved by multiplying it by two, to speed up the training process and obtain a better result (see Figure 7).
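This doubling rule can be sketched as a simple loop (a hypothetical formalization of the flowchart in Figures 6 and 7, assuming BI = BH so that CR = N / K):

```python
def choose_hidden_neurons(n_input=64, target_cr=5, start=1):
    """Double the hidden-layer size K until the compression ratio
    N / K falls below the target (assumes BI == BH)."""
    k = start
    while n_input / k >= target_cr:
        k *= 2  # speed up the search by doubling rather than adding one
    return k

# For a 64-neuron input layer, doubling from K = 1 stops at K = 16,
# where CR = 64 / 16 = 4 < 5.
print(choose_hidden_neurons())  # 16
```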

Conclusion
1. When the number of neurons in the hidden layer is increased, the efficiency of the method also increases.
2. When the multilayer perceptron depends only on the number of neurons in the hidden layer, the method is not sufficient to reach the desired results: if the number of neurons in the hidden layer is small, the image is compressed more and we obtain a large compression ratio but a poor-quality reconstructed image.
3. The image is compressed successfully when the multilayer perceptron depends on the condition that the compression ratio be less than 5.
4. The factors affecting the quality of the image are the peak signal-to-noise ratio (PSNR) and the mean square error (MSE): the quality of the reconstructed image varies directly with PSNR and inversely with MSE.
In compression methods, the image is split into non-overlapping sub-images; TIFF images (256 × 256) can be split into 4 × 4, 8 × 8 or 16 × 16 pixel blocks [7]. The multilayer perceptron algorithm is based on the back-propagation algorithm, whose sequence of steps is as follows:
a. Initialize the weights and set them to small random numbers.
b. Identify a set of input and output pairs (x_p, y_p) sequentially, where x_p = (x_0, x_1, x_2, …, x_{n−1}), n being the number of input nodes, and y_p = (y_0, y_1, y_2, …, y_{m−1}), m being the number of output nodes.
c. Set x_0 to be always 1.
d. Calculate the actual output of each neuron, y_j = f(w_{0j} x_0 + Σ_{i=1}^{n−1} w_{ij} x_i), and pass it to the next layer as input.

Ibn Al-Haitham J. for Pure & Appl. Sci. Vol. 03 (1) 2017
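The back-propagation steps described in this section can be sketched on a toy network (a minimal illustration, not the paper's Matlab implementation; the learning rate and layer sizes here are arbitrary assumptions):

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(x, y, w_hidden, w_out, lr=0.5):
    """One back-propagation step for a one-hidden-layer network.
    x carries the fixed bias component x[0] = 1 (step c)."""
    # step d: forward pass through sigmoid units
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    o = [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in w_out]
    # backward pass: output deltas, then hidden deltas
    d_out = [(t - oi) * oi * (1 - oi) for t, oi in zip(y, o)]
    d_hid = [hi * (1 - hi) * sum(d_out[k] * w_out[k][j]
                                 for k in range(len(o)))
             for j, hi in enumerate(h)]
    for k in range(len(o)):          # update hidden -> output weights
        for j in range(len(h)):
            w_out[k][j] += lr * d_out[k] * h[j]
    for j in range(len(h)):          # update input -> hidden weights
        for i in range(len(x)):
            w_hidden[j][i] += lr * d_hid[j] * x[i]
    return o

# step a: initialize the weights to small random numbers
random.seed(0)
w_h = [[random.uniform(-0.1, 0.1) for _ in range(3)] for _ in range(2)]
w_o = [[random.uniform(-0.1, 0.1) for _ in range(2)] for _ in range(1)]
# steps b and c: one training pair, with the bias input x[0] = 1
x, y = [1.0, 0.5, 0.25], [0.8]
for _ in range(2000):
    out = train_step(x, y, w_h, w_o)
print(round(out[0], 3))  # approaches the target 0.8
```

For real image compression the same loop runs over every 64-pixel block vector instead of a single pair, with 64 input and output neurons and K hidden neurons.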