

This project is inspired by the schematic described in the paper by I. A multilayer perceptron network is used for both the encryption and decryption of images. The keys used for decryption are the fixed bias vectors, which remain constant throughout training; multiplicative neural networks are used to help generate this constant vector, which is derived from a vector specified by the sender. The images are sent into the neural network and trained using the backpropagation algorithm, while the bias vector remains fixed. The output of the hidden layer gives the cipher, and the output layer gives the decrypted image.

To get the constant bias vectors, the sender of the image specifies a numeric vector of the same size as the layer it is a bias of. This vector is split into subvectors, and subsequent permutations of this vector are fed into a multiplicative neural network. The output of the multiplicative neural network is then added to the initial bias vector specified by the sender of the images. Since the bias vector is now a constant that is entirely dependent on the way the initial bias vector is arranged, it provides an additional level of security over the existing paradigm, which employs a sender-specified bias vector without any further transformation. All experiments have been done on, and results have been obtained from, […].
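The key-derivation step can be summarized in code. The sketch below is a minimal illustration, not the project's implementation: mult_net stands in for a multiplicative neural network that maps a permuted vector to an output of compatible shape, the number of permutations num_perms and the fixed RNG seed are illustrative assumptions, and the splitting into subvectors is elided for brevity.

import numpy as np

def derive_bias_key(initial_bias, mult_net, num_perms=4, seed=0):
    # Permute the sender-specified vector several times, feed each
    # permutation through the multiplicative network, and accumulate
    # the outputs onto the initial vector to obtain the constant key.
    rng = np.random.default_rng(seed)
    key = initial_bias.astype(float).copy()
    for _ in range(num_perms):
        perm = rng.permutation(initial_bias.size)
        key += mult_net(initial_bias[perm])
    return key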

Multiplicative Neural Network

A general structure for the multiplicative neuron is given below. The input vector is $(x_1, x_2, \ldots, x_n)$, which is a permutation of the initial specified bias vector. The weights vector is $(w_1, w_2, \ldots, w_n)$, and the bias vector (for this multiplicative neural network) is $(b_1, b_2, \ldots, b_n)$. $\Omega$ is a multiplicative operator and has the formula

$\Omega(x, w, b) = \prod_{i=1}^{n} (w_i x_i + b_i)$

The weights are adjusted by gradient descent on the output error with learning rate $\alpha$. The weight vector $w_{1(i,p)}(n)$ is calculated with the same adjustment formula as the output-layer weights, with the hidden-layer activation $z_p$ replaced by the corresponding input.
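As a concrete illustration, here is a minimal sketch of the multiplicative neuron's forward pass. The logistic sigmoid used as the activation is an assumption; the section itself defines only the operator $\Omega$.

import numpy as np

def omega(x, w, b):
    # Multiplicative operator: the product over i of (w_i * x_i + b_i).
    return np.prod(w * x + b)

def multiplicative_neuron(x, w, b):
    # Squash the product through a logistic sigmoid (assumed activation).
    u = omega(x, w, b)
    return 1.0 / (1.0 + np.exp(-u))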

After one epoch, let $\Delta(n)$ and $\Delta(n+1)$ be the previous and current errors of the outputs of the neural network, respectively. The criterion that decides whether the weights are updated is a comparison of these two errors. If $\Delta(n+1) < \Delta(n)$, all the variables except the keys are updated to their new values, and $\alpha$ is modified to $1.05\alpha$; otherwise, the new weights, output, keys (always constant), and error are left unchanged, and $\alpha$ is changed to $0.7\alpha$. These steps are carried out and repeated in each epoch, until the maximum number of epochs has been reached or the error becomes less than a preset threshold.

The keys obtained from the multiplicative neural network are first normalized before being used. The normalization is simply done by dividing the elements of the bias vector by the maximum value of the elements of the vector.
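A minimal sketch of this adaptive-learning-rate loop, together with the key normalization, is given below. step_fn (one epoch of backpropagation) and error_fn (the output error) are hypothetical placeholders for the project's training routines; the acceptance test $\Delta(n+1) < \Delta(n)$ follows the rule above.

import numpy as np

def train(step_fn, error_fn, params, alpha=0.1, max_epochs=1000, tol=1e-4):
    err = error_fn(params)
    for _ in range(max_epochs):
        candidate = step_fn(params, alpha)   # one epoch of backpropagation
        new_err = error_fn(candidate)
        if new_err < err:                    # error decreased: accept the step
            params, err = candidate, new_err
            alpha *= 1.05                    # alpha is modified to 1.05 * alpha
        else:                                # error did not decrease: reject
            alpha *= 0.7                     # alpha is changed to 0.7 * alpha
        if err < tol:                        # preset error threshold
            break
    return params

def normalize_key(bias_vector):
    # Divide every element by the maximum element of the vector.
    return bias_vector / np.max(bias_vector)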

The images that are fed into the neural network must all be of the same dimension, irrespective of whether they are training images or test images. For this project, images of various dimensions (256 x 256, 512 x 512, etc.) have been scaled down to a dimension of 50 x 50. The images of this specified dimension are then segmented into L sub-images (for the purpose of this project, L = 100).
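A sketch of this preprocessing step follows. The grayscale conversion, the [0, 1] pixel scaling, and the 5 x 5 block shape are assumptions: the section fixes only the 50 x 50 target size and L = 100, and cutting a 50 x 50 image into 100 equal square blocks yields 5 x 5 pixels per block.

import numpy as np
from PIL import Image

def preprocess(path, side=50, num_blocks=100):
    # Resize to side x side (grayscale assumed) and scale pixels to [0, 1].
    img = np.asarray(Image.open(path).convert('L').resize((side, side)),
                     dtype=float) / 255.0
    grid = int(np.sqrt(num_blocks))          # 10 blocks along each axis
    block = side // grid                     # 5 pixels per block
    # Cut the image into num_blocks square sub-images.
    return [img[r*block:(r+1)*block, c*block:(c+1)*block]
            for r in range(grid) for c in range(grid)]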
