How to make sigma-pi neural networks perform perfectly on regular training sets

Quote

B. Lenze, "How to make sigma-pi neural networks perform perfectly on regular training sets," Neural Networks, vol. 7, no. 8, pp. 1285-1293, 1994.

Content

In this paper, we show how to design three-layer feedforward neural networks with sigma-pi units in the hidden layer so that they perform perfectly on regular training sets. We obtain real-time design schemes based on massively parallel sampling and induced by so-called hyperbolic cardinal translation-type interpolation operators. The real-time nature of our strategy stems from the fact that, in neural network terms, our approach is nothing other than a very general and efficient one-shot learning scheme. Moreover, because of the special hyperbolic structure of our sigma-pi units, we avoid the dramatic increase in parameters and weights that typically occurs with higher-order networks. The final networks are of manageable complexity and may be applied to multigroup discriminant problems, pattern recognition, and image processing. In detail, the XOR problem and a special multigroup discriminant problem are discussed at the end of the paper.
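As an illustrative sketch of the general idea (not the paper's hyperbolic construction), a sigma-pi unit computes a weighted sum of input *products*. Because XOR(x1, x2) = x1 + x2 - 2*x1*x2 is itself such an expression, a single hand-set sigma-pi unit performs perfectly on the XOR training set, which no single first-order (sigma) unit can do. The function and weight names below are assumptions for illustration only:

```python
from itertools import product

def sigma_pi_unit(x, terms):
    """Evaluate sum_k w_k * prod_{i in I_k} x[i].

    `terms` is a list of (weight, index_tuple) pairs; an empty index
    tuple would contribute a constant bias term.
    """
    total = 0.0
    for w, idx in terms:
        p = 1.0
        for i in idx:
            p *= x[i]
        total += w * p
    return total

# XOR(x1, x2) = x1 + x2 - 2*x1*x2, written as sigma-pi terms:
# weight 1 on x1, weight 1 on x2, weight -2 on the product x1*x2.
xor_terms = [(1.0, (0,)), (1.0, (1,)), (-2.0, (0, 1))]

for x in product([0, 1], repeat=2):
    assert sigma_pi_unit(x, xor_terms) == (x[0] ^ x[1])
```

Here the weights were set in closed form from the training set rather than learned iteratively, which is the sense in which such designs act as one-shot learning schemes.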
