A Neural Network Approach to Parameterizing Nonlinear Interactions in Wind Wave Models

Vladimir Krasnopolsky, Dmitry V. Chalikov, and Hendrik L. Tolman

Ocean Modeling Branch, Environmental Modeling Center, NOAA/NCEP

5200 Auth Rd., Camp Springs, MD 20746; Vladimir.Krasnopolsky@noaa.gov


Ocean wind wave modeling for hind- and forecast purposes has been at the center of interest for many decades. Numerical prediction models are generally based on a form of the spectral energy or action balance equation
dF/dt = Sin + Snl + Sds + Ssw , (1)
where F is the spectrum, Sin is the input source term, Snl is the nonlinear interaction source term, Sds is the dissipation or 'whitecapping' source term, and Ssw represents additional (shallow water) source terms. The SWAMP study (SWAMP Group 1985) identified the critical role of the nonlinear interactions and the need for explicit modeling of Snl in wave models. State-of-the-art, or so-called third-generation, wave models therefore explicitly model this source term.

In its full form (e.g., Hasselmann and Hasselmann 1985), the calculation of the interactions Snl requires the integration of a six-dimensional Boltzmann integral. This integration requires roughly 10^3 to 10^4 times more computational effort than all other aspects of the wave model combined. Present operational constraints require that the computational effort for the estimation of Snl be of the same order of magnitude as that of the remainder of the wave model. This requirement was met with the development of the Discrete Interaction Approximation (DIA, Hasselmann et al. 1985).

The development of the DIA made possible the first third-generation wave model WAM (WAMDI Group 1988, Komen et al. 1994). Nearly two decades of experience with the WAM model and its derivatives have identified shortcomings of the DIA. The DIA tends to unrealistically increase the directional width of spectra, has a systematic spurious impact on the shape of the spectrum near the spectral peak frequency, and has a much too strong signature at high frequencies. In present third-generation wave models, these deficiencies can be countered at least in part by the dissipation source term Sds, which is generally used as the closure term in Eq. (1). Although this approach gives good results, it is counterproductive, because it prohibits the development of dissipation source terms based on solid physical considerations. As our understanding of the other source terms increases, this becomes an ever bigger obstacle to the further development of third-generation wave models.

Considering the above, it is of crucial importance for the development of third generation wave models to develop a cheap yet accurate approximation for Snl. We present here a Neural Network Interaction Approximation (NNIA) to achieve this goal.

Applying Neural Networks to nonlinear interactions

Neural networks (NNs) are well suited for a very broad class of nonlinear approximations and mappings. Neural networks consist of layers of uniform processing elements, called nodes, units, or neurons, connected according to a specific architecture or topology. A so-called multilayer perceptron is a simple architecture that is sufficient for any continuous nonlinear input-to-output mapping. The number of input neurons (n) in the input layer is equal to the dimension of the input vector X, and the number of output neurons (m) in the output layer is equal to the dimension of the output vector Y. A multilayer perceptron always has at least one hidden layer with k neurons. Without going into detail, NNs perform a nonlinear mapping of an input vector X ∈ R^n onto an output vector Y ∈ R^m
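As an illustration, a one-hidden-layer perceptron of this kind can be sketched in a few lines of NumPy. The dimensions n = 51, k = 30, and m = 64 anticipate the configuration used later in this paper; the random weights are purely illustrative placeholders for what training would supply:

```python
import numpy as np

def mlp_forward(X, W1, b1, W2, b2):
    """One-hidden-layer perceptron: Y = W2 @ tanh(W1 @ X + b1) + b2."""
    hidden = np.tanh(W1 @ X + b1)  # k hidden neurons with nonlinear activation
    return W2 @ hidden + b2        # m linear output neurons

# Illustrative, untrained weights; n inputs, k hidden neurons, m outputs
rng = np.random.default_rng(0)
n, k, m = 51, 30, 64
W1, b1 = rng.normal(size=(k, n)), np.zeros(k)
W2, b2 = rng.normal(size=(m, k)), np.zeros(m)
Y = mlp_forward(rng.normal(size=n), W1, b1, W2, b2)
print(Y.shape)  # (64,)
```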

Y = fNN(X) (2)

where fNN denotes the neural network mapping, which nonlinearly links each individual component of Y to all components of X (e.g., Chen 1996). It has been shown (e.g., Chen and Chen 1995a,b, Hornik 1991, Funahashi 1989, Cybenko 1989) that a NN with one hidden layer can approximate any continuous mapping defined on compact sets in R^n.

Thus, any problem which can be mathematically reduced to a nonlinear mapping can be solved using a NN. NN solutions for different problems will differ in several important ways. For each particular problem, n and m are determined by the dimensions of the input and output vectors X and Y. The number of hidden neurons k should be determined taking into account the complexity of the problem: the more complicated the mapping, the more hidden neurons are required (Attali and Pagès 1997). After these topological parameters are defined, the weights and biases can be found using a procedure called NN training (e.g., Beale and Jackson 1990, Chen 1996). Although NN training is often time consuming, NN application is not. After the training is finished (it usually needs to be performed only once), each application of the trained NN is practically instantaneous and yields an estimate of Eq. (2).
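Training amounts to adjusting the weights and biases so that the NN output matches the target vectors. A minimal gradient-descent step on the squared error for a one-hidden-layer tanh network might look like this (a sketch only; the optimizer actually used is not specified here):

```python
import numpy as np

def train_step(X, Y, W1, b1, W2, b2, lr=0.05):
    """One gradient-descent step on the squared error for a one-hidden-layer
    tanh network, with backpropagation written out by hand."""
    H = np.tanh(W1 @ X + b1)          # forward pass, hidden layer
    err = (W2 @ H + b2) - Y           # output error
    dH = (W2.T @ err) * (1.0 - H**2)  # backpropagate through tanh
    return (W1 - lr * np.outer(dH, X), b1 - lr * dH,
            W2 - lr * np.outer(err, H), b2 - lr * err)
```

Repeating such steps over all training pairs until the error stops decreasing yields the trained weights; in practice, more sophisticated optimizers are normally used.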

The nonlinear interaction source term can be considered as a nonlinear mapping of the continuous spectrum F onto the continuous source term Snl

Snl = T(F) , (3)

where T is the exact nonlinear operator given by the full Boltzmann interaction integral (Hasselmann and Hasselmann 1985, Resio and Perrie 1991). Discretization of Snl and F (as is necessary in any numerical approach) reduces (3) to a continuous mapping of two vectors of finite dimensions.

In order to convert the mapping (3) into a continuous mapping of two finite vectors (independent of the actual spectral discretization), the spectrum F and the source function Snl are expanded using two systems of two-dimensional functions, Φi and Ψq, each of which forms a complete and orthogonal two-dimensional basis
F = Σi=1,...,n xi Φi ,   Snl = Σq=1,...,m yq Ψq , (4)
where for xi and yq we have

xi = ∬ F Φi df dθ ,   yq = ∬ Snl Ψq df dθ , (5)

where the double integral denotes integration over the spectral space. Because both sets of basis functions {Φi}i=1,...,∞ and {Ψq}q=1,...,∞ are complete, increasing n and m in (4) improves the accuracy of the approximation, and any spectrum F and source function Snl can be approximated by (4) with the required accuracy. Substituting (4) into Eq. (3) we obtain

Y = T (X) , (6)

which represents a continuous mapping of the finite vectors X ∈ R^n and Y ∈ R^m, and where T still represents the full nonlinear interaction operator. As described in the previous section, this operator can be approximated with a NN with n inputs, m outputs, and k neurons in the hidden layer

Y ≈ TNN (X) . (7)

The accuracy of this approximation (TNN) is determined by k, and can generally be improved by increasing k (see above).

To train the NN approximation TNN of T, a training set has to be created which consists of pairs of vectors X and Y. To create this training set, a representative set of spectra Fp has to be generated with corresponding (exact) interactions Snl,p. For each pair (F, Snl)p, the corresponding vectors (X,Y)p are determined using Eq. (5). All pairs of vectors are then used to train the NN to obtain TNN.
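Assuming discretized spectra on a uniform (f, θ) grid and orthonormal basis arrays, the construction of the training pairs could be sketched as follows; the rectangle-rule quadrature and all names here are illustrative, not the implementation actually used:

```python
import numpy as np

def project(field, basis, df, dtheta):
    """Inner products of Eq. (5) by rectangle-rule quadrature: `field` has
    shape (nf, ntheta); `basis` has shape (n_basis, nf, ntheta)."""
    return np.sum(field[None, :, :] * basis, axis=(1, 2)) * df * dtheta

def build_training_set(spectra, interactions, basis_phi, basis_psi, df, dtheta):
    """Turn each (F, Snl) pair into a coefficient pair (X, Y) for NN training."""
    X = np.stack([project(F, basis_phi, df, dtheta) for F in spectra])
    Y = np.stack([project(S, basis_psi, df, dtheta) for S in interactions])
    return X, Y
```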

After TNN has been obtained by training, the resulting NN Interaction Approximation (NNIA) algorithm consists of three steps: (i) decomposition of the input spectrum F into the coefficient vector X using Eq. (5); (ii) estimation of Y from X with the trained NN, Eq. (7); and (iii) composition of the source function Snl from Y using the expansion (4).
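Under simple assumptions (orthonormal basis arrays on a uniform grid, illustrative names throughout), this three-step algorithm can be sketched as:

```python
import numpy as np

def nnia_estimate(F, basis_phi, basis_psi, nn_apply, df, dtheta):
    """Three-step NNIA sketch: decompose F, apply the trained NN, compose Snl.
    `nn_apply` stands in for the trained mapping T_NN of Eq. (7)."""
    # Step 1: expansion coefficients X via the inner products of Eq. (5)
    X = np.sum(F[None, :, :] * basis_phi, axis=(1, 2)) * df * dtheta
    # Step 2: practically instantaneous evaluation of the trained NN
    Y = nn_apply(X)
    # Step 3: compose the source term from the Psi expansion, Eq. (4)
    return np.tensordot(Y, basis_psi, axes=1)
```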

The above describes the general procedure for applying an NNIA. Development of an actual NNIA requires the following steps: (i) the choice of the systems of basis functions Φi and Ψq; (ii) the generation of a representative set of spectra with corresponding exact interactions; (iii) the choice of the NN topology (n, m, and k); and (iv) the training of the NN.

The first three points all have a significant impact on both the accuracy and the economy of an NNIA. Unfortunately, there is no pre-defined way to tackle these issues, so it is unavoidable that the development of an NNIA involves many iterations. The first requirement for an NNIA to be potentially useful in operational wave modeling is that the exact interactions Snl are closely reproduced at computational costs comparable to those of the DIA. The following section shows the potential of this approach with the design of a simple ad-hoc NNIA.


To address the basic feasibility of an NNIA, we have considered an NNIA that estimates the nonlinear interactions Snl(f,θ) as a function of frequency f and direction θ from the corresponding spectrum F(f,θ). Initially, we consider deep water only. To train and test this NNIA, we used a set of about 20,000 simulated realistic spectra F(f,θ) and the corresponding exact estimates of Snl(f,θ) (Van Vledder et al. 2000). The simulation was performed using a generator that composes a spectrum of several Pierson-Moskowitz (1964) spectra with different peak frequencies, oriented randomly in the [0, 2π] interval. Comparison of the simulated spectra with spectra produced by the WAVEWATCH model (Tolman 1999, Tolman and Chalikov 1996) shows that this approach allowed us to simulate sufficiently realistic and complicated spectra describing a broad range of wave systems. Spectra with four peaks were used in the calculations below. Separate data sets were generated for training and validation.
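A generator of this kind might be sketched as follows. The Pierson-Moskowitz frequency spectrum is standard, but the peak-frequency range and the cos² directional spreading are our illustrative assumptions, since the exact settings of the generator are not reproduced here:

```python
import numpy as np

G = 9.81        # gravitational acceleration [m/s^2]
ALPHA = 8.1e-3  # Phillips constant

def pm_spectrum(f, fp):
    """Pierson-Moskowitz (1964) frequency spectrum with peak frequency fp;
    f must be a grid of strictly positive frequencies [Hz]."""
    return ALPHA * G**2 * (2 * np.pi)**-4 * f**-5.0 * np.exp(-1.25 * (fp / f)**4)

def simulated_spectrum(f, theta, n_peaks=4, seed=None):
    """Sum of n_peaks PM spectra with random peak frequencies and random mean
    directions in [0, 2*pi); cos^2 spreading is an illustrative choice."""
    rng = np.random.default_rng(seed)
    F = np.zeros((f.size, theta.size))
    for _ in range(n_peaks):
        fp = rng.uniform(0.08, 0.25)        # illustrative peak-frequency range
        tm = rng.uniform(0.0, 2 * np.pi)    # random mean direction
        D = np.cos(0.5 * (theta - tm))**2   # directional spreading
        D /= D.sum() * (theta[1] - theta[0])  # normalize over direction
        F += pm_spectrum(f, fp)[:, None] * D[None, :]
    return F
```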

As is common in parametric spectral descriptions, we choose separable basis functions in which the frequency and angular dependence are separated. For Φi this implies

Φi(f,θ) = φf(f) φθ(θ) .
A similar separation is used for Ψq. Considering the strongly suppressed behavior of F and Snl for f → 0 and their exponentially decaying asymptotics for f → ∞, generalized Laguerre polynomials (Abramowitz and Stegun 1964) are used to define φf and ψf. Considering that no directional preferences exist in F and Snl, a Fourier decomposition is used for φθ and ψθ. The numbers of basis functions are chosen as n = 51 and m = 64 to keep the accuracy of the approximation on average better than 2% for F and better than 5-6% for Snl. The number of hidden neurons was taken as k = 30, which allows a satisfactory approximation (7) of the mapping (6).
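Such separable basis functions could be sketched as below, using the standard three-term recurrence for generalized Laguerre polynomials; the weight exponent alpha = 2 and the index layout are our assumptions, not the paper's actual choices:

```python
import math
import numpy as np

def genlaguerre_val(n, alpha, x):
    """Generalized Laguerre polynomial L_n^(alpha)(x) via the standard
    three-term recurrence."""
    L_prev = np.ones_like(x)
    if n == 0:
        return L_prev
    L_curr = 1.0 + alpha - x
    for k in range(1, n):
        L_prev, L_curr = L_curr, ((2 * k + 1 + alpha - x) * L_curr
                                  - (k + alpha) * L_prev) / (k + 1)
    return L_curr

def laguerre_function(n, alpha, x):
    """Orthonormal Laguerre function on [0, inf): vanishes as x -> 0 (for
    alpha > 0) and decays exponentially as x -> inf, matching the asymptotic
    behavior of F and Snl in frequency."""
    norm = math.sqrt(math.gamma(n + 1) / math.gamma(n + alpha + 1))
    return norm * genlaguerre_val(n, alpha, x) * x**(alpha / 2) * np.exp(-x / 2)

def fourier_mode(j, theta):
    """Orthonormal Fourier mode on [0, 2*pi): j = 0 constant, j > 0 cosine,
    j < 0 sine."""
    if j == 0:
        return np.full_like(theta, 1.0 / math.sqrt(2 * math.pi))
    if j > 0:
        return np.cos(j * theta) / math.sqrt(math.pi)
    return np.sin(-j * theta) / math.sqrt(math.pi)

def separable_basis(i, j, x, theta, alpha=2.0):
    """Phi(f, theta) = phi_f(f) * phi_theta(theta)."""
    return laguerre_function(i, alpha, x)[:, None] * fourier_mode(j, theta)[None, :]
```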

Table 1. RMSE statistics for 10,000 Snl


Table 1 compares three important statistics for the source function RMS errors (with respect to the exact solution) calculated using the DIA and the NNIA for 10,000 spectra (an independent validation set). The NNIA improves the accuracy by about a factor of two compared with the DIA.

Fig. 1. RMSE as a function of frequency f and angle θ. Dashed line: error of approximation (lower bound for all other errors); solid line: DIA; line with squares: NNIA (51:20:64); line with triangles: NNIA (51:30:64).

Figure 1 shows the mean RMSE as a function of frequency f (left) and angle θ (right). The numbers in Table 1 correspond to the NNIA with 30 neurons in the hidden layer (51:30:64).

Fig. 2 shows three pairs (one row in the figure corresponds to one pair) of one-dimensional source functions from the validation data set: Snl(f), integrated over θ (left column), and Snl(θ), integrated over f (right column). Thick solid curves correspond to the exact Snl, dashed curves to the DIA estimate of Snl, and curves with triangles to the NNIA estimate of Snl. Numbers inside the panels show the DIA and NNIA errors in percent with respect to the exact solution.

Fig. 2. See explanation in the text above.

The results in Fig. 2 are fairly representative of the validation data set. In general, the NNIA reproduces the exact Snl accurately. Even if clear oscillations are present in the decomposed spectrum (e.g., the line in the middle panel on the left), the NNIA shows no spurious oscillations and gives reasonable results. Note that many DIA source functions exhibit complicated behavior and spurious oscillations: major peaks in these functions coexist with more or less random small-scale fluctuations. These fluctuations are probably an artifact of the simplified nature of the DIA. The exact interactions are the result of averaging over a much larger number of resonant sets of wave numbers, and are therefore much smoother than the results of the DIA.


The present NNIA calculations double the accuracy of the Snl calculations while requiring roughly 3 times more computational effort than the DIA calculations, with less than 5% of this time spent in the actual NN part of the algorithm [i.e., Eq. (7)]; decomposition of the input spectra F and composition of the source function Snl from the NN output take the rest. Considering that no optimization has yet been attempted in the development of the NNIA, in particular in the composition and decomposition procedures, it appears reasonable to expect a final NNIA algorithm with computational requirements similar to those of the DIA.

Having established that an NNIA has the potential of being both accurate and efficient, we intend to take the following steps towards developing a NNIA for application in operational wave models.

  1. Optimize the NNIA by successively incorporating physical properties into the basis functions, normalizing F and Snl, optimizing the number of basis functions and the network topology, and optimizing numerical aspects of the decomposition and composition algorithms.
  2. Expand the NNIA to arbitrary water depths, either by expanding the underlying NN or by scaling as in the DIA.
  3. The NN approach in principle allows us to selectively suppress aspects of the nonlinear interactions by filtering the training sets accordingly. This might give us the opportunity to artificially increase the time scales of the processes that stabilize the spectral shape in the equilibrium range of the spectrum. If this is done properly, an NNIA with identical physical properties, but with much smoother numerical integration properties might be obtained. If time permits, we intend to experiment with this possibility.


We thank Gerbrant Ph. Van Vledder for providing us with a code for calculating the exact nonlinear interaction.

We also thank ONR for supporting this research project.


Abramowitz, M. and I. A. Stegun, Editors, 1964: Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables. National Bureau of Standards

Attali J-G. and G. Pagès, 1997: Approximations of Functions by a Multilayer Perceptron: A New Approach., Neural Networks, Vol. 6, pp. 1069-81

Beale, R. and T. Jackson, 1990: Neural Computing: An Introduction, Adam Hilger, Bristol, Philadelphia and New York, 240 pp.

Chen, C.H. (Editor in Chief), 1996: Fuzzy Logic and Neural Network Handbook, McGraw-Hill, New York

Chen, T., and H. Chen, 1995a: Approximation Capability to Functions of Several Variables, Nonlinear Functionals and Operators by Radial Basis Function Neural Networks, Neural Networks, 6, pp. 904-910,

-----, and -----, 1995b: Universal Approximation to Nonlinear Operators by Neural Networks with Arbitrary Activation Function and Its Application to Dynamical Systems, Neural Networks, 6, pp. 911-917

Funahashi, K., 1989: On the Approximate Realization of Continuous Mappings by Neural Networks, Neural Networks, 2, pp. 183-192

Cybenko, G., 1989: Approximation by Superposition of Sigmoidal Functions, Mathematics of Control, Signals and Systems, 2, No. 4, pp. 303-314

Hasselmann, S., and K. Hasselmann, 1985: Computations and parametrizations of the nonlinear energy transfer in a gravity wave spectrum. Part I: a new method for efficient computations of the exact nonlinear transfer integral. J. Phys. Oceanogr., 15, 1369-1377

-----, -----, J.A. Allender, and T.P. Barnett, 1985: Computations and parametrizations of the nonlinear energy transfer in a gravity wave spectrum. Part II: parametrization of the nonlinear transfer for application in wave models. J. Phys. Oceanogr., 15, 1378-1391

Hornik, K., 1991: Approximation Capabilities of Multilayer Feedforward Network, Neural Networks, Vol. 4, pp. 251-257

Komen, G.J., et al., 1994: Dynamics and Modelling of Ocean Waves, Cambridge University press, Cambridge, 532 pp.

Resio, D.T. and W. Perrie, 1991: A numerical study of nonlinear energy fluxes due to wave-wave interactions. Part 1: methodology and basic results. J. Fluid Mech., 223, 603-629

SWAMP Group, 1985: Ocean Wave Modeling, Plenum Press, 256 pp.

Tolman, H.L., 1999: User manual and system documentation of WAVEWATCH-III version 1.18. NOAA/NWS/NCEP/OMB Technical Note 166, 110 pp.

Tolman, H.L., and D.V. Chalikov, 1996: Source terms in a third-generation wind wave model. J. Phys. Oceanogr., 26, 2497-2518.

Van Vledder, G.Ph., T.H.C. Herbers, B. Jensen, D.T. Resio, and B. Tracy, 2000: Modelling of nonlinear quadruplet wave-wave interactions in operational coastal wave models. Abstract, accepted for presentation at ICCE 2000, Sydney

WAMDI Group, 1988: The WAM model - a third generation ocean wave prediction model. J. Phys. Oceanogr., 18, 1775-1810.


OMB Contribution No: 192

Submitted to: IEEE-INNS-ENNS International Joint Conference on Neural Networks, Como, Italy, 2000