Introduction: Method of Weighted Residuals (MWR)
Credit!
This chapter is based on lecture notes originally prepared by Prof. Keith A. Woodbury of The University of Alabama. Further details can also be found in the book by [Grandin, 1991].
Suppose we have a linear differential operator \(D\) acting on a function \(u\) to produce a function \(p\):

\[
D(u(x)) = p(x).
\]

We wish to approximate \(u\) by a function \(\tilde{u}\), which is a linear combination of basis functions chosen from a linearly independent set. That is,

\[
u \approx \tilde{u} = \sum_{i=1}^{n} a_i \varphi_i, \tag{55}
\]

where \(a_i\) are the coefficients of the \(n\) basis functions \(\varphi_i\). Now, when \(\tilde{u}\) is substituted into the differential operator \(D\), the result of the operations is not, in general, \(p(x)\). Hence an error or residual will exist:

\[
R(x) = D(\tilde{u}(x)) - p(x) \neq 0. \tag{56}
\]
The notion in the MWR is to force the residual to zero in some average sense over the domain. That is,

\[
\int_X R(x)\, W_i(x)\, dx = 0, \qquad i = 1, 2, \ldots, n, \tag{57}
\]

where the number of weight functions \(W_i\) is exactly equal to the number of unknown constants \(a_i\) in \(\tilde{u}\). The result is a set of \(n\) algebraic equations for the unknown constants \(a_i\). There are several MWR sub-methods, which differ in their choices of the weighting functions \(W\). Here are some popular examples:
Collocation Method.
Sub-domain Method.
Least Squares Method.
Galerkin Method.
Each of these will be explained below. Afterwards we will apply some of them in an example to solve the steady-state advection-diffusion equation.
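To make the sub-methods concrete, the sketches below work each of them out symbolically in Python with sympy. The model problem is our own illustrative choice, not the chapter's advection-diffusion example: \(D(u) = u'' + u = -x\) on \(0 \le x \le 1\) with \(u(0) = u(1) = 0\), whose exact solution is \(u = \sin x / \sin 1 - x\). The two basis functions are likewise assumptions, built so that \(\tilde{u}\) satisfies the boundary conditions for any coefficients.

```python
import sympy as sp

x, a1, a2 = sp.symbols("x a1 a2")

# Illustrative basis functions; each vanishes at x = 0 and x = 1,
# so the trial solution satisfies the boundary conditions exactly
phi = [x * (1 - x), x**2 * (1 - x)]

# Trial solution: a linear combination of the basis functions, as in (55)
u_t = a1 * phi[0] + a2 * phi[1]

# Residual of the model problem, as in (56):
# R = D(u_t) - p = u_t'' + u_t - (-x)
R = sp.diff(u_t, x, 2) + u_t + x
```

The snippets in the following sections reuse `x`, `a1`, `a2`, `phi`, and `R` from this setup.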
Collocation Method
In this method, the weighting functions are taken from the family of Dirac \(\delta\) functions in the domain. That is, \(W_i(x) = \delta(x - x_i)\). The Dirac \(\delta\) function has the property that

\[
\int_X f(x)\, \delta(x - x_i)\, dx = f(x_i).
\]

Hence the integration of the weighted residual statement results in the forcing of the residual to zero at specific points in the domain. That is, integration of equation (57) with \(W_i(x) = \delta(x - x_i)\) results in

\[
R(x_i) = 0, \qquad i = 1, 2, \ldots, n.
\]
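Reusing the setup above, a minimal collocation sketch: the residual is forced to zero at as many points as there are unknown coefficients. The collocation points \(x_1 = 1/3\) and \(x_2 = 2/3\) are an arbitrary illustrative choice.

```python
# Collocation: force R(x_i) = 0 at two chosen points in the domain
pts = [sp.Rational(1, 3), sp.Rational(2, 3)]
eqs = [R.subs(x, xi) for xi in pts]

# Two linear equations in the two unknown coefficients
coeffs = sp.solve(eqs, [a1, a2])
print(coeffs)
```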
Sub-domain Method
This method doesn’t use weighting factors explicitly, so it is not, strictly speaking, a member of the Weighted Residuals family. However, it can be considered a modification of the collocation method. The idea is to force the weighted residual to zero not just at fixed points in the domain, but over various subsections of the domain. To accomplish this, the weight functions are set to unity, and the integral over the entire domain is broken into a number of subdomains sufficient to evaluate all unknown parameters. That is,

\[
\int_{X_i} R(x)\, dx = 0, \qquad i = 1, 2, \ldots, n,
\]

where the subdomains \(X_i\) together cover the whole domain \(X\).
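A sketch of the sub-domain method under the same setup, taking the two halves of \([0, 1]\) as the (assumed) subdomains:

```python
# Sub-domain: force the integral of the residual to vanish
# over each half of the domain
eqs = [sp.integrate(R, (x, 0, sp.Rational(1, 2))),
       sp.integrate(R, (x, sp.Rational(1, 2), 1))]

coeffs = sp.solve(eqs, [a1, a2])
print(coeffs)
```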
Least Squares Method
If the continuous summation of all the squared residuals is minimized, the rationale behind the name can be seen. In other words, a minimum of

\[
S = \int_X R(x)\, R(x)\, dx = \int_X R^2(x)\, dx
\]

is sought. In order to achieve a minimum of this scalar function, the derivatives of \(S\) with respect to all the unknown parameters must be zero. That is,

\[
\frac{\partial S}{\partial a_i} = 2 \int_X R(x)\, \frac{\partial R}{\partial a_i}\, dx = 0, \qquad i = 1, 2, \ldots, n.
\]
Comparing with equation (57), the weight functions are seen to be

\[
W_i(x) = 2\, \frac{\partial R(x)}{\partial a_i}.
\]

However, the \(2\) can be dropped, since it cancels out in the equation. Therefore the weight functions for the Least Squares Method are just the derivatives of the residual with respect to the unknown constants:

\[
W_i(x) = \frac{\partial R(x)}{\partial a_i}.
\]
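A least-squares sketch under the same setup: the weights are computed symbolically as \(\partial R / \partial a_i\), and the weighted-residual integrals are forced to zero.

```python
# Least squares: the weight functions are dR/da_i
eqs = [sp.integrate(R * sp.diff(R, ai), (x, 0, 1)) for ai in (a1, a2)]

coeffs = sp.solve(eqs, [a1, a2])
print(coeffs)
```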
Galerkin Method
This method may be viewed as a modification of the Least Squares Method. Rather than using the derivative of the residual with respect to the unknowns \(a_i\), the derivative of the approximating function is used. That is, if the function is approximated as in equation (55), then the weight functions are

\[
W_i(x) = \frac{\partial \tilde{u}}{\partial a_i} = \varphi_i(x).
\]

Note that these are then identical to the original basis functions appearing in equation (55).
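A Galerkin sketch under the same setup, with the basis functions themselves serving as the weights:

```python
# Galerkin: the weight functions are the basis functions phi_i
eqs = [sp.integrate(R * phi_i, (x, 0, 1)) for phi_i in phi]

coeffs = sp.solve(eqs, [a1, a2])
print(coeffs)
```

Because the residual is linear in the unknown coefficients, each of the four sketches reduces to a two-by-two linear system; only the choice of weights differs.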