
Laboratory 8

Slides for this presentation are available here: 10.FiniteElements2D.ppt.

Finite Elements in Two Dimensions

This derivation is based on Glyn James's derivation in Advanced Modern Engineering Mathematics, 3rd Ed., on pp. 733-734.

Background

Note: the background section needs only a quick skim to understand the technique. The Laboratory section below demonstrates the software you will use.

Applying the method of finite elements in two dimensions requires an approach similar to the one-dimensional case. For each interior point, we will define a corresponding test function (again, due to Galerkin) and we will use that test function to generate a linear equation; collecting one such equation for each interior point gives a system which we can then solve.

For example, consider the region shown in Figure 1.


Figure 1. A non-symmetric region with its solution defined on the boundary.

This region is not symmetric in any direction and therefore it is not possible to simplify the problem. It is also not well suited for a grid. For example, Figure 2 shows a grid of 38 points of which 18 are on the boundary.


Figure 2. A poor attempt to impose a grid of 38 points on the region shown in Figure 1.

Consequently, this region is a good candidate for a tessellation and finite elements. We can define a number of points, some of which lie on the boundary while others fall in the interior. One such example is shown in Figure 3, which has 22 interior points and 16 boundary points. The values at the boundary points are given in the boundary value problem; our goal will be to approximate $u_k \approx u({\bf x}_k)$ at the interior points. Note that each value we are finding is a scalar and not a vector.


Figure 3. The region with 38 points, 22 within the interior.

In this example, we will attempt to come up with a system of 22 equations in the 22 unknown values which are demonstrated in Figure 4.


Figure 4. The given boundary and the approximations of $u_k \approx u({\bf x}_k)$ we wish to find on the interior of the region.

In order to achieve this, we will first create a tessellation, as is shown in Figure 5. Because the tessellation forms a graph, such an object would be stored as a graph data structure and consequently we will refer to the points as vertices and the connections as edges. Those vertices which appear on the boundary will be referred to as boundary vertices while the balance will be referred to as interior vertices.


Figure 5. A tessellation based on our 38 vertices of the region.

In order to demonstrate this technique, we will select one interior vertex and call it ${\bf x}_k$; however, to generate the system of linear equations, we must apply what follows to each interior vertex. This one interior vertex ${\bf x}_k$ is connected to $n_k$ other vertices (in this case, 6), some of which may be boundary vertices while others will be interior vertices. The $n_k$ surrounding triangles will be defined as the region $R_k$. One such region is selected in Figure 6. (It happens to be ${\bf x}_{10}$ in our original labelling, but that is irrelevant.)


Figure 6. Selecting one interior vertex ${\bf x}_k$ and its adjacent (in this case) six vertices and triangles.

We will label the six surrounding vertices ${\bf x}_{k,1}$ through ${\bf x}_{k,6} = {\bf x}_{k,n_k}$ and we will assume that they are numbered in order around ${\bf x}_k$. Any reasonably straightforward graph algorithm (as taught in ECE 250 Algorithms and Data Structures) can list the vertices adjacent to a given vertex. We will denote the boundary of the region by $\partial R_k$.
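As a rough illustration only (the data layout here is an assumption and not part of the supplied files), the tessellation could be stored as a symmetric adjacency matrix, in which case listing the adjacent vertices is a one-line query:

% Sketch (assumed data layout): store the tessellation as a symmetric
% adjacency matrix A, where A(i,j) == 1 whenever vertices i and j share an edge.
n = 7;                                    % number of vertices in a small illustrative graph
A = zeros( n, n );
edges = [1 2; 1 3; 2 3; 2 6; 3 4; 3 6];   % an illustrative edge list
for e = 1:size( edges, 1 )
    A( edges(e,1), edges(e,2) ) = 1;      % edges are undirected, so set
    A( edges(e,2), edges(e,1) ) = 1;      % both (i,j) and (j,i)
end
adjacent = find( A(3,:) )                 % lists the vertices adjacent to vertex 3

Note that ordering the adjacent vertices around ${\bf x}_k$ additionally requires the geometry, for example, sorting them by the angle they make with ${\bf x}_k$.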


Figure 7. Focusing on the vertex ${\bf x}_k$ and its surrounding region.

In addition, we will label the surrounding triangles. Thus, the region $R_k$ is the union of the triangular regions $R_{k,1}$ through $R_{k,6}$. We will assume that the region $R_{k,\lambda}$ has the vertices ${\bf x}_k$, ${\bf x}_{k,\lambda}$, and ${\bf x}_{k,\lambda + 1}$.


Figure 8. Labelling of the surrounding triangles.

Selection of the Test Function

Just as we did previously, we will define the function $V = \nabla^2 u - \frac{\rho}{\epsilon_0}$ which can also be written as

$V(x,y) = \frac{\partial^2}{\partial x^2}u(x,y) + \frac{\partial^2}{\partial y^2}u(x,y) - \frac{\rho(x,y)}{\epsilon_0}$.

We will choose the test function $\phi_k(x,y)$ shown in Figure 9, which satisfies $\phi_k(x_k,y_k) = 1$ and decreases linearly to zero as it approaches the boundary $\partial R_k$.


Figure 9. The test function $\phi_k(x,y)$ centred at ${\bf x}_k = (x_k, y_k)$.

From our previous laboratory, we know that we require that

$\int\!\!\!\int_{R_k} V(x,y) \phi_k(x,y) dx dy = 0$.

If we substitute the function $V(x,y)$ into this equation, we get

$\int\!\!\!\int_{R_k} \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} - \frac{\rho}{\epsilon_0} \right) \phi_k dx dy = 0$

and using the rules of integration, we have

$\int\!\!\!\int_{R_k} \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} - \frac{\rho}{\epsilon_0} \right ) \phi_k dx dy = \int\!\!\!\int_{R_k} \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) \phi_k dx dy - \int\!\!\!\int_{R_k} \frac{\rho}{\epsilon_0} \phi_k dx dy = 0$

where the second integral may be easily calculated because the density $\rho$ is a given function. In the special case where $\rho$ is constant (let $\frac{\rho}{\epsilon_0} = \rho_0$), the equation simplifies to

$\int\!\!\!\int_{R_k} \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} - \frac{\rho}{\epsilon_0} \right ) \phi_k dx dy = \int\!\!\!\int_{R_k} \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) \phi_k dx dy - \rho_0\frac{A(R_k)}{3} = 0$

(1)

where $A(R_k)$ is the area of the region $R_k$ (recalling that the volume of a cone with base $B$ and height $h$ is $\frac{1}{3} h A(B)$).
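To see where the last term comes from: with constant $\rho$, the test function $\phi_k$ is a pyramid-shaped "cone" of height 1 over the base $R_k$, so

$\int\!\!\!\int_{R_k} \frac{\rho}{\epsilon_0} \phi_k \, dx \, dy = \rho_0 \int\!\!\!\int_{R_k} \phi_k \, dx \, dy = \rho_0 \cdot \frac{1}{3} \cdot 1 \cdot A(R_k) = \rho_0 \frac{A(R_k)}{3}$.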

At this point, we have run into the same wall as before: we have a second derivative which is almost impossible to approximate! In the one-dimensional case, we used integration by parts. We will use a similar trick here, but it requires an observation:

$\frac{\partial}{\partial x} \left( \phi_k \frac{\partial u}{\partial x} \right ) = \frac{\partial \phi_k}{\partial x} \frac{\partial u}{\partial x} + \phi_k \frac{\partial^2 u}{\partial x^2}$

and

$\frac{\partial}{\partial y} \left( \phi_k \frac{\partial u}{\partial y} \right ) = \frac{\partial \phi_k}{\partial y} \frac{\partial u}{\partial y} + \phi_k \frac{\partial^2 u}{\partial y^2}$.

It therefore follows that we may write

$\left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right ) \phi_k = \frac{\partial}{\partial x} \left( \phi_k \frac{\partial u}{\partial x} \right ) + \frac{\partial}{\partial y} \left( \phi_k \frac{\partial u}{\partial y} \right ) - \frac{\partial \phi_k}{\partial x} \frac{\partial u}{\partial x} - \frac{\partial \phi_k}{\partial y} \frac{\partial u}{\partial y} $.

Integrating this over the region $R_k$, we have the two integrals

$\int\!\!\!\int_{R_k} \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right ) \phi_k dx dy = \int\!\!\!\int_{R_k} \left( \frac{\partial}{\partial x} \left( \phi_k \frac{\partial u}{\partial x} \right ) + \frac{\partial}{\partial y} \left( \phi_k \frac{\partial u}{\partial y} \right )\right) dx dy - \int\!\!\!\int_{R_k} \left( \frac{\partial \phi_k}{\partial x} \frac{\partial u}{\partial x} + \frac{\partial \phi_k}{\partial y} \frac{\partial u}{\partial y} \right)dx dy$.

Let us examine the first integral:

$\int\!\!\!\int_{R_k} \left( \frac{\partial}{\partial x} \left( \phi_k \frac{\partial u}{\partial x} \right ) + \frac{\partial}{\partial y} \left( \phi_k \frac{\partial u}{\partial y} \right )\right) dx dy$.

Recall Green's theorem which says

$\oint_{\partial R} {\bf F} \cdot d{\bf s} = \int\!\!\!\int_R \left( \frac{ \partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} \right) dx dy$

and thus, we have

$\int\!\!\!\int_{R_k} \left( \frac{\partial}{\partial x} \left( \phi_k \frac{\partial u}{\partial x} \right ) + \frac{\partial}{\partial y} \left( \phi_k \frac{\partial u}{\partial y} \right )\right) dx dy = \oint_{\partial R_k} \left(-\phi_k \frac{\partial u}{\partial y}, \phi_k \frac{\partial u}{\partial x} \right ) \cdot d{\bf s}$.
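To verify that this is the correct choice of vector field, take ${\bf F} = (F_1, F_2) = \left( -\phi_k \frac{\partial u}{\partial y}, \phi_k \frac{\partial u}{\partial x} \right)$; then

$\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} = \frac{\partial}{\partial x}\left( \phi_k \frac{\partial u}{\partial x} \right ) + \frac{\partial}{\partial y}\left( \phi_k \frac{\partial u}{\partial y} \right )$,

which is exactly the integrand on the left-hand side.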

By our choice of test function, $\phi_k$ is zero at each point on the boundary $\partial R_k$, so the vector field vanishes there, and consequently, we have

$\int\!\!\!\int_{R_k} \left( \frac{\partial}{\partial x} \left( \phi_k \frac{\partial u}{\partial x} \right ) + \frac{\partial}{\partial y} \left( \phi_k \frac{\partial u}{\partial y} \right )\right) dx dy = \oint_{\partial R_k} \left(0, 0 \right ) \cdot d{\bf s} = 0$.

Thus, just as in the one-dimensional case (see Equation 8.4), where we used the fundamental theorem of calculus to remove a similar term, here we used Green's theorem to remove the intractable term. What we have left is

$\int\!\!\!\int_{R_k} \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right ) \phi_k dx dy = -\int\!\!\!\int_{R_k} \left( \frac{\partial \phi_k}{\partial x} \frac{\partial u}{\partial x} + \frac{\partial \phi_k}{\partial y} \frac{\partial u}{\partial y} \right)dx dy$.

Just as we used piecewise-linear test functions in one dimension, we use piecewise-planar test functions in two dimensions. To calculate the remaining integral, we must break it apart: because the $n_k$ triangular regions are non-overlapping, we may rewrite the right-hand side as

$\int\!\!\!\int_{R_k} \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right ) \phi_k dx dy = -\sum_{\lambda = 1}^{n_k} \int\!\!\!\int_{R_{k,\lambda}} \left( \frac{\partial \phi_k}{\partial x} \frac{\partial u}{\partial x} + \frac{\partial \phi_k}{\partial y} \frac{\partial u}{\partial y} \right)dx dy$

(2)

where $n_k$ is the number of vertices adjacent to the vertex ${\bf x}_k$. We will proceed by looking at how we can calculate the integral over one of these regions, namely, $R_{k,1}$.

Figure 10 shows the test function defined on the region $R_{k,1}$.


Figure 10. The test function $\phi_k$ on the region $R_{k,1}$.

This function is of the form $\phi_{k,1} = \alpha_{k,1}x + \beta_{k,1}y + \gamma_{k,1}$ and consequently, we may replace the two partial derivatives:

$\int\!\!\!\int_{R_{k,1}} \left( \frac{\partial \phi_{k,1}}{\partial x} \frac{\partial u}{\partial x} + \frac{\partial \phi_{k,1}}{\partial y} \frac{\partial u}{\partial y} \right) dx dy = \int\!\!\!\int_{R_{k,1}} \left( \alpha_{k,1} \frac{\partial u}{\partial x} + \beta_{k,1} \frac{\partial u}{\partial y} \right)dx dy$.

We now must calculate the two other partial derivatives. Recall that we are attempting to approximate the solution at the three points which form the corners of the region $R_{k,1}$. This is shown in Figure 11.


Figure 11. The actual function $u(x)$ on the region $R_{k,1}$.

We can find a function of the form $u(x,y) \approx a_{k,1}x + b_{k,1} y + c_{k,1}$ on that region as is shown in Figure 12.


Figure 12. Approximating the solution $u(x)$ on the region $R_{k,1}$ with a planar function.

Thus, we may rewrite the equation as

$\int\!\!\!\int_{R_{k,1}} \left( \frac{\partial \phi_{k,1}}{\partial x} \frac{\partial u}{\partial x} + \frac{\partial \phi_{k,1}}{\partial y} \frac{\partial u}{\partial y} \right) dx dy = \int\!\!\!\int_{R_{k,1}} \left( \alpha_{k,1} a_{k,1} + \beta_{k,1} b_{k,1} \right)dx dy$

and because the integrand is now a constant, we may rewrite this as

$\int\!\!\!\int_{R_{k,1}} \left( \frac{\partial \phi_{k,1}}{\partial x} \frac{\partial u}{\partial x} + \frac{\partial \phi_{k,1}}{\partial y} \frac{\partial u}{\partial y} \right) dx dy = \left( \alpha_{k,1} a_{k,1} + \beta_{k,1} b_{k,1} \right)A( R_{k,1})$

(3)

where $A( R_{k,1})$ is the area of the region $R_{k,1}$. The Greek letters are the slopes associated with the function $\phi_{k,1}$ while the Latin letters are associated with the function $u$.

Finding The Values a, α, b, β, and A(R)

For simplicity, let us assume we are working with the points $(x_1, y_1), (x_2, y_2), (x_3, y_3)$. The area of the triangle defined by these points is given by the formula

$\frac{1}{2}\left | \det \begin{pmatrix} x_1 & y_1 & 1 \cr x_2 & y_2 & 1 \cr x_3 & y_3 & 1 \end{pmatrix} \right |$.
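As a quick Matlab sketch of this formula (the variable names and points are illustrative only and are not part of the supplied files):

% Sketch: the area of the triangle with vertices (0,0), (2,0) and (0,1).
P = [ 0 0; 2 0; 0 1 ];            % one vertex per row
D = [ P, ones( 3, 1 ) ];          % the 3-by-3 matrix shown above
A = abs( det( D ) )/2             % area = 1 for this triangle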

The plane of the form $\alpha x + \beta y + \gamma$ which describes $\phi_{k,1}$ may be found by solving

$\begin{pmatrix} x_1 & y_1 & 1 \cr x_2 & y_2 & 1 \cr x_3 & y_3 & 1 \end{pmatrix}\begin{pmatrix} \alpha \cr \beta \cr \gamma \end{pmatrix} = \begin{pmatrix} 1 \cr 0 \cr 0 \end{pmatrix}$.

The plane interpolating the points on the function $a x + b y + c$ which describes $u(x)$ may be found by solving

$\begin{pmatrix} x_1 & y_1 & 1 \cr x_2 & y_2 & 1 \cr x_3 & y_3 & 1 \end{pmatrix}\begin{pmatrix} a \cr b \cr c \end{pmatrix} = \begin{pmatrix} u_1 \cr u_2 \cr u_3 \end{pmatrix}$.
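In Matlab, both 3 × 3 systems could be solved with the backslash operator; the following is a sketch with assumed sample values, not part of the supplied files:

% Sketch: the planar test function and the planar interpolant on one triangle.
P = [ 0 0; 2 0; 0 1 ];            % triangle vertices, the first being the central vertex
u = [ 1.5; 2.0; 0.5 ];            % assumed values of u at the three vertices
M = [ P, ones( 3, 1 ) ];
abg = M \ [ 1; 0; 0 ]             % [alpha; beta; gamma] for phi_{k,1}
abc = M \ u                       % [a; b; c] for the planar approximation of u

In the derivation that follows, however, the systems are solved analytically (by Cramer's rule) so that the common determinant in the denominators can later be cancelled.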

Consequently, we have the formulas:

$\alpha = \frac{\det\begin{pmatrix} 1 & y_1 & 1 \cr 0 & y_2 & 1 \cr 0 & y_3 & 1 \end{pmatrix}}{\det\begin{pmatrix} x_1 & y_1 & 1 \cr x_2 & y_2 & 1 \cr x_3 & y_3 & 1 \end{pmatrix}} = \frac{-(y_3 - y_2)}{\det\begin{pmatrix} x_1 & y_1 & 1 \cr x_2 & y_2 & 1 \cr x_3 & y_3 & 1 \end{pmatrix}}$
$\beta = \frac{\det\begin{pmatrix} x_1 & 1 & 1 \cr x_2 & 0 & 1 \cr x_3 & 0 & 1 \end{pmatrix}}{\det\begin{pmatrix} x_1 & y_1 & 1 \cr x_2 & y_2 & 1 \cr x_3 & y_3 & 1 \end{pmatrix}} = \frac{x_3 - x_2}{\det\begin{pmatrix} x_1 & y_1 & 1 \cr x_2 & y_2 & 1 \cr x_3 & y_3 & 1 \end{pmatrix}}$
$a = \frac{\det\begin{pmatrix} u_1 & y_1 & 1 \cr u_2 & y_2 & 1 \cr u_3 & y_3 & 1 \end{pmatrix}}{\det\begin{pmatrix} x_1 & y_1 & 1 \cr x_2 & y_2 & 1 \cr x_3 & y_3 & 1 \end{pmatrix}} = \frac{-(y_3 - y_2)u_1 + (y_3 - y_1)u_2 - (y_2 - y_1)u_3}{\det\begin{pmatrix} x_1 & y_1 & 1 \cr x_2 & y_2 & 1 \cr x_3 & y_3 & 1 \end{pmatrix}}$
$b = \frac{\det\begin{pmatrix} x_1 & u_1 & 1 \cr x_2 & u_2 & 1 \cr x_3 & u_3 & 1 \end{pmatrix}}{\det\begin{pmatrix} x_1 & y_1 & 1 \cr x_2 & y_2 & 1 \cr x_3 & y_3 & 1 \end{pmatrix}} = \frac{(x_3 - x_2)u_1 - (x_3 - x_1)u_2 + (x_2 - x_1)u_3}{\det\begin{pmatrix} x_1 & y_1 & 1 \cr x_2 & y_2 & 1 \cr x_3 & y_3 & 1 \end{pmatrix}}$.

Normally, we could simply use Gaussian elimination to calculate all of these; however, in this case, it is easier to cancel the denominators. In all cases, the entry with the smaller index is subtracted from the entry with the larger index. Thus we have that

$I({\bf x}_1, {\bf x}_2, {\bf x}_3) = \left( \alpha a + \beta b \right)A(R) =$
$\frac{1}{2}\frac{ \left((y_3 - y_2)^2 + (x_3 - x_2)^2 \right) u_1 - \left( (y_3 - y_1)(y_3 - y_2) + (x_3 - x_1)(x_3 - x_2)\right ) u_2 + \left((y_2 - y_1)(y_3 - y_2) + (x_2 - x_1)(x_3 - x_2) \right)u_3 }{\left | \det \begin{pmatrix} x_1 & y_1 & 1 \cr x_2 & y_2 & 1 \cr x_3 & y_3 & 1 \end{pmatrix} \right |}$

and translating this into the integral in Equation 3, we have

$\int\!\!\!\int_{R_{k,1}} \left( \frac{\partial \phi_{k,1}}{\partial x} \frac{\partial u}{\partial x} + \frac{\partial \phi_{k,1}}{\partial y} \frac{\partial u}{\partial y} \right) dx dy = I({\bf x}_k, {\bf x}_{k,1}, {\bf x}_{k,2})$.

Note that the function $I({\bf x}_1, {\bf x}_2, {\bf x}_3)$ is implemented as fe_coeffs.m which, in addition, returns the area of the triangle.
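As a rough guide to what such a helper computes (the name my_fe_coeffs and its interface are hypothetical; the supplied fe_coeffs.m may differ), a sketch might look like the following:

% Sketch only: a hypothetical helper computing the coefficients of u1, u2, u3
% in I(x1, x2, x3) together with the area of the triangle.  The supplied
% fe_coeffs.m is the authoritative implementation and may differ.
function [coeffs, A] = my_fe_coeffs( x1, x2, x3 )
    % x1, x2 and x3 are row vectors [x y]; x1 is the central vertex x_k.
    D  = abs( det( [ x1 1; x2 1; x3 1 ] ) );
    c1 =  (x3(2) - x2(2))^2               + (x3(1) - x2(1))^2;
    c2 = -(x3(2) - x1(2))*(x3(2) - x2(2)) - (x3(1) - x1(1))*(x3(1) - x2(1));
    c3 =  (x2(2) - x1(2))*(x3(2) - x2(2)) + (x2(1) - x1(1))*(x3(1) - x2(1));
    coeffs = [ c1, c2, c3 ]/(2*D);   % I = coeffs * [u1; u2; u3]
    A = D/2;                         % area of the triangle
end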

We could now substitute this into Equation 2 to get the equation:

$\int\!\!\!\int_{R_k} \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right ) \phi_k dx dy = -\sum_{\lambda = 1}^{n_k} I({\bf x}_k, {\bf x}_{k,\lambda}, {\bf x}_{k,\lambda + 1})$

where ${\bf x}_{k,n_k + 1} = {\bf x}_{k,1}$.

Finally, substituting this back into Equation 1, we have the equation

$\sum_{\lambda = 1}^{n_k} I({\bf x}_k, {\bf x}_{k,\lambda}, {\bf x}_{k,\lambda + 1}) = -\rho_0\frac{A(R_k)}{3}$.

In some cases, the ${\bf x}_{k,\lambda}$ may be boundary vertices and consequently the term associated with $u_{k,\lambda}$ will be brought to the right-hand side.

This is just one linear equation; it results in at most $n_k + 1$ non-zero entries in the corresponding row of the matrix. Each interior vertex defines one such equation. The resulting matrix is called the stiffness matrix.
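To make the assembly concrete, the following sketch builds the row of the stiffness matrix and the corresponding right-hand side entry for one interior vertex. It assumes the 3 × $n$ point array and ordered adjacency lists used in the Laboratory section below, reuses the hypothetical my_fe_coeffs sketch above, and is not one of the supplied files:

% Sketch only: assemble the stiffness-matrix row and right-hand side entry
% for one interior vertex k.  'pts' is the 3-by-n point array (with -Inf
% marking unknown values), 'adj' lists the vertices adjacent to k in order
% around k, and rho0 is the constant rho/epsilon_0.
function [row, rhs] = assemble_row( pts, k, adj, rho0 )
    n   = size( pts, 2 );
    row = zeros( 1, n );
    rhs = 0;
    Ak  = 0;                               % accumulates the area of R_k
    nk  = length( adj );
    for lambda = 1:nk
        j1 = adj(lambda);
        j2 = adj(mod( lambda, nk ) + 1);   % wrap around: x_{k,nk+1} = x_{k,1}
        [c, A] = my_fe_coeffs( pts(1:2,k)', pts(1:2,j1)', pts(1:2,j2)' );
        Ak = Ak + A;
        idx = [ k, j1, j2 ];
        for m = 1:3
            if pts(3, idx(m)) == -Inf      % unknown value: keep on the left
                row(idx(m)) = row(idx(m)) + c(m);
            else                           % known boundary value: move to the right
                rhs = rhs - c(m)*pts(3, idx(m));
            end
        end
    end
    rhs = rhs - rho0*Ak/3;                 % the source term from Equation 1
end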

Laboratory

Supplied are four functions:

  • fe_system.m
  • fe_update.m
  • fe_coeffs.m (internal function—you won't call this directly)
  • fe_solve.m

These functions are designed to allow the user to easily define a set of vertices, the corresponding edges, and to then solve the system.

These attempt to demonstrate how object-oriented programming may be performed in Matlab:

  • The constructor fe_system allows the programmer to create the system by defining the vertices and the constant $\rho$.
  • The member function fe_update allows the programmer to describe the adjacent vertices of a given vertex with an unknown value. This is called once for each vertex with an unknown value.
  • The member function fe_solve returns a solution which is the original set of points with the values of the unknown vertices filled in.

Example 1

To demonstrate, consider the system defined by the points shown in Figure 13.


Figure 13. Seven points numbered 1 through 7 of which two are unknown.

To create such a system, we call the constructor with the vertices. The vertices are passed as a 3 × $n$ array where each column contains the $x$-value, the $y$-value, and $u(x,y)$. In the cases where we are to solve for the value $u(x,y)$, we use the placeholder -Inf.

To enter the vertices, the example below uses a transposed array to allow the easy numbering of the points. Here, the points are $u(0, -1) = -1$, $u(-1, 0) = 1$, $u(0, 0)$ is unknown, $u(1, 0) = 1$, $u(-1, 1) = 0$, $u(0, 1)$ is unknown, and $u(0, 2) = -4$.

pts1 = [  0 -1 -1        % 1
	 -1  0  1        % 2
	  0  0 -Inf      % 3
	  1  0  1        % 4
	 -1  1  0        % 5
	  0  1 -Inf      % 6
	  0  2 -4]';     % 7
obj1 = fe_system( 0, pts1 );

The object returned is a Matlab structure which stores the relevant information.

Next, we must create a tessellation. One possible tessellation is shown in Figure 14.


Figure 14. A tessellation of the vertices in Figure 13.

We must now describe the adjacent edges in this tessellation. Consider Figure 15 which shows the adjacent vertices of both vertex 3 and vertex 6.


Figure 15. The adjacent vertices of vertices 3 and 6, respectively.

The member function fe_update allows you to add these edges by listing the adjacent vertices in order clockwise around the given vertex:

obj1 = fe_update( obj1, 3, [4 6 2 1] );
obj1 = fe_update( obj1, 6, [4 7 5 2 3] );

Finally, you can solve the system and plot the result:

u1 = fe_solve( obj1 )
u1 =

         0   -1.0000         0    1.0000   -1.0000         0         0
   -1.0000         0         0         0    1.0000    1.0000    2.0000
   -1.0000    1.0000   -0.1053    1.0000         0   -1.4211   -4.0000

plot3( u1(1,:), u1(2,:), u1(3,:), 'ko' )

You will note that in the solution matrix, the two unknown entries (initially set to -Inf) are replaced with approximations found by the finite-element method.

A plot of the actual solution versus the approximated points is shown in Figure 16.


Figure 16. The actual solution from which the boundary points were sampled, the boundary points (in black) and the approximation of the two interior points (in red).

Example 2

As another example, consider the points in Figure 17 and assume that they come from the function $u(x,y) = x^2 + y^2$.


Figure 17. Thirteen points, five of which are unknown; the balance fall on the surface $u(x,y) = x^2 + y^2$.

In this case, $\rho = 4$ (since $\nabla^2 u = 4$); thus, using the same technique, we may use the code:

pts = [-2 -2    8      %  1
        1 -2    5      %  2
       -3 -1   10      %  3
        0 -1 -Inf      %  4
        3 -1   10      %  5
       -1  0 -Inf      %  6
        2  0 -Inf      %  7
       -3  1   10      %  8
        1  1 -Inf      %  9
        3  1   10      % 10
       -2  2    8      % 11
        0  2 -Inf      % 12
        1  3   10]';   % 13

obj = fe_system( 4, pts );
obj = fe_update( obj,  4, [1 2 7 9 6] );
obj = fe_update( obj,  6, [1 4 9 12 11 8 3] );
obj = fe_update( obj,  7, [2 5 10 9 4] );
obj = fe_update( obj,  9, [4 7 10 13 12 6] );
obj = fe_update( obj, 12, [6 9 13 11] );
u = fe_solve( obj )

     u =
       Columns 1 through 8
        -2.0000    1.0000   -3.0000         0    3.0000   -1.0000    2.0000   -3.0000
        -2.0000   -2.0000   -1.0000   -1.0000   -1.0000         0         0    1.0000
         8.0000    5.0000   10.0000    1.8256   10.0000    1.2883    4.6679   10.0000
       Columns 9 through 13
         1.0000    3.0000   -2.0000         0    1.0000
         1.0000    1.0000    2.0000    2.0000    3.0000
         2.9111   10.0000    8.0000    5.5663   10.0000

plot3( u(1,:), u(2,:), u(3,:), 'o' )

A plot with the original surface is in Figure 18.


Figure 18. The solution plotted against the actual surface with the boundary points (in black), the approximation of the five interior points (in red), and the correct values of the interior points (in blue).
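Since the boundary values in this example were sampled from $u(x,y) = x^2 + y^2$, one quick check (a sketch, not part of the laboratory instructions) is to compare the computed approximations with the exact surface at the five interior points:

% Sketch: compare the approximations at the interior points with the exact
% surface u(x,y) = x^2 + y^2 from which the boundary values were taken.
unknown = [ 4 6 7 9 12 ];                       % indices of the interior points
exact   = u(1,unknown).^2 + u(2,unknown).^2;    % exact values x^2 + y^2
err     = u(3,unknown) - exact                  % approximation error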

Questions

1. In Laboratory 2, you solved Laplace's equation on a rectangular grid with the boundary points defined by

A1 = [  0    0    0    0
       30 -Inf -Inf   60
       30 -Inf -Inf   60
      100  100  100  100]; 

Using the tessellation in Figure 19, determine approximations of the points on the interior using the 2-dimensional finite element tools provided and compare this with your answer using laplace2d.


Figure 19. Twelve points and a tessellation.

Next, use laplace2d on a 301 × 301 grid with similar boundary conditions, select the points at (101, 101), (101, 201), (201, 201), and (201, 101), and use these as significantly better approximations of these four points. Which, if either, is better (that is, has smaller error): the finite-difference method from Laboratory 2 or finite elements?

Note that Laplace's equation is scale independent: you may use any points you wish so long as they form an equally spaced rectangular grid.

2. Does the choice of tessellation affect the approximations? Use the tessellation shown in Figure 20 and compare your answers with those that you found in Question 1.


Figure 20. Twelve points and a tessellation.

3. Normally, one need not define the entire tessellation, as it is possible to refine a tessellation by taking the largest triangle and adding a point in its middle, thus forming three triangles. Assuming that you start with a single triangle and at each step you refine every existing triangle, what are the formulas for

  1. The number of triangles, and
  2. The number of points.

Why might a refinement require more than simply subdividing existing triangles? To answer this, choose an odd shape, place three points on the boundary, and determine whether or not straight subdivision is appropriate for finding an approximate solution on the region.