## FEM and FEA – Finite Element Method and Analysis

**Finite Element Method & Finite Element Analysis**

The finite element method (FEM) is the most widely used method for solving problems of engineering and mathematical physics. Typical problem areas of interest include the traditional fields of structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential. The FEM is a numerical method for solving partial differential equations in two or three space variables (i.e., certain boundary value problems). To solve a problem, the FEM subdivides a large system into smaller, simpler parts called finite elements. This is achieved by a particular discretization of the space dimensions, implemented by the construction of a mesh of the object: the numerical domain for the solution, which has a finite number of points. The finite element formulation of a boundary value problem finally results in a system of algebraic equations. The method approximates the unknown function over the domain. The simple equations that model the finite elements are then assembled into a larger system of equations that models the entire problem. The FEM then uses variational methods from the calculus of variations to approximate a solution by minimizing an associated error function.

Studying or analyzing a phenomenon with FEM is often referred to as finite element analysis (FEA).

### Basic concepts

The subdivision of a whole domain into simpler parts has several advantages:

- Accurate representation of complex geometry
- Inclusion of dissimilar material properties
- Easy representation of the total solution
- Capture of local effects

A typical workflow of the method involves (1) dividing the domain of the problem into a collection of subdomains, with each subdomain represented by a set of element equations for the original problem, followed by (2) systematically recombining all sets of element equations into a global system of equations for the final calculation. The global system of equations has known solution techniques and can be calculated from the initial values of the original problem to obtain a numerical answer.
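The two steps above can be sketched in code. This is a minimal, hedged illustration (Python with NumPy; the 1D bar, the unit stiffness EA, and the unit tip load are assumed values, not taken from the text): element equations are built per element, then recombined into a global system and solved.

```python
import numpy as np

# Assumed model problem: a 1D bar, -d/dx(EA du/dx) = b on (0, L),
# fixed at x = 0, with a unit tip load at x = L.
n_el, L, EA, b = 4, 1.0, 1.0, 0.0
nodes = np.linspace(0.0, L, n_el + 1)
K = np.zeros((n_el + 1, n_el + 1))    # global stiffness matrix
f = np.zeros(n_el + 1)                # global load vector

# step (1): the element equations -- a 2x2 stiffness per linear element
for e in range(n_el):
    h = nodes[e + 1] - nodes[e]
    ke = (EA / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    K[e:e + 2, e:e + 2] += ke         # step (2): recombine into the global system
    f[e:e + 2] += b * h / 2.0         # consistent load from the body force b

f[-1] += 1.0                          # unit traction at the free end
u = np.zeros(n_el + 1)                # essential BC: u(0) = 0
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
```

For this assumed data the exact solution is u(x) = x, which linear elements reproduce exactly at the nodes.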

In the first step above, the element equations are simple equations that locally approximate the original complex equations to be studied, where the original equations are often partial differential equations (PDEs). To explain the approximation in this process, the finite element method is commonly introduced as a special case of the Galerkin method. The process, in mathematical language, is to construct an integral of the inner product of the residual and the weight functions and set the integral to zero. In simple terms, it is a procedure that minimizes the error of approximation by fitting trial functions into the PDE. The residual is the error caused by the trial functions, and the weight functions are polynomial approximation functions that project the residual. The process eliminates all the spatial derivatives from the PDE, thus approximating the PDE locally with

- a set of **algebraic equations** for *steady-state problems*,
- a set of **ordinary differential equations** for *transient problems*.

These equation sets are the element equations. They are linear if the underlying PDE is linear, and nonlinear otherwise. Algebraic equation sets that arise in steady-state problems are solved using numerical linear algebra methods, while ordinary differential equation sets that arise in transient problems are solved by numerical integration using standard techniques such as Euler's method or the Runge–Kutta method.
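The transient case can be sketched as follows. This is a hedged example (the small semi-discrete heat-conduction system M du/dt = −K u, the lumped mass matrix, and all numerical values are assumptions for illustration), time-stepped with the explicit Euler method mentioned above:

```python
import numpy as np

# Assumed semi-discrete system M du/dt = -K u for a 1D rod with its
# ends held at zero temperature; n interior nodes, spacing h.
n = 5
h = 1.0 / (n + 1)
K = (1.0 / h) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
M = h * np.eye(n)                      # lumped (diagonal) mass matrix
u = np.ones(n)                         # assumed initial temperature
dt = 0.2 * h * h                       # small step for explicit stability
for _ in range(2000):
    u = u + dt * np.linalg.solve(M, -K @ u)   # forward Euler update
# with zero forcing, the temperature decays toward zero everywhere
```

A Runge–Kutta scheme would replace the single Euler update with multiple stage evaluations per step, allowing larger stable time steps.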

In step (2) above, a global system of equations is generated from the element equations through a transformation of coordinates from the subdomains’ local nodes to the domain’s global nodes. This spatial transformation includes appropriate orientation adjustments as applied in relation to the reference coordinate system. The process is often carried out by FEM software using coordinate data generated from the subdomains.
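The "orientation adjustment" in the local-to-global transformation can be sketched for a single 2D truss element. This is a hypothetical helper, not code from the text: the direction-cosine matrix T maps local axial displacements into the global reference frame.

```python
import numpy as np

# Hedged sketch: a 2D truss element's local 2x2 stiffness is expanded
# into the global frame via its direction cosines.
def bar_global_stiffness(xi, xj, EA):
    """4x4 global stiffness of a bar from node xi to node xj."""
    d = np.asarray(xj, dtype=float) - np.asarray(xi, dtype=float)
    length = np.hypot(*d)
    c, s = d / length                           # direction cosines
    T = np.array([[c, s, 0.0, 0.0],
                  [0.0, 0.0, c, s]])            # global -> local axial map
    k_local = (EA / length) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return T.T @ k_local @ T                    # stiffness in global coords

# a bar inclined at 45 degrees (assumed coordinates)
k = bar_global_stiffness((0.0, 0.0), (1.0, 1.0), EA=1.0)
```

The resulting matrix is symmetric and annihilates rigid-body translations, as a stiffness matrix must.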

FEM is best understood from its practical application, known as finite element analysis (FEA). FEA as applied in engineering is a computational tool for performing engineering analysis. It includes the use of mesh generation techniques for dividing a complex problem into small elements, as well as the use of a software program coded with an FEM algorithm. In applying FEA, the complex problem is usually a physical system with underlying physics such as the Euler-Bernoulli beam equation, the heat equation, or the Navier-Stokes equations, expressed in either PDE or integral form, while the divided small elements of the complex problem represent different areas of the physical system.

FEA is a good choice for analyzing problems over complicated domains (like cars and oil pipelines), when the domain changes (as during a solid-state reaction with a moving boundary), when the desired precision varies over the entire domain, or when the solution lacks smoothness. FEA simulations provide a valuable resource as they remove multiple instances of creation and testing of hard prototypes for various high-fidelity situations. For instance, in a frontal crash simulation it is possible to increase prediction accuracy in “important” areas like the front of the car and reduce it in its rear (thus reducing the cost of the simulation). Another example would be numerical weather prediction, where it is more important to have accurate predictions over developing highly nonlinear phenomena (such as tropical cyclones in the atmosphere, or eddies in the ocean) than over relatively calm areas.

#### Various types of finite element methods

- AEM
- Generalized finite element method
- Mixed finite element method
- Variable – polynomial
- hpk-FEM
- XFEM
- Scaled boundary finite element method (SBFEM)
- S-FEM
- Spectral element method
- Meshfree methods
- Discontinuous Galerkin methods
- Finite element limit analysis
- Stretched grid method
- Loubignac iteration

##### Link with the gradient discretization method

Some types of finite element methods (conforming, nonconforming, mixed finite element methods) are particular cases of the gradient discretization method (GDM). Hence the convergence properties of the GDM, which are established for a series of problems (linear and nonlinear elliptic problems, linear, nonlinear and degenerate parabolic problems), hold as well for these particular finite element methods.

### History

While it is difficult to quote a date for the invention of the finite element method, the method originated from the need to solve complex elasticity and structural analysis problems in civil and aeronautical engineering. Its development can be traced back to work by A. Hrennikoff and R. Courant in the early 1940s. Another pioneer was Ioannis Argyris. In the USSR, the introduction of the practical application of the method is usually connected with the name of Leonard Oganesyan. In China, in the late 1950s and early 1960s, based on computations for dam construction, K. Feng proposed a systematic numerical method for solving partial differential equations. The method was called the finite difference method based on the variation principle, and was another independent invention of the finite element method. Although the approaches used by these pioneers are different, they share one essential characteristic: *mesh discretization of a continuous domain into a set of discrete sub-domains, usually called elements.*

Hrennikoff’s work discretizes the domain by using a lattice analogy, while Courant’s approach divides the domain into finite triangular subregions to solve second order elliptic partial differential equations (PDEs) that arise from the problem of torsion of a cylinder. Courant’s contribution was evolutionary, drawing on a large body of earlier results for PDEs developed by Rayleigh, Ritz, and Galerkin.

The finite element method obtained its real impetus in the 1960s and 1970s by the developments of J. H. Argyris with co-workers at the University of Stuttgart, R. W. Clough with co-workers at UC Berkeley, O. C. Zienkiewicz with co-workers Ernest Hinton, Bruce Irons and others at Swansea University, Philippe G. Ciarlet at the University of Paris 6 and Richard Gallagher with co-workers at Cornell University. Further impetus was provided in these years by available open source finite element software programs. NASA sponsored the original version of NASTRAN, and UC Berkeley made the finite element program SAP IV widely available. In Norway the ship classification society Det Norske Veritas (now DNV GL) developed Sesam in 1969 for use in analysis of ships. A rigorous mathematical basis to the finite element method was provided in 1973 with the publication by Strang and Fix. The method has since been generalized for the numerical modeling of physical systems in a wide variety of engineering disciplines, e.g., electromagnetism, heat transfer, and fluid dynamics.

### Divide and Conquer!

To be able to run simulations, a mesh consisting of up to millions of small elements that together form the shape of the structure needs to be created. Calculations are made for every single element, and combining the individual results gives the final result for the structure. The approximations we just mentioned are usually polynomial and are, in fact, interpolations over the element(s): we know values at certain points within the element but not at every point. These ‘certain points’ are called nodal points and are often located at the boundary of the element. The accuracy with which the variable changes is expressed by the approximation, which can be linear, quadratic, cubic, etc.

To get a better understanding of approximation techniques, we will look at a one-dimensional bar. Consider the true temperature distribution T(x) along the bar in the picture below. Let’s assume we know the temperature of this bar at 5 specific positions (numbers 1-5 in the illustration). Now the question is: how can we predict the temperature in between these points? A linear approximation is quite good, but there are better ways to represent the real temperature distribution. If we choose a quadratic approximation, the temperature distribution along the bar is much smoother. Nevertheless, we see that irrespective of the polynomial degree, the distribution over the rod is known once we know the values at the nodal points. If we had an infinite bar, we would have an infinite number of unknowns (**DEGREES OF FREEDOM (DOF)**). In this case, however, we have a problem with a finite number of unknowns:

*A system with a finite number of unknowns is called a discrete system. A system with an infinite number of unknowns is called a continuous system.*
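The bar example can be sketched numerically. The temperature profile below is an assumed stand-in for the "true" distribution (it is not given in the text): we sample it at five nodal points and reconstruct it in between by linear interpolation.

```python
import numpy as np

# Hedged sketch of the bar example: an assumed "true" temperature profile,
# sampled at five nodal points and reconstructed by linear interpolation.
true_T = lambda x: 100.0 + 40.0 * np.sin(np.pi * x)   # assumed profile
nodes = np.linspace(0.0, 1.0, 5)
T_nodes = true_T(nodes)                               # the five known values

x = np.linspace(0.0, 1.0, 201)
T_linear = np.interp(x, nodes, T_nodes)               # piecewise-linear guess
max_err = np.max(np.abs(T_linear - true_T(x)))        # worst-case deviation
# at the nodal points the interpolant is exact; in between, a higher
# polynomial degree (quadratic, cubic, ...) would track T(x) more closely
```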

For the purpose of approximations, we can find the following relation for a field quantity **u(x)**:

$ \displaystyle u(x)={{u}^{h}}(x)+e(x)$

where $ {{u}^{h}}(x)$ is the approximation, which differs from the real solution $ u(x)$ by the error term $ e(x)$, which is not known beforehand. De facto, this error term has to vanish for the approximate solution $ {{u}^{h}}(x)$ to become valid. The question that arises is how $ {{u}^{h}}(x)$ can be parametrized in a discrete function space. One possible solution is to express $ {{u}^{h}}(x)$ as a sum of shape functions $ {{\phi }_{i}}(x)$ with coefficients $ {{\alpha }_{i}}$:

$ \displaystyle {{u}^{h}}(x)=\underset{{i=1}}{\overset{n}{\mathop \sum }}\,{{\alpha }_{i}}{{\phi }_{i}}(x)$

The line illustrated at the top shows this principle for a 1D problem. Here, **u** can represent the temperature along the length of a rod that is heated in a non-uniform manner. In our case, there are four elements along the x-axis, and the function defines the linear approximation of the temperature, illustrated by the dots along the line.
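The shape-function sum above can be sketched with piecewise-linear "hat" functions on the four-element rod. This is a hedged illustration: the nodal coefficients α_i below are assumed values, not data from the text.

```python
import numpy as np

# Hedged sketch: u^h(x) = sum_i alpha_i * phi_i(x), with piecewise-linear
# "hat" shape functions on four equal elements (five nodes).
nodes = np.linspace(0.0, 1.0, 5)
h = nodes[1] - nodes[0]

def phi(i, x):
    """Hat function of node i: 1 at nodes[i], 0 at every other node."""
    return np.maximum(0.0, 1.0 - np.abs(x - nodes[i]) / h)

alpha = nodes**2                    # assumed nodal values (u = x^2 sampled)
x = np.linspace(0.0, 1.0, 101)
u_h = sum(a * phi(i, x) for i, a in enumerate(alpha))
# u_h reproduces alpha exactly at the nodes and is linear within each element
```

The hat functions also form a partition of unity (they sum to one everywhere on the rod), which is what lets the coefficients α_i be read directly as nodal values.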

One of the biggest advantages of **Finite Element Analysis** is that we can vary the discretization per element as well as the discretization of the corresponding basis functions. De facto, we could use smaller elements in regions where high gradients of ** u** are expected. To model the steepness of the function, we need to make approximations.

### Partial Differential Equations

Before proceeding with finite element analysis, it is important to understand the different types of PDEs and their suitability for FEA. Understanding this is important to everyone, irrespective of one’s motivation for using finite element analysis. One should constantly remind oneself that FEA is a tool, and any tool is only as good as its user.

PDEs can be categorized as elliptic, hyperbolic, and parabolic. When solving these differential equations, boundary and/or initial conditions need to be provided. Based on the type of PDE, the necessary inputs can be determined. Examples of PDEs in each category include the Poisson equation (elliptic), the wave equation (hyperbolic), and the heat equation, which follows from Fourier’s law (parabolic).
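For a second-order PDE in two variables, the category follows from the discriminant b² − 4ac of the principal part a·u_xx + b·u_xy + c·u_yy, by analogy with conic sections. A minimal sketch:

```python
# Hedged sketch: classify a u_xx + b u_xy + c u_yy + (lower order) = 0
# by the discriminant b^2 - 4ac, as for conic sections.
def classify(a, b, c):
    d = b * b - 4.0 * a * c
    if d < 0.0:
        return "elliptic"       # e.g. Poisson: u_xx + u_yy = f
    if d == 0.0:
        return "parabolic"      # e.g. heat: u_t - u_xx = 0 (no u_tt term)
    return "hyperbolic"         # e.g. wave: u_tt - u_xx = 0
```

For example, the wave equation u_tt − u_xx = 0 has a = 1, b = 0, c = −1, giving a positive discriminant and a hyperbolic classification.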

There are two main approaches to solving elliptic PDEs, namely the finite difference methods (FDM) and variational (or energy) methods. FEM falls into the second category. Variational approaches are primarily based on the philosophy of energy minimization.

Hyperbolic PDEs are commonly associated with jumps in solutions. For example, the wave equation is a hyperbolic PDE. Owing to the existence of discontinuities (or jumps) in solutions, the original FEM technology (or Bubnov-Galerkin Method) was believed to be unsuitable for solving hyperbolic PDEs. However, over the years, modifications have been developed to extend the applicability of FEM technology.

Before concluding this discussion, it is necessary to consider the consequence of using a numerical framework that is unsuitable for the type of PDE. Such usage leads to solutions that are known as “ill-posed.” This could mean that small changes in the domain parameters lead to large oscillations in the solutions, or that the solutions exist only in a certain part of the domain or time interval, and are thus not reliable. Well-posed problems are defined as those for which a unique solution exists and depends continuously on the given data. Hence, considering reliability, it is extremely important to obtain well-posed solutions.

#### The Weak and Strong Formulation

The mathematical models of heat conduction and elastostatics covered in this series consist of (partial) differential equations with initial conditions as well as boundary conditions. This is also referred to as the so-called **Strong Form** of the problem. A few examples of “strong forms” are given in the illustration below.

Second-order partial differential equations demand a high degree of smoothness of the solution ** u(x)**: the second derivative of the displacement has to exist and has to be continuous! This also implies requirements on properties that cannot be influenced, like the geometry (**sharp edges**) and the material parameters (**different Young’s moduli within a material**). To develop the finite element formulation, the partial differential equations must be restated in an **integral form** called the weak form. The **weak form** and the strong form are **equivalent**! In stress analysis, the weak form is called the **principle of virtual work**.

$ \displaystyle \mathop{\int }_{0}^{l}\frac{{dw}}{{dx}}AE\frac{{du}}{{dx}}dx={{(wA\bar{t})}_{{x=0}}}+\mathop{\int }_{0}^{l}wb\,dx\quad \forall w\text{ with }w(l)=0$

The given equation is the so-called weak form (in this case the weak formulation for elastostatics). The name states that solutions to the weak form do not need to be as smooth as solutions of the strong form, which implies weaker continuity requirements.

You have to keep in mind that the solution satisfying the weak form is also the solution of the strong counterpart of the equation. Also remember that the trial solutions ** u(x)** must satisfy the displacement boundary conditions. This is an essential property of the trial solutions and this is why we call those boundary conditions essential boundary conditions.
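The weak form quoted above can be sketched by a short derivation from the strong form, using integration by parts (assuming, as in the equation, a traction t̄ applied at x = 0 and weight functions with w(l) = 0):

```latex
% Strong form: \frac{d}{dx}\left(AE\frac{du}{dx}\right) + b = 0 on (0, l),
% with traction AE\,\frac{du}{dx} = -A\bar{t} at x = 0.
% Multiply by a weight function w with w(l) = 0 and integrate:
\int_0^l w\left[\frac{d}{dx}\!\left(AE\frac{du}{dx}\right) + b\right]dx = 0
% Integration by parts shifts one derivative onto w; the boundary term at
% x = l vanishes because w(l) = 0, and at x = 0 the traction condition gives
\int_0^l \frac{dw}{dx}\,AE\,\frac{du}{dx}\,dx
  = \left(wA\bar{t}\right)\big|_{x=0} + \int_0^l w\,b\,dx
```

Note how the second derivative of u disappears: after the integration by parts, only first derivatives of u and w remain, which is exactly the weaker continuity requirement mentioned above.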

#### Minimum Potential Energy

Finite element analysis can also be executed with a variational principle. In the case of one-dimensional elastostatics, the principle of minimum potential energy holds for conservative systems. An equilibrium position is stable if the potential energy of the system **Π** is at a minimum; every infinitesimal disturbance of the stable position leads to an energetically unfavorable state and implies a restoring reaction. An easy example is a glass bottle standing on the ground, where it has minimum potential energy. If it falls over, nothing happens except for a loud noise. If it stands on the corner of a table and falls to the ground, it is rather likely that the bottle breaks, since it carries more energy towards the ground. The variational principle makes use of this fact: the lower the energy level, the less likely we are to get a wrong solution. The total potential energy **Π** of a system consists of the work of the inner forces (strain energy)

$ \displaystyle {{A}_{i}}=\int_{0}^{l}{{\frac{1}{2}E(x)A(x){{{\left( {\frac{{du}}{{dx}}} \right)}}^{2}}}}dx$ ; $ \displaystyle \frac{1}{2}E(x)A(x){{\left( {\frac{{du}}{{dx}}} \right)}^{2}}=\frac{1}{2}\sigma \epsilon A(x)$

And the work of the external forces

$ \displaystyle {{A}_{a}}=A(x)\bar{t}(x)u(x){{|}_{{{{\Gamma }_{t}}}}}$

The total energy is:

$ \displaystyle \text{ }\Pi\text{ }={{A}_{i}}-{{A}_{a}}$
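For the discretized bar, this principle can be sketched directly (a hedged example with assumed unit values): in matrix form Π(u) = ½uᵀKu − fᵀu, and minimizing Π, here by plain gradient descent, reproduces the solution of the stationarity condition Ku = f.

```python
import numpy as np

# Hedged sketch: for the discretized bar, Pi(u) = 1/2 u^T K u - f^T u;
# grad Pi = 0 gives exactly the linear system K u = f.
n = 4
h, EA = 1.0 / n, 1.0
K = (EA / h) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
K[-1, -1] = EA / h                     # free end at x = 1 (u(0) = 0 removed)
f = np.zeros(n); f[-1] = 1.0           # assumed unit tip load

u_solve = np.linalg.solve(K, f)        # stationarity condition K u = f

# direct (slow) energy minimization: gradient descent on Pi
u = np.zeros(n)
for _ in range(5000):
    u -= 0.05 * (K @ u - f)            # step along -grad Pi = -(K u - f)
```

Both routes land on the same displacements, which is the point: solving the linear system and minimizing the potential energy are two views of the same problem.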

#### Mesh Convergence

One of the most overlooked issues in computational mechanics that affect accuracy is mesh convergence. This relates to how small the elements need to be to ensure that the results of an analysis are not affected by changing the size of the mesh.

The figure above shows the convergence of a quantity with an increase in degrees of freedom. As depicted in the figure, it is important to first identify the quantity of interest. At least three points need to be considered, and as the mesh density increases, the quantity of interest starts to converge to a particular value. If two subsequent mesh refinements do not change the result substantially, then one can assume the result has converged.

Going into the question of mesh refinement, it is not always necessary that the mesh in the entire model be refined. St. Venant’s principle implies that the local stresses in one region do not affect the stresses elsewhere. Hence, from a physical point of view, the model can be refined only in particular regions of interest, with a transition zone from coarse to fine mesh. There are two types of refinements (h- and p-refinement), as shown in the figure above: h-refinement relates to the reduction in element size, while p-refinement relates to increasing the order of the element.

Here it is important to distinguish between geometric effects and mesh convergence. Meshing a curved surface with straight (or linear) elements requires more elements (i.e., mesh refinement) to capture the boundary accurately. Mesh refinement then leads to a significant reduction in errors:

Refinement like this can allow an increase in the convergence of solutions without increasing the size of the overall problem being solved.
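A convergence study of this kind can be sketched for the 1D bar (hedged example with assumed unit data: constant body load, quantity of interest taken as the strain energy, whose exact value for this data is 1/6). The element size h is halved repeatedly and the quantity is monitored:

```python
import numpy as np

# Hedged h-refinement study: 1D bar, u(0) = 0, free end, body load b = 1.
def strain_energy(n_el):
    h = 1.0 / n_el
    K = (1.0 / h) * (2 * np.eye(n_el) - np.eye(n_el, k=1) - np.eye(n_el, k=-1))
    K[-1, -1] = 1.0 / h                       # free end at x = 1
    f = np.full(n_el, h)                      # consistent load for b = 1
    f[-1] = h / 2.0                           # end node carries half an element
    u = np.linalg.solve(K, f)
    return 0.5 * u @ K @ u                    # quantity of interest

energies = [strain_energy(n) for n in (2, 4, 8, 16, 32)]
# each refinement moves the energy closer to the exact value 1/6, and the
# change between successive meshes shrinks -> the result has "converged"
```

When two successive refinements change the energy by less than the tolerance one cares about, the mesh can be declared converged for this quantity.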

**How to measure convergence?**

So now that the importance of convergence has been discussed, how can convergence be measured? What is a quantitative measure for convergence? The first way would be to compare with analytical solutions or experimental results.

**Error of the displacements:**

$ \displaystyle {{e}_{u}}=u-{{u}^{h}}$

Where ** u** is the analytical solution for the field.

**Error of the strains:**

$ \displaystyle {{e}_{\epsilon }}=\epsilon -{{\epsilon }^{h}}$

Where ϵ is the analytical solution for the strain field.

**Error of the stresses:**

$ \displaystyle {{e}_{\sigma }}=\sigma -{{\sigma }^{h}}$

Where σ is the analytical solution for the stress field.

As shown in the equations above, several errors can be defined for displacements, strains, and stresses. These errors could be used for comparison, and they would need to decrease with mesh refinement. However, in an FEA mesh, the quantities are calculated at various points (nodal and Gauss points). So, in this case, what needs to be determined is where, and at how many points, the error should be calculated.

Measured in a suitable norm (indicated by the subscript), the error typically decreases with the element size h at a rate p set by the element order:

$ \displaystyle {{\left\| {{e}_{u}} \right\|}_{?}}=c{{h}^{p}}$
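The rate p can be estimated numerically by solving the same assumed bar problem (constant body load, exact solution u(x) = x − x²/2) on successively finer meshes and fitting a line to log‖e_u‖ versus log h. For linear elements the slope comes out near 2:

```python
import numpy as np

# Hedged sketch: estimate p in ||e_u|| = c h^p for a 1D bar with b = 1,
# u(0) = 0, free end; exact solution u(x) = x - x^2/2.
exact = lambda x: x - 0.5 * x**2

def solve(n_el):
    h = 1.0 / n_el
    K = (1.0 / h) * (2 * np.eye(n_el) - np.eye(n_el, k=1) - np.eye(n_el, k=-1))
    K[-1, -1] = 1.0 / h
    f = np.full(n_el, h); f[-1] = h / 2.0
    u = np.concatenate([[0.0], np.linalg.solve(K, f)])   # prepend u(0) = 0
    return np.linspace(0.0, 1.0, n_el + 1), u

x_fine = np.linspace(0.0, 1.0, 2001)
errs, hs = [], []
for n in (4, 8, 16, 32):
    xn, u = solve(n)
    uh = np.interp(x_fine, xn, u)          # the piecewise-linear FEM field
    errs.append(np.max(np.abs(uh - exact(x_fine))))
    hs.append(1.0 / n)

p = np.polyfit(np.log(hs), np.log(errs), 1)[0]   # slope of the log-log fit
```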

### Comparison to the Finite Difference Method

The finite difference method (FDM) is an alternative way of approximating solutions of PDEs. The differences between FEM and FDM are:

- The most attractive feature of the FEM is its ability to handle complicated geometries (and boundaries) with relative ease. While FDM in its basic form is restricted to rectangular shapes and simple alterations thereof, the handling of geometries in FEM is theoretically straightforward.
- FDM is not usually used for irregular CAD geometries but more often rectangular or block shaped models.
- The most attractive feature of finite differences is that it is very easy to implement.
- There are several ways one could consider the FDM a special case of the FEM approach. E.g., first order FEM is identical to FDM for Poisson’s equation, if the problem is discretized by a regular rectangular mesh with each rectangle divided into two triangles.
- There are reasons to consider the mathematical foundation of the finite element approximation sounder, for instance, because the quality of the approximation between grid points is poor in FDM.
- The quality of a FEM approximation is often higher than in the corresponding FDM approach, but this is extremely problem-dependent and several examples to the contrary can be provided.
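The FDM/FEM equivalence noted above can be checked directly in a hedged 1D sketch (the 2D triangle case from the bullet reduces, on a uniform 1D grid for −u″ = f, to the same tridiagonal operator; the unit domain and node count are assumptions):

```python
import numpy as np

# Hedged 1D check: for -u'' = f on a uniform grid with Dirichlet ends,
# the linear-element FEM stiffness, scaled by 1/h, matches the 3-point FDM stencil.
n, h = 5, 1.0 / 6                      # 5 interior nodes

# FDM: second-order central difference, scaled by 1/h^2
A_fdm = (1.0 / h**2) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

# FEM: assemble the linear-element stiffness over all 6 elements
A_fem = np.zeros((n + 2, n + 2))
ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
for e in range(n + 1):
    A_fem[e:e + 2, e:e + 2] += ke

# restrict to interior nodes; dividing by h (the row sum of the consistent
# load for f = 1) exposes the identical stencil
A_interior = A_fem[1:-1, 1:-1] / h
```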

Generally, FEM is the method of choice in all types of analysis in structural mechanics (i.e., solving for deformation and stresses in solid bodies or the dynamics of structures), while computational fluid dynamics (CFD) tends to use FDM or other methods like the finite volume method (FVM). CFD problems usually require discretization of the problem into a large number of cells/grid points (millions and more), therefore the cost of the solution favors a simpler, lower-order approximation within each cell. This is especially true for ‘external flow’ problems, like air flow around a car or airplane, or weather simulation.

### Application

A variety of specializations under the umbrella of the mechanical engineering discipline (such as computational fluid dynamics (CFD) and the aeronautical, biomechanical, and automotive industries) commonly use integrated FEM in the design and development of their products. Several modern FEM packages include specific components such as thermal, electromagnetic, fluid, and structural working environments. In a structural simulation, FEM helps tremendously in producing stiffness and strength visualizations and also in minimizing weight, materials, and costs.

FEM allows detailed visualization of where structures bend or twist, and indicates the distribution of stresses and displacements. FEM software provides a wide range of simulation options for controlling the complexity of both modeling and analysis of a system. Similarly, the desired level of accuracy and the associated computational time requirements can be managed simultaneously to address most engineering applications. FEM allows entire designs to be constructed, refined, and optimized before the design is manufactured. The mesh is an integral part of the model and must be controlled carefully to give the best results. Generally, the higher the number of elements in a mesh, the more accurate the solution of the discretized problem. However, there is a point at which the results converge and further mesh refinement does not increase accuracy.

This powerful design tool has significantly improved both the standard of engineering designs and the methodology of the design process in many industrial applications. The introduction of FEM has substantially decreased the time to take products from concept to the production line. It is primarily through improved initial prototype designs using FEM that testing and development have been accelerated. In summary, benefits of FEM include increased accuracy, enhanced design and better insight into critical design parameters, virtual prototyping, fewer hardware prototypes, a faster and less expensive design cycle, increased productivity, and increased revenue.

**Source:**

*Websites:*

- **1w.** Wikipedia
- **2w.** SimScale I, II
- **3w.** Wikimedia

*Books & Articles:*

- **1ba.** Jacob Fish and Ted Belytschko, *A First Course in Finite Elements*, Wiley, 2007.
- **2ba.** R. Courant, “Variational methods for the solution of problems of equilibrium and vibrations”, 1943.
- **3ba.** K. Schellbach, “Probleme der Variationsrechnung”, Journal für die reine und angewandte Mathematik, v. 41, pp. 293–363 (1851).
- **4ba.** G. Strang and G. J. Fix, *An Analysis of the Finite Element Method*, Prentice-Hall, Englewood Cliffs, New Jersey (1973).
- **5ba.** Gard Paulsen, Håkon With Andersen, John Petter Collett, and Iver Tangen Stensrud (2014), *Building Trust: The History of DNV 1864–2014*, Lysaker, Norway: Dinamo Forlag A/S, pp. 121, 436.
- **6ba.** Hastings, J. K., Juds, M. A., and Brauer, J. R., “Accuracy and Economy of Finite Element Magnetic Analysis”, 33rd Annual National Relay Conference, April 1985.
- **7ba.** Hrennikoff, Alexander (1941), “Solution of problems of elasticity by the framework method”, Journal of Applied Mechanics, 8 (4): 169–175; Courant, R. (1943), “Variational methods for the solution of problems of equilibrium and vibrations”, Bulletin of the American Mathematical Society, 49: 1–23.