APPLIED INDUSTRIAL CONTROL SOLUTIONS LLC


The following is the first section of Chapter 1 of the "Advanced Dynamic Simulation Course". This course has been taught to engineers at Rockwell Automation, Ford Motor Co., and others. Reading through this first section is a great way to brush up on Laplace transforms, on using them to solve differential equations, and on applying these methods to control system design with Bode plot techniques.

This chapter covers the fundamentals of industrial control system analysis, control system stability, and Laplace transforms, for control system engineers.

CHAPTER 1 

FUNDAMENTALS

1.1. Review of complex numbers and complex algebra

1.1.1 Introduction

Why start this course work with a review of complex numbers? Because classical control system analysis is based on the application of complex variables and functions of complex variables. And complex variables, in turn, take complex numbers as their values.

The analysis of ac circuits can also use complex numbers. For example, sinusoidal signals are often transformed into phasors, and resistance, capacitance, and inductance are often transformed into impedances. These quantities -- phasors and impedances -- are complex numbers.

Complex numbers must be manipulated using complex algebra, which is an extension of the algebra of real numbers. This is because complex numbers have their own special rules for adding, subtracting, multiplying, and dividing.

Some of the complex arithmetic is avoided, however, when using the Laplace transformation technique as part of the control system analysis. When using the technique, the differential equations (describing the system to be controlled) are converted into algebraic equations which are functions of a complex variable ( s = σ + jω ). Then ordinary algebra can be used to manipulate the equations into forms more suited for factoring and/or plotting the roots in the complex plane.

1.1.2 Imaginary Numbers

Imaginary numbers were invented to provide solutions to square roots of negative numbers. They have an unfortunate name, because they exist in more than just the imagination. There is no one universally accepted way of writing them. In the electrical field, however, it is standard practice to use the letter j , as in j2 , j0.01 , and -j5.6.

Imaginary numbers sometimes require special rules when combining them:

  • Adding and subtracting use the same rules as those for real numbers.
    • Examples: j2 + j0.01 = j2.01 ;  j2 - j5.6 = -j3.6

  • Multiplying two imaginary numbers, however, uses a different rule than that for real numbers. The product is a real number whose sign is opposite of what you would get if they were real numbers.
    • Examples: (j2)(j0.01) = -0.02 ;  (j2)(-j5.6) = +11.2

  • Dividing two imaginary numbers also uses a different rule. The quotient is a real number, but in this case the sign is the same as you would get if they were real numbers.
    • Examples: (j2) / (j0.01) = 200 ;  (-j5.6) / (j2) = -2.8

These rules for multiplying and dividing imaginary numbers may be easier to remember if you view the j operator as being equal to √-1. Then j² = -1. For instance, the examples given above can be viewed in the following ways:

    • When multiplying: (j2)(j0.01) = (j·j)(2)(0.01) = (-1)(0.02) = -0.02

    • When dividing: (j2) / (j0.01) = (j/j)(2/0.01) = (+1)(200) = 200

This view of j should be used as a memory aid only, because j is not a number itself; it just designates a number as being imaginary.

Division involving a real and an imaginary number involves a special step, called rationalizing, whenever the imaginary number is in the denominator. For example,

3 / (j2) = (3)(j) / [(j2)(j)] = j3 / (-2) = -j1.5

The rationalizing step was when the numerator and denominator were each multiplied by j.
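These rules are easy to verify numerically. The following lines are a minimal MATLAB sketch (the same arithmetic works in any MATLAB or GNU Octave session) reproducing the examples above:

% Imaginary-number arithmetic (examples from the text)
a = 2j;  b = 0.01j;  c = -5.6j;
a + b            % j2 + j0.01   = j2.01
a + c            % j2 - j5.6    = -j3.6
a * b            % (j2)(j0.01)  = -0.02  (real, sign flipped)
a / b            % (j2)/(j0.01) = 200    (real, sign kept)
3 / (2j)         % rationalizing: 3/(j2) = -j1.5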

1.1.3 Complex numbers

1.1.3.1 Rectangular Form of a Complex Number

When a real number and an imaginary number are added or subtracted, as in 12 + j5 and -4 + j3 , the result is a complex number in rectangular form. A complex number is a point on the complex plane shown in Figure 1.1.1.

Figure 1.1.1 Complex plane represents complex numbers as points

The following conventions pertain to the complex plane in Figure 1.1.1:

  • The horizontal axis in the complex plane is called the real axis.
  • The vertical axis is called the imaginary axis.
  • The horizontal and vertical axes divide the complex plane into four quadrants.

Regarding the points (complex numbers) shown in Figure 1.1.1:

    • (-4 + j3) lies in the 2nd quadrant,
    • (-4 - j3) lies in the 3rd quadrant,
    • (2 - j5) lies in the 4th quadrant,
    • one lies on the negative real axis (at -7.5) .

Some other complex-plane notation we will deal with when performing control system analyses is the left-half plane; i.e., the region to the left of the imaginary axis. In addition, points not on the real axis will always appear in complex conjugate pairs. Such pairs, as

(-4+j3) and (-4-j3)

in the figure, have the same negative real part and, except for the sign, the same imaginary part.

The rectangular form is practical only for adding and subtracting, as indicated in the following:

    • Adding and subtracting operations are applied separately to the real and imaginary parts. For example,

(12 + j5) + (-4 + j3) = (12 - 4) + j(5 + 3) = 8 + j8

    • Multiplying two complex numbers in rectangular form follows the ordinary rules of algebra, along with the rules for imaginary numbers. For example,

(a + jb)(c + jd) = ac + jad + jbc + (j²)bd

= (ac - bd) + j(ad + bc)

    • Dividing two complex numbers in rectangular form is a two-step process:

  1. rationalize the denominator (i.e., make it real) by multiplying the top and bottom by the complex conjugate of the denominator;
  2. then simplify. For example,

(a + jb) / (c + jd) = [(a + jb)(c - jd)] / [(c + jd)(c - jd)] = [(ac + bd) + j(bc - ad)] / (c² + d²)
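A quick MATLAB check of these rectangular-form operations (a sketch using the example points from Figure 1.1.1; MATLAB handles the complex algebra natively):

% Rectangular-form arithmetic with points from Figure 1.1.1
z1 = 12 + 5j;
z2 = -4 + 3j;
z1 + z2                        % (  8 + j8 ) : add parts separately
z1 * z2                        % (-63 + j16) : (ac-bd) + j(ad+bc)
z1 / z2                        % (-1.32 - j2.24)
(z1*conj(z2)) / (z2*conj(z2))  % same quotient, rationalized "by hand"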

1.1.3.2. Exponential and Polar Forms of a Complex Number

The exponential and polar forms are summarized below. The polar form is a shorthand way of writing the exponential form. Its simplicity makes it much more popular.

  • The exponential form of a complex number is Ae^(jθ)

where: A is the magnitude

θ is the angle

  • The polar form is A∠θ .

The basis of these forms is Euler's identity for complex numbers:

e^(jθ) = cos θ + j sin θ

Figure 1.1.2 shows how a complex number -- e.g., (-4 + j3) -- can be expressed in exponential and polar forms.

Figure 1.1.2 Exponential and polar forms require magnitude (A) and angle (θ)

The magnitude A is found from the right-triangle rule:

(hypotenuse, c)² = (side, a)² + (side, b)²

For example, for (-4 + j3): A = √((-4)² + (3)²) = 5.

The angle is found from studying Euler’s identity and noting that:

tan θ = (imaginary part) / (real part)

The exponential and polar forms are most practical for multiplying and dividing.

  • Multiplying two complex numbers
    • in exponential form (use the law of exponents):

(A1 e^(jθ1))(A2 e^(jθ2)) = A1 A2 e^(j(θ1 + θ2))

    • in polar form: multiply the amplitudes and add the angles; i.e.,

(A1∠θ1)(A2∠θ2) = A1 A2 ∠(θ1 + θ2)

  • Dividing two complex numbers
    • in exponential form (use the law of exponents):

(A1 e^(jθ1)) / (A2 e^(jθ2)) = (A1/A2) e^(j(θ1 - θ2))

    • in polar form: divide the amplitudes and subtract the angles:

(A1∠θ1) / (A2∠θ2) = (A1/A2) ∠(θ1 - θ2)
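MATLAB's abs and angle functions return the magnitude A and angle θ directly, so the polar-form rules above can be checked numerically. A sketch (the second operand z2 = 2 + j1 is an arbitrary choice, not from the text):

% Polar form: magnitude and angle of -4 + j3 (Figure 1.1.2)
z1 = -4 + 3j;
A     = abs(z1)               % 5
theta = angle(z1)*180/pi      % 143.13 [deg]

% Multiplying: magnitudes multiply, angles add
z2 = 2 + 1j;                  % |z2| = sqrt(5), angle = 26.57 [deg]
abs(z1*z2)                    % = abs(z1)*abs(z2) = 11.18
angle(z1*z2)*180/pi           % = 143.13 + 26.57 = 169.70 [deg]

% Dividing: magnitudes divide, angles subtract
abs(z1/z2)                    % = 5/sqrt(5) = 2.2361
angle(z1/z2)*180/pi           % = 143.13 - 26.57 = 116.57 [deg]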

1.2. Definition of a Linear System

1.2.1. Math Models of Physical System

Deriving a reasonable (mathematical) model of the physical system is perhaps the most important part of an analysis effort. There are many different forms in which the model can be developed. The model choice should be the one most suitable for the particular system and circumstances.

State-space models, for example, are most suitable for optimal control problems, multiple-input, multiple-output (MIMO) systems, and for computer-aided design algorithms (e.g., MATLAB). Transfer-function models, on the other hand, are most suitable for analyzing the transient or frequency response of single-input, single-output (SISO) time-invariant systems.

1.2.2. Simplicity vs Accuracy

Generally, the accuracy of an analysis is improved only at the cost of increased model complexity. The linear lumped-parameter model of a physical system, for example, ignores nonlinear effects and distributed parameters. As a result, it may be valid only at low frequencies, since the ignored distributed parameters typically produce high-frequency effects.

The recommended approach is to start with a simple model. Perform some analyses or generate some solutions to get a feeling for the accuracy of the modelling. The simple model may prove sufficient for the analysis objectives at hand; if not, build a more complex model to improve on the solution.

1.2.3. Linear Systems

A linear model of a physical system, from a mathematical viewpoint, generally means the principle of superposition applies; i.e., the response of the model to several inputs equals the sum of the responses to each input taken alone. Using this principle, a complicated solution to a linear differential equation can be built up from a series of simple solutions.

A differential equation -- the basis of control system modelling -- is linear if the coefficients are constants or functions of the independent variable. An example differential equation with constant (i.e., time-invariant) coefficients is

d²x/dt² + 5 (dx/dt) + 6x = f(t)    (1.2.1)

An example differential equation with a coefficient that is a function of the independent variable (i.e., time) is

d²x/dt² + t (dx/dt) + 6x = f(t)    (1.2.2)
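Superposition can be demonstrated numerically. The sketch below uses an assumed first-order linear system dx/dt = -2x + u(t) with two arbitrary inputs (none of these values come from the text), simulating each input separately and then together with ode45:

% Superposition check: response to u1+u2 equals response to u1
% plus response to u2 for the linear system dx/dt = -2x + u(t)
u1  = @(t) sin(2*t);                 % input 1: a sinusoid
u2  = @(t) 3 + 0*t;                  % input 2: a step of amplitude 3
sys = @(u) @(t,x) -2*x + u(t);       % linear first-order dynamics

tv = 0:0.01:5;  x0 = 0;
[~,x1]  = ode45(sys(u1), tv, x0);
[~,x2]  = ode45(sys(u2), tv, x0);
[~,x12] = ode45(sys(@(t) u1(t)+u2(t)), tv, x0);

max(abs(x12 - (x1 + x2)))            % essentially zero (solver tolerance)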

1.2.4. Nonlinear Systems

A nonlinear model of a physical system is one in which the principle of superposition does not apply. The response to two inputs, for example, will not equal the sum of the responses to each input taken alone.

Some examples of nonlinear effects in physical systems, which can be successfully modelled to various degrees, are listed below:

  • Saturation (with large signals)
  • Dead space or backlash (affecting small signals)
  • Power-law dependence (e.g., damping force vs velocity)
  • Frequency-dependent dampers (linear at low frequencies, nonlinear at high frequencies)

Figure 1.2.1 shows the characteristic curves for the first three nonlinearities above.

Figure 1.2.1 Characteristic curves for various nonlinearities

One way a differential equation becomes nonlinear is if the coefficients are functions of the dependent variable. For example,

d²x/dt² + (1 - x²)(dx/dt) + x = f(t)    (1.2.3)

The other way a differential equation becomes nonlinear is if the order (power) of the dependent variable and/or its derivative(s) is greater than one. For example,

d²x/dt² + (dx/dt)² + x = f(t)    (1.2.4)

Since all physical systems are to some degree nonlinear, a truly linear model does not really exist. There typically will be a limited range of output magnitudes, however, over which the response of the linear model approximates the real world sufficiently well.

If no limited linear range exists, usually the nonlinear equations can be linearized around an operating (equilibrium) point. The result is a linear system approximately equivalent to the nonlinear system within a limited operating range. The linearization technique is very important in control engineering.

1.3. The Laplace Transform

This section reviews the Laplace transform method for solving linear differential equations. The following features make the method attractive for control system analysis:

  • The method converts differentiation (and integration) and many common functions -- such as steps, ramps, sinusoids, and exponentials -- into algebraic terms in a complex variable s .
  • The transformed differential equation is an algebraic equation which can be solved for the dependent variable.
  • The solution (as a function of time) of the differential equation -- i.e., the inverse Laplace transform -- may be then found directly from a Laplace transform table, or by first using the partial fraction technique and then the transform table.

Often the final step of finding the inverse Laplace transform is not necessary. Graphical techniques -- e.g., the Bode plot, root locus diagram, Nyquist plot, and Nichols chart -- exist for predicting the system performance without actually finding the differential equation solution as a function of time.

1.3.1 Notation

The notation conventionally used in the Laplace transform includes the following:

  • f(t) = a function of time t such that f(t) = 0 for t < 0
  • s = complex variable:

s = σ + jω    (1.3.1)

  • σ = real part
  • ω = imaginary part
  • L = an operational symbol indicating the Laplace transform
  • F(s) = Laplace transform of f(t)
  • definition of the Laplace transform of f(t):

F(s) = L[f(t)] = ∫₀^∞ f(t) e^(-st) dt    (1.3.2)

  • notation for the inverse Laplace transform:

f(t) = L⁻¹[F(s)]    (1.3.3)
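If the Symbolic Math Toolbox is available, MATLAB can apply Eq.(1.3.2) and Eq.(1.3.3) directly. A small sketch (assuming that toolbox is installed):

% Laplace transform and inverse via the Symbolic Math Toolbox
syms t s
syms a positive
F = laplace(exp(-a*t), t, s)    % returns 1/(a + s)
f = ilaplace(F, s, t)           % returns exp(-a*t), recovering f(t)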

1.3.2 Some Useful Properties

Some useful properties and results of the Laplace transform are as follows:

  • multiplying by a constant A

L[A f(t)] = A F(s)    (1.3.4)

  • function addition

L[f1(t) + f2(t)] = F1(s) + F2(s)    (1.3.5)

  • transform of the 1st derivative

L[df/dt] = ∫₀^∞ (df/dt) e^(-st) dt    (1.3.6)

Integrating by parts, with u = e^(-st) and dv = (df/dt) dt:

∫₀^∞ u dv = [u v]₀^∞ - ∫₀^∞ v du    (1.3.7)

[u v]₀^∞ = [f(t) e^(-st)]₀^∞ = 0 - f(0)    (1.3.8)

- ∫₀^∞ v du = s ∫₀^∞ f(t) e^(-st) dt    (1.3.9)

= s F(s)    (1.3.10)

Therefore:

L[df/dt] = s F(s) - f(0)    (1.3.11)

  • transform of the 2nd derivative (integrate by parts twice)

L[d²f/dt²] = s² F(s) - s f(0) - (df/dt)(0)    (1.3.12)

  • transform of the exponential function

L[e^(-at)] = ∫₀^∞ e^(-at) e^(-st) dt = ∫₀^∞ e^(-(s+a)t) dt    (1.3.13)

= 1 / (s + a)    (1.3.14)

  • transform of a step function (amplitude = A)

L[A U(t)] = A/s    (1.3.15)

1.3.3 Some Common Transform Pairs

Table 1.3.1 shows Laplace transforms of time functions that frequently appear in linear control system analysis.

 

 #    f(t)                             F(s)
 1    Unit impulse, δ(t)               1
 2    Unit step, U(t)                  1/s
 3    t                                1/s²
 4    e^(-at)                          1/(s + a)
 5    (1/a)(1 - e^(-at))               1/(s(s + a))
 6    sin ωt                           ω/(s² + ω²)
 7    cos ωt                           s/(s² + ω²)
 8    --                               --
 9    --                               --
10    --                               --
11    --                               --
12    2|K| e^(-σt) cos(ωt + ∠K)        K/(s + σ - jω) + K*/(s + σ + jω)

K* = complex conjugate of K

Table 1.3.1 Laplace Transform Pairs

1.3.4 Example Application: First-Order Differential Equation Solution

As an application in which the Laplace transform is used to solve a linear differential equation, consider the simple electrical network in Figure 1.3.1. Assume we want to solve for the current i(t) in the series RL circuit, immediately after it is connected together. The initial current, therefore, is zero; i.e., i(0) = 0 .

Figure 1.3.1 Series RL electrical circuit

The sum of the voltage drops around the series circuit equals the applied source voltage; i.e.,

L (di/dt) + R i = V    (1.3.16)

Taking the Laplace transform of both sides of the above equation and solving for the current yields:

L [s I(s) - i(0)] + R I(s) = V/s    (1.3.17)

I(s) (Ls + R) = V/s    (1.3.18)

I(s) = V / [s (Ls + R)]    (1.3.19)

I(s) = (V/L) / [s (s + R/L)]    (1.3.20)

The time solution i(t) is the inverse Laplace transform of I(s); i.e.,

i(t) = L⁻¹[I(s)]    (1.3.21)

The Laplace transform on the right-hand side of Eq.(1.3.20) can be found in Table 1.3.1, namely the transform pair numbered 5 (with a = R/L). The time solution can therefore be written directly from the table:

i(t) = (V/R)(1 - e^(-(R/L)t))    (1.3.22)

Whenever the inverse Laplace transform is not readily known or available in the table of transform pairs on hand, it is sometimes necessary to first expand the function into partial fractions -- simple rational functions of s for which the inverse Laplace transform is readily available. Using the I(s) expression above in Eq.(1.3.20) as an example, the right-hand side is expanded as follows:

(V/L) / [s (s + R/L)] = A/s + B/(s + R/L)    (1.3.23)

Thus, we have two partial fractions, with coefficients A and B, which correspond to transform pairs numbered 2 and 4 in Table 1.3.1. Rewriting the right-hand side of the above equation over a common denominator results in the following:

(V/L) / [s (s + R/L)] = [A (s + R/L) + B s] / [s (s + R/L)]    (1.3.24)

The two numerators above must be equal. Comparing coefficients of s :

s¹:  A + B = 0    (1.3.25)

s⁰:  A (R/L) = V/L    (1.3.26)

Therefore,

A = V/R    (1.3.27)

B = -V/R    (1.3.28)

Rewriting Eq.(1.3.20) in terms of the partial fractions:

I(s) = (V/R)/s - (V/R)/(s + R/L)    (1.3.29)

Using Table 1.3.1

i(t) = (V/R)(1 - e^(-(R/L)t))    (1.3.30)

as in Eq.(1.3.22).
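The partial-fraction coefficients A = V/R and B = -V/R can be cross-checked numerically with MATLAB's residue function. A sketch, using assumed illustrative component values (V = 10 V, R = 2 Ω, L = 1 H; these numbers are not from the text):

% Numeric check of the RL partial fractions with residue
V = 10; R = 2; L = 1;              % assumed example values
num = V/L;                         % I(s) = (V/L) / (s^2 + (R/L)s)
den = [1 R/L 0];
[r,p,k] = residue(num, den)
% residue pairs -5 with the pole at -2 and +5 with the pole at 0, i.e.,
% I(s) = 5/s - 5/(s+2)  ->  i(t) = 5(1 - e^(-2t)) = (V/R)(1 - e^(-(R/L)t))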

1.3.5 More Useful Properties

  • final-value theorem

lim (t→∞) f(t) = lim (s→0) s F(s)    (1.3.31)

  • initial-value theorem

f(0+) = lim (s→∞) s F(s)    (1.3.32)

  • convolution theorem

L⁻¹[F(s) G(s)] = ∫₀^t f(τ) g(t - τ) dτ    (1.3.33)

1.3.5.1 The final-value theorem

Equation (1.3.31) is a way to predict the steady-state (i.e., final) value of the solution from the Laplace transform, without the need to find the inverse transform. For example, in the RL circuit considered above, Eq.(1.3.22) and Eq.(1.3.30) indicate that in the steady state -- i.e., in the limit as t → ∞ -- the current is I_ss = V / R . This result may be obtained from the I(s) expression in Eq.(1.3.20) by applying the final-value theorem:

I_ss = lim (s→0) s I(s)    (1.3.34)

= lim (s→0) (V/L) / (s + R/L)    (1.3.35)

= (V/L)(L/R) = V/R    (1.3.36)

1.3.5.2 The initial-value theorem

Equation (1.3.32) allows us to predict the initial value of the variable an infinitesimally small time after the start of the solution (at t = 0+ ). For the RL circuit problem, we again start with the I(s) expression in Eq.(1.3.20) and apply the initial-value theorem:

i(0+) = lim (s→∞) s I(s) = lim (s→∞) (V/L) / (s + R/L) = 0    (1.3.37)

as predicted by Eq.(1.3.22).

1.3.5.3 The convolution theorem

Equation (1.3.33) allows us to obtain the inverse Laplace transform by performing a special integration. Applying this to the RL circuit solution above, we start with the I(s) expression in Eq.(1.3.20) and, as with partial fractions, we look for a way to break I(s) into two simple factors -- F(s) and G(s) -- for which we know the inverse transform. For example,

I(s) = [(V/L)/s] · [1/(s + R/L)] = F(s) G(s)    (1.3.38)

Using Table 1.3.1:

f(t) = L⁻¹[(V/L)/s] = V/L    (1.3.39)

g(t) = L⁻¹[1/(s + R/L)] = e^(-(R/L)t)    (1.3.40)

From the convolution theorem:

i(t) = ∫₀^t f(τ) g(t - τ) dτ = ∫₀^t (V/L) e^(-(R/L)(t-τ)) dτ    (1.3.41)

= (V/L) e^(-(R/L)t) (L/R)(e^((R/L)t) - 1)    (1.3.42)

= (V/R)(1 - e^(-(R/L)t))    (1.3.43)

as in Eq.(1.3.22).
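The convolution integral in Eq.(1.3.41) can also be approximated numerically with MATLAB's conv function. A sketch with the same assumed values (V = 10, R = 2, L = 1):

% Numerical convolution of f(t) = V/L and g(t) = e^(-(R/L)t)
V = 10; R = 2; L = 1;              % assumed example values
dt = 1e-3;  t = 0:dt:5;
f = (V/L)*ones(size(t));           % step of amplitude V/L
g = exp(-(R/L)*t);                 % decaying exponential
i_num = conv(f, g)*dt;             % approximates the convolution integral
i_num = i_num(1:numel(t));         % keep the 0..5 s portion
i_exact = (V/R)*(1 - exp(-(R/L)*t));
max(abs(i_num - i_exact))          % small (on the order of dt)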

1.3.6 Inverse transform of complex conjugate pairs

Another method for finding the inverse Laplace transform of an F(s) that includes complex conjugate pole pairs is given in this portion of the supplement. The new transform pair developed below becomes entry #12 in Table 1.3.1 (at the end of this discussion).

1.3.6.1 Form of F(s) with complex conjugate pairs

If the denominator of a Laplace transform has complex roots, the following method may be employed to find the inverse Laplace transform. Consider

F(s) = N(s) / D(s) , where D(s) contains the quadratic factor (s + σ)² + ω²    (1.3.44)

This can be expressed in terms of its complex conjugate roots as:

F(s) = K/(s - p) + K*/(s - p*) + (terms due to the other roots of D(s))    (1.3.45)

Note: K is presented with respect to the positive conjugate pole (i.e., p = -σ + jω ).

where:

p = -σ + jω  and  p* = -σ - jω    (1.3.46)

1.3.6.2 Finding the values of K and K*

The value of K is found using the following substitution (follow the signs carefully):

K = (s - p) F(s) , evaluated at s = p = -σ + jω    (1.3.47)

similarly for K* :

K* = (s - p*) F(s) , evaluated at s = p* = -σ - jω    (1.3.48)

K* will always be the conjugate of K, so no separate calculation is needed.

1.3.6.3 Form of the inverse transform

Once K and K* are known, the following rule is used to obtain the time solution:

f(t) = 2|K| e^(-σt) cos(ωt + ∠K)    (1.3.49)

The magnitude and angle of K are found using the rectangular-to-polar transformation presented in the section on complex numbers.

1.3.6.4 Using MATLAB to perform partial fraction expansion

When working with complex numbers it is very easy to make mistakes. The MATLAB program provides the user with a convenient method for performing partial fraction expansion. It is called the "residue" function. Briefly, here’s how to use it.

  • define the numerator and denominator of the Laplace transform in terms of its polynomial coefficients. For example,

F(s) = (s² + 3s + 2) / (s⁴ + 5s³ + 8s² + 6s)    (1.3.50)

  • enter the numerator as a vector of numerator polynomial coefficients.

num=[1 3 2];

  • enter the denominator as a vector of denominator polynomial coefficients

den= [1 5 8 6 0];

  • use the residue function to find the partial fraction expansion coefficients.

The following is an excerpt from the MATLAB help screen for <residue>

[r,p,k]=residue(num,den); finds the residues, poles and direct term of a partial fraction expansion of the ratio of two polynomials, num(s) and den(s).

If there are no multiple roots,

num (s)    r(1)         r(2)               r(n)
--------- = -------- + -------- + ... + -------- + k(s)
den (s)   s - p(1)   s - p(2)          s - p(n)

Vectors [num] and [den] specify the coefficients of the polynomials in descending powers of s.

The following vectors are returned:

  • residues in the column vector "r",
  • pole locations in column vector "p",
  • direct terms in row vector "k".

The number of poles: n = length(den)-1 = length(r) = length(p)

The direct term coefficient vector is empty if length(num) < length(den); otherwise length(k) = length(num)-length(den)+1

If p(j) = ... = p(j+m-1) is a pole of multiplicity m, then the expansion includes terms of the form

   r(j)          r(j+1)               r(j+m-1)
-------- + ------------ + ... + ------------
 s - p(j)   (s - p(j))^2          (s - p(j))^m

[num,den] = residue(r,p,k), with 3 input arguments and 2 output arguments, converts the partial fraction expansion back to the polynomials with coefficients [num] and [den].

For the example given: (pasted directly from the MATLAB command window)

» num=[1 3 2];

» den= [1 5 8 6 0];

» [r,p,k]=residue(num,den);

r =

-2/15

-1/10 - 1/5i

-1/10 + 1/5i

1/3

p =

-3 + 0i

-1 + 1i

-1 - 1i

0 + 0i

k =

[]

From the "p" or pole vector we see that the second pole is the positive conjugate pole. Therefore the partial fraction expansion is given as:

F(s) = (-2/15)/(s + 3) + (-1/10 - j1/5)/(s + 1 - j1) + (-1/10 + j1/5)/(s + 1 + j1) + (1/3)/s    (1.3.51)

We can find the rectangular-to-polar transformation of the residues in "r" with the following MATLAB commands:

For Magnitude:

» abs(r)

ans =

0.1333

0.2236

0.2236

0.3333

For Phase:

» angle(r)*180/pi

ans =

180.0000

-116.5651

116.5651

0.0000

Using the rules presented above we can conclude that:

f(t) = 1/3 - (2/15) e^(-3t) + 2(0.2236) e^(-t) cos(t - 116.57°)

= 1/3 - 0.1333 e^(-3t) + 0.4472 e^(-t) cos(t - 116.57°)
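If the Symbolic Math Toolbox is available, the whole result can be verified in one step with ilaplace. A sketch:

% Symbolic check of the inverse transform of Eq.(1.3.50)
syms s t
F = (s^2 + 3*s + 2)/(s^4 + 5*s^3 + 8*s^2 + 6*s);
f = ilaplace(F, s, t)
% returns 1/3 - (2/15)exp(-3t) + exp(-t)((2/5)sin(t) - (1/5)cos(t)),
% equivalent to 1/3 - (2/15)e^(-3t) + 0.4472 e^(-t) cos(t - 116.57 deg)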

1.5 The Basis of Frequency Response Analysis

The relationship between the steady-state response and the input forcing function was studied at the end of the last section. It forms the basis of some very important frequency analysis tools to be studied in the next chapter. To lay the foundation for this frequency analysis, therefore, the following property of linear systems needs to be established:

  • If a linear system is driven from a steady sinusoidal source, its steady-state output will contain a component at the sinusoidal driving frequency, with all other frequency components (due to the system) decaying to zero over time.

To show this, reconsider the generalized system representing the real-world electrical and mechanical systems in Figure 1.4.3 (of the last section). The differential equation describing such second-order systems with damping is the following:

d²x/dt² + 2ζω_n (dx/dt) + ω_n² x = f(t)    (1.5.1)

subject to the initial conditions at t = 0 :

x(0) = 0    (1.5.2)

(dx/dt)(0) = 0    (1.5.3)

Let the applied forcing function be a sinusoid at a frequency ω_A , not necessarily equal to the system natural frequency ω_n ; i.e., let

f(t) = sin(ω_A t)    (1.5.4)

which has a Laplace transform equal to the following:

F(s) = ω_A / (s² + ω_A²)    (1.5.5)

Taking the Laplace transform of the DEQ in Eq.(1.5.1):

s² X(s) + 2ζω_n s X(s) + ω_n² X(s) = ω_A / (s² + ω_A²)    (1.5.6)

X(s) (s² + 2ζω_n s + ω_n²) = ω_A / (s² + ω_A²)    (1.5.7)

Solving for X(s) :

X(s) = ω_A / [(s² + ω_A²)(s² + 2ζω_n s + ω_n²)]    (1.5.8)

X(s) = ω_A / [s⁴ + 2ζω_n s³ + (ω_n² + ω_A²) s² + 2ζω_n ω_A² s + ω_n² ω_A²]    (1.5.9)

Expanding X(s) into partial fractions:

X(s) = (As + B)/(s² + ω_A²) + (Cs + D)/(s² + 2ζω_n s + ω_n²)    (1.5.10)

Once the values of the coefficients A, B, C, and D have been determined -- see the method describing this at the end of this section -- the inverse Laplace transform of each term is found.

Notice (with Table 1.3.1 transform pairs in mind) that the transient terms that will eventually die out over time are those with the C and D coefficients, since the ζ term in the denominator quadratic factor produces (in the time domain) the damping term:

e^(-ζω_n t)    (1.5.11)

The steady-state term, however, contains no ζ factor in its quadratic term. Its inverse transform becomes:

x_ss(t) = L⁻¹[(As + B)/(s² + ω_A²)]    (1.5.12)

= A cos(ω_A t) + (B/ω_A) sin(ω_A t)    (1.5.13)

To put this into a more suitable form, the following algebraic manipulation is made, by multiplying and dividing by the same square-root factor:

x_ss(t) = √(A² + (B/ω_A)²) · [ A cos(ω_A t) + (B/ω_A) sin(ω_A t) ] / √(A² + (B/ω_A)²)    (1.5.14)

This can be rewritten in the simple form:

x_ss(t) = C_ωA sin(ω_A t + φ)    (1.5.15)

where:

C_ωA = √(A² + (B/ω_A)²)    (1.5.16)

φ = tan⁻¹( A ω_A / B )    (1.5.17)

Thus, we have shown (by example) the relationship between the steady-state response and the input sinewave frequency. Once the initial transient has died out, the only remaining response frequency component is that of the driving frequency. Just how the amplitude C_ωA and the phase angle φ vary with the driving frequency will be studied in the next chapter, when we review some important frequency response analysis techniques.

1.5.1 Determination of the Partial-fraction Coefficients in Eq.(1.5.10)

Picking up the solution at the point where we were solving for X(s):

X(s) = ω_A / [(s² + ω_A²)(s² + 2ζω_n s + ω_n²)]    (1.5.8)

X(s) = ω_A / [s⁴ + 2ζω_n s³ + (ω_n² + ω_A²) s² + 2ζω_n ω_A² s + ω_n² ω_A²]    (1.5.9)

Expanding X(s) into partial fractions:

X(s) = (As + B)/(s² + ω_A²) + (Cs + D)/(s² + 2ζω_n s + ω_n²)    (1.5.10)

We begin the determination of the coefficients A , B , C , and D by putting the two fractions in the above equation over a common denominator:

X(s) = [ (As + B)(s² + 2ζω_n s + ω_n²) + (Cs + D)(s² + ω_A²) ] / [ (s² + ω_A²)(s² + 2ζω_n s + ω_n²) ]    (1.5.18)

Equating numerator coefficients of s³, s², s¹, and s⁰ in Eq.(1.5.9) and Eq.(1.5.18):

(row 1)  s³:  A + C = 0
(row 2)  s²:  2ζω_n A + B + D = 0
(row 3)  s¹:  ω_n² A + 2ζω_n B + ω_A² C = 0
(row 4)  s⁰:  ω_n² B + ω_A² D = ω_A    (1.5.19)

Multiplying (row 1) and (row 2) by -ω_A² :

(row 1)  -ω_A² A - ω_A² C = 0
(row 2)  -2ζω_n ω_A² A - ω_A² B - ω_A² D = 0    (1.5.20)

Adding (row 4) to (row 2) , and adding (row 3) to (row 1) :

(row 1)  (ω_n² - ω_A²) A + 2ζω_n B = 0
(row 2)  -2ζω_n ω_A² A + (ω_n² - ω_A²) B = ω_A    (1.5.21)

Adding (row 2)(-2ζω_n)/(ω_n² - ω_A²) to (row 1) :

[ (ω_n² - ω_A²) + (2ζω_n ω_A)²/(ω_n² - ω_A²) ] A = -2ζω_n ω_A / (ω_n² - ω_A²)    (1.5.22)

[ (ω_n² - ω_A²)² + (2ζω_n ω_A)² ] A = -2ζω_n ω_A    (1.5.23)

A = -2ζω_n ω_A / [ (ω_n² - ω_A²)² + (2ζω_n ω_A)² ]    (1.5.24)

From (row 1) of Eq.(1.5.21), B = -(ω_n² - ω_A²) A / (2ζω_n) :

B = ω_A (ω_n² - ω_A²) / [ (ω_n² - ω_A²)² + (2ζω_n ω_A)² ]    (1.5.25)

From (row 1) and (row 2) of Eq.(1.5.19): C = -A and D = -B - 2ζω_n A    (1.5.26)

So, writing Δ = (ω_n² - ω_A²)² + (2ζω_n ω_A)² for brevity,

A = -2ζω_n ω_A / Δ    (1.5.27)

B = ω_A (ω_n² - ω_A²) / Δ    (1.5.28)

C = +2ζω_n ω_A / Δ    (1.5.29)

D = ω_A (ω_A² - ω_n² + 4ζ²ω_n²) / Δ    (1.5.30)

Substituting into Eq.(1.5.16), the steady-state amplitude is then:

C_ωA = √(A² + (B/ω_A)²) = √( (2ζω_n ω_A)² + (ω_n² - ω_A²)² ) / Δ = 1/√Δ    (1.5.31)

where:

G(s) = s² + 2ζω_n s + ω_n²    (1.5.32)

and

C_ωA = 1 / |G(jω_A)|    (1.5.33)

Thus, we see that the amplitude C_ωA is a special function of the driving frequency ω_A . It depends on the magnitude of a system characteristic polynomial G(s) evaluated at s = jω_A . This property involving the complex variable s , which is in general s = σ + jω , will be investigated further in the next chapter.
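This result is easy to evaluate numerically: C_ωA is just the reciprocal of |G(jω_A)|. A MATLAB sketch with assumed illustrative values (ω_n = 2 rad/s, ζ = 0.3, ω_A = 1.5 rad/s; not from the text):

% Steady-state amplitude from the characteristic polynomial G(s)
wn = 2;  z = 0.3;  wA = 1.5;               % assumed example values
G  = [1  2*z*wn  wn^2];                    % G(s) = s^2 + 2*z*wn*s + wn^2
C_wA = 1/abs(polyval(G, 1j*wA))            % 0.3983
% agrees with the closed form of Eq.(1.5.31):
1/sqrt((wn^2 - wA^2)^2 + (2*z*wn*wA)^2)    % 0.3983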

1.5.2 Obtaining a Frequency Response - Two Approaches

The following supplement to this section further exemplifies the relationship between the steady-state response of a system to a sinusoidal input and the system transfer function with s replaced by jω .

Figure 1.5.1 Example system

Figure 1.5.1 shows a low-pass filter that is excited with a sinusoidal forcing function (the sine wave generator), with the resultant waveform displayed on an oscilloscope. There are two ways of calculating what the result will be. The first is to solve the differential equation using Laplace transforms. The second is to use the frequency-domain model of the low-pass filter. Following is a presentation of the two techniques.

1.5.2.1 Technique 1) Solving with differential equations:

As shown previously in section 4, an RC circuit (also known as a low-pass filter) can be represented in differential equation form as:

R i(t) + (1/C) ∫ i(t) dt = v_in(t)    Eq. (1.5.34)

This equation may be expressed in terms of the voltage across the capacitor as:

RC (dv_c/dt) + v_c = v_in    Eq (1.5.35)

or in the Laplace domain:

RCs V_c(s) + V_c(s) = V_in(s)    Eq (1.5.36)

or:

V_c(s) = V_in(s) / (RCs + 1)    Eq (1.5.37)

Let the input forcing function be represented as X(s) and let the output of the filter (the voltage across the capacitor) be represented as Y(s); then the relationship between X(s) and Y(s) is given as H(s), where:

H(s) = Y(s)/X(s) = 1/(RCs + 1)    Eq (1.5.38)

This is known as the transfer function of the system.

Let the forcing function be a unit sinusoidal disturbance, with 0 [deg] phase shift and an input frequency ω_A :

x(t) = sin(ω_A t) ,  X(s) = ω_A / (s² + ω_A²)    Eq (1.5.39)

The output of the filter can be calculated as:

Y(s) = H(s) X(s) = [ 1/(RCs + 1) ] [ ω_A/(s² + ω_A²) ]    Eq (1.5.40)

expanding yields:

Y(s) = ω_A / ( RCs³ + s² + RCω_A² s + ω_A² )    Eq (1.5.41)

The following characteristics of a first-order system will be discussed more thoroughly in Session 2 -- Frequency Domain Analysis. They are postulated here to simplify the analysis that is underway.

  • The low-pass filter "corner frequency" is defined as the frequency at which the magnitude of the output is 1/√2 (≈ 0.707) times the magnitude of the sinusoidal input.
  • The output is phase shifted by -45 [deg] with respect to the input. This occurs when the sinusoidal input disturbance is injected at the "corner" frequency.
  • The corner frequency is given as 1/RC.

We can calculate the time response of the low pass filter using differential equations and verify that the above postulates are indeed true!

Let RC = 1 (that is, we are going to let the corner frequency equal 1 [rad/s]), and let us inject a 1 [rad/s] unit sinusoid into the low-pass filter. From (1.5.41) above we obtain:

Y(s) = 1 / (s³ + s² + s + 1)    Eq (1.5.42)

Using MATLAB to find the roots of the denominator of (1.5.42) yields the following partial fraction expansion form:

Y(s) = K/(s - j1) + K*/(s + j1) + A/(s + 1)    Eq (1.5.43)

Using MATLAB to find the partial fraction expansion coefficients (using the residue command) we obtain the following expansion:

Y(s) = (-1/4 - j1/4)/(s - j1) + (-1/4 + j1/4)/(s + j1) + (1/2)/(s + 1)    Eq (1.5.44)

which can be split into the following two linear contributions, one the result of a complex conjugate pair and the second an exponentially decaying term:

Y(s) = [ (-1/4 - j1/4)/(s - j1) + (-1/4 + j1/4)/(s + j1) ] + (1/2)/(s + 1)    Eq (1.5.45)

The inverse Laplace transform for the exponentially decaying contribution is:

y1(t) = (1/2) e^(-t)    Eq (1.5.46)

Referring to the information presented in the supplement above (on using MATLAB commands to find the inverse Laplace transform), we see that for the complex conjugate pair we have:

K = -1/4 - j1/4    Eq (1.5.47)

|K| = √( (1/4)² + (1/4)² ) = 0.3536    Eq (1.5.48)

∠K = -135 [deg]    Eq (1.5.49)

2|K| = 0.7071    Eq (1.5.50)

p = 0 + j1 ; i.e., σ = 0 and ω = 1    Eq (1.5.51)

Therefore:

y2(t) = 0.7071 e^(0t) cos(t - 135°)    Eq (1.5.52)

which simplifies to:

y2(t) = 0.7071 sin(t - 45°)    Eq (1.5.53)

Combining y1(t) and y2(t) we obtain the following:

y(t) = 0.5 e^(-t) + 0.7071 sin(t - 45°)    Eq (1.5.54)

When the transient aspect of the above time response decays to zero (as t → ∞) we observe that the output does indeed have a magnitude that is 1/√2 = 0.7071 times the magnitude of the sinusoidal input, and that the output is phase shifted by -45 [deg] with respect to the input sinusoid. In addition we note that the system has a steady-state oscillation that is of the same frequency as the input sinusoidal disturbance.

1.5.2.2 Technique 2) Solving in the frequency domain:

To find the sinusoidal steady-state response of any "transfer function" we only need to let s = jω in the transfer function of interest. In this case (RC = 1):

H(s) = 1/(s + 1)    Eq (1.5.55)

therefore:

H(jω) = 1/(jω + 1)    Eq (1.5.56)

and at ω = 1 [rad/s]:

H(j1) = 1/(1 + j1)    Eq (1.5.57)

|H(j1)| = 1/√2 = 0.7071    Eq (1.5.58)

∠H(j1) = -tan⁻¹(1) = -45 [deg]    Eq (1.5.59)

As demonstrated above, this method is considerably simpler, and control systems engineers always use this method, as opposed to the differential equation approach, when evaluating the frequency response of a system.
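In MATLAB the frequency-domain calculation collapses to a few lines (a sketch for RC = 1 and ω = 1 rad/s):

% Frequency-domain evaluation of H(s) = 1/(RCs + 1) at s = jw
RC = 1;  w = 1;
H = 1/(1j*w*RC + 1);
mag   = abs(H)               % 0.7071  (= 1/sqrt(2))
phase = angle(H)*180/pi      % -45 [deg]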

NEXT: The Bode plot and frequency domain analysis
