The theory of optimal control is concerned with operating a dynamic system at minimum cost. The case where the system dynamics are described by a set of linear differential equations and the cost is described by a quadratic function is called the LQ problem. One of the main results in the theory is that the solution is provided by the linear-quadratic regulator (LQR), a feedback controller whose equations are given below. The settings of a regulating controller governing a machine or process (like an airplane or chemical reactor) are found by using a mathematical algorithm that minimizes a cost function with weighting factors supplied by a human engineer. The cost function is often defined as a sum of the deviations of key measurements, like altitude or process temperature, from their desired values.


From the main problem, the dynamic equations of the inverted pendulum system are given in state-space form. To see how this problem was originally set up and how the system equations were derived, consult the Inverted Pendulum: System Modeling page. For this problem the outputs are the cart's displacement in meters and the pendulum angle in radians, where the angle represents the deviation of the pendulum's position from its vertical equilibrium.

The design criteria for this system are stated in terms of a step command in the cart's position. As you may have noticed if you went through some of the other inverted pendulum examples, the design criteria for this example are different.

In the other examples we were attempting to keep the pendulum vertical in response to an impulsive disturbance force applied to the cart. We did not attempt to control the cart's position. In this example, we are attempting to keep the pendulum vertical while commanding the cart to move to a new position.

A state-space design approach is well suited to the control of multiple outputs as we have here. This problem can be solved using full-state feedback. The schematic of this type of control system is shown below, where K is a matrix of control gains. Note that here we feed back all of the system's states, rather than using the system's outputs for feedback. In this problem, the reference r represents the step command for the cart's position.

The 4 states represent the position and velocity of the cart and the angle and angular velocity of the pendulum. The output contains both the position of the cart and the angle of the pendulum. We want to design a controller so that when a step reference is given to the system, the pendulum is displaced but eventually returns to zero (i.e., vertical), while the cart moves to its new commanded position.

To view the system's open-loop response please refer to the Inverted Pendulum: System Analysis page. The first step in designing a full-state feedback controller is to determine the open-loop poles of the system.

Enter the following lines of code into an m-file. As you can see, there is one pole in the right half of the complex plane. This should confirm your intuition that the system is unstable in open loop. The next step in the design process is to find the vector of state-feedback control gains, assuming that we have access to (i.e., can measure) all of the state variables. This can be accomplished in a number of ways; one option is to use the lqr command, which returns the optimal controller gain assuming a linear plant, quadratic cost function, and reference equal to zero (consult your textbook for more details).
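If you do not have MATLAB at hand, the same pole check can be sketched in NumPy. The matrices below are illustrative stand-ins for a linearized cart-pendulum (assumed values: unit masses and pendulum length, g = 9.8, friction neglected), not the exact matrices derived on the modeling page.

```python
import numpy as np

# Illustrative stand-in for the linearized cart-pendulum model --
# assumed parameters (unit masses and length, g = 9.8, no friction),
# NOT the matrices from the System Modeling page.
# States: [cart position, cart velocity, pendulum angle, angular velocity]
A = np.array([[0.0, 1.0,  0.0, 0.0],
              [0.0, 0.0, -9.8, 0.0],
              [0.0, 0.0,  0.0, 1.0],
              [0.0, 0.0, 19.6, 0.0]])

poles = np.linalg.eigvals(A)   # MATLAB equivalent: poles = eig(A)
print(poles)                   # one pole has a positive real part,
                               # so the open-loop system is unstable
```

For this stand-in model the unstable pole sits at +sqrt(19.6), mirroring the single right-half-plane pole the tutorial finds.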

Before we design our controller, we will first verify that the system is controllable. Satisfaction of this property means that we can drive the state of the system anywhere we like in finite time (under the physical constraints of the system). For the system to be completely state controllable, the controllability matrix must have full rank, where the rank of a matrix is the number of linearly independent rows or columns.

The controllability matrix of the system takes the form shown below, where the number of terms corresponds to the number of state variables of the system. Adding additional terms to the controllability matrix with higher powers of the system matrix A will not increase its rank, since these additional terms will just be linear combinations of the earlier terms. Since our controllability matrix is 4x4, the rank of the matrix must be 4 for the system to be controllable. Adding the following additional commands to your m-file and running it in the MATLAB command window will produce the following output.
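In MATLAB this check is ctrb followed by rank. The same test can be sketched directly from the definition; the A and B here are the same kind of illustrative stand-ins (assumed unit-parameter cart-pendulum, not the tutorial's actual matrices).

```python
import numpy as np

# Illustrative stand-in cart-pendulum matrices (assumed values:
# unit masses and length, g = 9.8, no friction).
A = np.array([[0.0, 1.0,  0.0, 0.0],
              [0.0, 0.0, -9.8, 0.0],
              [0.0, 0.0,  0.0, 1.0],
              [0.0, 0.0, 19.6, 0.0]])
B = np.array([[0.0], [1.0], [0.0], [1.0]])

# Controllability matrix [B, AB, A^2*B, A^3*B] -- MATLAB: ctrb(A,B)
n = A.shape[0]
Co = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

print(np.linalg.matrix_rank(Co))   # -> 4, full rank: controllable
```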

Therefore, we have verified that our system is controllable and thus we should be able to design a controller that achieves the given requirements. Specifically, we will use the linear quadratic regulation method for determining our state-feedback control gain matrix.

The MATLAB function lqr allows you to choose two parameters, R and Q, which balance the relative importance of the control effort and the deviation of the states from zero, respectively, in the cost function that you are trying to optimize.

The simplest case is to assume R = 1 and Q = C'*C. The cost function corresponding to this R and Q places equal importance on the control and the state variables which are outputs (the pendulum's angle and the cart's position).

Essentially, the lqr method allows for the control of both outputs. In this case, it is pretty easy to do. The controller can be tuned by changing the nonzero elements in the Q matrix to achieve a desirable response. The element in the (1,1) position of Q represents the weight on the cart's position and the element in the (3,3) position represents the weight on the pendulum's angle. The input weighting R will remain at 1. Ultimately what matters is the relative value of Q and R, not their absolute values.
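The MATLAB call is K = lqr(A,B,Q,R). For readers without MATLAB, the same gain can be obtained by solving the continuous-time algebraic Riccati equation directly. The sketch below uses SciPy with illustrative stand-in matrices (assumed unit-parameter cart-pendulum, not the tutorial's actual system) and hypothetical weight values.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative stand-in cart-pendulum matrices (assumed values:
# unit masses and length, g = 9.8, no friction).
A = np.array([[0.0, 1.0,  0.0, 0.0],
              [0.0, 0.0, -9.8, 0.0],
              [0.0, 0.0,  0.0, 1.0],
              [0.0, 0.0, 19.6, 0.0]])
B = np.array([[0.0], [1.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

Q = C.T @ C            # weight only the measured outputs
Q[0, 0] = 100.0        # hypothetical extra weight on cart position
Q[2, 2] = 100.0        # hypothetical extra weight on pendulum angle
R = np.array([[1.0]])  # input weighting kept at 1

# MATLAB's lqr(A,B,Q,R) solves this same Riccati equation internally.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # optimal gain for u = -K*x

print(K)
print(np.linalg.eigvals(A - B @ K))    # closed-loop poles: all stable
```

Because (A, B) is controllable and Q penalizes an observable set of states, the resulting closed-loop poles are guaranteed to lie in the left half plane.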

Now that we know how to interpret the Q matrix, we can experiment to find the Q matrix that will give us a "good" controller. We will go ahead and find the gain matrix K and plot the response all in one step so that changes can be made in the control and seen automatically in the response. Add the following commands to the end of your m-file and run in the MATLAB command window to get the following value for K and the response plot shown below.

The curve in red represents the pendulum's angle in radians, and the curve in blue represents the cart's position in meters. As you can see, this plot is not satisfactory. The overshoots of the pendulum and cart appear fine, but their settling times need improvement and the cart's rise time needs to be reduced. As I'm sure you have noticed, the cart's final position is also not near the desired location but has in fact moved in the opposite direction.

This error will be dealt with in the next section; for now we will focus on the settling and rise times. Go back to your m-file and change the Q matrix to see if you can get a better response.

You will find that increasing the (1,1) and (3,3) elements of Q makes the settling and rise times go down, and lowers the angle the pendulum moves. In other words, you are putting more weight on the errors at the cost of increased control effort. Modifying your m-file to increase the (1,1) and (3,3) elements of Q will produce a new value of K and the step response shown below. You may note that if you increased the values of the elements of Q even higher, you could improve the response even more.

This weighting was chosen because it just satisfies the transient design requirements. Increasing the magnitude of Q more would make the tracking error smaller, but would require greater control force. More control effort generally corresponds to greater cost (more energy, a larger actuator, etc.). The controller we have designed so far meets our transient requirements, but now we must address the steady-state error.

In contrast to the other design methods, where we feed back the output and compare it to the reference input to compute an error, with a full-state feedback controller we are feeding back all of the states.

We need to compute what the steady-state value of the states should be, multiply that by the chosen gain K, and use the resulting value as our "reference" for computing the input. This can be done by adding a constant precompensator gain after the reference; the schematic below shows this relationship. We can find this factor by employing the user-defined function rscale. The output matrix is modified to reflect the fact that the reference is a command only on the cart's position.

Note that the function rscale.m is not a standard MATLAB function. You will have to download it here and place it in your current directory. More information can be found here: Extras: rscale. Now you can plot the step response by adding the above and following lines of code to your m-file and re-running at the command line.
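If you cannot use rscale.m, the scaling factor it returns can also be computed in closed form from the closed-loop DC gain. A minimal SciPy sketch, again on illustrative stand-in matrices (assumed unit-parameter cart-pendulum, not the tutorial's actual system), with the precompensator named Nbar here for illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative stand-in cart-pendulum matrices and weights (assumed).
A = np.array([[0.0, 1.0,  0.0, 0.0],
              [0.0, 0.0, -9.8, 0.0],
              [0.0, 0.0,  0.0, 1.0],
              [0.0, 0.0, 19.6, 0.0]])
B = np.array([[0.0], [1.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
Q = C.T @ C
R = np.array([[1.0]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # state-feedback gain

# The reference commands cart position only, so scale against the
# first output row (this plays the role of rscale's modified C matrix).
Cn = C[:1, :]

# DC gain from r to cart position when u = Nbar*r - K*x:
#   y_ss = Cn * (B*K - A)^(-1) * B * Nbar
g0 = (Cn @ np.linalg.solve(B @ K - A, B)).item()
Nbar = 1.0 / g0                  # precompensator gain
print(Nbar * g0)                 # scaled steady-state position -> 1.0
```

Choosing Nbar as the reciprocal of the unscaled DC gain forces a unit step command to produce a unit steady-state cart position, which is exactly what the rscale utility computes for the tutorial's system.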

Now the steady-state error is within our limits, the rise and settling times are met, and the pendulum's overshoot is within the range of the design criteria.

Note that the precompensator employed above is calculated based on the model of the plant and further that the precompensator is located outside of the feedback loop. Therefore, if there are errors in the model or unknown disturbances the precompensator will not correct for them and there will be steady-state error.

You may recall that the addition of integral control may also be used to eliminate steady-state error, even in the presence of model uncertainty and step disturbances. For an example of how to implement integral control in the state-space setting, see the Motor Position: State-Space Methods example. The tradeoff with using integral control is that the error must first develop before it can be corrected; therefore, the system may be slow to respond.

The precompensator, on the other hand, is able to anticipate the steady-state offset using knowledge of the plant model. A useful technique is to combine the precompensator with integral control to leverage the advantages of each approach. The response achieved above is good, but was based on the assumption of full-state feedback, which is not necessarily valid. To address the situation where not all state variables are measured, a state estimator must be designed. A schematic of state-feedback control with a full-state estimator is shown below, without the precompensator.

Before we design our estimator, we will first verify that our system is observable. The property of observability determines whether or not, based on the measured outputs of the system, we can estimate the state of the system. Similar to the process for verifying controllability, a system is observable if its observability matrix is full rank. The observability matrix is formed by stacking the output matrix C multiplied by successive powers of the system matrix A. We can employ the MATLAB command obsv to construct the observability matrix and the rank command to check its rank as shown below.

Since the observability matrix is 8x4 and has rank 4, it is full rank and our system is observable. The observability matrix in this case is not square since our system has two outputs. Note that if we could only measure the pendulum angle output, we would not be able to estimate the full state of the system. This can be verified by the fact that obsv(A,C(2,:)) produces an observability matrix that is not full rank. Since we know that we can estimate our system state, we will now describe the process for designing a state estimator.
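MATLAB's obsv(A,C) builds this stacked matrix directly. A small sketch of the same rank test, once more on illustrative stand-in matrices (assumed unit-parameter cart-pendulum, not the tutorial's actual system):

```python
import numpy as np

# Illustrative stand-in cart-pendulum matrices (assumed values).
A = np.array([[0.0, 1.0,  0.0, 0.0],
              [0.0, 0.0, -9.8, 0.0],
              [0.0, 0.0,  0.0, 1.0],
              [0.0, 0.0, 19.6, 0.0]])
C = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

def obsv(A, C):
    """Observability matrix [C; CA; CA^2; CA^3] -- MATLAB: obsv(A,C)."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

print(np.linalg.matrix_rank(obsv(A, C)))         # -> 4: observable
print(np.linalg.matrix_rank(obsv(A, C[1:, :])))  # angle only -> 2: not
```

With both outputs the matrix is 8x4 with rank 4; for this stand-in model, measuring the angle alone leaves the cart states invisible, so the rank drops below 4, matching the tutorial's observation.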

Based on the above diagram, the dynamics of the state estimate x̂ are described by the following equation, where L is the estimator gain: x̂' = A x̂ + B u + L(y − C x̂). The spirit of this equation is similar to that of closed-loop control in that the last term is a correction based on feedback.

Specifically, the last term corrects the state estimate based on the difference between the actual output y and the estimated output C x̂. Now let's look at the dynamics of the error in the state estimate, e = x − x̂, which satisfy e' = (A − LC)e. As is the case for control, the speed of convergence depends on the poles of the estimator (the eigenvalues of A − LC).

Since we plan to use the state estimate as the input to our controller, we would like the state estimate to converge faster than is desired from our overall closed-loop system.

That is, we would like the observer poles to be faster than the controller poles. A common guideline is to make the estimator poles 4 to 10 times faster than the slowest controller pole.
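One way to realize this guideline: compute the controller poles, pick estimator poles several times faster, and place them. MATLAB would use place(A',C',P)'; the SciPy sketch below does the same on illustrative stand-in matrices (assumed unit-parameter cart-pendulum, not the tutorial's actual system) with a hypothetical speed-up factor of roughly 5.

```python
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.signal import place_poles

# Illustrative stand-in cart-pendulum matrices and LQR controller.
A = np.array([[0.0, 1.0,  0.0, 0.0],
              [0.0, 0.0, -9.8, 0.0],
              [0.0, 0.0,  0.0, 1.0],
              [0.0, 0.0, 19.6, 0.0]])
B = np.array([[0.0], [1.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
P = solve_continuous_are(A, B, C.T @ C, np.array([[1.0]]))
K = B.T @ P                       # LQR gain (R = 1)

slowest = max(np.linalg.eigvals(A - B @ K).real)  # pole nearest j-axis

# Hypothetical choice: distinct real estimator poles about 5-6.5x
# faster than the slowest controller pole.
est_poles = slowest * np.array([5.0, 5.5, 6.0, 6.5])

# Observer design is the dual of state feedback -- MATLAB: place(A',C',p)'
L = place_poles(A.T, C.T, est_poles).gain_matrix.T

print(np.sort(np.linalg.eigvals(A - L @ C).real))  # close to est_poles
```

Scaling the slowest controller pole by a negative-preserving factor keeps every estimator pole well to the left of the controller poles, so the estimate converges faster than the closed-loop dynamics it feeds.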


Inverted Pendulum: State-Space Methods for Controller Design


