In the last post we went quickly over the equations of motion for the rigid body rocket and ended up with a system of three equations. The state space model we ended up with carries information about the angle of rotation, the rotational velocity, and the drift motion in the Z direction. The inputs we have control over are the deflection angle of the thrust vector and the thrust level. For the sake of simplicity, let's keep the thrust of the rocket constant, so the only input into the system is the angle of the thrust vector.

Soooo you may be asking, how do we control or stabilize the system??? Well, let's start by asking what you want to control. Do you want to control the angle of the rocket? The rotational velocity? The drift? Maybe a specific combination of them? Turns out, according to the paper, there are basically two ways one could control the rocket: you can try to control and minimize the drift of the rocket, or you can minimize the angle of attack of the rocket. Minimizing the drift makes sense: when launching a rocket we want it to follow a prescribed path and deviate from it as little as possible. But why would one want to minimize the angle of attack? Well, it turns out that if one minimizes the angle of attack, then one is also minimizing the aerodynamic loads on the rocket.

Well, this is all nice and dandy, but how do we even start building a controller for this? According to the paper you can achieve the desired control either with pole placement using root locus, or with the Linear Quadratic Regulator (LQR) approach. This sounds very much like trying to decide between classical control methods and modern control methods. I should note that just because it says modern does not necessarily mean better, but since I am a sucker for doing the least amount of work 🙂 my preferred method is the LQR approach.

#### What is LQR?

I am not even going to attempt to fully explain LQR here, but I will say a couple of things about it in case you are not familiar with it. Given a state space model, the LQR approach minimizes the following cost function.
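In its standard continuous-time form (this is the textbook notation, with x the state vector and u the input):

```latex
J = \int_0^{\infty} \left( x^{T} Q\, x + u^{T} R\, u \right) dt
```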

Q is a user-defined matrix that tells the solver which combination of states to minimize, and R penalizes the use of the input, the thrust deflection angle.

The result of the LQR is a set of gains that control the rocket.
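Concretely, the result is full state feedback (k<sub>1</sub>, k<sub>2</sub>, k<sub>3</sub> are just my notation for the entries of the gain vector K):

```latex
u = -K x \quad\Longrightarrow\quad \delta = -\left(k_1\,\theta + k_2\,\dot{\theta} + k_3\,\dot{z}\right)
```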

*Nice things about LQR*

#1 I can design a controller that will try to achieve an objective. That objective can be written in a very mathematical way via the matrices Q and R (in our case R is a scalar), which I will show later.

#2 LQR takes the information of each state of the rocket (angle, rotational velocity, drift), multiplies those states by some gains, and sends the result to the TVC system, and voila, you can now control the rocket.

#3 Generation of the gains is easily computed if you have MATLAB or some similar program that solves the Riccati equation.

#4 What else do you want!!!

*Annoying things about LQR*

#1 It requires that you know the state of the system. That means you must know the angle, angular velocity, and drift of the rocket. That is three sensors you must add to the rocket, and you better make sure those sensors don't fail!!

#2 You technically don’t know if the controller you get is stable.

#3 What do you mean you don’t know if it is stable

#4 Yup, you don't really know until you check. To be fair, LQR theory does guarantee a stabilizing controller when the model is stabilizable and the states you penalize are detectable, but with a real model you will want to verify it yourself by finding the eigenvalues of the closed-loop system. What LQR does not guarantee is robustness to things like sensor dynamics and modeling errors. Ohh, and I forgot to tell you this: the controller is proportional, which means there will always be some residual error and you might never achieve your commanded state. This can be solved by adding integral control to the LQR approach, but that is for another blog.

I can understand, dear reader, that you might think LQR is a pile of c***. But the nice thing about it is that it gives a clean mathematical way of minimizing an objective. I am also more familiar with LQR than with pole placement for rockets, and hence I will show how to use the LQR approach. So sorry, classical control guys, I am a millennial. Maybe in another post I will show pole placement for the rocket.

#### Designing the LQR Controller

Ok, so now it is time to get some real numbers for the ARES-I rocket. I have used the parameters from the paper to populate my state space model.

Once I do that, I need to decide what Q matrix to use. Since in one case I am attempting to minimize the drift of the rocket, the Q matrix will be a 3×3 matrix consisting of zeros everywhere except for a single entry: a one penalizing the drift state.

Now, if we want to control the angle of attack we need to use the definition of angle of attack from part 1. The angle of attack is a combination of the rotation angle of the rocket and the drift velocity of the rocket relative to the main velocity of the rocket, roughly α = θ + ż/V.

Given this definition, the matrix Q for minimizing the angle of attack is:

Q = [ 1     0   1/V
      0     0   0
      1/V   0   1/V² ]
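Here is a tiny numpy sketch of both weighting matrices, just to show where they come from (I am assuming the drift-minimum penalty goes on the third state, the drift velocity; V is the vehicle velocity from the MATLAB script below):

```python
import numpy as np

V = 1347.0  # vehicle velocity [ft/s], from the MATLAB script below

# Drift minimum: penalize only the third state (drift velocity z_dot),
# so Q is all zeros except for the (3,3) entry.
Q_drift = np.diag([0.0, 0.0, 1.0])

# Load minimum: alpha ~ theta + z_dot/V, so penalizing alpha^2 means
# Q = q q^T with q = [1, 0, 1/V].
q = np.array([1.0, 0.0, 1.0 / V])
Q_alpha = np.outer(q, q)

# Sanity check: x^T Q_alpha x equals alpha^2 for any state x
x = np.array([0.02, 0.0, -3.0])   # [theta, theta_dot, z_dot]
alpha = x[0] + x[2] / V
print(np.isclose(x @ Q_alpha @ x, alpha**2))
```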

With these Q matrices, I used the attached MATLAB script to generate the gains for load minimum and drift minimum.

The paper never specifies what value it uses to penalize R; however, after some trial and error I think the author used a value of R = 0.1.
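If you don't have MATLAB handy, here is a rough numpy-only sketch of what `lqr` is doing under the hood: solve the algebraic Riccati equation via the Hamiltonian matrix and back out the gains. The model numbers mirror the MATLAB script below; the helper name `lqr_gain` is just mine, and the resulting gains are my own computation, not from the paper.

```python
import numpy as np

# ARES-I pitch-plane model at T = 60 s (numbers from the MATLAB script below)
V = 1347.0          # velocity [ft/s]
M_alpha = 0.3807    # [1/s^2]
M_delta = 0.5726    # [1/s^2]
Tc = 2.361e6        # thrust [lbf]
m = 38901.0         # mass [slug]
N_alpha = 686819.0  # normal force slope [lbf/rad]
Drag = 7.15 * (2.4 * 680 * 116.2 - 1000)  # drag, per the script

# State x = [theta, theta_dot, z_dot]; input u = nozzle deflection delta
A = np.array([[0.0, 1.0, 0.0],
              [M_alpha, 0.0, M_alpha / V],
              [-(Tc - Drag + N_alpha) / m, 0.0, -N_alpha / (m * V)]])
B = np.array([[0.0], [M_delta], [Tc / m]])

# Angle-of-attack weighting: alpha ~ theta + z_dot/V, so Q = q q^T
q = np.array([1.0, 0.0, 1.0 / V])
Q = np.outer(q, q)
R = np.array([[0.1]])

def lqr_gain(A, B, Q, R):
    """Solve the continuous-time Riccati equation via the Hamiltonian
    matrix (roughly what MATLAB's lqr does, minus the polish)."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, vecs = np.linalg.eig(H)
    # Eigenvectors of the n stable (most negative real part) modes
    X = vecs[:, np.argsort(w.real)[:n]]
    P = np.real(X[n:, :] @ np.linalg.inv(X[:n, :]))
    P = (P + P.T) / 2            # symmetrize away numerical noise
    K = Rinv @ B.T @ P
    return K, P

K, P = lqr_gain(A, B, Q, R)
cl_poles = np.linalg.eigvals(A - B @ K)
print("K =", K)
print("closed-loop poles:", cl_poles)
```

All the closed-loop poles land in the left half plane, which is the whole point of the exercise.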

```matlab
%% This script generates the state space model for ARES-I CLV at T=60 s
clear all;

Iyy      = 2.186e8;  % [Slug-ft^2]
m        = 38901;    % [Slug]
Tc       = 2.361e6;  % [lbf]
V        = 1347;     % [ft/s]
Cn_alpha = 0.1465;   %
g        = 26.10;    % [ft/s^2]
N_alpha  = 686819;   % [lbf/rad]
M_alpha  = 0.3807;   % [s^-2]
M_delta  = 0.5726;   % [s^-2]
x_cg     = 53.19;    % [ft]
x_cp     = 121.2;    % [ft]

F     = Tc;
Mach  = 1.4;
h     = 34000;
S     = 116.2;
Fbase = 1000;
Ca    = 2.4;
D     = Ca*680*S - Fbase;
Drag  = 7.15*D;

A_matrix = [ 0                   1   0;
             M_alpha             0   M_alpha/V;
            -(F-Drag+N_alpha)/m  0  -N_alpha/(m*V)];

B_matrix = [0        0;
            M_delta  M_alpha;
            Tc/m    -N_alpha/m];

B_matrix_simple = [0; M_delta; Tc/m];

C_matrix        = diag([1 1 1]);
D_matrix        = [0 0; 0 0; 0 0];
D_matrix_simple = [0; 0; 0];

pitch_ARES_ss = ss(A_matrix, B_matrix_simple, C_matrix, D_matrix_simple);
```

```matlab
%% Cost function based on angle of attack
cvector  = {'bo' 'ro' 'go'};
R_vector = [0.1 5 10];

figure; hold on;
for k = 1:1                            % only R = 0.1 is plotted here
    R_matrix_drift = R_vector(k);
    % alpha ~ theta + z_dot/V, so Q = q*q' with q = [1; 0; 1/V]
    Q_matrix_drift = [1    0  1/V;
                      0    0  0;
                      1/V  0  1/V^2];
    [K, S, e] = lqr(pitch_ARES_ss, Q_matrix_drift, R_matrix_drift);

    % Sweep the loop gain from ~0 to 10x the LQR gain and track the
    % closed-loop eigenvalues (a poor man's root locus)
    for i = 1:100000
        e_val(:,i) = eig(A_matrix - B_matrix_simple*K*i/10000);
    end
    plot(real(e_val(1,:)), imag(e_val(1,:)), cvector{k});
    plot(real(e_val(2,:)), imag(e_val(2,:)), cvector{k});
    plot(real(e_val(3,:)), imag(e_val(3,:)), cvector{k});
    grid;
end
xlim([-2 1]);
legend('R = 0.1');
```

#### Stability of the LQR Controller

My favorite method to show whether a system is stable is finding the location of the poles of the system. Or, said another way, finding the eigenvalues of the A matrix of the closed-loop state space representation. The author in the paper shows the following root locus:

My root locus with R=0.1 is:

Hooraay!!!!!!!! I am able to match the author's work!!!
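For what it's worth, the eigenvalue test itself is a one-liner; here is a numpy sketch with toy matrices (a damped oscillator and its unstable mirror, not the rocket's numbers):

```python
import numpy as np

def is_stable(A_cl):
    """An LTI system x_dot = A_cl * x is stable iff every eigenvalue
    of A_cl has a strictly negative real part."""
    return bool(np.all(np.linalg.eigvals(A_cl).real < 0))

# Toy examples: a damped oscillator (stable) vs. the same system
# with the damping sign flipped (unstable)
A_stable   = np.array([[0.0, 1.0], [-2.0, -0.5]])
A_unstable = np.array([[0.0, 1.0], [-2.0,  0.5]])
print(is_stable(A_stable), is_stable(A_unstable))  # → True False
```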

So a couple of things need to be said.

- As the loop gain is swept, the closed-loop poles move around, and for part of the gain range the rocket is unstable.
- R = 0.1 means that I am only lightly penalizing the amount of work that the TVC system has to do.
- Choosing a smaller R implies that the system is going to control angle of attack much more aggressively, but at the cost of having to use the TVC a lot more.

Now that I have shown the stability of the controller it is time to see what it does in “real life”. In part 3 of this series I will be showing some of the simulation results from using drift minimum, load minimum and what an unstable controller looks like. Cheers.