# Exercise 13


**13.1 Model Reduction and Model Reduction Error**

Find balanced state-space realizations for the following systems and truncate them to find suitable approximations $\hat{G}$ that make $\|G(i\omega) - \hat{G}(i\omega)\|$ small for all $\omega$ (hint: balreal + modred). Plot Bode diagrams for $G$ and $\hat{G}$ to illustrate the results for different model orders.
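A minimal sketch of the intended workflow, assuming `G` holds one of the systems below and `r` is a chosen reduced order (both placeholders):

```matlab
% G = tf(...);                               % one of the systems a)-c)
[Gb, g] = balreal(G);                        % balanced realization, g = Hankel singular values
r  = 2;                                      % number of states to keep (try different r)
Gr = modred(Gb, r+1:length(g), 'Truncate');  % truncate the weakly coupled states
bode(G, Gr); grid on                         % compare full and reduced models
```

The vector `g` returned by balreal tells you which states contribute little to the input-output behavior and are therefore safe to remove.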

a)

b)

c)

d) It is possible to show that the truncation error lies between the following bounds

$$\sigma_{r+1} \le \|G - \hat{G}_r\|_\infty \le 2\sum_{k=r+1}^{n} \sigma_k$$

(the "twice the tail" bound), where the $\sigma_k$ are the Hankel singular values.

The bound is valid if $G$ is stable, you are keeping the first $r$ states, and $\sigma_r > \sigma_{r+1}$.

Compare these bounds with the result you got in c).

Hint: Either use a Bode plot of $G - \hat{G}_r$, or use the function hinfnorm(H), which calculates $\|H\|_\infty$.
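A sketch of the comparison, again with `G` and the kept order `r` as placeholders (hinfnorm requires the Robust Control Toolbox):

```matlab
[Gb, g] = balreal(G);                        % g contains the Hankel singular values
r  = 2;                                      % kept model order (placeholder)
Gr = modred(Gb, r+1:length(g), 'Truncate');
lower = g(r+1);                              % lower bound: sigma_{r+1}
upper = 2*sum(g(r+1:end));                   % upper bound: twice the tail
err   = hinfnorm(G - Gr);                    % actual H-infinity error
[lower err upper]                            % err should lie between the bounds
```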

**13.2 Balanced Realization Transformation**

Show that the coordinate transformation $z = Tx$ changes the matrices in the following way

a) $A \to TAT^{-1}$, $B \to TB$, $C \to CT^{-1}$ (state-space matrices)

b) $P \to TPT^T$, $Q \to T^{-T}QT^{-1}$ (reachability and observability Gramians)

c) Show that the following procedure transforms the system to a balanced form where $P = Q = \Sigma$ (diagonal)

- Compute the Gramians $P$ and $Q$ for the system given by $(A, B, C)$
- Compute a matrix $R$ so that $P = RR^T$ (this is called a Cholesky factorization)
- Compute an SVD of $R^T Q R$ and write it on the form $R^T Q R = U\Sigma^2 U^T$
- Use the coordinate transformation $z = Tx$ with $T = \Sigma^{1/2} U^T R^{-1}$
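The steps above can be sketched in Matlab for a stable system `sys = ss(A,B,C,D)`, using the standard functions gram, chol, and svd:

```matlab
P = gram(sys, 'c');            % reachability Gramian
Q = gram(sys, 'o');            % observability Gramian
R = chol(P, 'lower');          % Cholesky factor, P = R*R'
[U, S2] = svd(R'*Q*R);         % R'*Q*R = U*Sigma^2*U'
Sigma = sqrt(S2);
T = sqrtm(Sigma)*U'/R;         % transformation z = T*x  (/R applies R^{-1})
% In the new coordinates both Gramians equal Sigma:
Pnew = T*P*T';                 % = Sigma
Qnew = (T')\Q/T;               % = Sigma
```

Substituting $P = RR^T$ and $R^TQR = U\Sigma^2U^T$ into $TPT^T$ and $T^{-T}QT^{-1}$ verifies that both reduce to $\Sigma$.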

**13.3 Grey-Box Identification of a Heated Rod**

The file greyrod.m from Lecture 12 performs grey-box identification of the parameters kappa and htf. A compartment model of the system is produced by heatd.m.

Try the model orders 1, 2, 5 and 100 and compare the results of the grey-box identification. Study both the Bode diagrams and the estimated values of kappa and htf, and comment on the results.

(You will need greyrod.m and heatd.m )

**13.4 State Space Identification with measurable states**

If the full state vector is known, then the state-space model can be found by standard least-squares regression in

$$\begin{bmatrix} x(t+1) \\ y(t) \end{bmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}\begin{bmatrix} x(t) \\ u(t) \end{bmatrix} + \text{noise}$$
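A sketch of this setup, assuming the states `x` (n×N), inputs `u` (m×N) and outputs `y` (p×N) are stored column-wise as snapshot matrices (the variable names are placeholders):

```matlab
lhs = [x(:, 2:end);   y(:, 1:end-1)];  % "measurement" matrix
rhs = [x(:, 1:end-1); u(:, 1:end-1)];  % "regressor" matrix
Theta = lhs/rhs;                       % least-squares solution of lhs = Theta*rhs
n = size(x, 1);
A = Theta(1:n, 1:n);      B = Theta(1:n, n+1:end);
C = Theta(n+1:end, 1:n);  D = Theta(n+1:end, n+1:end);
```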

a) Study the file ex13_4.m and explain how the code works

b) Suggest a practical method to reduce the number of nonzero elements in the $A$, $B$, $C$ and $D$ matrices. Such a method can be useful if you suspect that many elements in these matrices are zero. (You don't need to implement it.)

-----------------------------------

**Solutions:**

Matlab files with solutions

**ex 13_1:** See the code ex13_1.m

**ex 13_2:** See this solution exercise-13solutions.pdf

**ex 13_3**: See the code ex13_3.m (you will need greyrod.m and heatd.m )

**ex 13_4:**

a) The code in the exercise implements the solution of the normal equations for a linear matrix equation of the form

Measurement-matrix = Theta-matrix * Regressor-matrix

In the code the Measurement-matrix is called "lhs", for "left-hand side", and the Regressor-matrix is called "rhs".

We discussed such a generalization of linear regression to the matrix case in an earlier exercise, and concluded that the corresponding normal equations have the same structure as before. In the code the solution is obtained both from such a normal equation and from Matlab's internal solver for over- and underdetermined linear equation systems (which often has better numerics).

b) One can use the "lasso" method from an early lecture in the course, adding an $L_1$ penalty on the sum of the absolute values of all coefficients in the $A$, $B$, $C$ and $D$ matrices. The simplest way to implement this would probably be to vectorize the matrix equation, i.e. rewrite the given linear equation system in the problem into the traditional form $y = \Phi\theta$, where $\theta$ would be a parameter vector with all the matrix elements of $A$, $B$, $C$ and $D$.
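The vectorization follows from $\mathrm{vec}(\Theta\,\mathrm{rhs}) = (\mathrm{rhs}^T \otimes I)\,\mathrm{vec}(\Theta)$. A sketch, reusing the lhs/rhs matrices from the code (lasso is in the Statistics and Machine Learning Toolbox; the penalty value is a placeholder):

```matlab
q   = size(lhs, 1);
Phi = kron(rhs', eye(q));              % regressor matrix for the vectorized problem
yv  = lhs(:);                          % vectorized measurements
th  = lasso(Phi, yv, 'Lambda', 0.1);   % L1-penalized estimate (lambda is a placeholder)
Theta_sparse = reshape(th, q, []);     % back to matrix form; many entries driven to zero
```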