Exercise 13
13.1 Model Reduction and Model Reduction Error
Find balanced state-space realizations for the following systems G(s) = C(sI − A)⁻¹B and truncate them to find suitable approximations Ĝ that make |G(iω) − Ĝ(iω)| small for all ω (hint: balreal + modred). Plot Bode diagrams for G and Ĝ to illustrate the results for different model orders.
a) A = [−1 0 0; 0 −0.1 0; 0 0 −0.101], B = [1; 1; −1], C = [1 1 1]
b) A = [−1 0 0; 0 −0.1 0; 0 0 −0.101], B = [1; 1; 1], C = [1 1 1]
c) G(s) = (s+2)(s+4)(s+6)(s+8) / ((s+1)(s+3)(s+5)(s+7))
d) It is possible to show that the truncation error lies between the following bounds (the "twice the tail" bound):
σ_{r+1} ≤ |G(iω) − Ĝ(iω)| ≤ 2(σ_{r+1} + ⋯ + σ_n)
The bound is valid if A is stable, you keep the first r states, and σ_r > σ_{r+1}.
Compare these bounds with the result you got in c).
Hint: Either use a Bode plot of G − Ĝ, or use the function hinfnorm(H), which calculates sup_ω |H(iω)|.
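For reference, here is a Python sketch (using numpy/scipy instead of the course's MATLAB functions balreal/modred, and using my assumed reading of the matrices in a)) of the check asked for in d): compute the Hankel singular values, truncate a balanced realization to order r, and compare the worst-case frequency-response error against the two bounds. The supremum over ω is approximated on a dense grid.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# System a) (assumed reading of the flattened matrices in the exercise)
A = np.diag([-1.0, -0.1, -0.101])
B = np.array([[1.0], [1.0], [-1.0]])
C = np.array([[1.0, 1.0, 1.0]])

# Gramians: A P + P A' + B B' = 0 and A' Q + Q A + C' C = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Balancing transformation (the procedure from Exercise 13.2c)
R = cholesky(P, lower=True)
U, s2, _ = svd(R.T @ Q @ R)
sigma = np.sqrt(s2)                      # Hankel singular values, descending
T = R @ U @ np.diag(sigma ** -0.5)
Ab, Bb, Cb = np.linalg.solve(T, A @ T), np.linalg.solve(T, B), C @ T

# Truncate to the first r balanced states
r = 1
Ar, Br, Cr = Ab[:r, :r], Bb[:r, :], Cb[:, :r]

def fr(Am, Bm, Cm, w):
    """Frequency response C (iwI - A)^{-1} B at a single frequency w."""
    n = Am.shape[0]
    return (Cm @ np.linalg.solve(1j * w * np.eye(n) - Am, Bm))[0, 0]

# Approximate sup_w |G(iw) - Ghat(iw)| on a dense logarithmic grid
ws = np.logspace(-4, 3, 2000)
err = max(abs(fr(A, B, C, w) - fr(Ar, Br, Cr, w)) for w in ws)

# Lower bound sigma_{r+1} <= err <= upper bound 2 * (sigma_{r+1} + ... + sigma_n)
print(sigma[r], err, 2 * sigma[r:].sum())
```

The same numbers can of course be produced in MATLAB with balreal, modred, and hinfnorm; this sketch only shows where the bounds come from.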
13.2 Balanced Realization Transformation
Show that the coordinate transformation x = Tx̄ changes the matrices in the following way:
a) Ā = T⁻¹AT, B̄ = T⁻¹B, C̄ = CT, D̄ = D (state-space matrices)
b) P̄ = T⁻¹PT⁻ᵀ, Q̄ = TᵀQT (reachability and observability Gramians)
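The claim in b) is easy to sanity-check numerically. The following Python sketch (with a randomly chosen invertible T and a small stable example system, both made up for illustration) solves the Lyapunov equations before and after the transformation and compares:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)

# A small stable example system (made up for this check)
A = np.diag([-1.0, -0.1, -0.101])
B = np.array([[1.0], [1.0], [-1.0]])
C = np.array([[1.0, 1.0, 1.0]])

T = rng.standard_normal((3, 3))            # generic invertible transformation
Ti = np.linalg.inv(T)
Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T     # a) transformed state-space matrices

# Gramians of the original and the transformed system
P  = solve_continuous_lyapunov(A,   -B  @ B.T)
Q  = solve_continuous_lyapunov(A.T, -C.T @ C)
Pb = solve_continuous_lyapunov(Ab,   -Bb @ Bb.T)
Qb = solve_continuous_lyapunov(Ab.T, -Cb.T @ Cb)

# b) predicts Pbar = T^{-1} P T^{-T} and Qbar = T' Q T
print(np.allclose(Pb, Ti @ P @ Ti.T), np.allclose(Qb, T.T @ Q @ T))
```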
c) Show that the following procedure transforms the system to a form where P̄ = Q̄ = Σ:
- Compute the Gramians P and Q for the system given by (A, B, C, D).
- Compute a matrix R so that P = RRᵀ (this is called a Cholesky factorization).
- Compute an SVD of RᵀQR and write it in the form RᵀQR = UΣ²Uᵀ.
- Use the coordinate transformation x = Tx̄ with T = RUΣ^(−1/2).
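The steps above can be transcribed directly into Python (the exercise asks for a proof, but a numerical check on a small made-up stable system makes the claim concrete):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# A small stable example system (made up for this check)
A = np.diag([-1.0, -0.1, -0.101])
B = np.array([[1.0], [1.0], [-1.0]])
C = np.array([[1.0, 1.0, 1.0]])

# Step 1: Gramians  A P + P A' + B B' = 0,  A' Q + Q A + C' C = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Step 2: Cholesky factor  P = R R'
R = cholesky(P, lower=True)

# Step 3: SVD of R' Q R = U Sigma^2 U'
U, s2, _ = svd(R.T @ Q @ R)
sigma = np.sqrt(s2)                  # diagonal of Sigma (Hankel singular values)

# Step 4: transformation T = R U Sigma^{-1/2}
T = R @ U @ np.diag(sigma ** -0.5)

# In the new coordinates both Gramians equal Sigma
Ti = np.linalg.inv(T)
Pbar = Ti @ P @ Ti.T
Qbar = T.T @ Q @ T
print(np.allclose(Pbar, np.diag(sigma)), np.allclose(Qbar, np.diag(sigma)))
```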
13.3 Grey box Identification of a Heated Rod
The file greyrod.m from Lecture 12 performs grey-box identification of the parameters kappa and htf. A compartment model of the system is produced by heatd.m.
Try different model orders (1, 2, 5, and 100) and compare the results of the grey-box identification. Study both the Bode diagrams and the estimated values of kappa and htf, and comment on the results.
(You will need greyrod.m and heatd.m.)
13.4 State Space Identification with measurable states
If the full state vector x is known, then the state-space model (A, B, C, D) can be found by standard least-squares regression in
[x2 ⋯ xN+1; y1 ⋯ yN] = [A B; C D] [x1 ⋯ xN; u1 ⋯ uN] + noise
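To see why this works, here is a Python sketch with a small made-up discrete-time system (not the one used in ex13_4.m): simulate noiseless data, stack the matrices exactly as in the equation above, and recover (A, B, C, D) by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small hypothetical discrete-time system used only to generate data
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# Simulate N steps with a random input, storing the full state vector
N = 200
u = rng.standard_normal((1, N))
x = np.zeros((2, N + 1))
y = np.zeros((1, N))
for k in range(N):
    x[:, k + 1] = A @ x[:, k] + B[:, 0] * u[0, k]
    y[0, k] = C[0] @ x[:, k] + D[0, 0] * u[0, k]

# Stack the regression  lhs = Theta @ rhs  with  Theta = [A B; C D]
lhs = np.vstack([x[:, 1:], y])     # [x_2 ... x_{N+1}; y_1 ... y_N]
rhs = np.vstack([x[:, :-1], u])    # [x_1 ... x_N  ; u_1 ... u_N]

# Least squares: lhs' = rhs' Theta', so solve for Theta' and transpose
Theta, *_ = np.linalg.lstsq(rhs.T, lhs.T, rcond=None)
Theta = Theta.T
A_hat, B_hat = Theta[:2, :2], Theta[:2, 2:]
C_hat, D_hat = Theta[2:, :2], Theta[2:, 2:]
print(np.allclose(A_hat, A), np.allclose(C_hat, C))
```

With noiseless data and a persistently exciting input, the recovery is exact; with noise, the least-squares estimate is the natural generalization.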
a) Study the file ex13_4.m and explain how the code works.
b) Suggest a practical method to reduce the number of nonzero elements in (A, B, C, D). Such a method can be useful if you suspect that many elements in these matrices are zero. (You do not need to implement it.)
-----------------------------------
Solutions:
Matlab files with solutions
ex 13_1: See the code ex13_1.m
ex 13_2: See this solution: exercise-13solutions.pdf
ex 13_3: See the code ex13_3.m (you will need greyrod.m and heatd.m)
ex 13_4:
a) The code in the exercise implements the solution to the normal equations for a linear matrix equation of the form
Measurement-matrix = Theta-matrix * Regressor-matrix
In the code, the Measurement-matrix is called "lhs" (for "left-hand side") and the Regressor-matrix is called "rhs".
We discussed this generalization of linear regression to the matrix case in an earlier exercise, and concluded that the corresponding normal equations have the same structure as before. In the code, the solution is obtained both from such a normal equation and from MATLAB's built-in solver for over- and underdetermined linear systems (which often has better numerics).
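In Python terms, the two solution routes that the code compares look roughly like this (a sketch with made-up data; the names lhs/rhs mirror the MATLAB code):

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up regression data:  lhs = Theta @ rhs  (noise omitted for clarity)
Theta_true = rng.standard_normal((3, 4))
rhs = rng.standard_normal((4, 50))       # regressor matrix
lhs = Theta_true @ rhs                   # measurement matrix

# Route 1: normal equations  Theta (rhs rhs') = lhs rhs'
Theta_ne = np.linalg.solve(rhs @ rhs.T, (lhs @ rhs.T).T).T

# Route 2: a general least-squares solver (analogous to MATLAB's "\",
# usually better conditioned than forming rhs rhs' explicitly)
Theta_ls = np.linalg.lstsq(rhs.T, lhs.T, rcond=None)[0].T

print(np.allclose(Theta_ne, Theta_true), np.allclose(Theta_ls, Theta_true))
```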
b) One can use the "lasso" method from an early lecture in the course, adding an L_1 penalty on the sum of the absolute values of all coefficients in the A, B, C, and D matrices. The simplest way to implement this would probably be to vectorize the matrix equation, i.e. rewrite the given linear equation system from the problem into the traditional form y = Φθ, where θ is a parameter vector containing all the matrix elements of A, B, C, and D.
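A sketch of what that vectorization could look like in Python (the sparse Theta and all sizes are made up, and the soft-thresholding loop is a minimal ISTA-style lasso solver, just one of several possible choices):

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up sparse parameter matrix Theta = [A B; C D]
Theta = np.array([[0.8, 0.0, 0.0, 0.5],
                  [0.0, 0.9, 0.0, 0.0],
                  [0.0, 0.0, 0.7, 1.0],
                  [1.0, 0.0, 0.0, 0.0]])
rhs = rng.standard_normal((4, 200))      # regressors
lhs = Theta @ rhs                        # measurements (noiseless here)

# Vectorize: vec(Theta @ rhs) = (rhs' kron I) vec(Theta), column-major vec
m = Theta.shape[0]
Phi = np.kron(rhs.T, np.eye(m))          # regressor matrix for y = Phi theta
y = lhs.flatten(order="F")
assert np.allclose(Phi @ Theta.flatten(order="F"), y)   # consistency check

# Lasso via ISTA: minimize 0.5 ||y - Phi theta||^2 + lam ||theta||_1
lam = 0.01
step = 1.0 / np.linalg.norm(Phi, 2) ** 2
theta = np.zeros(Phi.shape[1])
for _ in range(500):
    g = theta - step * Phi.T @ (Phi @ theta - y)                  # gradient step
    theta = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold

Theta_hat = theta.reshape(Theta.shape, order="F")
print(np.round(Theta_hat, 2))            # small entries are pushed to zero
```

The L_1 penalty drives the coefficients that are truly zero to (near) zero, which is exactly the sparsification effect asked for in b).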