
Robust and Optimal Control


Advances in Industrial Control

Mi-Ching Tsai
Da-Wei Gu

Robust and
Optimal Control
A Two-port Framework Approach
Advances in Industrial Control

For further volumes: http://www.springer.com/series/1412
Mi-Ching Tsai • Da-Wei Gu

Robust and Optimal Control


A Two-port Framework Approach

Mi-Ching Tsai
Department of Mechanical Engineering
National Cheng Kung University
Tainan, Taiwan

Da-Wei Gu
Department of Engineering
University of Leicester
Leicester, UK

ISSN 1430-9491 ISSN 2193-1577 (electronic)


ISBN 978-1-4471-6256-8 ISBN 978-1-4471-6257-5 (eBook)
DOI 10.1007/978-1-4471-6257-5
Springer London Heidelberg New York Dordrecht
Library of Congress Control Number: 2013956809

© Springer-Verlag London 2014


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection
with reviews or scholarly analysis or material supplied specifically for the purpose of being entered
and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of
this publication or parts thereof is permitted only under the provisions of the Copyright Law of the
Publisher’s location, in its current version, and permission for use must always be obtained from Springer.
Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations
are liable to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of
publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for
any errors or omissions that may be made. The publisher makes no warranty, express or implied, with
respect to the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


Series Editors’ Foreword

The series Advances in Industrial Control aims to report and encourage technology
transfer in control engineering. The rapid development of control technology has
an impact on all areas of the control discipline: new theory, new controllers,
actuators, sensors, new industrial processes, computer methods, new applications,
new philosophies, …, new challenges. Much of this development work resides in
industrial reports, feasibility study papers, and the reports of advanced collaborative
projects. The series offers an opportunity for researchers to present an extended
exposition of such new work in all aspects of industrial control for wider and rapid
dissemination.
The Advances in Industrial Control monograph series started in 1992, and in
many ways the sequence of volumes in the series provides an insight into what
industries and what control system techniques were the focus of attention over the
years. A look at the series titles on robust control yields the following list:
• Robust Multivariable Flight Control by Richard J. Adams, James M. Buffington,
Andrew G. Sparks, and Siva S. Banda (ISBN 978-3-540-19906-9, 1994)
• H∞ Aerospace Control Design by Richard A. Hyde (ISBN 978-3-540-19960-1,
1995)
• Robust Estimation and Failure Detection by Rami S. Mangoubi (ISBN 978-3-
540-76251-5, 1998)
• Robust Aeroservoelastic Stability Analysis by Rick Lind and Marty Brenner
(ISBN 978-1-85233-096-5, 1999)
• Robust Control of Diesel Ship Propulsion by Nikolaos Xiros (ISBN 978-1-
85233-543-4, 2002)
• Robust Autonomous Guidance by Alberto Isidori, Lorenzo Marconi, and Andrea
Serrani (ISBN 978-1-85233-695-0, 2003)
• Nonlinear H2/H∞ Constrained Feedback Control by Murad Abu-Khalaf, Jie
Huang, and Frank L. Lewis (ISBN 978-1-84628-349-9, 2006)
• Structured Controllers for Uncertain Systems by Rosario Toscano (ISBN 978-1-
4471-5187-6, 2013)


And from the sister series, Advanced Textbooks in Control and Signal Processing
come:
• Robust Control Design with MATLAB® by Da-Wei Gu, Petko Hr. Petkov, and
Mihail M. Konstantinov (2nd edition ISBN 978-1-4471-4681-0, 2013)
• Robust and Adaptive Control by Eugene Lavretsky and Kevin Wise (ISBN 978-
1-4471-4395-6, 2013)
Clearly, robust control has seen a steady stream of monographs and books in
both series. There is no doubt that the work of George Zames, Bruce Francis, John
Doyle, Keith Glover, and many others created a paradigm change in control systems
theory. Also note the number of aerospace-industry applications in the above list of
texts. This emphasis can be ascribed to the availability within the industry of
accurate high-dimensional multivariable models of aerospace systems, to the wide
range of operating envelopes (and therefore models) that aerospace vehicles
traverse during a flight, and to the facility of optimization-based robust-control
techniques in dealing with multivariable systems and their operational constraints.
From time to time, the Advances in Industrial Control series publishes a
monograph that is theoretical and tutorial in content. This contrasts with most
entries to the series that contain a mix of the theoretical, the practical, and the
industrial. This monograph Robust and Optimal Control by Mi-Ching Tsai and
Da-Wei Gu is one of those exceptions. The authors themselves actually raise the
question “Why another book on the topic of Robust Control?” and their answer is
that they have devised a new route to understanding the derivations and computation
of robust and optimal controllers that they believe is a valuable addition to the
literature of the subject. Their two-port approach is claimed to be more accessible
to an engineering readership and to resonate in particular with an electrical- and
electronic-engineering readership. The theoretical developments reported in the
monograph are fully supported by detailed chapters covering all the background
material and MATLAB® code and illustrated in a simple but persuasive servo-motor-
control problem in the final chapter of the monograph. The list of monographs and
textbooks on robust control shows that there is continuing industrial interest in
this field, and for this reason this monograph is a valuable entry to the Advances
in Industrial Control monograph series.

Industrial Control Centre
Glasgow, Scotland, UK

M.J. Grimble
M.A. Johnson
Preface

This is a book on robust and optimal control of linear, time-invariant systems.


Human beings always seek better results in all activities, and this desire
drives the advance of science and technology. This happens in the control systems
and control engineering area as well. It is desirable to develop a control strategy,
a controller, for the dynamic system under consideration, to satisfy all possible
constraints and to optimize a certain cost function which reflects the design
objectives. This is the so-called optimal control problem. Such problems can be
traced back to as early as the seventeenth century, to the Brachistochrone curve
problem posed by Johann Bernoulli. Solution approaches to such problems
include the classical Calculus of Variations, Pontryagin's Maximum Principle,
Dynamic Programming, (Differential) Game Theory, and Nonsmooth Optimization.
These procedures are complicated, and solutions do not always exist. Fortunately,
for linear time-invariant (LTI) systems many cases admit satisfactory results,
and this book presents a powerful approach which, we hope, is also easy to
understand, for electrical engineers in particular.
On the other hand, robustness is another vitally important issue in control systems
design. A successfully designed automatic control system should always be able
to maintain stability and an acceptable performance level despite uncertainties,
to a certain degree, in the system dynamics and/or in the operating environment;
such uncertainties inevitably exist in any real-world control system. In the late
1970s and early 1980s, with the pioneering work by Zames [8] and Zames and
Francis [9], a theoretical development, now known as H∞ optimal control theory,
was taking shape. Robust controllers for LTI systems can be found by solving
corresponding optimization problems. Robustness is thus achieved by designing
a controller which attains certain optimality of the closed-loop system. The H∞
and related optimization approaches are well developed and elegant. They provide
systematic design procedures, in particular, for multi-input, multi-output linear
systems.
There have been a number of books on this subject. Some books are on the
underlying theories and the derivation of solution formulae [1, 3, 5, 7, 10]. Others
are more on design methodologies, application of such theories, and implementation
software [2, 6]. Naturally, a question arises, “Do we need another book on this
subject?”

It seems satisfactory that practicing control engineers can use available solution
formulae and software routines to work out robust and optimal controllers for given
design problems, when they know well the underlying control systems and design
specifications. However, are we happy with such designed controllers without
knowing exactly how the formulae are derived and on what grounds the solution
procedures are based? As control engineers, are we confident enough to implement
such designed controllers? Answers to the above queries might be obvious, and there
are sources from which we can learn the theories behind the design approaches, as
pointed out earlier. The
problem is that the theory behind the state-space approaches presented in [10] and
other books is very mathematically oriented and difficult for engineers, and students
as well, to understand. Hence, is it possible to present the robust and optimal control
theory for LTI systems in a way such that engineers and students can follow and
grasp the essence of the solution approach? This motivated the research and writing
of the present book.
This book presents an alternative approach to finding a robust controller via
optimization. The approach is based on the chain scattering decomposition (CSD),
initiated by Professor Hidenori Kimura [4] (and the references therein), who also
named it the chain scattering description. CSD uses the configuration of two-port
circuits, which is a fundamental ingredient of electrical engineering and is familiar
to all electrical engineers and students with basic electrical engineering knowledge.
It is shown in the book that (sub)optimal H∞ and H2 as well as stabilizing
controllers can be synthesized following the CSD approach. The book starts from the
well-known linear fractional transformation (LFT), in which a control design problem
can easily be formulated, and then converts the LFT into the CSD format. From the
CSD formulation, the desired controller can be derived directly, using the framework
proposed in the book, in an intuitive and convenient way. The results are complete
and valid for general system settings. The derivation of the solution formulae is
straightforward and uses no mathematics beyond linear algebra. It is hoped that
readers may obtain insight from this robust and optimal controller synthesis
approach, rather than being bewildered in a mathematical maze.
The prerequisites for reading this book are classical control and state variable
control courses at undergraduate level as well as elementary knowledge of linear
algebra and electrical circuits. This book is intended to be used as a textbook for
an introductory graduate course or a senior undergraduate course. It is also our
intention that the book be used in training courses for control engineers on robust
and optimal control systems design for linear time-invariant systems. With the above
considerations in mind, we use plenty of simple yet illustrative worked examples
throughout the book to help readers understand the concepts and see how
the theory develops. Where appropriate, MATLAB code for the examples is also
included for readers to verify the results and to try their own problems. Most
chapters end with exercises for readers to digest the contents covered in
the chapter. To further demonstrate the proposed approaches, an application case
study is presented in the last chapter, which shows wider usage of the framework.

References

1. Green M, Limebeer DJN (1995) Linear robust control. Prentice Hall, Englewood Cliffs
2. Gu DW, Petkov PH, Konstantinov MM (2005) Robust control design with MATLAB. Springer, London
3. Helton JW, Merino O (1998) Classical control using H∞ methods: theory, optimization, and design. Society for Industrial and Applied Mathematics, Philadelphia
4. Kimura H (1997) Chain-scattering approach to H∞ control. Birkhäuser, Boston
5. Maciejowski JM (1989) Multivariable feedback design. Addison-Wesley, Wokingham
6. Skogestad S, Postlethwaite I (2005) Multivariable feedback control: analysis and design. Wiley, New York
7. Stoorvogel AA (1992) The H∞ control problem: a state space approach. Prentice Hall, Englewood Cliffs
8. Zames G (1981) Feedback and optimal sensitivity: model reference transformation, multiplicative seminorms, and approximated inverse. IEEE Trans Autom Control 26:301–320
9. Zames G, Francis BA (1983) Feedback, minimax sensitivity, and optimal robustness. IEEE Trans Autom Control 28:585–601
10. Zhou K, Doyle JC, Glover K (1996) Robust and optimal control. Prentice Hall, Upper Saddle River

National Cheng Kung University, Tainan, Taiwan Mi-Ching Tsai


University of Leicester, Leicester, UK Da-Wei Gu
Acknowledgements

The authors started research on robust and optimal control using the CSD approach
about two decades ago. Many interesting results were discovered while the first
author was on sabbatical leave (2003–2004) at the University of Cambridge. The
main parts of the book were written during the second author's recent visits to
the National Cheng Kung University (NCKU), Taiwan. The authors thank NCKU and the
University of Leicester for the support they received. The authors are greatly
indebted to many individuals; the book would not have been completed without
their help.
The authors gratefully thank all the contributors to the development of the CSD
approach and the related robust and optimal control theory: Professor Keith Glover,
Professor Hidenori Kimura, Professor Ian Postlethwaite, Professor Malcolm Smith,
Professor Jan Maciejowski, Professor Fang Bo Yeh, and Professor Shinji Hara, to
name just a few. The authors are also indebted to their colleagues and students,
past and present, for finding time in their busy schedules to help in editing,
reviewing, and proofreading the book. The long list includes Dr. Chin-Shiong
Tsai, Dr. Bin-Hong Sheng, Dr. Jia-Sen Hu, Dr. Fu-Yen Yang, Dr. Wu-Sung Yao,
Chun-Lin Chen, Ting-Jun Chen, Chia-Ling Chen, and many others.
Finally and most importantly, the authors owe their deepest gratitude and love to
their families for their understanding, patience, and encouragement throughout the
writing of this book.

Contents

1 Introduction
  References
2 Preliminaries
  2.1 Linear Algebra and Matrix Theory
    2.1.1 Vectors and Matrices
    2.1.2 Linear Spaces
    2.1.3 Eigenvalues and Eigenvectors
    2.1.4 Matrix Inversion and Pseudoinverse
    2.1.5 Vector Norms and Matrix Norms
    2.1.6 Singular Value Decomposition
  2.2 Function Spaces and Signals
    2.2.1 Function Spaces
    2.2.2 Norms for Signals and Systems
  2.3 Linear System Theory
    2.3.1 Linear Systems
    2.3.2 State Similarity Transformation
    2.3.3 Stability, Controllability, and Observability
    2.3.4 Minimal State-Space Realization
    2.3.5 State-Space Algebra
    2.3.6 State-Space Formula for Parallel Systems
    2.3.7 State-Space Formula for Cascaded Systems
    2.3.8 State-Space Formula for Similarity Transformation
  2.4 Linear Fractional Transformations and Chain Scattering-Matrix Description
  Exercises
  References
3 Two-Port Networks
  3.1 One-Port and Two-Port Networks
  3.2 Impedance and Admittance Parameters (Z and Y Parameters)
  3.3 Hybrid Parameters (H Parameters)
  3.4 Transmission Parameters (ABCD Parameters)
  3.5 Scattering Parameters (S Parameters)
  3.6 Chain Scattering Parameters (T Parameters)
  3.7 Conversions Between (ABCD) and (S, T) Matrix Parameters
  3.8 Lossless Networks
  Exercises
  References
4 Linear Fractional Transformations
  4.1 Linear Fractional Transformations
  4.2 Application of LFT in State-Space Realizations
  4.3 Examples of Determining LFT Matrices
    4.3.1 Canonical Form
    4.3.2 Cascade Form
    4.3.3 Parallel Form
  4.4 Relationship Between Mason's Gain Formulae and LFT
  4.5 LFT Description and Feedback Controllers
  4.6 Inner and Co-inner Systems
  Exercises
  References
5 Chain Scattering Descriptions
  5.1 CSD Definitions and Manipulations
  5.2 Cascaded Connection of Two CSD Matrices
  5.3 Transformation from LFT to CSD Matrix
  5.4 Transformation from LFT to Cascaded CSDs
  5.5 Transformation from CSD to LFT Matrix
  5.6 Applications of CSDs in State-Space Realizations
  5.7 An Application of CSDs to Similarity Transformations
  5.8 State-Space Formulae of CSD Matrix Transformed from LFT Matrix
  5.9 State-Space Formulae of LFT Matrix Transformed from CSD Matrix
  5.10 Star Connection
  5.11 J-Lossless and Dual J-Lossless Systems
  Exercises
  References
6 Coprime Factorizations
  6.1 Coprimeness and Coprime Factorization
  6.2 Coprime Factorization over RH∞
  6.3 Normalized Coprime Factorization
  Exercises
  References
7 Algebraic Riccati Equations and Spectral Factorizations
  7.1 Algebraic Riccati Equations
  7.2 Similarity Transformation of Hamiltonian Matrices
  7.3 Lyapunov Equation
  7.4 State-Space Formulae for Spectral Factorizations Using Coprime Factorization Approach
    7.4.1 Spectral Factorization Case I
    7.4.2 Spectral Factorization Case II
    7.4.3 Spectral Factorization Case III
  Exercises
  References
8 CSD Approach to Stabilization Control and H2 Optimal Control
  8.1 Introduction
  8.2 Characterization of All Stabilizing Controllers
    8.2.1 Method I: CSDr–CSDl, Using a Right CSD Coupled with a Left CSD
    8.2.2 Method II: CSDl–CSDr, Using a Left CSD Coupled with a Right CSD
  8.3 State-Space Formulae of Stabilizing Controllers
    8.3.1 Method I: CSDr–CSDl
    8.3.2 Method II: CSDl–CSDr
  8.4 Example of Finding Stabilizing Controllers
    8.4.1 Method I: CSDr–CSDl, Using a Right CSD Associated with a Left CSD
    8.4.2 Method II: CSDl–CSDr, Using a Left CSD Associated with a Right CSD
  8.5 Stabilization of Special SCC Formulations
    8.5.1 Disturbance Feedforward (DF) Case
    8.5.2 Full Information (FI) Case
    8.5.3 State Feedback (SF) Case
    8.5.4 Output Estimation (OE) Case
    8.5.5 Full Control (FC) Case
    8.5.6 Output Injection (OI) Case
  8.6 Optimal H2 Controller
    8.6.1 Method I: Using a Right CSD Associated with a Left One
    8.6.2 Method II: Using a Left CSD Associated with a Right One
  8.7 Example of the Output Feedback H2 Optimal Control Problem
    8.7.1 A Numerical Example
  8.8 Example of LQR Controller
  8.9 More Numerical Examples
  8.10 Summary
  References
9 A CSD Approach to H-Infinity Controller Synthesis
  9.1 H∞ Control Problem
    9.1.1 Method I: CSDr–CSDl, Right CSD Coupled with Left CSD
    9.1.2 Method II: CSDl–CSDr, Left CSD Coupled with Right CSD
  9.2 State-Space Formulae of H∞ Controllers
    9.2.1 Method I: CSDr–CSDl
    9.2.2 Method II: CSDl–CSDr
  9.3 H∞ Solution of Special SCC Formulations
    9.3.1 Disturbance Feedforward (DF) Problem
    9.3.2 Full Information (FI) Problem
    9.3.3 State Feedback (SF) Problem
    9.3.4 Output Estimation (OE) Problem
    9.3.5 Full Control (FC) Problem
    9.3.6 Output Injection (OI) Problem
  9.4 H∞ Controller Synthesis with Coprime Factor Perturbations
    9.4.1 Robust Stabilization Problem of Left Coprime Factorization Case
    9.4.2 Robust Stabilization Problem of Right Coprime Factor Case
  Exercises
  References
10 Design Examples
  10.1 Mathematical Models of DC Servomotor
  10.2 Two-Port Chain Description Approach to Estimation of Mechanical Loading
  10.3 Coprime Factorization Approach to System Identification
  10.4 H∞ Robust Controller Design for Speed Control
    10.4.1 PDF Controller
    10.4.2 PDFF Controller
    10.4.3 Coprime Factorization Approach to Advanced PDFF Controller
  10.5 Summary
  References

Index
Chapter 1
Introduction

This book presents a fresh approach to optimal controller synthesis for linear
time-invariant (LTI) control systems. The readers are assumed to have taken
taught modules on automatic control systems, including classical control in the
frequency domain and state variable control, in a first-degree course (BEng or
BSc). Knowledge of electrical and electronic engineering will be beneficial to
understanding the approach.
Consider the negative unity feedback control system configuration in Fig. 1.1
with the plant G and controller K. The performance specification of tracking control
requires the output y to follow the reference input r closely. This closeness can be
found in the error signal e. It is obvious that

    E = (I + GK)⁻¹ R                                  (1.1)

where E(s) and R(s) are the Laplace transforms of the time signals e(t) and
r(t). Therefore, it is required that the error signal e be "small." An adequate
measurement to judge the "size" of a signal is the 2-norm, i.e., the square root of
the "energy" of the signal. For a given, fixed input signal r, good tracking asks
that the 2-norm of e be as small as possible. That is, one wants to find a
stabilizing controller K which makes ‖(I + GK)⁻¹R‖₂ small. If one wants the "best"
tracking, the controller K should be sought via solving the following optimization
problem:

    min_{stabilizing K} ‖(I + GK)⁻¹ R‖₂.              (1.2)

In the case that the reference r is not explicitly known, but rather belongs to an
energy-bounded set, the objective of good tracking then requires the infinity norm of
the transfer function (matrix) from r to e to be small. That is, a stabilizing controller
K is sought to make ‖(I + GK)⁻¹‖∞ small, or to solve

M.-C. Tsai and D.-W. Gu, Robust and Optimal Control: A Two-port Framework 1
Approach, Advances in Industrial Control, DOI 10.1007/978-1-4471-6257-5__1,
© Springer-Verlag London 2014

Fig. 1.1 The unity feedback control system (r: reference, e: error, u: control input, y: output; K: controller, G: plant, negative feedback)

    min_{stabilizing K} ‖(I + GK)⁻¹‖∞.              (1.3)

How to find such controllers K is the theme of this book. (Definitions and details of
the solution derivation can be found in Chaps. 2, 8, and 9.)
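As a purely numerical illustration of the sensitivity cost in (1.3) (the plant and gain here are chosen for illustration, not taken from the text), consider the scalar plant G(s) = 1/s with the constant controller K = 10; then (I + GK)⁻¹ = s/(s + 10), and a frequency sweep approximates its infinity norm, which tends to 1 as ω → ∞:

```python
# Pure-Python sketch (illustrative values, not one of the book's MATLAB
# examples): approximate the infinity norm of S(s) = 1/(1 + GK) for
# G(s) = 1/s and K = 10, i.e. S(jw) = jw/(jw + 10), by a frequency sweep.
import math

def sens_mag(w, k=10.0):
    # |S(jw)| = |jw| / |k + jw| for the loop transfer function L(s) = k/s
    return abs(complex(0.0, w)) / abs(complex(k, w))

freqs = [10 ** (e / 10.0) for e in range(-30, 41)]  # 0.001 ... 10000 rad/s
hinf_norm = max(sens_mag(w) for w in freqs)
print(round(hinf_norm, 3))  # 1.0 : |S(jw)| approaches 1 as w grows
```

The sweep only lower-bounds the true supremum, but for this first-order example the bound is already tight to within the grid resolution.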
In addition to the performance requirement, robustness is another central issue
in control systems design. Most effective and efficient control design methods are
model based. That is, the controller is designed based on a model which represents
the dynamics of the system (the plant) to be controlled. It should be noted that in
almost all cases in reality, a model will never completely and truly describe the
dynamic behavior of the actual system because of unintentionally or intentionally
excluded dynamics (so-called unmodeled plant dynamics) and/or because of
wear-and-tear effects upon the plant (plant parameter variation). On the other hand,
industrial control systems operating in the real world are vulnerable to external
disturbances and measurement noise. Hence, a designed controller should be able
to keep the control system stable as well as maintain certain performance level even
in the presence of unmodeled dynamics, parameter variations, disturbances, and
noises. That is the commonly acceptable definition for the robustness of a control
system.
Consideration of robustness in control systems design is not a new topic.
Up to the middle of the last century, the 1950s, control systems analysis and
design had been dominated by frequency domain methods. Single-input-single-
output (SISO) cases were the main subject. Certain stability robustness can be
achieved by ensuring good gain and phase margins in the design. Good stability
margins usually lead to good, well-damped time responses of SISO systems, i.e.,
good performance. Rapid developments and needs in aerospace engineering in
the 1960s have greatly propelled the development of control system theory and
design methodology, particularly in the state-space approach which is powerful for
multi-input-multi-output (MIMO) systems. Because the system models of aerospace
systems are relatively accurate, the multivariable techniques developed at that time
placed emphasis on achieving good system performance rather than on how to deal
with perturbations of the system dynamics. These techniques, based on linear quadratic
performance criteria with Gaussian disturbances (noise), were considered adequate for such
systems and proved successful in many aerospace applications. However,
applying these techniques, commonly referred to as linear-quadratic-Gaussian
(LQG) methods, to other industrial problems made apparent the poor robustness
properties exhibited by LQG controllers. This led to a substantial research effort to
develop theory and methodology that could explicitly address the robustness issue

in general feedback systems design. The pioneering work in the development now
known as the H∞ optimal control theory was published in the early 1980s by Zames
[6] and Zames and Francis [7].
In the H∞ framework, the designer from the outset specifies a model of system
uncertainty, such as additive perturbation together with output disturbance, which is
most suitable to the problem at hand. A constrained optimization is then formulated
to maximize the robust stability of the closed-loop system to the type of uncertainty
chosen, the constraint being the internal stability of the feedback system. In most
cases, it would be sufficient to seek a feasible (suboptimal) controller such that
the closed-loop system achieves certain robust stability. Performance objectives can
also be included in the cost function of the optimization problem. Mature theory and
elegant solution formulae in the state-space approach [1, 2, 8] have been developed,
which are based on the solutions of certain algebraic Riccati equations, and are
available in software packages such as MATLAB and Slicot [5].
However, many people, including students and practicing engineers, have
experienced difficulties in understanding the underlying theory and solution procedures
due to the complex mathematics involved, which very much hinders the wide
application of this advanced methodology in real-world, industrial control systems
design. In this book an alternative approach is
presented for the synthesis of H∞ and H2 (sub)optimal controllers. This approach
uses the so-called chain scattering decomposition or chain scattering description
(CSD) and provides a unified synthesis framework. CSD is based on the two-port
circuit formulation which is familiar to electrical and electronic engineering students
and engineers. With this engineering background, readers are guided through a
general yet rigorous synthesis procedure. Complemented by illustrative examples
and exercise problems, readers can not only learn the detailed steps in synthesizing a
robust controller but also gain an insight into the methodology.
The chain scattering description approach was first proposed by Kimura [4] in
an attempt to provide a unified, systematic, and self-contained exposition of H∞
control. In Kimura's approach, to deal with general systems, signal augmentation is
required to formulate a single CSD framework, which leads to extra calculations
in the synthesis of controllers. This book instead defines right and left CSD matrices
and proposes to use coupled CSDs and coprime factorizations with specific
requirements in controller synthesis. This approach is capable of dealing with all
general systems, avoids unnecessary computational load, and is of a transparent and
intuitive nature. Such a framework covers the synthesis of all stabilizing controllers
as well as the H2 optimal and H∞ (sub)optimal control synthesis problems.
The CSD approach is straightforward in the derivation of the resultant controller
synthesis procedures. It starts with a standard control configuration (SCC) of the
robust control scheme, converts it into a chain scattering decomposition
format, and then uses coprime factorization and spectral factorization to
characterize the required stabilizing controllers. The mathematical tools required
are limited within linear algebra (matrix and vector manipulations) and algebraic
Riccati equations.

A brief outline of the content of this book is as follows.


Following the Introduction is a chapter on prerequisite material readers should be
familiar with. This includes linear algebra basics (vector and matrix manipulations,
eigenvalues and eigenvectors, linear spaces, vector and matrix norms), function
spaces (signals and operators, norms), linear control system basics (realization of
LTI state-space model, stability, controllability, observability, system manipula-
tions), and a brief description of linear fractional transformation (LFT) and chain
scattering matrices. Most books on robust control would start with an introduction
of uncertainties considered in the design and illustrate how such uncertainties can
be incorporated in the so-called standard control (or compensation) configuration
(SCC) which is in the linear fractional transformation (LFT) form. This book
will however exclude these topics; rather, it is assumed that readers are already
familiar with how to formulate an appropriate SCC. Starting with a standard
control configuration, this book explains how to equivalently convert it into a chain
scattering description and how to find solutions to stabilization problems, H2 and
H∞ (sub)optimization problems, and other synthesis problems in the CSD form.
Chapter 3 introduces one-port and two-port networks which are widely used in
electrical circuits. With basic electrical signals (voltage, current, impedance, and
admittance), the network structures, input/output variables, and system parameters
are explained. It is interesting to see how abstract concepts, such as LFT, can
be illustrated with simple electrical circuit examples, while in most books these
concepts are introduced without any practical explanations, and readers would most
likely fail to catch the essence of the concepts. This in turn helps readers understand
the theories and methodology developed later in this book.
Chapter 4 focuses further on linear fractional transformations (LFTs): their
structure, usage in control systems analysis and synthesis, and their properties when
the interconnected system is inner (co-inner). A direct method is introduced
to formulate LFT configurations from a block diagram description of feedback
systems. The chapter also describes the relation between LFT and Mason’s Rule,
which helps readers to link classical control techniques with the modern, state-space
tools in control systems analysis and synthesis.
As mentioned earlier, the alternative synthesis approach adopted to find robust
(sub)optimal controllers employs chain scattering descriptions (CSDs) of control
systems. Chapter 5 discusses CSD in detail, its definitions, manipulations, state-
space representations, its relationship with the LFT, etc. Using the CSD formulation to
represent an LFT is not a new idea [3, 4]. However, previous work either requires
certain assumptions on the LFT system or has to include "artificial" inputs/outputs
in the conversion. One novelty of this book is to present a general equivalence
conversion between the LFT and the CSD.
Chapters 6 and 7 are devoted to coprime factorization, spectral factorization, and
algebraic Riccati equations. General solution formulae as well as special solution
formulae particularly used in the CSD procedure, all in state-space form, are
derived. In the derivation, block diagram manipulations and illustrative examples
are extensively used to help readers obtain insight into the development of the
theoretical results and methodology.

With all the ingredients prepared in previous chapters, Chaps. 8 and 9 readily
present solution procedures and formulae to compute all the stabilizing controllers,
the H2 optimal controller and H∞ suboptimal controllers. Synthesis of all these
controllers follows a unified framework. Again, all the derivations are accompanied
with graphical illustrations and simple numerical examples.
Following the introduction of the controller synthesis problems in previous
chapters, the last chapter focuses on presenting a more practical case study. It shows
how the CSD approach can be applied in many industrial applications. Again, the
solution procedure is transparent, logical, and explicit.
For readers’ convenience, MATLAB codes for most worked examples are
included in this book.
The material in this book can be used for postgraduate as well as senior
undergraduate teaching.

References

1. Doyle JC, Glover K, Khargonekar PP, Francis BA (1989) State-space solutions to standard H2
and H∞ control problems. IEEE Trans Autom Control 34:831–847
2. Francis BA (1987) A course in H∞ control theory. Springer, Berlin
3. Green M (1992) H∞ controller synthesis by J-lossless coprime factorization. SIAM J Control
Optim 30:522–547
4. Kimura H (1996) Chain-scattering approach to H∞ control. Birkhäuser, Boston
5. Niconet e.V. (2012) SLICOT – Subroutine Library in Systems and Control Theory. http://slicot.org/
6. Zames G (1981) Feedback and optimal sensitivity: model reference transformations,
multiplicative seminorms, and approximate inverses. IEEE Trans Autom Control 26:301–320
7. Zames G, Francis BA (1983) Feedback, minimax sensitivity, and optimal robustness. IEEE
Trans Autom Control 28:585–601
8. Zhou K, Doyle JC, Glover K (1995) Robust and optimal control. Prentice Hall, Upper Saddle
River
Chapter 2
Preliminaries

Classical control design and analysis utilizes frequency domain tools to specify
the system performance, and requires background in operator theory and single-input,
single-output linear systems. In modern control, the time domain
approach can be used to deal with multi-input, multi-output cases. Moreover,
concepts of linear algebra and matrix-vector operations are used in system analysis
and synthesis. Some useful fundamentals will therefore be reviewed in this chapter.

2.1 Linear Algebra and Matrix Theory

This section presents useful and well-known fundamentals of linear algebra and
matrix theory, which facilitate the understanding of the subsequent control system
concepts and methodology introduced. The stated results can be considered to be
purely preliminary in nature, and hence, their proofs are omitted.

2.1.1 Vectors and Matrices

Control systems are, in general, multivariable. That means one deals with more than
one variable in input, output, and state. Hence, vectors and matrices are frequently
used to represent systems and system interconnections. In engineering and science,
one usually has a situation where more than one quantity is closely linked to another.
For instance, in specifying the location of a robot on a flat floor, one may use the
numbers 2 and 3 to indicate the robot is at 2 units east and 3 units north from
where one stands, and, following the same logic, one may use −1 and −2 to indicate
that the robot is at 1 unit west and 2 units south. Here, (2, 3) and (−1, −2) represent
two different locations, and the numbers 2 and 3 are in a fixed order to show that


particular location, while (3, 2) would represent the position 3 units east and 2 units
north. Such a group of numbers in a certain order forms a vector, and the dimension
of a vector corresponds to how many numbers there are in the vector. Hence, (2, 3)
is a 2-dimensional vector. Conventionally, a vector is defined as a column vector.
In the above example, the position vectors are thus written as [2; 3] and [−1; −2]. For any
positive integer n, an n-dimensional (usually shortened as n-dim or n-D) vector x is
denoted by x = [x₁, …, xₙ]^T.
The transpose of a vector x is denoted as x^T and is defined by x^T =
[x₁ x₂ ⋯ xₙ], a row vector. A group of vectors of the same dimension in a
certain order forms a matrix. For example, for Mᵢ = [m_{i1}, …, m_{in}]^T, 1 ≤ i ≤ p, each Mᵢ is
an n-dim vector and M = [M₁ M₂ ⋯ M_p] is an n × p matrix. Obviously, a vector
is a special case of a matrix: x^T = [x₁ x₂ ⋯ xₙ] is simply a matrix of 1 × n
dimensions. The elements or entries in a matrix can be real numbers or complex
numbers. One uses M ∈ R^{n×p} to show that the matrix M is of n × p dimensions and all
the elements of M are real numbers; M ∈ C^{n×p} shows an n × p dimensional matrix
M with complex numbers. It is clear that R^{n×p} ⊂ C^{n×p}. It is also a convention to
use capital English letters to denote a matrix such as M, whereas lower case letters
(sometimes in bold face) are employed to denote a vector such as x, and lower case
letters to denote scalar numbers. A matrix

    M = [ m₁₁ ⋯ m₁ₚ ; ⋮ ⋯ ⋮ ; m_{n1} ⋯ m_{np} ]

of the n × p dimension can be abbreviated as M = {m_ij}_{n×p}. Similar to vectors, the transpose of
a matrix M is M^T = {m_ji}_{p×n}. When M is in C^{n×p}, the complex conjugate transpose
of M is defined by M* = {m̄_ji}_{p×n}, where m̄_ji is the complex conjugate of m_ji.
A few manipulations can be defined for vectors and matrices. Two matrices
of the same dimensions, e.g., M = {m_ij}_{n×p} and N = {n_ij}_{n×p},
can be added together, i.e., P = M + N, where P = {p_ij}_{n×p} with
p_ij = m_ij + n_ij. A multiplication is defined for two matrices only

Table 2.1 Classification of normal matrices

  Matrix (M)                 Definition           Eigenvalues   Diagonal elements   Determinant
  Hermitian                  M* = M               λ ∈ R         a_ii ∈ R            det(M) ∈ R
  Positive definite          x*Mx > 0, ∀x ≠ 0     λ > 0         a_ii > 0            det(M) > 0
  Positive semi-definite     x*Mx ≥ 0, ∀x         λ ≥ 0         a_ii ≥ 0            det(M) ≥ 0
  Unitary (M ∈ C^{n×n})      M*M = I              |λ| = 1       NA                  |det(M)| = 1
  Orthogonal (M ∈ R^{n×n})   M^T M = I            |λ| = 1       NA                  |det(M)| = 1

when their dimensions are compatible. That is, for M = {m_ij}_{n×p} and N = {n_kl}_{k×l},
only when p = k may one form the product P = MN, where P = {p_ij}_{n×l}, with
p_ij = Σ_{r=1}^{p(=k)} m_ir n_rj. The following paragraph summarizes a few more aspects of
vector/matrix manipulations [4].
1. A square matrix M is called nonsingular if a matrix B exists such that
   MB = BM = I. Define B = M⁻¹. The inverse matrix M⁻¹ exists if det(M) ≠ 0,
   where det(M) is the determinant of M. If M⁻¹ does not exist, M is said to be
   singular. If the inverses of M and B both exist, then (MB)⁻¹ = B⁻¹M⁻¹.
2. A complex square matrix is called unitary if its inverse is equal to its complex
   conjugate transpose: M*M = MM* = I, where I denotes the identity matrix of the
   appropriate dimensions. A square matrix M is called orthogonal if it is real and
   satisfies M^T M = MM^T = I. For an orthogonal matrix, the inverse is its transpose.
3. An n × p matrix M is of rank m if the maximum number of linearly independent
   rows (or columns) is m. This equals the dimension of img(M) := {Mx | x ∈ R^p}.
4. An n × p matrix M is said to have full row rank if n ≤ p and rank(M) = n. It has
   full column rank if n ≥ p and rank(M) = p.
5. A symmetric matrix M of n × n dimension is positive definite if x^T Mx ≥ 0 for
   any n-dimensional (real) vector x, with x^T Mx = 0 only if x = 0. If for any n-
   dimensional vector x, x^T Mx ≥ 0 always holds, then M is positive semi-definite.
   A positive (semi-)definite matrix M may be denoted as M > 0 (M ≥ 0). Similarly,
   negative definite and negative semi-definite matrices may be defined.
6. For a positive definite matrix M, its inverse M⁻¹ exists and is also positive
   definite.
7. All eigenvalues of a positive definite matrix are positive.
8. For two positive definite matrices M₁ and M₂, one has αM₁ + βM₂ > 0 when α, β
   are nonnegative and not both zero.
9. A square matrix M is called normal if MM* = M*M. A normal matrix admits the
   decomposition M = UΛU*, where UU* = I and Λ is a diagonal matrix. Table 2.1
   summarizes the classification of normal matrices.
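The entries of Table 2.1 can be verified numerically on small matrices. The following pure-Python sketch (the 2 × 2 matrices are illustrative choices, not from the text) checks a symmetric positive definite matrix via its leading minors and an orthogonal matrix via Q^T Q = I and |det(Q)| = 1:

```python
# Illustrative 2x2 checks of Table 2.1 in pure Python.
M = [[2.0, 1.0], [1.0, 2.0]]           # real symmetric (Hermitian) matrix
det_M = M[0][0] * M[1][1] - M[0][1] * M[1][0]
pos_def = M[0][0] > 0 and det_M > 0    # Sylvester's criterion for 2x2

Q = [[0.0, 1.0], [-1.0, 0.0]]          # a plane rotation: orthogonal
# (Q^T Q)_{ij} = sum_k Q[k][i] * Q[k][j]
QtQ = [[sum(Q[k][i] * Q[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
det_Q = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]
print(pos_def, QtQ, abs(det_Q))  # True [[1.0, 0.0], [0.0, 1.0]] 1.0
```

Consistent with the table, the positive definite matrix has a positive determinant, and the orthogonal matrix has determinant of unit modulus.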

2.1.2 Linear Spaces

Let R and C be the real and complex scalar fields, respectively. A linear space V over
a field F consists of a set on which two operations are defined. The first one is
denoted by "addition (+)": for each pair of elements x and y in V, there exists a
unique element x + y in V. The second one is a scalar "multiplication (·)": for
each element α in F and each element x in V, there is a unique element αx in V. The
following conditions hold with respect to the above two operations.
1. For each element x in V, 1 · x = x.
2. For all x, y, z in V, (x + y) + z = x + (y + z).
3. For all x, y in V, x + y = y + x.
4. For each element x in V, there exists an element y in V such that x + y = 0.
5. There exists an element in V, denoted by 0, such that x + 0 = x for each x in V.
6. For each element α in F and each pair of elements x and y in V,
   α(x + y) = αx + αy.
7. For each α, β in F and each element x in V, (αβ)x = α(βx).
8. For each α, β in F and each element x in V, (α + β)x = αx + βx.
Note that one uses the same symbol “0” to denote the element zero and scalar
number zero in V and F, respectively. In the following, some basic concepts are
reviewed first. These definitions can be easily found in standard linear algebra
textbooks, for example see [8].
1. As mentioned in the earlier paragraph, the elements x + y and αx are called the
   sum of x and y and the product of α and x, respectively, where x, y ∈ V, α ∈ F.
2. A subset W of a vector space V over a field F is called a subspace of V if
   W itself is a vector space over F under the operations of addition and scalar
   multiplication defined on V.
3. Let x₁, x₂, …, x_k be vectors in V; then an element of the form α₁x₁ + α₂x₂ + ⋯
   + α_k x_k with αᵢ ∈ F is a linear combination over F of x₁, x₂, …, x_k.
4. The set of all linear combinations of x₁, x₂, …, x_k ∈ V is a subspace called the
   span of x₁, x₂, …, x_k, denoted by

    span{x₁, x₂, …, x_k} = {x | x = α₁x₁ + α₂x₂ + ⋯ + α_k x_k; αᵢ ∈ F}.    (2.1)

5. Vectors x₁, x₂, …, x_k are said to be linearly dependent if there is at least one
   xᵢ that can be expressed as a linear combination of {x_j, j = 1, 2, …, k, j ≠ i},
   or equivalently, if there exist constants c₁, c₂, …, c_k which are not all zero, such that
   c₁x₁ + c₂x₂ + ⋯ + c_k x_k = 0. The vectors x₁, x₂, …, x_k are linearly independent
   if c₁x₁ + c₂x₂ + ⋯ + c_k x_k = 0 indicates that all c₁, c₂, …, c_k are zero.
6. The vectors x₁, x₂, …, x_k are orthonormal if xᵢ* x_j = δ_ij, where δ_ij = 1 when i = j
   and δ_ij = 0 otherwise; δ_ij is usually called the Kronecker delta.

7. Let W be a subspace of a vector space V; then a set of vectors {x₁, x₂, …, x_k} ∈
   W is said to be a basis of W if x₁, x₂, …, x_k are linearly independent and
   W = span{x₁, x₂, …, x_k}. The dimension of a vector subspace W equals
   the number of basis vectors.
8. Let W be a subspace of V. The set of all vectors in V that are orthogonal to
   every vector in W is the orthogonal complement of W and is denoted by W⊥.
   Hence,

    W⊥ = {y ∈ V : y*x = 0, ∀x ∈ W}.    (2.2)

   Each vector x in V can be expressed uniquely in the form x = x_W + x_{W⊥}
   for x_W ∈ W and x_{W⊥} ∈ W⊥.
9. A set of vectors {u₁, u₂, …, u_k} is said to be an orthonormal basis for a
   k-dimensional subspace W if the vectors form a basis and are orthonormal.
   Suppose that the dimension of V is n; it is then possible to find a set of
   orthonormal vectors {u_{k+1}, …, uₙ} such that

    W⊥ = span{u_{k+1}, …, uₙ}.    (2.3)

10. A collection of subspaces W₁, W₂, …, W_k of V is mutually orthogonal if
    x*y = 0 whenever x ∈ Wᵢ and y ∈ W_j for i ≠ j.
11. The kernel (or null) space of a matrix M ∈ R^{n×p}, which can be viewed as a
    linear transformation from R^p to R^n, is defined as

    ker M = N(M) = {x ∈ R^p : Mx = 0}.    (2.4)

12. The image (or range) of M is

    img(M) = {y ∈ R^n : y = Mx, x ∈ R^p}.    (2.5)

13. Let M be an n × p real, full rank matrix with n > p; the orthogonal complement
    of M is a matrix M⊥ of dimension n × (n − p), such that [M M⊥] is a square,
    nonsingular matrix with the following property: M^T M⊥ = 0.
14. The following properties hold:

    (ker M)⊥ = img(M^T) and [img(M)]⊥ = ker M^T.    (2.6)
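Property (2.6) can be checked by hand on a small example (the matrix below is an illustrative choice): for a 2 × 3 matrix of rank 2, the kernel is one-dimensional, and its spanning vector is orthogonal to every row of M, i.e., to img(M^T).

```python
# Hand-checkable instance of (2.6): every vector in ker M is orthogonal
# to img(M^T), which is spanned by the rows of M.  Illustrative matrix.
M = [[1, 0, 1],
     [0, 1, 1]]          # 2x3, rank 2, so ker M is 1-dimensional
k = [-1, -1, 1]          # candidate spanning vector of ker M

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# k is in ker M (M k = 0), and each row of M is orthogonal to k,
# confirming (ker M)^perp = img(M^T) for this example.
print([dot(row, k) for row in M])  # [0, 0]
```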

2.1.3 Eigenvalues and Eigenvectors

A matrix can be interpreted as a mapping between two linear spaces. For example,
consider a 2 × 2 matrix M = {m_ij}_{2×2} and the mapping y = Mx, where x = [x₁, x₂]^T
and y = [y₁, y₂]^T are both

in R^{2×1} (the two spaces in this case are the same). For most x, the image y would
show a rotation of x plus an expansion or reduction in length, which is decided by
the matrix M. However, there are some vectors in the space whose images
generated by the mapping M remain in the same direction as the original
vectors. These vectors are the eigenvectors of M, showing somehow the essence
(eigen) of the mapping M. The factors of the length change are the eigenvalues of
M. Rigorous definitions are given below.
For an n × n square matrix M, the determinant det(λI − M) is called the
characteristic polynomial of M. The characteristic equation is given by

    det(λI − M) = 0.    (2.7)

The n roots of the characteristic equation are the eigenvalues of M. For an eigenvalue
λ of the matrix M, there is a nonzero vector η such that

    Mη = λη    (2.8)

where η is called the eigenvector of M corresponding to the eigenvalue λ.


Definition 2.1 The spectral radius of the matrix M is defined as

    ρ(M) = max_i |λᵢ(M)|    (2.9)

where {λᵢ} is the eigenvalue set of M and |·| denotes the modulus.


It is easy to show that if M is a Hermitian matrix, i.e., M = M*, then all
eigenvalues of M are real. The spectral radius indicates the size of the set which
contains all the eigenvalues of M.
Definition 2.2 If M is Hermitian, then there exists a unitary matrix U (i.e.,
U*U = UU* = I) and a real diagonal matrix Λ, such that

    M = UΛU*.    (2.10)
In this case, U is the right eigenvector matrix of M.
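For a 2 × 2 symmetric matrix, the eigenvalues of (2.7) can be found from the quadratic characteristic equation by hand. A pure-Python sketch (the matrix is an illustrative choice) computes them together with the spectral radius of (2.9) and verifies an eigenvector via (2.8):

```python
# Eigenvalues of a 2x2 real symmetric matrix via det(lambda*I - M) = 0,
# i.e. lambda^2 - trace(M)*lambda + det(M) = 0.  Illustrative matrix.
import math

M = [[2.0, 1.0], [1.0, 2.0]]
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)      # real, since M is symmetric
lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0
rho = max(abs(lam1), abs(lam2))            # spectral radius, (2.9)
print(lam1, lam2, rho)                     # 3.0 1.0 3.0

# eigenvector check for lam1 = 3: M (1,1)^T = (3,3)^T = 3 (1,1)^T, as in (2.8)
v = [1.0, 1.0]
Mv = [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]
print(Mv)  # [3.0, 3.0]
```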

2.1.4 Matrix Inversion and Pseudoinverse

Matrix inversion is unavoidable and essential in control system manipulations. In this
section, useful formulae for matrix inversion are given [4].
Let M be a square n × n matrix partitioned as

    M = [M11 M12; M21 M22]    (2.11)

where M11: n₁ × n₁, M12: n₁ × n₂, M21: n₂ × n₁, M22: n₂ × n₂, and n₁ + n₂ = n.



Suppose that M11 is nonsingular; then M can be decomposed (block diagonalized)
as

    M = [I 0; M21 M11⁻¹ I] [M11 0; 0 S] [I M11⁻¹M12; 0 I]    (2.12)

where S = M22 − M21 M11⁻¹ M12 is the Schur complement of M11 in M. Then, if M
is nonsingular, it can be derived that

    M⁻¹ = [M11⁻¹ + M11⁻¹M12 S⁻¹M21 M11⁻¹   −M11⁻¹M12 S⁻¹;
           −S⁻¹M21 M11⁻¹                      S⁻¹].    (2.13)

Dually, if M22 and M are nonsingular, then

    M = [I M12 M22⁻¹; 0 I] [Ŝ 0; 0 M22] [I 0; M22⁻¹M21 I]    (2.14)

and

    M⁻¹ = [Ŝ⁻¹                −Ŝ⁻¹M12 M22⁻¹;
           −M22⁻¹M21 Ŝ⁻¹    M22⁻¹ + M22⁻¹M21 Ŝ⁻¹M12 M22⁻¹]    (2.15)

where Ŝ = M11 − M12 M22⁻¹ M21 is called the Schur complement of M22 in M. The
matrix inversion formulae can be further simplified if M is block triangular:

    [M11 0; M21 M22]⁻¹ = [M11⁻¹ 0; −M22⁻¹M21 M11⁻¹ M22⁻¹],    (2.16)

    [M11 M12; 0 M22]⁻¹ = [M11⁻¹ −M11⁻¹M12 M22⁻¹; 0 M22⁻¹].    (2.17)

If both M11 and M22 are nonsingular, then Ŝ⁻¹ can be represented by

    Ŝ⁻¹ = (M11 − M12 M22⁻¹ M21)⁻¹
        = M11⁻¹ + M11⁻¹M12 (M22 − M21 M11⁻¹M12)⁻¹ M21 M11⁻¹.    (2.18)
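With scalar blocks (n₁ = n₂ = 1), formula (2.13) can be checked against the direct 2 × 2 inverse. The numbers below are illustrative:

```python
# Scalar-block check of (2.13): inverting M = [M11 M12; M21 M22] via the
# Schur complement S = M22 - M21*M11^-1*M12 matches the direct inverse.
M11, M12, M21, M22 = 4.0, 2.0, 1.0, 3.0
S = M22 - M21 * (1.0 / M11) * M12            # Schur complement of M11 in M
inv_via_schur = [
    [1.0/M11 + (1.0/M11)*M12*(1.0/S)*M21*(1.0/M11), -(1.0/M11)*M12*(1.0/S)],
    [-(1.0/S)*M21*(1.0/M11),                         1.0/S],
]
det = M11 * M22 - M12 * M21                  # direct 2x2 inverse for comparison
inv_direct = [[M22/det, -M12/det], [-M21/det, M11/det]]
ok = all(abs(inv_via_schur[i][j] - inv_direct[i][j]) < 1e-12
         for i in range(2) for j in range(2))
print(ok)  # True
```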

The pseudoinverse (also called the Moore-Penrose inverse) of a matrix M is denoted
as M⁺, which satisfies the following conditions:

    M M⁺ M = M,    (2.19)

    M⁺ M M⁺ = M⁺,    (2.20)

    (M M⁺)* = M M⁺,    (2.21)

    (M⁺ M)* = M⁺ M.    (2.22)

The pseudoinverse is useful especially when the matrix M is either non-square or
singular.
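For a full-column-rank matrix, the pseudoinverse reduces to M⁺ = (M^T M)⁻¹ M^T (a standard formula, stated here as an aside). The following pure-Python sketch uses an illustrative 2 × 1 matrix and verifies condition (2.19):

```python
# Pseudoinverse of a full-column-rank matrix: M+ = (M^T M)^-1 M^T,
# verified against the Moore-Penrose condition M M+ M = M, (2.19).
M = [[1.0], [1.0]]                         # 2x1, non-square
MtM = M[0][0] ** 2 + M[1][0] ** 2          # scalar M^T M = 2
Mplus = [[M[0][0] / MtM, M[1][0] / MtM]]   # 1x2 pseudoinverse: [0.5, 0.5]

MMplus = [[M[i][0] * Mplus[0][j] for j in range(2)] for i in range(2)]
MMplusM = [[sum(MMplus[i][k] * M[k][0] for k in range(2))] for i in range(2)]
print(Mplus, MMplusM)  # [[0.5, 0.5]] [[1.0], [1.0]]
```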

2.1.5 Vector Norms and Matrix Norms

Norm is another important concept for vectors and matrices. It can be further
developed for functions and systems as well. In this section, definitions of vector
norms and matrix norms will be introduced [4]. The concept of norm can be loosely
understood as a description of size or volume. A vector norm, denoted by ‖·‖, of
any vector x over the field C must have the following properties:
1. ‖x‖ > 0, unless x = 0, in which case ‖x‖ = 0.
2. ‖cx‖ = |c|‖x‖, where c is any scalar in C.
3. ‖x + y‖ ≤ ‖x‖ + ‖y‖.
Definition 2.3 Let x = [x₁, …, xₙ]^T be a vector in Cⁿ. The following are norms on Cⁿ.
1. Vector ∞-norm: ‖x‖∞ = max_{1≤i≤n} |xᵢ|.
2. Vector 1-norm: ‖x‖₁ = Σ_{i=1}^{n} |xᵢ|.
3. Vector 2-norm: ‖x‖₂ = √(x*x) = √(Σ_{i=1}^{n} |xᵢ|²).
4. Vector p-norm (for 1 ≤ p < ∞): ‖x‖_p = (Σ_{i=1}^{n} |xᵢ|^p)^{1/p}.

In the case of matrices, a matrix norm satisfies

1. ‖A‖ > 0 unless A = 0, in which case ‖A‖ = 0;
2. ‖cA‖ = |c|‖A‖, where c is any scalar in C;
3. ‖A + B‖ ≤ ‖A‖ + ‖B‖;
4. ‖AB‖ ≤ ‖A‖‖B‖.

Definition 2.4 Let M = {m_ij} be a matrix in C^{m×n}. The
following gives a list of different matrix norms, which will be useful for the rest
of this book.
1. Matrix 1-norm (column sum): ‖M‖₁ := max_j Σ_{i=1}^{m} |m_ij|.
2. Matrix 2-norm: ‖M‖₂ := √(λ_max(M*M)).
3. Matrix ∞-norm (row sum): ‖M‖∞ := max_i Σ_{j=1}^{n} |m_ij|.
4. Frobenius norm: ‖M‖_F := √(trace(M*M)) = √(Σ_{i=1}^{m} Σ_{j=1}^{n} m̄_ij m_ij).
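The 1-, ∞-, and Frobenius norms of Definition 2.4 are direct to compute by hand; the small real matrix below is an illustrative choice:

```python
# Matrix norms of Definition 2.4 for a small real matrix, pure Python.
import math

M = [[1.0, -2.0],
     [3.0,  4.0]]
one_norm = max(sum(abs(M[i][j]) for i in range(2)) for j in range(2))  # column sums
inf_norm = max(sum(abs(M[i][j]) for j in range(2)) for i in range(2))  # row sums
fro_norm = math.sqrt(sum(M[i][j] ** 2 for i in range(2) for j in range(2)))
print(one_norm, inf_norm, round(fro_norm, 4))  # 6.0 7.0 5.4772
```

The column sums are 4 and 6 and the row sums are 3 and 7, giving ‖M‖₁ = 6 and ‖M‖∞ = 7, while ‖M‖_F = √30.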

2.1.6 Singular Value Decomposition

The singular values of a matrix M are defined as

    σᵢ(M) := √(λᵢ(M*M)).    (2.23)

The maximal singular value is denoted as

    σ̄(M) := max_i (σᵢ(M)),

and the minimal singular value is

    σ̲(M) := min_i (σᵢ(M)).

It is straightforward from the above definition that the matrix M and its complex
conjugate transpose M* have the same singular values, i.e., {σᵢ(M)} = {σᵢ(M*)}.
Let M ∈ C^{m×n}; there exist unitary matrices U = [u₁ u₂ ⋯ u_m] ∈ C^{m×m} and
V = [v₁ v₂ ⋯ vₙ] ∈ C^{n×n} such that

    M = UΣV*,    (2.24)

where

    Σ = [Σ₁ 0; 0 0],    (2.25)

    Σ₁ = diag(σ₁, σ₂, …, σ_r),    (2.26)

with σ₁ ≥ σ₂ ≥ ⋯ ≥ σ_r > 0 and r = rank(M). Equation (2.24) is called the singular
value decomposition (SVD) of the matrix M. The matrix admits the decomposition

    M = Σ_{i=1}^{r} σᵢ uᵢ vᵢ* = [u₁ u₂ ⋯ u_r] diag(σ₁, σ₂, …, σ_r) [v₁ v₂ ⋯ v_r]*.    (2.27)
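The dyadic expansion (2.27) can be checked by hand on a rank-1 example (the matrix is an illustrative choice): for M = [1 1; 1 1], M^T M has eigenvalues 4 and 0, so σ₁ = 2 with u₁ = v₁ = (1, 1)^T/√2.

```python
# Rank-1 instance of (2.27): M = sigma_1 * u1 * v1^T reconstructs M exactly.
import math

M = [[1.0, 1.0], [1.0, 1.0]]
s1 = 2.0                             # sigma_1 = sqrt(lambda_max(M^T M)) = sqrt(4)
u1 = [1.0 / math.sqrt(2.0)] * 2      # left singular vector
v1 = [1.0 / math.sqrt(2.0)] * 2      # right singular vector

M_rebuilt = [[s1 * u1[i] * v1[j] for j in range(2)] for i in range(2)]
ok = all(abs(M_rebuilt[i][j] - M[i][j]) < 1e-12
         for i in range(2) for j in range(2))
print(ok)  # True
```

Here r = rank(M) = 1, so a single term of the sum in (2.27) suffices.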

2.2 Function Spaces and Signals

Controllers or control schemes are, as a matter of fact, functions in the time domain
or the frequency domain. Hence, the synthesis of the required controller, an optimal
controller in particular, is a procedure in functional analysis. However, considering
that the underlying systems in this book are mainly the linear time-invariant systems
and that this book is primarily for practicing control engineers and engineering
students, many mathematical definitions and deductions will not be included in
order to make it more accessible to the targeted readers. Interested readers are
recommended to consult relevant books, for instance [5, 6, 7, 10], for rigorous and
in-depth treatment of those mathematical concepts.

2.2.1 Function Spaces

Function spaces useful for the themes introduced in this book are L₂, H₂, L∞, and
H∞, and their orthogonal complement spaces.
The space L_p (for 1 ≤ p < ∞) consists of all Lebesgue measurable functions w(t)
defined on the interval (−∞, ∞) such that

    ‖w‖_p := (∫_{−∞}^{∞} |w(t)|^p dt)^{1/p} < ∞.    (2.28)

The space L∞ consists of all Lebesgue measurable functions w(t) such that

    ‖w‖∞ := ess sup_{t∈R} |w(t)| < ∞.    (2.29)

Fig. 2.1 Calculation procedures of function spaces: the Laplace transform and its inverse relate L₂[0, ∞) to H₂ and L₂(−∞, 0] to H₂⊥, while the projections P₊ and P₋ split L₂(−∞, ∞) between the two

H₂ is the subspace of L₂ in which every function is analytic in Re(s) > 0 (the
real part of s = σ + jω ∈ C), the open right-half plane, and H∞ is the subspace
of L∞ in which every function is analytic and bounded in Re(s) > 0. The space
H₂⊥ is the orthogonal complement of H₂ in L₂. If G(s) is a strictly proper, stable,
real, rational transfer function matrix, then G(s) ∈ H₂ implies that G~(s) ∈ H₂⊥,
where G~(s) := G^T(−s). The real rational subspace of H∞ is denoted by RH∞,
which consists of all proper and real, rational, stable transfer function matrices. The
relationship between the spaces L₂ and H₂ is illustrated in Fig. 2.1 [3, 10].
Definition 2.5 Definitions of L2, H2, L∞, and H∞ function spaces.
1. L2-function space: G(s) ∈ L2, if

       ∫_{−∞}^{∞} trace[G*(jω) G(jω)] dω < ∞.   (2.30)

   The rational subspace of L2, denoted by RL2, consists of all real, rational,
   strictly proper transfer function matrices with no poles on the imaginary axis jR.
2. H2-function space: G(s) ∈ H2, if G(s) is stable and

       ‖G(s)‖2 := ( (1/2π) ∫_{−∞}^{∞} trace[G*(jω) G(jω)] dω )^{1/2} < ∞.   (2.31)

   Hence, the norm for H2 can be computed just as it is done for L2. The real
   rational subspace of H2, which consists of all strictly proper and real, rational,
   stable transfer function matrices, is denoted by RH2.
3. L∞-function space: G(s) ∈ L∞, if

       ‖G(jω)‖∞ := ess sup_ω σ̄[G(jω)] < ∞.   (2.32)

Fig. 2.2 Illustration of the relationship among different function spaces (RL∞, RL2, RH∞, RH2, RH2⊥, BH∞, GH∞), with example transfer functions placed in each region

   All proper and real, rational transfer function matrices with no poles on the
   imaginary axis form a subspace which is denoted by RL∞.
4. H∞ norm, the ∞-norm of Hardy space functions: G ∈ H∞, if G(s) is stable and

       ‖G‖∞ = sup_{Re(s)≥0} σ̄[G(s)] = sup_ω σ̄[G(jω)] < ∞.   (2.33)

   H∞ is a subspace of L∞ with functions that are analytic and bounded in the
   open right-half plane. The real, rational subspace of H∞ is denoted by RH∞,
   which consists of all proper and real, rational, stable transfer function matrices.

This book introduces tools and concepts of optimal controller synthesis [3]. Most
of the framework is set in the H∞ function space. For linear time-invariant and
causal systems, a given system G(s) ∈ RH∞ means the following:
(a) G(s) is stable, and lim_{t→∞} Φ(t), where Φ(t) is the impulse response of G(s), is bounded.
(b) All poles of G(s) are located in the open left-half plane.
(c) If G(s) has a "minimal" state-space model (A, B, C, D), then the real part of all
eigenvalues of the state matrix A is negative.
A state matrix is called Hurwitz if the real parts of all its eigenvalues are negative.
Figure 2.2 shows the relationship among different function spaces, where
BH∞ := {F ∈ RH∞ : ‖F‖∞ < 1} denotes the set of all stable contractions and
GH∞ is the set of all units of RH∞, i.e., if F ∈ GH∞, then F ∈ RH∞ and
F^{−1} ∈ RH∞.
Example 2.1 Determine the function spaces for each of the following transfer
functions: (1) G1(s) = s/(s + 1); (2) G2(s) = s²/(s + 1); and (3) G3(s) = s/((s − 1)(s + 2)).

1. It is clear that G1(s) is stable and sup_ω |G1(jω)| = sup_ω |jω/(jω + 1)| =
   sup_ω ω/√(1 + ω²) = 1 < ∞. Hence, G1(s) ∈ RH∞. By decomposition of G1(s),
   one has G1(s) = s/(s + 1) = 1 − 1/(s + 1). Thus,

       ‖G1‖2 = ( (1/2π) ∫_{−∞}^{∞} |G1(jω)|² dω )^{1/2}
             = ( (1/2π) ∫_{−∞}^{∞} (1 − 1/(jω + 1))(1 − 1/(−jω + 1)) dω )^{1/2}
             = ( (1/2π) ∫_{−∞}^{∞} [1 − 1/(jω + 1) − 1/(−jω + 1) + 1/((jω + 1)(−jω + 1))] dω )^{1/2}
             = ∞.   (2.34)

   This implies G1(s) ∉ RH2, which agrees with the fact that G1(s) is bi-proper.
2. By definition, G2(s) ∉ RL∞ because sup_ω |G2(jω)| = sup_ω |(jω)²/(jω + 1)| = ∞;
   G2(s) ∉ RH2 because ∫_{−∞}^{∞} |G2(jω)|² dω = ∞; and G2(s) ∉ RH∞ because
   sup_{Re(s)≥0} |s²/(s + 1)| = ∞.
3. Apparently, G3(s) ∉ RH∞ because G3(s) is not analytic at s = 1; G3(s) ∈ RL∞
   because G3(s) is analytic on the jω-axis and satisfies sup_ω |G3(jω)| < ∞; and
   G3(s) ∉ RH2 because G3(s) is not analytic at s = 1.
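The membership claims of Example 2.1 can be spot-checked numerically by sampling |G(jω)| on a frequency grid. The sketch below is only an illustration (the grid and bounds are chosen arbitrarily): |G1(jω)| stays below 1, consistent with G1 ∈ RH∞, while |G2(jω)| grows without bound, consistent with G2 ∉ RL∞.

```python
import math

def mag_G1(w):  # |G1(jw)| for G1(s) = s/(s+1)
    return abs(w) / math.sqrt(1 + w * w)

def mag_G2(w):  # |G2(jw)| for G2(s) = s^2/(s+1)
    return (w * w) / math.sqrt(1 + w * w)

ws = [10 ** k for k in range(-3, 7)]   # logarithmic frequency grid
peak_G1 = max(mag_G1(w) for w in ws)
peak_G2 = max(mag_G2(w) for w in ws)

print(peak_G1)  # approaches but never exceeds 1
print(peak_G2)  # keeps growing as the grid is extended
```

Such a grid check is of course not a proof, but it is a quick sanity test before applying the analytic definitions.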

2.2.2 Norms for Signals and Systems

Norm symbolizes the size of a system or a function. For the control system analysis
and synthesis, norm offers a direct criterion corresponding to design specifications.
The detailed treatment of this topic can be found in books, such as [2, 3]. In this
book, the following definitions are listed for easy reference. Note that the signal
mentioned below is scalar and measurable, and the system is also scalar and linear
time-invariant and causal. The vector (matrix) version of these norms can be found
in, e.g., the books mentioned above.
Definition 2.6 The 1-norm of a signal y(t) on (−∞, ∞) is defined as

    ‖y‖1 := ∫_{−∞}^{∞} |y(t)| dt.   (2.35)

Definition 2.7 The 2-norm of a signal y(t) is defined as

    ‖y‖2 := ( ∫_{−∞}^{∞} y²(t) dt )^{1/2}.   (2.36)

Definition 2.8 The ∞-norm of a signal y(t) is defined as

    ‖y‖∞ := sup_t |y(t)|.   (2.37)

Definition 2.9 The 1-norm of a stable system G(s) is defined as

    ‖G‖1 := (1/2π) ∫_{−∞}^{∞} |G(jω)| dω.   (2.38)

Definition 2.10 The 2-norm of a stable system G(s) is defined as

    ‖G‖2 := ( (1/2π) ∫_{−∞}^{∞} |G(jω)|² dω )^{1/2}.   (2.39)

For a system described by the state-space model defined in (2.52) below, the
H2 norm can be determined by

    ‖G‖2 = ( trace(B^T Po B) )^{1/2},   (2.40)

where Po is the observability gramian, which will be discussed in Chap. 7.


Definition 2.11 The ∞-norm of a stable system G(s) is defined as

    ‖G‖∞ := sup_ω |G(jω)|.   (2.41)

The value ‖G‖∞ equals the distance in the complex plane from the origin to the furthest
point on the Nyquist plot of G(s). It also appears as the peak value on the Bode
magnitude plot of G(s). The Hankel norm is also a measure of function size
[3], especially in the design framework of H∞ loop shaping. The Hankel norm can
be exploited to determine the stability margin. Its definition is given below.

Definition 2.12 The Hankel norm measures the residual energy remaining in a
system after t = 0 when it is driven by an input supported on (−∞, 0]. For a stable
system described as y(t) = Gu(t), the Hankel norm is defined as

    ‖G‖H := ( sup_{u∈L2(−∞,0]} [ ∫_0^∞ y^T(t) y(t) dt / ∫_{−∞}^0 u^T(t) u(t) dt ] )^{1/2}.   (2.42)

This can be determined by

    ‖G‖H = ( λmax(Pc Po) )^{1/2},   (2.43)

where Pc and Po are the controllability gramian and observability gramian matrices,
respectively, which will be discussed in Chap. 7.
Example 2.2 Given a linear system G(s) as below, determine its H2 norm and
Hankel norm.

    ẋ = [ −1  0 ; 0  −2 ] x + [ 1 ; 1 ] u
    y = [ 1  −2 ] x.   (2.44)

The observability gramian Po and controllability gramian Pc are

    Po = [ 1/2  −2/3 ; −2/3  1 ]   (2.45)

and

    Pc = [ 1/2  1/3 ; 1/3  1/4 ].   (2.46)

Hence, one can obtain

    ‖G‖2 = ( trace(B^T Po B) )^{1/2} = 1/√6,   (2.47)
    ‖G‖H = ( λmax(Pc Po) )^{1/2} = 1/6.   (2.48)
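Because A is diagonal in Example 2.2, each Lyapunov equation AᵀP + PA + Q = 0 decouples entrywise into P_ij = Q_ij/(−(λ_i + λ_j)). The sketch below uses this observation (a shortcut valid only for diagonal A) to recompute Po, Pc, and the two norms of (2.47)–(2.48).

```python
import math

# System of Example 2.2: A = diag(-1, -2), B = [1, 1]^T, C = [1, -2].
lam = [-1.0, -2.0]
B = [1.0, 1.0]
C = [1.0, -2.0]

# For diagonal A the Lyapunov solutions are entrywise:
#   Po_ij = C_i C_j / (-(lam_i + lam_j)),  Pc_ij = B_i B_j / (-(lam_i + lam_j)).
Po = [[C[i] * C[j] / (-(lam[i] + lam[j])) for j in range(2)] for i in range(2)]
Pc = [[B[i] * B[j] / (-(lam[i] + lam[j])) for j in range(2)] for i in range(2)]

# H2 norm (2.47): ||G||_2 = sqrt(B^T Po B).
h2 = math.sqrt(sum(B[i] * Po[i][j] * B[j] for i in range(2) for j in range(2)))

# Hankel norm (2.48): sqrt of the largest eigenvalue of Pc*Po (2x2 case).
M = [[sum(Pc[i][k] * Po[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
lmax = (tr + math.sqrt(max(0.0, tr * tr - 4 * det))) / 2
hankel = math.sqrt(lmax)

print(h2)      # 1/sqrt(6)
print(hankel)  # 1/6
```

For a non-diagonal A one would solve the full Lyapunov equations instead (e.g., with a numerical linear-algebra library); the decoupled formula is just the diagonal special case.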

2.3 Linear System Theory

The aim of this section is to introduce some basic results in linear system theory
[1] that are particularly applicable to the work in the following chapters of this
book. The descriptions, properties, and algebra of linear systems facilitate the
development of optimal and robust control theory. These concepts provide
fundamental tools for system analysis and synthesis and form the core of
modern control theory and control engineering.

2.3.1 Linear Systems

A finite-dimensional LTI dynamic system can be described by the following
equations:

    ẋ = Ax + Bu,  x(0) = x0
    y = Cx + Du,   (2.49)

where, for all t ≥ 0, x(t) ∈ R^n is the state vector, u(t) ∈ R^m is the input vector, and
y(t) ∈ R^p is the output vector. The transfer function from u to y is defined by

    Y(s) = G(s) U(s),   (2.50)

where Y(s) and U(s) are the Laplace transforms of y(t) and u(t), respectively. It can
be shown that

    G(s) = D + C(sI − A)^{−1} B.   (2.51)

For simplicity, the state-space realization (A, B, C, D) can be written in the compact
packed-matrix form

    G(s) = [ A  B ; C  D ].   (2.52)

The state response in the time domain is

    x(t) = e^{At} x0 + ∫_0^t e^{A(t−τ)} B u(τ) dτ,   (2.53)

and the output response is

    y(t) = C e^{At} x0 + ∫_0^t C e^{A(t−τ)} B u(τ) dτ + D u(t).   (2.54)
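Equation (2.51) can be evaluated directly at any complex frequency. The sketch below reuses the diagonal system of Example 2.2 as a convenient test case (for diagonal A, the vector (sI − A)⁻¹B has entries Bi/(s − λi)) and compares the state-space evaluation against the partial-fraction form 1/(s + 1) − 2/(s + 2).

```python
# Evaluate G(s) = D + C (sI - A)^{-1} B for a diagonal A (Example 2.2's system):
# A = diag(-1, -2), B = [1, 1]^T, C = [1, -2], D = 0.
lam = [-1.0, -2.0]
B = [1.0, 1.0]
C = [1.0, -2.0]
D = 0.0

def G(s):
    # For diagonal A, (sI - A)^{-1} B has entries B_i / (s - lam_i).
    return D + sum(C[i] * B[i] / (s - lam[i]) for i in range(2))

s0 = 1j  # test point on the imaginary axis
via_state_space = G(s0)
via_partial_fractions = 1 / (s0 + 1) - 2 / (s0 + 2)

print(via_state_space)
print(via_partial_fractions)  # the two evaluations agree
```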

Fig. 2.3 Relationship of state-space similarity transformation

2.3.2 State Similarity Transformation

Different state coordinates can be defined for the linear time-invariant system given
in (2.49) via an n × n nonsingular matrix T. Let x̂ = Tx; then the system can be
described by

    dx̂/dt = TAT^{−1} x̂ + TB u,  x̂(0) = x̂0 = T x0
    y = CT^{−1} x̂ + D u.   (2.55)

The transformed system is derived via the state similarity transformation (T, T^{−1}).
It has the same transfer function matrix from the input to the output, though with a
different state-space model:

    G(s) = [ TAT^{−1}  TB ; CT^{−1}  D ] =: [ Â  B̂ ; Ĉ  D̂ ],   (2.56)

where Â = TAT^{−1}, B̂ = TB, Ĉ = CT^{−1}, and D̂ = D. The relationship of
this transformation is illustrated in Fig. 2.3. The conjugate system G~(s) = G^T(−s)
of G(s) is given by the realization

    G~(s) = [ −A^T  −C^T ; B^T  D^T ].   (2.57)

Finally, if D is invertible, a state-space representation of G(s)^{−1}, the inverse of G(s),
is given by

    G(s)^{−1} = [ A − BD^{−1}C  BD^{−1} ; −D^{−1}C  D^{−1} ].   (2.58)
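The invariance of the transfer function under the similarity transformation (2.55)–(2.56) can be checked numerically: Ĉ(sI − Â)⁻¹B̂ must equal C(sI − A)⁻¹B for any nonsingular T. The sketch below uses arbitrary illustrative 2 × 2 data (not from the text) and hand-rolled complex matrix arithmetic.

```python
def mul(X, Y):  # generic small matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(X):    # inverse of a 2x2 matrix (real or complex entries)
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

# Arbitrary illustrative data (not from the text).
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [[0.0], [1.0]]
C = [[1.0, 0.0]]
T = [[1.0, 1.0], [0.0, 1.0]]
Ti = inv2(T)

# Transformed realization (2.56): A_hat = T A T^{-1}, B_hat = T B, C_hat = C T^{-1}.
Ah, Bh, Ch = mul(mul(T, A), Ti), mul(T, B), mul(C, Ti)

def tf(Amat, Bmat, Cmat, s):
    sIA = [[s - Amat[0][0], -Amat[0][1]], [-Amat[1][0], s - Amat[1][1]]]
    return mul(mul(Cmat, inv2(sIA)), Bmat)[0][0]

s0 = 0.5 + 1j
g_original = tf(A, B, C, s0)
g_transformed = tf(Ah, Bh, Ch, s0)
print(g_original, g_transformed)  # identical up to rounding
```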

2.3.3 Stability, Controllability, and Observability

2.3.3.1 Stability

Stability is the most important property of a control system under study. In


this section, the concepts of bounded-input-bounded-output (BIBO) stability and
asymptotic stability will be discussed.
Definition 2.13 A system is BIBO stable if it generates a bounded output when it
is subject to any bounded input.
For a linear system modeled by transfer function G(s), it is called BIBO stable if
and only if all the poles of G(s) are in the open left-half plane, i.e., have negative
real parts. For instance, given

    G(s) = [ 1/(s + 1)  1/((s − 1)(s + 2)) ; 0  1/(s + 3) ],

one can find that the poles of G(s) are {−1, 1, −2, −3}. Hence, it is not BIBO stable
because there is a pole {1} with positive real part. The following defines asymptotic
stability.

Definition 2.14 A system of (2.49) is called asymptotically stable if, for any given
initial state x0, the state satisfies ‖x(t)‖ → 0 as t → ∞ when u ≡ 0.
A necessary and sufficient condition for the system to be asymptotically stable
is that the real part of all eigenvalues of A should be negative. The asymptotic
stability is also called the internal stability, though the term is more often used in a
closed-loop system setting. Asymptotic stability implies BIBO stability; however,
BIBO stability does not imply the asymptotic stability. That is, asymptotically
stable systems must be BIBO stable, but a BIBO stable control system may not
be asymptotically stable [1]. The possible discrepancy between BIBO stability and
asymptotic stability of a control system arises from whether the underlying system is
completely controllable or completely observable. Controllability and observability
are introduced next.

2.3.3.2 Controllability

Taking the given system in (2.49), e.g., controllability refers to the ability of the
input signal u to transfer the state x from any initial state to any final state in finite
time. A system is called completely controllable if, for any given initial state x0 and
any final state xf , there exist a finite time Tf and an input u(t), 0  t  Tf , which takes
x(0) D x0 to x(Tf ) D xf . Note that controllability of a system concerns only the matrix
pair (A,B), and the state similarity transformation does not affect the controllability.
To verify the controllability and the following observability, the rank test and
gramian test are the well-known methods [1]. The following summarizes these
schemes.

Fig. 2.4 Circuit example on observability: a resistor network (1 Ω resistors) driven by the source u, with a 1 F capacitor C1 whose voltage x does not appear in the output voltage y

An n-th-order system is completely controllable if any one of the following is
true:
1. The controllability matrix [B  AB  ···  A^{n−1}B] is of full rank.
2. The matrix [λI − A  B] has full row rank at every eigenvalue λ of A.
3. The controllability gramian matrix

       Wc = ∫_0^t e^{Aτ} B B^T e^{A^T τ} dτ   (2.59)

   is nonsingular and thus positive definite for every t > 0.
4. All the eigenvalues of A + BF can be assigned arbitrarily, where F is an
   appropriately chosen state feedback matrix and always exists.

A system model in (2.49) is said to be stabilizable if there exists a state feedback
matrix F such that A + BF is stable (i.e., the state matrix of the feedback system is
Hurwitz).
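The rank test in condition 1 can be carried out with exact rational arithmetic. The sketch below builds the controllability matrix [B AB A²B] for the third-order pair that also appears in Exercise 5 of this chapter and counts pivots by Gaussian elimination.

```python
from fractions import Fraction

A = [[0, 1, 0], [0, 0, 1], [-6, -11, -6]]
B = [0, 0, 1]

# Build the controllability matrix [B  AB  A^2 B] column by column.
cols, v = [], B[:]
for _ in range(3):
    cols.append(v)
    v = [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]
ctrb = [[Fraction(cols[j][i]) for j in range(3)] for i in range(3)]

# Rank by Gaussian elimination with exact arithmetic (no rounding issues).
rank, row = 0, 0
for col in range(3):
    piv = next((r for r in range(row, 3) if ctrb[r][col] != 0), None)
    if piv is None:
        continue
    ctrb[row], ctrb[piv] = ctrb[piv], ctrb[row]
    for r in range(row + 1, 3):
        f = ctrb[r][col] / ctrb[row][col]
        ctrb[r] = [a - f * b for a, b in zip(ctrb[r], ctrb[row])]
    row += 1
    rank = row

print(rank)  # 3: the pair (A, B) is completely controllable
```

By duality, replacing (A, B) with (Aᵀ, Cᵀ) turns the same routine into the observability rank test of the next subsection.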

2.3.3.3 Observability

The controllability describes the ability that the input drives the states, of which the
dual concept is the observability of a system. Taking the given system in (2.49), e.g.,
the observability means the extent to which the system state variables are “visible” at
the output. A system is called completely observable if, by setting the input identical
to zero, any initial state x(0) can be uniquely determined from the output y(t), 0 ≤ t ≤ T,
for some finite T. For example, in the circuit of Fig. 2.4, if no input (voltage
source) u is applied, the initial state x(0) (the voltage across the
capacitor) cannot be deduced from the output y. Note that the observability concerns
only the matrix pair (A,C), and the state similarity transformation does not change
the observability.

The complete observability of a system can be verified by using the rank test or the
gramian test, which are summarized as follows:
1. The observability matrix [C ; CA ; ⋮ ; CA^{n−1}] is of full rank.
2. The matrix [λI − A ; C] has full column rank at every eigenvalue λ of A.
3. The observability gramian matrix

       Wo = ∫_0^t e^{A^T τ} C^T C e^{Aτ} dτ   (2.60)

   is nonsingular and thus positive definite for every t > 0.
4. All eigenvalues of A + HC can be assigned arbitrarily, where H is an
   appropriately chosen observer gain matrix and always exists.

A system model in (2.49) is said to be detectable if there exists an observer gain
matrix H such that A + HC is stable.

2.3.4 Minimal State-Space Realization

For any given LTI system in a state-space model (2.49), an adequately chosen state
similarity transformation matrix T can be applied to transform (2.52) into a block
form whose state is partitioned into the controllable and observable part
(A_CO, B_CO, C_CO) and the remaining controllable/unobservable,
uncontrollable/observable, and uncontrollable/unobservable parts.   (2.61)

Representation (2.61) is the so-called canonical decomposition form (Kalman
canonical decomposition). It can be easily derived that, for zero initial states, the
transfer function of the system is actually

    G(s) = D + C(sI − A)^{−1}B = D + C_CO(sI − A_CO)^{−1}B_CO,   (2.62)

which shows that the transfer function only describes the controllable and observable
part of the system. Figure 2.5 shows the relation of (2.61) in a block diagram.
The dynamics of the uncontrollable part, the unobservable part, or both, if they
exist in the system, will not be seen in the input/output relationship (the transfer
function). That explains the possible situation of a system being BIBO stable but
not asymptotically stable.

Fig. 2.5 Block diagram of the canonical decomposition: only the controllable and observable subsystem (A_CO, B_CO, C_CO) connects the input u to the output y

There are many state-space realizations corresponding to the same transfer
function. The state-space realization (A, B, C, D) with the smallest state dimension
is called a minimal realization of the transfer function. A minimal realization
(A, B, C, D) is always completely controllable and completely observable.

2.3.5 State-Space Algebra

Let state-space realizations of the systems G1(s) and G2(s) be given, respectively, by

    [ ẋ1 ; y1 ] = [ A1  B1 ; C1  D1 ][ x1 ; u1 ]   (2.63)

and

    [ ẋ2 ; y2 ] = [ A2  B2 ; C2  D2 ][ x2 ; u2 ].   (2.64)

Obviously, the system models formed from G1(s) and G2(s) could involve the
variables from both systems. By augmenting (2.63) and (2.64), one obtains

    [ ẋ1 ; x2 ; y1 ] = [ A1  0  B1 ; 0  I  0 ; C1  0  D1 ][ x1 ; x2 ; u1 ]
      ⇒ [ x2 ; ẋ1 ; y1 ] = [ I  0  0 ; 0  A1  B1 ; 0  C1  D1 ][ x2 ; x1 ; u1 ]   (2.65)

Fig. 2.6 Block diagram of a parallel system

and

    [ x1 ; ẋ2 ; y2 ] = [ I  0  0 ; 0  A2  B2 ; 0  C2  D2 ][ x1 ; x2 ; u2 ]
      ⇒ [ ẋ2 ; x1 ; y2 ] = [ A2  0  B2 ; 0  I  0 ; C2  0  D2 ][ x2 ; x1 ; u2 ].   (2.66)

It can be seen in the following that manipulations between two control system
models can be realized via the algebra of usual constant matrix operations.

2.3.6 State-Space Formula for Parallel Systems

As shown in Fig. 2.6, let u1 = u and u2 = u. Since

    y = y1 + y2 = [ C1  C2 ][ x1 ; x2 ] + (D1 + D2) u,   (2.67)

a state-space realization of the transfer function from u to y = y1 + y2 can be found
from (2.65) and (2.66), which have the same dimension of the total states, as

    [ ẋ1 ; ẋ2 ; y ] = [ A1  0  B1 ; 0  A2  B2 ; C1  C2  D1 + D2 ][ x1 ; x2 ; u ],   (2.68)

i.e.,

    G1(s) + G2(s) = [ A1  0  B1 ; 0  A2  B2 ; C1  C2  D1 + D2 ].   (2.69)
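Formula (2.69) can be verified at a test frequency: the augmented realization must reproduce G1(s) + G2(s). The sketch below uses two illustrative first-order sections, G1(s) = 1/(s + 1) and G2(s) = 2/(s + 3), chosen arbitrarily.

```python
# G1(s) = 1/(s+1): (A1, B1, C1, D1) = (-1, 1, 1, 0)
# G2(s) = 2/(s+3): (A2, B2, C2, D2) = (-3, 1, 2, 0)
A1, B1, C1, D1 = -1.0, 1.0, 1.0, 0.0
A2, B2, C2, D2 = -3.0, 1.0, 2.0, 0.0

def parallel_tf(s):
    # Realization (2.69): A = diag(A1, A2), B = [B1; B2], C = [C1 C2], D = D1 + D2.
    x1 = B1 / (s - A1)          # (sI - A) is diagonal here, so solve per state
    x2 = B2 / (s - A2)
    return C1 * x1 + C2 * x2 + (D1 + D2)

s0 = 0.3 + 2j
lhs = parallel_tf(s0)
rhs = 1 / (s0 + 1) + 2 / (s0 + 3)   # G1(s0) + G2(s0)
print(lhs, rhs)
```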

Fig. 2.7 Block diagram of a cascaded system

2.3.7 State-Space Formula for Cascaded Systems

As shown in Fig. 2.7, let u1 = u, u2 = y1, and y = y2. Then, from (2.65) and (2.66),
one has a state-space realization of the transfer function by matrix multiplication as

    [ ẋ1 ; ẋ2 ; y ] = [ I  0  0 ; 0  A2  B2 ; 0  C2  D2 ][ A1  0  B1 ; 0  I  0 ; C1  0  D1 ][ x1 ; x2 ; u ]
                   = [ A1  0  B1 ; B2C1  A2  B2D1 ; D2C1  C2  D2D1 ][ x1 ; x2 ; u ],   (2.70)

or equivalently

    [ ẋ2 ; ẋ1 ; y ] = [ A2  0  B2 ; 0  I  0 ; C2  0  D2 ][ I  0  0 ; 0  A1  B1 ; 0  C1  D1 ][ x2 ; x1 ; u ]
                   = [ A2  B2C1  B2D1 ; 0  A1  B1 ; C2  D2C1  D2D1 ][ x2 ; x1 ; u ].   (2.71)

Hence,

    G2(s)G1(s) = [ A1  0  B1 ; B2C1  A2  B2D1 ; D2C1  C2  D2D1 ]
               = [ A2  B2C1  B2D1 ; 0  A1  B1 ; C2  D2C1  D2D1 ].   (2.72)
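Likewise, the cascade realization (2.72) must reproduce the product G2(s)G1(s). The sketch below evaluates both sides at a test point for the same two illustrative first-order sections; since (sI − A) is block lower triangular here, the state equations are solved by forward substitution.

```python
A1, B1, C1, D1 = -1.0, 1.0, 1.0, 0.0   # G1(s) = 1/(s+1)
A2, B2, C2, D2 = -3.0, 1.0, 2.0, 0.0   # G2(s) = 2/(s+3)

def cascade_tf(s):
    # Realization (2.72): A = [[A1, 0], [B2*C1, A2]], B = [B1; B2*D1],
    # C = [D2*C1  C2], D = D2*D1. (sI - A) is lower triangular: solve forward.
    x1 = B1 / (s - A1)
    x2 = (B2 * D1 + B2 * C1 * x1) / (s - A2)
    return D2 * C1 * x1 + C2 * x2 + D2 * D1

s0 = -0.2 + 1.5j
lhs = cascade_tf(s0)
rhs = (2 / (s0 + 3)) * (1 / (s0 + 1))   # G2(s0) * G1(s0)
print(lhs, rhs)
```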

2.3.8 State-Space Formula for Similarity Transformation

Define a new state variable vector x̂ = Tx. Then one has

    dx̂/dt = Tẋ,   (2.73)
    x = T^{−1} x̂.   (2.74)

From (2.49), (2.73), and (2.74),

    dx̂/dt = Tẋ = TAx + TBu = TAT^{−1} x̂ + (TB)u   (2.75)

and

    y = Cx + Du = CT^{−1} x̂ + Du.   (2.76)

This implies

    G(s) = [ TAT^{−1}  TB ; CT^{−1}  D ].   (2.77)

Consider the specific case that A = [ A11  A12 ; A21  A22 ], B = [ B1 ; B2 ],
C = [ C1  C2 ], and T = [ I  X ; 0  I ] (i.e., T^{−1} = [ I  −X ; 0  I ]), which is
helpful to characterize the minimal realization of the state-space solutions later.
Then,

    T[ A  B ] = [ I  X ; 0  I ][ A11  A12  B1 ; A21  A22  B2 ]
              = [ A11 + XA21   A12 + XA22   B1 + XB2 ; A21   A22   B2 ]   (2.78)

and

    [ TA ; C ] T^{−1} = [ A11 + XA21   A12 + XA22 ; A21   A22 ; C1   C2 ][ I  −X ; 0  I ]
                      = [ A11 + XA21   −(A11 + XA21)X + A12 + XA22 ;
                          A21   −A21X + A22 ;
                          C1   −C1X + C2 ].   (2.79)

This is equivalent to the matrix manipulation

    [ TAT^{−1}  TB ; CT^{−1}  D ] = [ A11 + XA21   A12 + XA22 − (A11 + XA21)X   B1 + XB2 ;
                                      A21   A22 − A21X   B2 ;
                                      C1   C2 − C1X   D ].   (2.80)

2.4 Linear Fractional Transformations and Chain


Scattering-Matrix Description

Consider a general feedback control framework shown in Fig. 2.8, where P denotes
the interconnection system of the controlled plant, namely, the standard control (or
compensation) configuration (SCC) [10]. The closed-loop transfer function from w
to z in Fig. 2.8 is given by

    LFTl(P, K) = LFTl([ P11  P12 ; P21  P22 ], K) := P11 + P12 K(I − P22 K)^{−1} P21,   (2.81)

where LFT stands for the linear fractional transformation and the subscript "l"
stands for "lower." Different from the LFT, the chain scattering-matrix description
(CSD) developed in network circuit theory provides a straightforward interconnection
in a cascaded way. The CSD transforms an LFT into a two-port network connection.
Thus, many known theories which have been developed for a two-port network can
then be used. The definition of CSD is briefly introduced below, while the details
on background, properties, and use of CSD will be described in Chaps. 3, 4, and 5.
Figure 2.9 shows the right and left CSD representations.
Define the right and left CSD transformations with G and K, denoted by CSDr(G, K)
and by CSDl(G̃, K), respectively [9], as

    CSDr(G, K) = CSDr([ G11  G12 ; G21  G22 ], K) := (G12 + G11 K)(G22 + G21 K)^{−1}   (2.82)

Fig. 2.8 Linear fractional transformation

Fig. 2.9 Right and left CSD: (a) right CSD and (b) left CSD

Fig. 2.10 Unity feedback control system (controller K, plant Pp)

and

    CSDl(G̃, K) = CSDl([ G̃11  G̃12 ; G̃21  G̃22 ], K) := −(G̃11 − K G̃21)^{−1}(G̃12 − K G̃22),   (2.83)

where G22 and G̃11 are square and invertible. Note that, if P21 is invertible, the SCC
matrix P can be transformed to a right CSD as

    G = [ P12 − P11 P21^{−1} P22   P11 P21^{−1} ; −P21^{−1} P22   P21^{−1} ].   (2.84)

Also, if P12 is invertible, the SCC matrix P can be transformed to a left CSD as

    G̃ = [ P12^{−1}   −P12^{−1} P11 ; P22 P12^{−1}   P21 − P22 P12^{−1} P11 ].   (2.85)

Example 2.3 Consider the unity feedback control system in Fig. 2.10, where Pp is
a SISO-controlled plant. Find its corresponding LFTl and CSD representations.

Let z = [ ye ; u ], w = r, and y = ye. From the unity feedback control system, by
definition, as u = 0, one has ym = 0; hence, r = ye from r − Pp u = ye, so that

    P11 = [ ye ; u ] / r |_{u=0} = [ 1 ; 0 ] and P21 = ye / r |_{u=0} = 1.

Similarly, as r = 0, one can also obtain

    P12 = [ ye ; u ] / u |_{r=0} = [ −Pp ; 1 ] and P22 = ye / u |_{r=0} = −Pp.

The closed-loop transfer function from r to z = [ ye ; u ] is presented as below
(Fig. 2.11).

Fig. 2.11 LFT form of the closed-loop transfer function: P = [ [1 ; 0]  [−Pp ; 1] ; 1  −Pp ] terminated by K from ye to u

Fig. 2.12 Right CSD form of the closed-loop transfer function: G = [ [0 ; 1]  [1 ; 0] ; Pp  1 ] terminated by K

From

    z = LFTl(P, K) w = [ P11 + P12 (I − K P22)^{−1} K P21 ] w,

one has

    [ ye ; u ] = [ 1 ; K ] (1 + Pp K)^{−1} r.

From the control block diagram, as ye = 0, one has, by the definition of the right
CSD, G11 = [ ye ; u ] / u |_{ye=0} = [ 0 ; 1 ] and G21 = r / u |_{ye=0} = Pp, since
r − Pp u = ye = 0. As u = 0, one then has

    G12 = [ ye ; u ] / ye |_{u=0} = [ 1 ; 0 ] and G22 = r / ye |_{u=0} = 1,

which follows from r − Pp u = ye. Equivalently, the closed-loop transfer function
from r to z = [ ye ; u ] can be represented by Fig. 2.12. From
z = CSDr(G, K) w = (G11 K + G12)(G21 K + G22)^{−1} w, one has

    [ ye ; u ] = [ 1 ; K ] (1 + Pp K)^{−1} r.

This concludes that

    z = LFTl(P, K) w = CSDr(G, K) w.
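The equality of the LFT and right-CSD forms derived in Example 2.3 can be confirmed numerically for scalar Pp and K. The sketch below uses arbitrary numerical values and computes z = [ye; u] both ways.

```python
Pp, K, r = 2.0, 0.5, 1.0   # arbitrary scalar plant, controller, and reference

# LFT form: P11 = [1; 0], P12 = [-Pp; 1], P21 = 1, P22 = -Pp,
# z = [P11 + P12*(1 - K*P22)^{-1}*K*P21] * r (scalar case).
inner = K / (1 - (-Pp) * K)
z_lft = [(1 + (-Pp) * inner * 1) * r, (0 + 1 * inner * 1) * r]

# Right CSD form: G11 = [0; 1], G12 = [1; 0], G21 = Pp, G22 = 1,
# z = (G11*K + G12) * (G21*K + G22)^{-1} * r.
den = Pp * K + 1
z_csd = [((0 * K + 1) / den) * r, ((1 * K + 0) / den) * r]

print(z_lft)  # [ye, u] = [1/(1 + Pp*K), K/(1 + Pp*K)]
print(z_csd)  # identical
```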

Exercises

1. Prove that all the eigenvalues λ(H) of a Hamiltonian matrix H are symmetric
   with respect to the jω-axis.
2. Determine the rank of A = [ 1  2  5  1 ; 2  4  1  2 ; 1  2  1  9 ].
3. Let Q = [ 1/√2  1/√2 ; 1/√2  −1/√2 ; 0  0 ], R = [ 1  1 ; 0  1 ], and
   b = [ 1 ; 1 ; 1 ]. Utilize the least squares approach to solve Ax = b, where A = QR.
4. Consider the following system:

       [ ẋ1 ; ẋ2 ] = [ 1  0 ; 2  4 ][ x1 ; x2 ],  x0 = [ x10 ; x20 ].

   Find the responses of x1(t) and x2(t).
5. Sketch the state trajectories of the following system in the (x1, x2, x3) space for
   x0 = [ 2 ; 4 ; 4 ] and input u(t) = 0, and determine the controllability of the
   system:

       ẋ = [ 0  1  0 ; 0  0  1 ; −6  −11  −6 ] x + [ 0 ; 0 ; 1 ] u
       y = [ 1  5  1 ] x.

6. The transfer function of a linear system is given by

       Y(s)/U(s) = (s + a)/(s³ + 7s² + 14s + 8).

   (a) Determine the values of a for which the system is not completely controllable
       or not completely observable.
   (b) Define the state variables and derive a state-space model in which one of
       the states is unobservable.
   (c) Define the state variables and derive a state-space model in which one of
       the states is uncontrollable.
7. The state-space model of a third-order system is shown below:

       ẋ1 = 2x1 + 3x2 + 3x3 + u
       ẋ2 = 2x1 − 3x2 − 2u
       ẋ3 = 2x1 − 2x2 − 5x3 + 2u
       y = 7x1 + 6x2 + 4x3.

   Use a state similarity transformation to decouple the state-space model and
   discuss the observability and controllability of each of the subsystems.
8. Consider the following systems and decide which function spaces they belong to.
   (a) (s + 1)/((s + 2)(s + 4))
   (b) (2s − 1)/((s + 1)(s + 3))
   (c) 1/(s − 2)
9. Consider the linear system below. Determine its H2 norm, H∞ norm, and Hankel
   norm.

       ẋ = [ 0  1 ; −1  −2 ] x + [ 0 ; 1 ] u
       y = [ 1  0 ] x.

References

1. Chen CT (2009) Linear system theory and design. Oxford University Press, New York
2. Doyle JC, Francis B, Tannenbaum A (1992) Feedback control theory. Macmillan Publishing
Company, New York
3. Francis BA (1987) A course in H1 control theory, vol 88, Lecture notes in control and
information sciences. Springer, Berlin
4. Golub GH, Van Loan CF (1989) Matrix computations. The Johns Hopkins University Press,
London
5. Green M, Limebeer DJN (1995) Linear robust control. Prentice Hall, Englewood Cliffs
6. Helton JW, Merino O (1998) Classical control using H1 methods. SIAM, Philadelphia
7. Rudin W (1973) Functional analysis. McGraw-Hill, New York
8. Strang G (2004) Linear algebra and its applications, 4th edn. Academic, New York
9. Tsai MC, Tsai CS (1993) A chain scattering matrix description approach to H1 control. IEEE
Trans Autom Control 38:1416–1421
10. Zhou K, Doyle JC, Glover K (1996) Robust and optimal control. Prentice Hall, Upper Saddle
River
Chapter 3
Two-Port Networks

This chapter will briefly introduce two-port network descriptions which are closely
related to that of the general control framework involving descriptions using LFT
and CSD. The two-port network was developed as a common methodology to
describe the relationship between inputs and outputs of an electrical circuit. For
example, the impedance matrix of a two-port network can be determined by each
port’s voltage and current according to Ohm’s law. The exposition in this book will
focus on both scattering (i.e., LFT) and chain scattering (i.e., CSD) parameters as
well as their applications to modern control theory.

3.1 One-Port and Two-Port Networks

Before undertaking the study of a two-port network, it is worth reviewing the


concept of a one-port network. Figure 3.1a depicts a circuit with a single terminal
which forms a standard one-port network. A one-port network circuit has only one
terminal in which the output current is equal to input current.
Unlike the one-port network, a two-port network circuit as illustrated in Fig. 3.1b
has two terminals in which the input current is equal to the output current in
each terminal. The two-terminal description of network systems offers freedom
for ease of connecting several subsystems. Hence, utilizing two-port descriptions
to characterize a circuit system for specific design objectives will be much more
convenient than that of the one-port. For example, Thevenin and Norton circuit
equivalents were developed based on one-port network theories [3]. Consider a
simple circuit with two impedances Z1 and Z2 as illustrated in Fig. 3.2, where ZL is
the load impedance. The no-load Thevenin equivalent circuit (i.e., the open circuit
without ZL ) is shown in Fig. 3.3 where the transfer function from V1 to V2 is given by

V2 Z2
D : (3.1)
V1 Z1 C Z2

M.-C. Tsai and D.-W. Gu, Robust and Optimal Control: A Two-port Framework 37
Approach, Advances in Industrial Control, DOI 10.1007/978-1-4471-6257-5__3,
© Springer-Verlag London 2014

Fig. 3.1 (a) One-port and (b) two-port networks

Fig. 3.2 Two-port circuit: source V1, series impedance Z1, shunt impedance Z2, and load ZL

Fig. 3.3 Equivalent Thevenin circuit: VT = V1 Z2/(Z1 + Z2) and RT = Z1 ‖ Z2 in series with the load ZL

Fig. 3.4 Corresponding control block diagram

When load ZL is included, due to the load effect, the transfer function from V1 to V2
can be determined as

    V2/V1 = [Z2/(Z1 + Z2)] ZL / (ZL + Z1Z2/(Z1 + Z2)) = Z2 ZL / (ZL(Z1 + Z2) + Z1Z2).   (3.2)

A system block diagram describing this circuit is illustrated in Fig. 3.4. It should
be noted that the signal that flows into the two-port system is −I2, so that the
relationship between terminal voltage and current is ZL = V2/(−I2). One can then
apply Mason's gain formula to determine the transfer function from V1 to V2 for
the cases without and with load ZL, respectively, as

    V2/V1 = (Z2/Z1) / (1 + Z2/Z1) = Z2/(Z1 + Z2),   (3.3)

Fig. 3.5 LFT form: the two-port P terminated by −1/ZL

and

    V2/V1 = (Z2/Z1) / (1 + Z2/Z1 + Z2/ZL) = Z2 ZL / (ZL(Z1 + Z2) + Z1Z2).   (3.4)

Furthermore, with the load effect, Fig. 3.4 can be formulated in a systematic
framework using a two-port description as shown in Fig. 3.5. It can be obtained
from Fig. 3.4 by cutting the loop around the load term −1/ZL; there are two loops
in the case with load. Then, by the LFT approach, one has

    [ V2 ; V2 ] = [ P11  P12 ; P21  P22 ][ V1 ; I2 ] and I2 = −(1/ZL) V2,   (3.5)

where

    P = [ Z2/(Z1 + Z2)   Z1Z2/(Z1 + Z2) ; Z2/(Z1 + Z2)   Z1Z2/(Z1 + Z2) ].

It is reminded that the negative sign in "−1/ZL" means that the direction of I2 is
opposite to that of −I2 as defined in the circuit in Fig. 3.2. By the definition of the
LFT, it can be verified that, in the case of no load,

    V2/V1 = LFTl(P, 0) = P11 = Z2/(Z1 + Z2),   (3.6)

and in the case with load,

    V2/V1 = LFTl(P, −1/ZL) = P11 − P12 (1/ZL)(1 + P22 (1/ZL))^{−1} P21
          = Z2 ZL / (ZL(Z1 + Z2) + Z1Z2).   (3.7)

Clearly, the results above are the same as (3.1), (3.2), (3.3), and (3.4). However, the
two-port description approach is more systematic and characterizes the load effect
conveniently. The system performance can be tuned easily by the load impedance as
an external part. For example, engineers often use different terminating impedances
to eliminate the echo problem in communication circuits. The same idea arises in
control engineering. The feedback terminator of an LFT system can be chosen to
achieve the desired response. Furthermore, an open-loop unstable system can
be stabilized by a properly defined terminator in the two-port network description.
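The load-effect computation of (3.5)–(3.7) can be reproduced with concrete resistive values. The sketch below (Z1 = 2, Z2 = 3, ZL = 6 are arbitrary choices) closes the scalar LFT with the terminator −1/ZL and compares against the voltage-divider result (3.2).

```python
Z1, Z2, ZL = 2.0, 3.0, 6.0   # arbitrary resistive values for illustration

# Two-port P of (3.5): P11 = P21 = Z2/(Z1+Z2), P12 = P22 = Z1*Z2/(Z1+Z2).
a = Z2 / (Z1 + Z2)
b = Z1 * Z2 / (Z1 + Z2)
K = -1.0 / ZL                          # terminator closing the loop

lft = a + b * K / (1 - b * K) * a      # LFT_l(P, -1/ZL), scalar case
divider = Z2 * ZL / (ZL * (Z1 + Z2) + Z1 * Z2)   # (3.2)/(3.7)

print(lft, divider)  # both give the loaded voltage ratio V2/V1
```

Setting K = 0 in the same code recovers the no-load ratio (3.6), illustrating how the terminator alone carries the load effect.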
Resistors (R), inductors (L), and capacitors (C) are the basic passive impedance
elements of an electrical circuit. Electronic circuits are frequently used to process
electrical signals. The problem addressed by two-port network theory is how to
discover the relationship between input and output at each terminal. Based on these
physical variables V1 , V2 , I1 , and I2 , there are six types of parameters which are
often used for the two-port network description:
1. Impedance parameter (Z parameter)
2. Admittance parameter (Y parameter)
3. Hybrid parameter (H parameter)
4. Transmission parameter (ABCD parameter)
5. Scattering parameter (S parameter)
6. Chain scattering parameter (T parameter)
Although these parameters are common in circuit design synthesis and analysis,
it should be noted that some circuits may not have impedance, admittance, or
transmission matrix descriptions, physically due to certain numerical constraints.
For example, a circuit with transformers does not have an impedance parameter,
and a simple circuit with shunt (or series) impedance does not have the two-port
admittance (or impedance) matrix. The scattering-matrix description, which has its
roots in microwave theory and has connections to operator theory, is then proposed
to overcome problems such as the absence of physical parameters. This situation
will be further discussed in the following sections.

3.2 Impedance and Admittance Parameters


(Z and Y Parameters)

Figure 3.6 depicts a linear, two-port network along with appropriate voltages,
currents, and its terminals. Let the matrix relationship of the impedance parameters
be defined by

Fig. 3.6 Two-port network with load ZL
    [ V1 ; V2 ] = [ Z11  Z12 ; Z21  Z22 ][ I1 ; I2 ] and I2 = −(1/ZL) V2,   (3.8)

where, according to the superposition principle,

    Z11 = (V1/I1)|_{I2=0},  Z12 = (V1/I2)|_{I1=0},
    Z21 = (V2/I1)|_{I2=0},  Z22 = (V2/I2)|_{I1=0}.   (3.9)

The impedance parameter description, derived from Ohm's law, is useful for
series-connected circuits. Similarly, the admittance matrix Y is defined as

    [ I1 ; I2 ] = [ Y11  Y12 ; Y21  Y22 ][ V1 ; V2 ],   (3.10)

where

    Y11 = (I1/V1)|_{V2=0},  Y12 = (I1/V2)|_{V1=0},
    Y21 = (I2/V1)|_{V2=0},  Y22 = (I2/V2)|_{V1=0}.   (3.11)

Note that the admittance parameter description is useful for parallel-connected
circuits. Apparently, it can be observed that

    [ Y11  Y12 ; Y21  Y22 ] = [ Z11  Z12 ; Z21  Z22 ]^{−1}.   (3.12)

In addition, the entries of the matrices Z and Y carry physical units. One can easily
examine the load (ZL) effect for a given two-port impedance (or admittance) matrix
Z (or Y) by the LFT description as illustrated in Fig. 3.7.
Now recall the circuit presented in Fig. 3.2. According to (3.8) and (3.10),
the two-port impedance matrix Z and admittance matrix Y can be determined,
respectively, as

    Z = [ Z11  Z12 ; Z21  Z22 ] = [ Z1 + Z2   Z2 ; Z2   Z2 ],   (3.13)

Fig. 3.7 LFT forms of the Z and Y parameters: (a) LFT form of Z terminated by −1/ZL and (b) LFT form of Y terminated by −ZL

$$Y = \begin{bmatrix} Y_{11} & Y_{12} \\ Y_{21} & Y_{22} \end{bmatrix} = \begin{bmatrix} \dfrac{1}{Z_1} & -\dfrac{1}{Z_1} \\[2mm] -\dfrac{1}{Z_1} & \dfrac{Z_1+Z_2}{Z_1Z_2} \end{bmatrix}. \tag{3.14}$$

In fact, it can be verified that

$$Y = Z^{-1} = \begin{bmatrix} Z_1+Z_2 & Z_2 \\ Z_2 & Z_2 \end{bmatrix}^{-1} = \begin{bmatrix} \dfrac{1}{Z_1} & -\dfrac{1}{Z_1} \\[2mm] -\dfrac{1}{Z_1} & \dfrac{Z_1+Z_2}{Z_1Z_2} \end{bmatrix}. \tag{3.15}$$

It can be seen from (3.14) that for the series short circuit (Z1 = 0), the circuit of Fig. 3.2 does not have a two-port Y parameter description. It can likewise be seen from (3.13) that for the shunt open circuit (Z2 → ∞), the circuit of Fig. 3.2 does not have a two-port Z parameter description. Equations (3.8) and (3.9) can be used to determine the relationship (transfer function) between the currents and voltages by exploiting the LFT. For instance, the overall input impedance, which is the transfer function from I1 to V1, is obtained from (3.13) by closing the loop with I2 = −(1/Z_L)V2:

$$\frac{V_1}{I_1} = \mathrm{LFT}_l\!\left(Z, -\frac{1}{Z_L}\right) = (Z_1+Z_2) - Z_2\frac{1}{Z_L}\left(1+\frac{Z_2}{Z_L}\right)^{-1}Z_2 = Z_1 + (Z_2 \parallel Z_L) = \frac{Z_1(Z_2+Z_L)+Z_2Z_L}{Z_2+Z_L}. \tag{3.16}$$

Moreover, the overall input admittance, which is the transfer function from V1 to I1, follows from (3.14) by closing the loop with V2 = −Z_L I2:

$$\frac{I_1}{V_1} = \mathrm{LFT}_l(Y, -Z_L) = \frac{1}{Z_1} - \frac{Z_L}{Z_1^2}\left(1+\frac{Z_L(Z_1+Z_2)}{Z_1Z_2}\right)^{-1} = \frac{Z_2+Z_L}{Z_1(Z_2+Z_L)+Z_2Z_L}. \tag{3.17}$$
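The two termination identities above can be checked numerically with a one-line scalar LFT. The following Python sketch (the values Z1 = 2, Z2 = 3, Z_L = 5, in ohms, are illustrative assumptions, not from the text) closes the Z matrix of (3.13) with −1/Z_L and the Y matrix of (3.14) with −Z_L:

```python
def lft_l(p11, p12, p21, p22, k):
    # Lower LFT for scalar blocks: P11 + P12*K*(1 - P22*K)**(-1)*P21
    return p11 + p12 * k * p21 / (1 - p22 * k)

Z1, Z2, ZL = 2.0, 3.0, 5.0

# Impedance matrix of Fig. 3.2, Eq. (3.13), closed by the terminator -1/ZL
Zin = lft_l(Z1 + Z2, Z2, Z2, Z2, -1.0 / ZL)

# Admittance matrix, Eq. (3.14), closed by the terminator -ZL
Yin = lft_l(1/Z1, -1/Z1, -1/Z1, (Z1 + Z2)/(Z1*Z2), -ZL)

# Closed forms from Eqs. (3.16) and (3.17)
Zin_formula = (Z1*(Z2 + ZL) + Z2*ZL) / (Z2 + ZL)
print(abs(Zin - Zin_formula) < 1e-12, abs(Yin - 1.0/Zin_formula) < 1e-12)
```

Both comparisons print True, confirming that the Z and Y descriptions give reciprocal input functions once terminated.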

3.3 Hybrid Parameters (H Parameters)

Now consider the hybrid parameters of a two-port network, defined as

$$\begin{bmatrix} V_1 \\ I_2 \end{bmatrix} = \begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{bmatrix}\begin{bmatrix} I_1 \\ V_2 \end{bmatrix}, \tag{3.18}$$

where $H_{11} = \left.\frac{V_1}{I_1}\right|_{V_2=0}$ is the short-circuit input impedance, $H_{12} = \left.\frac{V_1}{V_2}\right|_{I_1=0}$ the open-circuit reverse voltage gain, $H_{21} = \left.\frac{I_2}{I_1}\right|_{V_2=0}$ the short-circuit forward current gain, and $H_{22} = \left.\frac{I_2}{V_2}\right|_{I_1=0}$ the open-circuit output admittance. The hybrid parameters H are commonly seen in the analysis of transistor circuits. For the circuit of Fig. 3.2, one has, by (3.18),

$$H = \begin{bmatrix} Z_1 & 1 \\ -1 & \dfrac{1}{Z_2} \end{bmatrix}. \tag{3.19}$$

Here, the overall input impedance can be determined as in Fig. 3.8 and is given by

$$\frac{V_1}{I_1} = \mathrm{LFT}_l(H, -Z_L) = Z_1 + Z_L\left(1+\frac{Z_L}{Z_2}\right)^{-1} = \frac{Z_1(Z_2+Z_L)+Z_2Z_L}{Z_2+Z_L}, \tag{3.20}$$

which is the same as (3.16).

Fig. 3.8 LFT form of H parameters (terminated by −Z_L)

3.4 Transmission Parameters (ABCD Parameters)

The above cases have shown how to find input/output relations by using the LFT
structure. This section shows how the transmission parameters can be used to derive
those relations by directly considering two-port network chains. It will be seen
that the two-port network chain description is an alternative to that of LFT. It is,
however, more appealing to electrical engineers and communication engineers, due
to its direct connection to the two-port network structure.
The transmission parameter matrix description can connect several two-port
network circuits in a series as illustrated in Fig. 3.9. The transmission parameter
matrix is defined by
" # " # " #
V1 A B V2
D ; (3.21)
I1 C D I2
ˇ
ˇ
where A D VV12 ˇ denotes the open-circuit reverse voltage gain, B D
ˇ I2 D0 ˇ 1

V1 ˇ ˇ
I2 ˇV D0
. / the short-circuit transfer impedance, C D VI12 ˇ
the open-
2 ˇ I2 D0
ˇ
circuit transfer admittance, and D D I I1
ˇ the short-circuit reverse current
2 V2 D0
gain. The transmission parameters are often called the ABCD parameters in the
electrical engineering community. Figure 3.10 shows the two-port transmission
parameter description with load ZL .
For the circuit in Fig. 3.2, the transmission parameters in (3.21) can be found as

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} 1+\dfrac{Z_1}{Z_2} & Z_1 \\[2mm] \dfrac{1}{Z_2} & 1 \end{bmatrix}. \tag{3.22}$$

I1 −I2
+ Two-port Two-port +
V1 ... V2 ZL
− Network 1 Network N −

Fig. 3.9 Two-port network chain

V1 V2
⎡A B⎤
⎢C D ⎥ ZL
Fig. 3.10 Transmission ⎣ ⎦
I1 −I2
parameter chain description

Fig. 3.11 Two-port circuit

Fig. 3.12 Partition of the two-port circuit: (a) Left sub-circuit and (b) Right sub-circuit

One can then obtain the overall input impedance via

$$\frac{V_1}{I_1} = \mathrm{CSD}_r\!\left(\begin{bmatrix} A & B \\ C & D \end{bmatrix}, Z_L\right) = (B+AZ_L)(D+CZ_L)^{-1} = \frac{Z_1(Z_2+Z_L)+Z_2Z_L}{Z_2+Z_L}. \tag{3.23}$$

As expected, the result is the same as in (3.16) and (3.20). The transmission parameters are useful for chaining several two-port networks in cascade, where interconnection reduces to matrix multiplication. Hence, the transmission parameters are also called chain matrices. Consider the circuit in Fig. 3.2 again, redrawn in Fig. 3.11, and further decomposed into the two sub-circuits shown in Fig. 3.12.
The sub-circuit of Fig. 3.12a gives

$$\begin{bmatrix} V_{a1} \\ I_{a1} \end{bmatrix} = \begin{bmatrix} A_a & B_a \\ C_a & D_a \end{bmatrix}\begin{bmatrix} V_{a2} \\ -I_{a2} \end{bmatrix} \tag{3.24}$$

with

$$\begin{bmatrix} A_a & B_a \\ C_a & D_a \end{bmatrix} = \begin{bmatrix} 1 & Z_1 \\ 0 & 1 \end{bmatrix}. \tag{3.25}$$

Similarly, the sub-circuit of Fig. 3.12b gives

$$\begin{bmatrix} V_{b1} \\ I_{b1} \end{bmatrix} = \begin{bmatrix} A_b & B_b \\ C_b & D_b \end{bmatrix}\begin{bmatrix} V_{b2} \\ -I_{b2} \end{bmatrix} \tag{3.26}$$

with

$$\begin{bmatrix} A_b & B_b \\ C_b & D_b \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ \dfrac{1}{Z_2} & 1 \end{bmatrix}. \tag{3.27}$$

Finally, the transmission parameters can be chained up (i.e., by multiplying the two matrices) as

$$\begin{bmatrix} V_1 \\ I_1 \end{bmatrix} = \begin{bmatrix} V_{a1} \\ I_{a1} \end{bmatrix} = \begin{bmatrix} A_a & B_a \\ C_a & D_a \end{bmatrix}\begin{bmatrix} V_{a2} \\ -I_{a2} \end{bmatrix} = \begin{bmatrix} A_a & B_a \\ C_a & D_a \end{bmatrix}\begin{bmatrix} A_b & B_b \\ C_b & D_b \end{bmatrix}\begin{bmatrix} V_{b2} \\ -I_{b2} \end{bmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}\begin{bmatrix} V_2 \\ -I_2 \end{bmatrix} \tag{3.28}$$

with

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} 1 & Z_1 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ \dfrac{1}{Z_2} & 1 \end{bmatrix} = \begin{bmatrix} 1+\dfrac{Z_1}{Z_2} & Z_1 \\[2mm] \dfrac{1}{Z_2} & 1 \end{bmatrix}. \tag{3.29}$$

This concludes the same result as in (3.22).
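The chaining just described can be sketched in a few lines: multiply the two ABCD factors of (3.25) and (3.27) and terminate with Z_L as in (3.23). The impedance values (Z1 = 2, Z2 = 3, Z_L = 5, in ohms) are illustrative assumptions:

```python
def matmul2(M, N):
    # 2x2 matrix product
    return [[M[0][0]*N[0][0] + M[0][1]*N[1][0], M[0][0]*N[0][1] + M[0][1]*N[1][1]],
            [M[1][0]*N[0][0] + M[1][1]*N[1][0], M[1][0]*N[0][1] + M[1][1]*N[1][1]]]

Z1, Z2, ZL = 2.0, 3.0, 5.0
series = [[1.0, Z1], [0.0, 1.0]]       # Eq. (3.25): series impedance Z1
shunt  = [[1.0, 0.0], [1.0/Z2, 1.0]]   # Eq. (3.27): shunt impedance Z2
chain = matmul2(series, shunt)         # Eq. (3.29)
(A, B), (C, D) = chain

# Right CSD termination, Eq. (3.23): Zin = (B + A*ZL) / (D + C*ZL)
Zin = (B + A*ZL) / (D + C*ZL)
print(chain, Zin)   # Zin ~ 3.875 = (Z1*(Z2+ZL) + Z2*ZL)/(Z2+ZL)
```

Cascading more sections is just more matrix products, which is precisely why these are called chain matrices.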


In summary, the impedance, admittance, hybrid, and chain parameters in two-
port network circuits are defined, respectively, based on the relationship between
the input and output (voltages and currents) as
" # " # " #
V1 Z11 Z12 I1
D ; (3.30)
V2 Z21 Z22 I2
" # " # " #
I1 Y11 Y12 V1
D ; (3.31)
I2 Y21 Y22 V2
" # " # " #
V1 H11 H12 I1
D ; (3.32)
I2 H21 H22 V2
" # " # " #
V1 A B V2
D : (3.33)
I1 C D I2

Additionally, the overall input impedance can be equivalently represented by

$$\mathrm{LFT}_l\!\left(Z, -\frac{1}{Z_L}\right) = \left(\mathrm{LFT}_l(Y, -Z_L)\right)^{-1} = \mathrm{LFT}_l(H, -Z_L) = \mathrm{CSD}_r\!\left(\begin{bmatrix} A & B \\ C & D \end{bmatrix}, Z_L\right). \tag{3.34}$$

The equivalence naturally leads to the exploration of conversion formulae between these parameters. In the following, an example is used to illustrate how the parameter matrix H is determined from Z.

Let the impedance matrix description in (3.8) be augmented as

$$\begin{bmatrix} V_1 \\ V_2 \\ I_1 \\ I_2 \end{bmatrix} = \begin{bmatrix} Z_{11} & Z_{12} \\ Z_{21} & Z_{22} \\ I & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} I_1 \\ I_2 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} V_1 \\ I_2 \\ I_1 \\ V_2 \end{bmatrix} = \begin{bmatrix} Z_{11} & Z_{12} \\ 0 & I \\ I & 0 \\ Z_{21} & Z_{22} \end{bmatrix}\begin{bmatrix} I_1 \\ I_2 \end{bmatrix}. \tag{3.35}$$

If Z22 is invertible, then

$$\begin{bmatrix} V_1 \\ I_2 \end{bmatrix} = \begin{bmatrix} Z_{11} & Z_{12} \\ 0 & I \end{bmatrix}\begin{bmatrix} I & 0 \\ Z_{21} & Z_{22} \end{bmatrix}^{-1}\begin{bmatrix} I_1 \\ V_2 \end{bmatrix} = \begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{bmatrix}\begin{bmatrix} I_1 \\ V_2 \end{bmatrix}, \tag{3.36}$$

where, by the matrix inversion,

$$\begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{bmatrix} = \begin{bmatrix} Z_{11}-Z_{12}Z_{22}^{-1}Z_{21} & Z_{12}Z_{22}^{-1} \\ -Z_{22}^{-1}Z_{21} & Z_{22}^{-1} \end{bmatrix}. \tag{3.37}$$

By analogy, the Z parameter matrix can be converted into the chain parameters. Let

$$\begin{bmatrix} V_1 \\ V_2 \\ I_1 \\ I_2 \end{bmatrix} = \begin{bmatrix} Z_{11} & Z_{12} \\ Z_{21} & Z_{22} \\ I & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} I_1 \\ I_2 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} V_1 \\ I_1 \\ V_2 \\ I_2 \end{bmatrix} = \begin{bmatrix} Z_{11} & Z_{12} \\ I & 0 \\ Z_{21} & Z_{22} \\ 0 & I \end{bmatrix}\begin{bmatrix} I_1 \\ I_2 \end{bmatrix}. \tag{3.38}$$

If Z21 is invertible, then one can obtain

$$\begin{bmatrix} V_1 \\ I_1 \end{bmatrix} = \begin{bmatrix} Z_{11} & Z_{12} \\ I & 0 \end{bmatrix}\begin{bmatrix} Z_{21} & Z_{22} \\ 0 & I \end{bmatrix}^{-1}\begin{bmatrix} V_2 \\ I_2 \end{bmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}\begin{bmatrix} V_2 \\ -I_2 \end{bmatrix}, \tag{3.39}$$

where

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} Z_{11}Z_{21}^{-1} & Z_{11}Z_{21}^{-1}Z_{22}-Z_{12} \\ Z_{21}^{-1} & Z_{21}^{-1}Z_{22} \end{bmatrix}. \tag{3.40}$$

Note that the matrix conversion between any two parameter descriptions can be carried out by the same methodology.
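The conversion formulae (3.37) and (3.40) can be coded directly for the scalar (single-input, single-output per port) case; applying them to the Z matrix of (3.13) should reproduce (3.19) and (3.22). A minimal sketch with illustrative values Z1 = 2, Z2 = 3:

```python
def z_to_h(z11, z12, z21, z22):
    # Eq. (3.37), scalar case (requires z22 invertible, i.e., nonzero)
    return [[z11 - z12*z21/z22, z12/z22],
            [-z21/z22, 1.0/z22]]

def z_to_abcd(z11, z12, z21, z22):
    # Eq. (3.40), scalar case (requires z21 invertible, i.e., nonzero)
    return [[z11/z21, z11*z22/z21 - z12],
            [1.0/z21, z22/z21]]

Z1, Z2 = 2.0, 3.0
# Z matrix of Fig. 3.2, Eq. (3.13): [[Z1+Z2, Z2], [Z2, Z2]]
H = z_to_h(Z1 + Z2, Z2, Z2, Z2)
ABCD = z_to_abcd(Z1 + Z2, Z2, Z2, Z2)
print(H)      # should match Eq. (3.19): [[Z1, 1], [-1, 1/Z2]]
print(ABCD)   # should match Eq. (3.22): [[1+Z1/Z2, Z1], [1/Z2, 1]]
```

The same pattern (augment, reorder, invert) yields every other pairwise conversion.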

3.5 Scattering Parameters (S Parameters)

In this section, a scattering parameter, namely, S parameter [5], will be discussed.


Consider a transmission line as illustrated in Fig. 3.13, where Z denotes its equivalent impedance, composed of the four basic passive elements (R, L, G, C). Let Z0 be the characteristic impedance, defined by

$$Z_0 = \sqrt{\frac{R+Ls}{G+Cs}}. \tag{3.41}$$

A transmission line composed only of (L, C) is called lossless. In that case, the characteristic impedance (3.41) becomes $Z_0 = \sqrt{L/C}$. The impedance Z of a lossless transmission line contains only an imaginary part, so that signals on the line do not consume any real power.
From the power wave theory [5], engineers often utilize S parameters to describe the injection and reflection phenomena of high-frequency microwave circuits and to analyze phenomena such as standing waves, echo, and impedance matching throughout the entire communication path. Define

$$a_1 = \frac{V_1+Z_0I_1}{2\sqrt{Z_0}}\ \left(\sqrt{\text{watt}}\right),\quad a_2 = \frac{V_2+Z_0I_2}{2\sqrt{Z_0}}\ \left(\sqrt{\text{watt}}\right),$$

$$b_1 = \frac{V_1-Z_0I_1}{2\sqrt{Z_0}}\ \left(\sqrt{\text{watt}}\right),\quad b_2 = \frac{V_2-Z_0I_2}{2\sqrt{Z_0}}\ \left(\sqrt{\text{watt}}\right), \tag{3.42}$$

where $a_i$ denotes the incident wave (signal) at port i and $b_i$ represents the reflected wave (signal) at port i. Let

$$\begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \end{bmatrix}, \tag{3.43}$$

where

$$S_{11} = \left.\frac{b_1}{a_1}\right|_{a_2=0},\quad S_{12} = \left.\frac{b_1}{a_2}\right|_{a_1=0},\quad S_{21} = \left.\frac{b_2}{a_1}\right|_{a_2=0},\quad S_{22} = \left.\frac{b_2}{a_2}\right|_{a_1=0}. \tag{3.44}$$

The two-port S parameter description is illustrated, in LFT and CSD, in Fig. 3.14.

Z
+ +
Z 0 V1 V2 Z0
Fig. 3.13 Transmission line − −
circuit

Fig. 3.14 LFT form of S parameters and its block description in CSD: (a) LFT form of S parameters and (b) Block description of S in CSD

Generally speaking, a two-port network can be interconnected with a terminator in a lower LFT or in a right CSD. The two-port S parameter description differs from those discussed in the previous sections in that the matrix S is unit-free. This scattering description is devoted to utilizing the signal information of (a1, b1) and (a2, b2). However, it will be seen next that the S parameter description is in fact a transformation of variables from voltage/current to incident/reflected waves. Rewriting (3.42) yields

$$\begin{bmatrix} b_1 \\ a_1 \end{bmatrix} = \begin{bmatrix} \dfrac{V_1-Z_0I_1}{2\sqrt{Z_0}} \\[2mm] \dfrac{V_1+Z_0I_1}{2\sqrt{Z_0}} \end{bmatrix} = \Pi\begin{bmatrix} V_1 \\ I_1 \end{bmatrix}, \tag{3.45}$$

$$\begin{bmatrix} a_2 \\ b_2 \end{bmatrix} = \begin{bmatrix} \dfrac{V_2+Z_0I_2}{2\sqrt{Z_0}} \\[2mm] \dfrac{V_2-Z_0I_2}{2\sqrt{Z_0}} \end{bmatrix} = \Pi\begin{bmatrix} V_2 \\ -I_2 \end{bmatrix}, \tag{3.46}$$

where

$$\Pi = \frac{1}{2\sqrt{Z_0}}\begin{bmatrix} 1 & -Z_0 \\ 1 & Z_0 \end{bmatrix}. \tag{3.47}$$

Note that Π is a transformation matrix between voltage/current and power waves at a port.

Example 3.1 Find the two-port S parameter description for the circuit in Fig. 3.13.

As shown in Fig. 3.15, a voltage source Vs is applied at the input (left) port, and a load equal to the characteristic impedance Z0 is connected at the output (right) port. From

$$I_1 = -I_2,\quad I_1 = \frac{V_s}{Z+Z_0},\quad V_2 = \frac{Z_0}{Z+Z_0}V_s,\quad V_1 = V_s, \tag{3.48}$$

Fig. 3.15 Transmission line circuit (source Vs at the left port, matched load Z0 at the right port)

one gathers, by the superposition principle,

$$a_1 = \frac{V_1+Z_0I_1}{2\sqrt{Z_0}} = \frac{1+\frac{Z_0}{Z+Z_0}}{2\sqrt{Z_0}}V_s = \frac{Z+2Z_0}{Z+Z_0}\cdot\frac{V_s}{2\sqrt{Z_0}}, \tag{3.49}$$

$$b_1 = \frac{V_1-Z_0I_1}{2\sqrt{Z_0}} = \frac{1-\frac{Z_0}{Z+Z_0}}{2\sqrt{Z_0}}V_s = \frac{Z}{Z+Z_0}\cdot\frac{V_s}{2\sqrt{Z_0}}, \tag{3.50}$$

$$b_2 = \frac{V_2-Z_0I_2}{2\sqrt{Z_0}} = \frac{\frac{Z_0}{Z+Z_0}+\frac{Z_0}{Z+Z_0}}{2\sqrt{Z_0}}V_s = \frac{2Z_0}{Z+Z_0}\cdot\frac{V_s}{2\sqrt{Z_0}}, \tag{3.51}$$

$$a_2 = \frac{V_2+Z_0I_2}{2\sqrt{Z_0}} = \frac{\frac{Z_0}{Z+Z_0}-\frac{Z_0}{Z+Z_0}}{2\sqrt{Z_0}}V_s = 0. \tag{3.52}$$

Therefore, as a2 = 0 in (3.52), the transfer functions from a1 to b1 and to b2 can be found as the ratio of (3.50) to (3.49) and the ratio of (3.51) to (3.49), respectively:

$$S_{11} = \left.\frac{b_1}{a_1}\right|_{a_2=0} = \frac{Z}{Z+2Z_0}, \tag{3.53}$$

$$S_{21} = \left.\frac{b_2}{a_1}\right|_{a_2=0} = \frac{2Z_0}{Z+2Z_0}. \tag{3.54}$$

Similarly, utilizing the circuit shown in Fig. 3.16 results in

$$I_1 = -I_2,\quad I_2 = \frac{V_s}{Z+Z_0},\quad V_1 = \frac{Z_0}{Z+Z_0}V_s,\quad V_2 = V_s, \tag{3.55}$$

and then

$$a_2 = \frac{V_2+Z_0I_2}{2\sqrt{Z_0}} = \frac{1+\frac{Z_0}{Z+Z_0}}{2\sqrt{Z_0}}V_s = \frac{Z+2Z_0}{Z+Z_0}\cdot\frac{V_s}{2\sqrt{Z_0}}, \tag{3.56}$$

$$b_2 = \frac{V_2-Z_0I_2}{2\sqrt{Z_0}} = \frac{1-\frac{Z_0}{Z+Z_0}}{2\sqrt{Z_0}}V_s = \frac{Z}{Z+Z_0}\cdot\frac{V_s}{2\sqrt{Z_0}}, \tag{3.57}$$

Fig. 3.16 Transmission line circuit (matched load Z0 at the left port, source Vs at the right port)

$$b_1 = \frac{V_1-Z_0I_1}{2\sqrt{Z_0}} = \frac{\frac{Z_0}{Z+Z_0}+\frac{Z_0}{Z+Z_0}}{2\sqrt{Z_0}}V_s = \frac{2Z_0}{Z+Z_0}\cdot\frac{V_s}{2\sqrt{Z_0}}, \tag{3.58}$$

$$a_1 = \frac{V_1+Z_0I_1}{2\sqrt{Z_0}} = \frac{\frac{Z_0}{Z+Z_0}-\frac{Z_0}{Z+Z_0}}{2\sqrt{Z_0}}V_s = 0. \tag{3.59}$$

Therefore, as a1 = 0 in (3.59), the transfer functions from a2 to b2 and to b1 can be found as the ratio of (3.57) to (3.56) and the ratio of (3.58) to (3.56), respectively:

$$S_{22} = \left.\frac{b_2}{a_2}\right|_{a_1=0} = \frac{Z}{Z+2Z_0}, \tag{3.60}$$

$$S_{12} = \left.\frac{b_1}{a_2}\right|_{a_1=0} = \frac{2Z_0}{Z+2Z_0}. \tag{3.61}$$

This concludes

$$\begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \end{bmatrix}, \tag{3.62}$$

where

$$\begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix} = \begin{bmatrix} \dfrac{Z}{Z+2Z_0} & \dfrac{2Z_0}{Z+2Z_0} \\[2mm] \dfrac{2Z_0}{Z+2Z_0} & \dfrac{Z}{Z+2Z_0} \end{bmatrix}. \tag{3.63}$$
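Example 3.1 can be replayed numerically straight from the wave definitions (3.42). The values Z = 7, Z0 = 3, Vs = 1 below are illustrative assumptions, not taken from the text:

```python
from math import sqrt, isclose

# Port variables for the circuit of Fig. 3.15: series impedance Z driven by Vs,
# terminated in the matched load Z0 (so a2 = 0 by construction).
Z, Z0, Vs = 7.0, 3.0, 1.0
I1 = Vs / (Z + Z0); I2 = -I1
V1 = Vs; V2 = Z0 * Vs / (Z + Z0)

# Incident and reflected waves, Eq. (3.42)
a1 = (V1 + Z0*I1) / (2*sqrt(Z0)); b1 = (V1 - Z0*I1) / (2*sqrt(Z0))
a2 = (V2 + Z0*I2) / (2*sqrt(Z0)); b2 = (V2 - Z0*I2) / (2*sqrt(Z0))

assert isclose(a2, 0.0, abs_tol=1e-12)        # Eq. (3.52)
assert isclose(b1/a1, Z/(Z + 2*Z0))           # S11, Eq. (3.53)
assert isclose(b2/a1, 2*Z0/(Z + 2*Z0))        # S21, Eq. (3.54)
print("S11 =", b1/a1, " S21 =", b2/a1)
```

Repeating the calculation with the source on the right reproduces S22 and S12, confirming the symmetric form of (3.63).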

3.6 Chain Scattering Parameters (T Parameters)

As shown in Fig. 3.17, the chain scattering parameters of a two-port network are defined as

Fig. 3.17 Chain description of T parameters

$$\begin{bmatrix} b_1 \\ a_1 \end{bmatrix} = \begin{bmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{bmatrix}\begin{bmatrix} a_2 \\ b_2 \end{bmatrix}, \tag{3.64}$$

where

$$T_{11} = \left.\frac{b_1}{a_2}\right|_{b_2=0},\quad T_{12} = \left.\frac{b_1}{b_2}\right|_{a_2=0},\quad T_{21} = \left.\frac{a_1}{a_2}\right|_{b_2=0},\quad T_{22} = \left.\frac{a_1}{b_2}\right|_{a_2=0}. \tag{3.65}$$

Apparently, the T parameter matrix is also unit-free, like the S parameter matrix. The chain scattering parameters are often denoted as the T parameters of a two-port network. Recall that the scattering parameters were proposed to describe incident and reflected waves; the chain scattering matrix T was introduced so that several networks can easily be cascaded. The relationship between a scattering matrix (in LFT) and a chain scattering matrix (in CSD) can be determined as follows. Rearranging the signals in T yields

$$\begin{bmatrix} b_1 \\ a_1 \\ a_2 \\ b_2 \end{bmatrix} = \begin{bmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \\ I & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} a_2 \\ b_2 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} b_1 \\ b_2 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} T_{11} & T_{12} \\ 0 & I \\ T_{21} & T_{22} \\ I & 0 \end{bmatrix}\begin{bmatrix} a_2 \\ b_2 \end{bmatrix}. \tag{3.66}$$

Thus, the S parameter matrix is given by

$$\begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} T_{11} & T_{12} \\ 0 & I \end{bmatrix}\begin{bmatrix} a_2 \\ b_2 \end{bmatrix} = \begin{bmatrix} T_{11} & T_{12} \\ 0 & I \end{bmatrix}\begin{bmatrix} T_{21} & T_{22} \\ I & 0 \end{bmatrix}^{-1}\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \end{bmatrix}, \tag{3.67}$$

where

$$\begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix} = \begin{bmatrix} T_{11} & T_{12} \\ 0 & I \end{bmatrix}\begin{bmatrix} T_{21} & T_{22} \\ I & 0 \end{bmatrix}^{-1}. \tag{3.68}$$

b1 −1 a1
⎡T11 T12 ⎤ ⎡T21 T22 ⎤
⎢0 I ⎥⎦ ⎢I
⎣ ⎣ 0 ⎥⎦ a2
b2

ΓL


b1 a1
⎡ S11 S12 ⎤
b2 ⎢S S 22 ⎥⎦
a2
⎣ 21

ΓL

Fig. 3.18 Parameter conversion from T to S

Note that here T22 must be invertible. The S parameters as illustrated in Fig. 3.14 can then be obtained; Fig. 3.18 shows the corresponding manipulations. Similarly, one can also derive the parameter matrix T from S:

$$\begin{bmatrix} b_1 \\ b_2 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \\ I & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} b_1 \\ a_1 \\ a_2 \\ b_2 \end{bmatrix} = \begin{bmatrix} S_{11} & S_{12} \\ I & 0 \\ 0 & I \\ S_{21} & S_{22} \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \end{bmatrix}. \tag{3.69}$$

If S21 is invertible, then the T parameter matrix is given by

$$\begin{bmatrix} b_1 \\ a_1 \end{bmatrix} = \begin{bmatrix} S_{11} & S_{12} \\ I & 0 \end{bmatrix}\begin{bmatrix} 0 & I \\ S_{21} & S_{22} \end{bmatrix}^{-1}\begin{bmatrix} a_2 \\ b_2 \end{bmatrix} = \begin{bmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{bmatrix}\begin{bmatrix} a_2 \\ b_2 \end{bmatrix}, \tag{3.70}$$

where

$$\begin{bmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{bmatrix} = \begin{bmatrix} S_{11} & S_{12} \\ I & 0 \end{bmatrix}\begin{bmatrix} 0 & I \\ S_{21} & S_{22} \end{bmatrix}^{-1}. \tag{3.71}$$
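For scalar blocks, the conversions (3.68) and (3.71) reduce to the closed forms coded below; a round trip S → T → S should return the original matrix. The sample S corresponds to a series impedance with Z = 4 and Z0 = 1 (illustrative values):

```python
def s_to_t(S):
    # Eq. (3.71) for scalar blocks (S21 must be nonzero)
    s11, s12, s21, s22 = S
    return (s12 - s11*s22/s21, s11/s21, -s22/s21, 1.0/s21)

def t_to_s(T):
    # Eq. (3.68) for scalar blocks (T22 must be nonzero)
    t11, t12, t21, t22 = T
    return (t12/t22, t11 - t12*t21/t22, 1.0/t22, -t21/t22)

S = (2/3, 1/3, 1/3, 2/3)   # (S11, S12, S21, S22) for series Z = 4, Z0 = 1
T = s_to_t(S)
print(tuple(round(t, 10) for t in T))   # (-1.0, 2.0, -2.0, 3.0)
back = t_to_s(T)
assert all(abs(x - y) < 1e-12 for x, y in zip(S, back))
```

Note that T22 = 1/S21, so a network with S21 = 0 (no through transmission) has no chain scattering description, mirroring the invertibility conditions above.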

3.7 Conversions Between (ABCD) and (S, T) Matrix Parameters

In this section, the conversions from the impedance, admittance, chain, and hybrid matrices to the scattering and chain scattering matrices are discussed. Firstly, as shown in Fig. 3.19, the conversion from the transmission parameters ABCD to the T parameters is taken as an example.

Recall (3.42), (3.45), and (3.46). Then, by (3.21), one has

$$\begin{bmatrix} V_1 \\ I_1 \end{bmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}\begin{bmatrix} V_2 \\ -I_2 \end{bmatrix} \;\Rightarrow\; \Pi\begin{bmatrix} V_1 \\ I_1 \end{bmatrix} = \Pi\begin{bmatrix} A & B \\ C & D \end{bmatrix}\Pi^{-1}\,\Pi\begin{bmatrix} V_2 \\ -I_2 \end{bmatrix}. \tag{3.72}$$

Hence, from (3.64),

$$\begin{bmatrix} b_1 \\ a_1 \end{bmatrix} = \Pi\begin{bmatrix} A & B \\ C & D \end{bmatrix}\Pi^{-1}\begin{bmatrix} a_2 \\ b_2 \end{bmatrix} = \begin{bmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{bmatrix}\begin{bmatrix} a_2 \\ b_2 \end{bmatrix}. \tag{3.73}$$

Further, by (3.47) (the scalar factors $\frac{1}{2\sqrt{Z_0}}$ cancel),

$$\begin{bmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{bmatrix} = \begin{bmatrix} 1 & -Z_0 \\ 1 & Z_0 \end{bmatrix}\begin{bmatrix} A & B \\ C & D \end{bmatrix}\begin{bmatrix} 1 & -Z_0 \\ 1 & Z_0 \end{bmatrix}^{-1}. \tag{3.74}$$

As illustrated in Fig. 3.20, the function of Π is in fact to transform the port variables (V_i, I_i) into (a_i, b_i). Mathematically speaking, this mapping is also called a "Möbius transformation" [4, 6]. In addition, readers can easily convert the T parameters into the S parameters according to (3.68), as shown in Fig. 3.18, which reveals that the S parameter description interconnects with Γ_L. Other conversions can be carried out by the same techniques.

Example 3.2 For the circuit of Fig. 3.15, determine its ABCD, T, and S parameters.

The chain matrix can be determined as

Fig. 3.19 Two-port transmission circuits: (a) With left voltage source and (b) With right voltage source

Fig. 3.20 Effect of Π as a coordinate transformation

Fig. 3.21 Two-port circuit with source and load impedances

" # " #" # " #" #


V1 A B V2 1 Z V2
D D : (3.75)
I1 C D I2 0 1 I2

Then T can be determined as


2 3
" # " # Z Z
1
T11 T12 A B 6 2Z0 2Z0 7
D… …1 D6
4
7: (3.76)
T21 T22 C D Z Z 5
 1C
2Z0 2Z0

Furthermore, by (3.68), the S parameter is given by


2 Z 2Z0 3
" #
S11 S12 6 2Z0 C Z 2Z0 C Z 7
D6
4
7:
5 (3.77)
S21 S22 2Z 0 Z
2Z0 C Z 2Z0 C Z
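Equation (3.76) can be verified by carrying out the similarity transformation T = Π [A B; C D] Π⁻¹ numerically. The sketch below assumes Z0 = 1 (so that, by (3.47), Π = ½[1 −1; 1 1]) and an illustrative Z = 4:

```python
def mm(M, N):
    # 2x2 matrix product
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv2(M):
    # 2x2 matrix inverse
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[M[1][1]/det, -M[0][1]/det], [-M[1][0]/det, M[0][0]/det]]

Z, Z0 = 4.0, 1.0
Pi = [[0.5, -0.5*Z0], [0.5, 0.5*Z0]]   # Eq. (3.47) with Z0 = 1
G = [[1.0, Z], [0.0, 1.0]]             # Eq. (3.75): series impedance Z
T = mm(mm(Pi, G), inv2(Pi))            # Eq. (3.76)
print(T)   # [[-1.0, 2.0], [-2.0, 3.0]] = [[1-Z/2Z0, Z/2Z0], [-Z/2Z0, 1+Z/2Z0]]
```

Feeding this T through the conversion (3.68) returns the S matrix of (3.77) with every entry equal to Z/(2Z0+Z) or 2Z0/(2Z0+Z), i.e., 2/3 and 1/3 here.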

3.8 Lossless Networks

As shown in Fig. 3.21, define the input and output reflection coefficients (Γ) as

$$\Gamma_1 = \frac{b_1}{a_1} = \mathrm{CSD}_r\left(\Pi, Z_{eq1}\right) = \frac{Z_{eq1}-Z_0}{Z_{eq1}+Z_0}, \tag{3.78}$$

$$\Gamma_2 = \frac{b_2}{a_2} = \mathrm{CSD}_l\left(\Pi^{-1}, Z_{eq2}\right) = \frac{Z_{eq2}-Z_0}{Z_{eq2}+Z_0}, \tag{3.79}$$

where $Z_{eq1} = \mathrm{CSD}_r\!\left(\begin{bmatrix} A & B \\ C & D \end{bmatrix}, Z_L\right)$ is the equivalent impedance looking in from port 1, $Z_{eq2} = \mathrm{CSD}_l\!\left(\begin{bmatrix} A & B \\ C & D \end{bmatrix}, Z_S\right)$ is the equivalent impedance looking in from port 2, and Z0 is the characteristic impedance. Clearly, if Z_{eq1} = Z0 (i.e., the equivalent impedance matches the characteristic impedance), then Γ1 = 0 by (3.78). This indicates that the incident wave at port 1 passes entirely through to port 2 and causes no reflection at port 1; that is, Γ1 = 0 means the incident power wave at port 1 does not produce a reflected power wave (an echo) at port 1. Similarly, if Z_{eq2} = Z0, then Γ2 = 0, and the incident power wave at port 2 produces no echo at port 2. Alternatively stated, all of the output power waves come from the other port of the two-port network; such a condition is called all-pass in microwave theory [2]. If the delivered power is not attenuated through the propagation of the two-port network, the circuit is considered lossless. In addition, a two-port network producing no reflected echo is also called a matched network.
Define the average delivered power at each port as

$$P_{av1} = \frac{1}{2}\left(a_1^*a_1 - b_1^*b_1\right), \tag{3.80}$$

$$P_{av2} = \frac{1}{2}\left(a_2^*a_2 - b_2^*b_2\right). \tag{3.81}$$

Consequently, the total delivered power is

$$P_{av} = P_{av1} + P_{av2}. \tag{3.82}$$

Then one has, by (3.43),

$$P_{av} = \frac{1}{2}\left\{\begin{bmatrix} a_1 \\ a_2 \end{bmatrix}^*\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} - \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}^*\begin{bmatrix} b_1 \\ b_2 \end{bmatrix}\right\} = \frac{1}{2}\begin{bmatrix} a_1 \\ a_2 \end{bmatrix}^*\left(I - S^*S\right)\begin{bmatrix} a_1 \\ a_2 \end{bmatrix}. \tag{3.83}$$

A two-port network which consumes no power (i.e., Pav = 0 for all incident waves) is called lossless. From (3.83), a lossless network has the property S*S = I. Generally speaking, the energy is not attenuated in propagation: for a lossless network, all of the power (energy) directed into any one port is accounted for by summing the power output at the other ports with the power reflected at the incident port.

Fig. 3.22 Transmission line circuit with a series inductance Ls
" # " #" #1
S11 S12 T11 T12 T21 T22
From D and S*S D I, one can obtain,
S21 S22 0 I I 0
" #
I 0
for J WD ,
0 I
" # " #" # " #
T11 T12 I 0 T11 T12 I 0
T J T D D D J: (3.84)
T21 T22 0 I T21 T22 0 I

Therefore, a lossless two-port network has a unitary S matrix or a J-unitary T


matrix.
Example 3.3 Prove that the S parameter matrix of the circuit in Fig. 3.22 is unitary, with characteristic impedance Z0 = 1 (Ω).

Consider the transmission line in Fig. 3.22, where the impedance is an inductance and hence contains no real part. Then by (3.63) with Z = sL and writing Ro := 2Z0 (Ω), one has
$$S = \begin{bmatrix} \dfrac{Ls}{Ls+R_o} & \dfrac{R_o}{Ls+R_o} \\[2mm] \dfrac{R_o}{Ls+R_o} & \dfrac{Ls}{Ls+R_o} \end{bmatrix}, \tag{3.85}$$

or, with $sL\ (= j\omega L) = jX$,

$$S = \begin{bmatrix} \dfrac{jX}{jX+R_o} & \dfrac{R_o}{jX+R_o} \\[2mm] \dfrac{R_o}{jX+R_o} & \dfrac{jX}{jX+R_o} \end{bmatrix}, \tag{3.86}$$

and hence

$$S^*S = \begin{bmatrix} \dfrac{X^2}{R_o^2+X^2}+\dfrac{R_o^2}{R_o^2+X^2} & -\dfrac{jR_oX}{R_o^2+X^2}+\dfrac{jR_oX}{R_o^2+X^2} \\[2mm] \dfrac{jR_oX}{R_o^2+X^2}-\dfrac{jR_oX}{R_o^2+X^2} & \dfrac{R_o^2}{R_o^2+X^2}+\dfrac{X^2}{R_o^2+X^2} \end{bmatrix} = I. \tag{3.87}$$

This concludes that the S matrix is unitary. Consequently, this is a lossless system.
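The unitarity claim of (3.87) can be confirmed at any sample reactance. A minimal numerical check (the values X = 2.5 and Ro = 1 are arbitrary illustrative choices):

```python
# Scattering matrix of Eq. (3.86) at a sample frequency, then S* S.
X, Ro = 2.5, 1.0           # X = wL is the reactance; Ro the reference resistance
d = complex(Ro, X)          # Ro + jX
S = [[1j*X/d, Ro/d], [Ro/d, 1j*X/d]]

# Conjugate transpose times S
ShS = [[sum(S[k][i].conjugate()*S[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
assert abs(ShS[0][0] - 1) < 1e-12 and abs(ShS[1][1] - 1) < 1e-12
assert abs(ShS[0][1]) < 1e-12 and abs(ShS[1][0]) < 1e-12
print("S is unitary at X =", X)
```

Sweeping X confirms that the pure reactance dissipates nothing at any frequency, which is exactly the lossless property.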

As mentioned in Sect. 3.7, the transformation Π acts as a coordinate transformation. The following examples show the effect of this transformation via the operation of Π. First, two definitions are given below.

Definition 3.1 A real rational function Z(s) is called positive real if the following conditions are satisfied [1]:
1. Z(s) is analytic in Re[s] > 0.
2. Z(s) is real when s is positive and real.
3. Z*(s) + Z(s) ≥ 0 for Re[s] > 0.

Definition 3.2 A real rational function χ(s) is called bounded real if the following conditions are satisfied:
1. χ(s) is analytic in Re[s] > 0.
2. χ(s) is real for real positive s.
3. I − χ*(s)χ(s) ≥ 0 for Re[s] > 0.
" #
1 Zo
Recall that … D 2pZ 1
defined in (3.47), which is a transformation
0
1 Zo
matrix between voltage/current and power wave at a port. From Fig. 3.21, one can
obtain that
a2 ZL  Zo

L D D CSDr .…; ZL / D ; (3.88)
b2 ZL C Zo

a1
ZS  Zo

S D D CSDl …1 ; ZS D .D CSDr .…; ZS // : (3.89)
b1 ZS C Zo

In fact, these are called the bilinear transformations. For … with real Zo , the
positive real (PR) ZL and ZS such as (R,L,C) can be transformed to the bounded
real (BR) functions via …. For example, if Zo D 1( ) and ZL D sL, then
L D
1
1
CSDr .…; ZL / D sLC1sL1
, or if ZL D C1s , then
L D CSDr .…; ZL / D C1s C1 D 1CC
1C s
s
.
Cs
Figure 3.23 reveals this transformation.
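That the bilinear map (3.88) sends positive real impedances to bounded real functions can be spot-checked on the jω-axis: there |Γ| = 1 exactly for a lossless L or C, and |Γ| < 1 once resistance is present. An illustrative sweep (element values are assumptions, Z0 = 1):

```python
# Gamma = (Z - Z0)/(Z + Z0): reflection coefficient for several PR impedances.
Z0 = 1.0
for w in (0.1, 1.0, 10.0):
    samples = (1j*w*2.0,          # inductor, L = 2
               1.0/(1j*w*0.5),    # capacitor, C = 0.5
               3.0 + 1j*w)        # resistor in series with unit inductance
    for Z in samples:
        gamma = (Z - Z0) / (Z + Z0)
        assert abs(gamma) <= 1.0 + 1e-12   # bounded real on the imaginary axis
print("all sampled reflection coefficients lie in the closed unit disc")
```

The lossless elements sit exactly on the unit circle, while the dissipative sample falls strictly inside it.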
The T parameter description is convenient for network cascading. However, the T matrix may not always be a proper function matrix. Recall the series circuit in Fig. 3.24; from Fig. 3.21, one has, for Z = sL,

$$T = \Pi\begin{bmatrix} 1 & Z \\ 0 & 1 \end{bmatrix}\Pi^{-1} = \begin{bmatrix} 1-\dfrac{Z}{2Z_0} & \dfrac{Z}{2Z_0} \\[2mm] -\dfrac{Z}{2Z_0} & 1+\dfrac{Z}{2Z_0} \end{bmatrix}. \tag{3.90}$$

Fig. 3.23 The coordinate transformation properties (PR and BR domains)

Fig. 3.24 Transmission line with a series impedance Z, for which $\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} 1 & Z \\ 0 & 1 \end{bmatrix}$

Fig. 3.25 Transmission line with a shunt impedance Z_P

" #
1s s
For the case that Zo D 1( ) and L D 2, T D becomes an
s 1 C s
improper function matrix. However, by terminating the load ZL D 3( ), one has

L D CSDr .…; 3/ D 3C1


31
D 0:5. From (3.88), the transfer function is given by
" # !
b1 A B 2
L Zo C
L Z  Z s  1
D CSDr … ; ZL D CSDr .T;
L / D D ;
a1 C D
L Z  2Zo  Z sC2
(3.91)

which is bounded real.


From the shunt circuit depicted in Fig. 3.25, one also obtains

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ \dfrac{1}{Z_P} & 1 \end{bmatrix}, \tag{3.92}$$

Fig. 3.26 RLC circuit
and

$$T = \Pi\begin{bmatrix} 1 & 0 \\ \dfrac{1}{Z_P} & 1 \end{bmatrix}\Pi^{-1} = \begin{bmatrix} 1-\dfrac{Z_0}{2Z_P} & -\dfrac{Z_0}{2Z_P} \\[2mm] \dfrac{Z_0}{2Z_P} & 1+\dfrac{Z_0}{2Z_P} \end{bmatrix}. \tag{3.93}$$

For a capacitive circuit, the T matrix is improper. For the load Γ_L, the transfer function is

$$\frac{b_1}{a_1} = \mathrm{CSD}_r\!\left(\Pi\begin{bmatrix} A & B \\ C & D \end{bmatrix}\Pi^{-1}, \Gamma_L\right) = \mathrm{CSD}_r(T, \Gamma_L) = \frac{2\Gamma_LZ_P-\Gamma_LZ_0-Z_0}{\Gamma_LZ_0+2Z_P+Z_0}. \tag{3.94}$$

Converting the system description to LFT as shown in Fig. 3.18, one has

$$S_P = \begin{bmatrix} -\dfrac{Z_0}{2Z_P+Z_0} & \dfrac{2Z_P}{2Z_P+Z_0} \\[2mm] \dfrac{2Z_P}{2Z_P+Z_0} & -\dfrac{Z_0}{2Z_P+Z_0} \end{bmatrix} = \begin{bmatrix} S_{P11} & S_{P12} \\ S_{P21} & S_{P22} \end{bmatrix}, \tag{3.95}$$

which is always proper for real Z0. Apparently,

$$\frac{b_1}{a_1} = \mathrm{LFT}_l(S_P, \Gamma_L) = \frac{2\Gamma_LZ_P-\Gamma_LZ_0-Z_0}{\Gamma_LZ_0+2Z_P+Z_0}. \tag{3.96}$$

Example 3.4 Consider the RLC circuit illustrated in Fig. 3.26. Derive its ABCD and T parameters when R = 10 and L = 1.

The ABCD parameter matrix of Fig. 3.26 can be derived by simple matrix manipulation:

$$G = \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} 1 & Ls \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ \dfrac{1}{R} & 1 \end{bmatrix} = \begin{bmatrix} 1+\dfrac{Ls}{R} & Ls \\[2mm] \dfrac{1}{R} & 1 \end{bmatrix} = \begin{bmatrix} 1+\dfrac{s}{10} & s \\ 0.1 & 1 \end{bmatrix}$$

Fig. 3.27 Two-port description
The overall input impedance can be calculated (see Fig. 3.27) as

$$\frac{V_1}{I_1} = \mathrm{CSD}_r(G, R_L) = \frac{(R+R_L)Ls+RR_L}{R_L+R} = \frac{(10+R_L)s+10R_L}{R_L+10},$$

which is positive real for any positive real R_L; however, it is an improper function. Let $\Pi = \frac{1}{2}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}$, and one gets

$$T = \Pi G\Pi^{-1} = \frac{1}{2}\begin{bmatrix} \dfrac{19-9s}{10} & \dfrac{11s-1}{10} \\[2mm] \dfrac{1-9s}{10} & \dfrac{11s+21}{10} \end{bmatrix}$$

and $\Gamma_L = \mathrm{CSD}_r(\Pi, R_L) = \frac{R_L-1}{R_L+1}$. From Fig. 3.23, one can obtain

$$\frac{b_1}{a_1} = \mathrm{CSD}_r(T, \Gamma_L) = \frac{(11-9\Gamma_L)s+(19\Gamma_L-1)}{(11-9\Gamma_L)s+(21+\Gamma_L)}.$$

Evidently, since $\Gamma_L = \mathrm{CSD}_r(\Pi, R_L) = \frac{R_L-1}{R_L+1} < 1$ for any positive real R_L > 0, the norm of b1 is always less than or equal to the norm of a1. For example, assuming Γ_L = 0.7 gives $\frac{b_1}{a_1} = \mathrm{CSD}_r(T, \Gamma_L) = \frac{4.7s+12.3}{4.7s+21.7} < 1$ for all s > 0; one can conclude that $\frac{b_1}{a_1} \in \mathcal{BH}_\infty$.
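Example 3.4 can be reproduced numerically at a sample point on the positive real axis. The load R_L = 17/3 below is chosen only so that Γ_L works out to the 0.7 used in the text; the sample point s = 2 is arbitrary:

```python
def mm(M, N):
    # 2x2 matrix product
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

R, L, RL = 10.0, 1.0, 17/3           # RL chosen so that Gamma_L = 0.7
s = 2.0                               # sample point on the positive real axis
G = [[1 + L*s/R, L*s], [1/R, 1.0]]    # ABCD matrix of Fig. 3.26 at s
Pi = [[0.5, -0.5], [0.5, 0.5]]        # Eq. (3.47) with Z0 = 1
Pi_inv = [[1.0, 1.0], [-1.0, 1.0]]
T = mm(mm(Pi, G), Pi_inv)

gamma = (RL - 1) / (RL + 1)
# Right CSD termination: b1/a1 = (T11*Gamma + T12)/(T21*Gamma + T22)
g = (T[0][0]*gamma + T[0][1]) / (T[1][0]*gamma + T[1][1])
expected = (4.7*s + 12.3) / (4.7*s + 21.7)   # closed form for Gamma_L = 0.7
assert abs(gamma - 0.7) < 1e-12
assert abs(g - expected) < 1e-9 and abs(g) < 1
print("b1/a1 at s = 2:", g)
```

Repeating this at other positive s keeps |b1/a1| below 1, consistent with the bounded real conclusion.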

Exercises

1. Determine the transfer function from V1 to V2 of the following circuit, with and without the load RL. Convert this circuit into a control block diagram and verify V2/V1 via Mason's gain formula and the LFT, respectively.

I1 R1 R2 I2
+ +
V1 Ls V2 RL

− −

2. Find the Y parameter of the following two-port circuit.


5Ω 1Ω 5Ω

+ +
V1 V2
20Ω 20Ω
− −

3. Determine the ABCD and H parameters for the following two-port circuit.
30Ω
+ +
V1 20Ω 20Ω V2
− −

4. Determine the S parameters of the following two-port circuit with the characteristic impedance Z0 = 64 Ω.

+ 4Ω 4Ω +

V1 V2

− −

5. Determine the S parameters of the following two-port circuit with the characteristic impedance Z0 = 10 Ω.
30Ω
+ +
Z0 V1 30Ω V
30Ω 2 Z0
− −

6. Determine the ABCD parameter of the given circuit and then derive the T
parameter using the ABCD parameter.
I1 R R I2
+ +
Z0 V1 Ls V2 Z0
− −

7. Check whether the S parameter matrix of the following circuit is unitary (lossless) or not, then convert S to Y. Does the real part of Y equal zero?
1
Cs

+ +
Z 01 V1 V2 Z 02
− −

References

1. Anderson BDO, Vongpanitlerd S (1973) Network analysis and synthesis: a modern systems
theory approach. Prentice-Hall Inc, Englewood Cliffs
2. Cheng DK (1992) Fundamentals of engineering electromagnetics. Addison-Wesley, New York
3. Franco S (1995) Electric circuits fundamentals. Saunders College Publishing, Orlando
4. Knopp K (1952) Elements of the theory of functions. Dover, New York
5. Misra DK (2001) Radio-frequency and microwave communication circuits. Wiley, New York
6. Needham T (2000) Möbius transformations and inversion. Clarendon, New York
Chapter 4
Linear Fractional Transformations

This chapter introduces the linear fractional transformation (LFT), which is a


convenient and powerful formulation in control system analysis and controller
synthesis. The LFT formulation employs a two-port matrix description linked by
a terminator to represent a closed-loop feedback system with two individual open-
loop systems. This representation is inherently suitable for MIMO systems. Several
examples are given to show how to locate the interconnected transfer function
for a given system by using LFT and also how to formulate a control design
problem into LFT. Additionally, in order to understand the benefit of utilizing LFT,
the relationship between Mason’s gain formulae and LFT will be discussed in
this chapter. Inner and co-inner systems are relevant to various aspects of control
theory, especially H1 control. Definitions of inner and co-inner functions are thus
introduced in the last section of this chapter.

4.1 Linear Fractional Transformations

Figure 4.1 illustrates the framework of an LFT, which includes two parts: a two-port matrix P and a one-port feedback terminator K. Without the feedback termination y ↦ u, the two-port matrix P is an open-loop system mapping $\begin{bmatrix} r \\ u \end{bmatrix} \mapsto \begin{bmatrix} w \\ y \end{bmatrix}$. Thus, the matrix representation can be characterized as

$$\begin{cases} w = P_{11}r + P_{12}u \\ y = P_{21}r + P_{22}u, \end{cases} \tag{4.1}$$

where

$$P_{11} = \left.\frac{w}{r}\right|_{u=0},\quad P_{12} = \left.\frac{w}{u}\right|_{r=0},\quad P_{21} = \left.\frac{y}{r}\right|_{u=0},\quad P_{22} = \left.\frac{y}{u}\right|_{r=0}. \tag{4.2}$$

M.-C. Tsai and D.-W. Gu, Robust and Optimal Control: A Two-port Framework 65
Approach, Advances in Industrial Control, DOI 10.1007/978-1-4471-6257-5__4,
© Springer-Verlag London 2014

Fig. 4.1 LFT form

" # " #
w r
Note that each entrant in and can be vector-valued signals. Here,
y u
(4.2) symbolizes its input/output relations in the MIMO cases, not the actual
mathematical “division.” It can be said as, e.g., P11 represents how w is dependent
on r when u D 0. In the feedback control, the terminator K (to be designed) encloses
the open-loop system (4.1) via the feedback part

u D Ky: (4.3)

Apparently, the LFT formulation can be regarded as two individual open-loop systems linked by the terminator to represent the closed-loop transfer function r ↦ w. From (4.1) and (4.3), one has

$$\begin{cases} w = P_{11}r + P_{12}u = P_{11}r + P_{12}Ky \\ y = P_{21}r + P_{22}u = P_{21}r + P_{22}Ky; \end{cases} \tag{4.4}$$

therefore,

$$y = (I-P_{22}K)^{-1}P_{21}r. \tag{4.5}$$

Substituting (4.5) into (4.4) yields

$$w = \mathrm{LFT}_l(P, K)\,r, \tag{4.6}$$

where

$$\mathrm{LFT}_l(P, K) := P_{11} + P_{12}K(I-P_{22}K)^{-1}P_{21}. \tag{4.7}$$

In general, the LFT exploits the matrix form to describe linear systems in which the
LFT allows the terminator to be placed either below or above (interconnected) plant
P. Herein, the subscript l in (4.7) indicates that this is a lower LFT, i.e., terminator
K is below plant P.
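Specialized to scalar blocks, the derivation above is a few lines of code. As a standard sanity check (the unity-feedback packing below is an illustrative choice, not from the text), the sensitivity 1/(1 + GK) is obtained by setting w = y = e so that P = [1 −G; 1 −G] and terminating with K:

```python
def lft_l(P, K):
    # Eq. (4.7) for scalar blocks; raises if the loop is not well-posed.
    P11, P12, P21, P22 = P
    if abs(1 - P22*K) < 1e-12:
        raise ValueError("ill-posed: I - P22*K is singular")
    return P11 + P12*K*P21 / (1 - P22*K)

def lft_u(P, H):
    # Eq. (4.10) for scalar blocks.
    P11, P12, P21, P22 = P
    if abs(1 - P11*H) < 1e-12:
        raise ValueError("ill-posed: I - P11*H is singular")
    return P22 + P21*H*P12 / (1 - P11*H)

G, K = 2.0, 3.0
S_lower = lft_l((1.0, -G, 1.0, -G), K)    # lower LFT packing
S_upper = lft_u((-G, 1.0, -G, 1.0), K)    # same loop in upper LFT form
assert abs(S_lower - 1/(1 + G*K)) < 1e-12
assert abs(S_upper - 1/(1 + G*K)) < 1e-12
print(S_lower)
```

The explicit well-posedness guard mirrors the invertibility assumption on (I − P22K) discussed below.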
Similarly, Fig. 4.2 shows the upper LFT form:

$$\begin{bmatrix} y \\ w \end{bmatrix} = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}\begin{bmatrix} u \\ r \end{bmatrix} \tag{4.8}$$

Fig. 4.2 Upper LFT form

Fig. 4.3 LFT forms: (a) (3×3) LFT form and (b) (2×2) LFT form

and

$$u = Hy. \tag{4.9}$$

By the same manipulations as in (4.4), (4.5), and (4.6), one can then obtain w = LFT_u(P, H)r, where

$$\mathrm{LFT}_u(P, H) := P_{22} + P_{21}H(I-P_{11}H)^{-1}P_{12}. \tag{4.10}$$

Note that in (4.7) and (4.10) it has implicitly been assumed that (I − P22K) and (I − P11H) are invertible. For LFT_l(P, K) (or LFT_u(P, H)), if (I − P22K) (or (I − P11H)) is invertible, the system is called well-defined (or well-posed). In practice, the invertibility condition should be satisfied in almost all feedback control system designs for a controller to exist.

Additionally, Fig. 4.3a shows the full LFT form with terminators appearing in both the upper and lower positions. For example, Fig. 4.3a can be employed to describe a feedback control system which suffers from perturbed dynamics Δ. In this case, P accordingly consists of 9 (3 × 3 block) sub-matrices, i.e.,

$$\begin{bmatrix} z \\ w \\ y \end{bmatrix} = \begin{bmatrix} P_{11} & P_{12} & P_{13} \\ P_{21} & P_{22} & P_{23} \\ P_{31} & P_{32} & P_{33} \end{bmatrix}\begin{bmatrix} e \\ r \\ u \end{bmatrix}. \tag{4.11}$$

Closing the feedback loop with u = Ky yields, by a straightforward manipulation,

$$\begin{bmatrix} z \\ w \end{bmatrix} = \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix}\begin{bmatrix} e \\ r \end{bmatrix}, \tag{4.12}$$

where

$$M_{ij} = P_{ij} + P_{i3}K(I-P_{33}K)^{-1}P_{3j},\quad i, j = 1, 2. \tag{4.13}$$

This shows that the full (3 × 3) LFT form of Fig. 4.3a can be reduced, through the closed loop u = Ky, to the (2 × 2) LFT form depicted in Fig. 4.3b, which includes an uncertain dynamics Δ appealing for robustness consideration [2].
Example 4.1 Consider the lower LFT form of (4.7) and use MATLAB to process the LFT manipulations.

The LFT determination can be carried out in many ways, e.g., coded as a MATLAB function. The following MATLAB code is an example included here for readers' reference.

clc;
clear;
disp('LFTl method')
syms P11 P12 P21 P22 K;
P11 = input('r => w (P11) ? ');
P12 = input('u => w (P12) ? ');
P21 = input('r => y (P21) ? ');
P22 = input('u => y (P22) ? ');
K = input('Please input K: ');
if det(eye(size(P22)) - P22*K) == 0
    disp('The transfer function is singular !');
    disp('Plz enter Ctrl c !');
    pause;
else
    Transfer_function_r_to_w = simplify(P11 + P12*K*inv(eye(size(P22)) - P22*K)*P21)
end

4.2 Application of LFT in State-Space Realizations

It is known that the output of a linear time-invariant (LTI) system is the convolution of the system's impulse response with the input function [4]. Inherently, there is no simple analytic way to compute the convolution, especially in MIMO matrix cases. The state-space description is a useful approach to represent the system's input/output relationship. Consider an nth-order (m × r) MIMO system with the state-space description

$$\dot{x}(t) = Ax(t) + Bu(t),\quad x(t_0) = x_0,$$
$$y(t) = Cx(t) + Du(t), \tag{4.14}$$

where x(t0) denotes the initial state and A ∈ R^{n×n}, B ∈ R^{n×r}, C ∈ R^{m×n}, D ∈ R^{m×r} are the system matrices. The transfer matrix of the input/output relationship for zero initial state (i.e., x(t0) = 0) can be written as

$$G(s) = \frac{Y(s)}{U(s)},\quad \text{i.e.,}\quad Y(s) = G(s)U(s), \tag{4.15}$$

where G(s) is the transfer function (matrix) of the system, and Y(s) and U(s) are the Laplace transforms of the output and input functions, respectively. If the state and output equations of (4.14) are known, G(s) can be uniquely computed as

$$G(s) = C(sI-A)^{-1}B + D. \tag{4.16}$$

In this book, the state-space realization of a given system G(s) is denoted as

$$G(s) =: \left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right]. \tag{4.17}$$

In the former, the manipulation on the right-hand side of the equation is the usual multiplication between a matrix and a vector. Figure 4.4 below shows the corresponding block diagram of (4.17), where the integrator (i.e., 1/s) is an operator to characterize the relationship between the state variables x and ẋ. As an example, it is shown next that the LFT manipulation can be used to determine the state-space realization. Briefly speaking, through the LFT method, one can obtain the state-space model of a system by cutting off the connections around the integrator.
Fig. 4.4 State-space model

Fig. 4.5 LFT representations

To determine the matrix P in the LFT formulation, the first step is to properly cut off all the internal loops such that no feedback loop is left; the isolated part will be the terminator. It can be seen from the block diagram in Fig. 4.4 that if the integrator is truncated, there are no internal loops. Thus, a two-port matrix for the relationship [u; x] ↦ [y; ẋ] can be found directly as [D, C; B, A], or alternatively as [A, B; C, D] for the relationship [x; u] ↦ [ẋ; y]. Using I/s as the terminator that encloses ẋ ↦ x will yield the transfer matrix G(s) from u ↦ y. Figure 4.5 shows both the lower and upper LFT representations, where

G(s) = LFTl([D, C; B, A], I/s) = LFTu([A, B; C, D], I/s) = D + C(sI − A)⁻¹B.   (4.18)

A transfer function matrix G(s) is said to be realizable if there exists a finite-


dimensional state-space realization. A transfer function matrix G(s) is uniquely
determined from the input/output relationship for a given initial state; however, there
exist infinitely many state-space realizations for a given G(s). As an introduction
dealing with the realization problem, one can first consider the SISO, third-order
LTI system below

Y(s) = G(s)U(s) = ( (b2 s² + b1 s + b0)/(s³ + a2 s² + a1 s + a0) + d0 ) U(s).   (4.19)

As can be seen in (4.19), the transfer function G(s) is purposely formed as the summation of a strictly proper function and a constant d0 (= G(∞)). Here, (4.19) can be rewritten as

Y(s) = ( ((1/s)b2 + (1/s²)b1 + (1/s³)b0) / (1 + (1/s)a2 + (1/s²)a1 + (1/s³)a0) + d0 ) U(s) = ( n(1/s)/d(1/s) + d0 ) U(s)   (4.20)

or

Y(s) = [ (1/d(1/s)) ( (1/s)b2 + (1/s²)b1 + (1/s³)b0 ) + d0 ] U(s).   (4.21)

Furthermore, defining

X(s) = (1/d(1/s)) U(s) = ( 1/(1 + (1/s)a2 + (1/s²)a1 + (1/s³)a0) ) U(s)   (4.22)

yields

(1 + (1/s)a2 + (1/s²)a1 + (1/s³)a0) X(s) = U(s)   (4.23)

and

X(s) = U(s) − (1/s)a2 X(s) − (1/s²)a1 X(s) − (1/s³)a0 X(s).   (4.24)

Then,

Y(s) = n(1/s)X(s) + d0 U(s) = ( (1/s)b2 + (1/s²)b1 + (1/s³)b0 ) X(s) + d0 U(s).   (4.25)
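The algebra from (4.20) to (4.25) can be verified symbolically; the following sketch (Python with SymPy rather than MATLAB, with coefficient names taken from (4.19)) confirms that eliminating X(s) recovers the original transfer function:

```python
import sympy as sp

s, U = sp.symbols('s U')
a0, a1, a2, b0, b1, b2, d0 = sp.symbols('a0 a1 a2 b0 b1 b2 d0')

# X(s) as defined in (4.22)
X = U / (1 + a2/s + a1/s**2 + a0/s**3)

# Y(s) reconstructed from X(s) as in (4.25)
Y = (b2/s + b1/s**2 + b0/s**3)*X + d0*U

# The ratio Y/U should equal the transfer function in (4.19)
G = sp.simplify(Y/U)
G_expected = (b2*s**2 + b1*s + b0)/(s**3 + a2*s**2 + a1*s + a0) + d0
assert sp.simplify(G - G_expected) == 0
```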

The corresponding system block diagram is depicted in Fig. 4.6, where only the negative sign, if it exists, is noted as "−" at the summing points.
Now, the LFT approach is adopted to find the state-space representation, and the integrators can be cut off as illustrated in Fig. 4.7. If the state variables are defined as shown in Fig. 4.7, then the realization matrix P = [A, B; C, D] can be found from [x; u] ↦ [ẋ; y] directly as

(4.26)

Fig. 4.6 Controller canonical form

Fig. 4.7 Cut off the integrators



Fig. 4.8 Define state variables

i.e.,

[ẋ1; ẋ2; ẋ3] = [0, 1, 0; 0, 0, 1; −a0, −a1, −a2] [x1; x2; x3] + [0; 0; 1] u,
y = [b0, b1, b2] [x1; x2; x3] + d0 u.   (4.27)

Obviously, if the state variables are defined in a different sequence as represented in


Fig. 4.8, another state-space realization is derived as

(4.28)

Apparently, (4.26) and (4.28) indicate that these two different state-space realiza-
tions result from the same third-order transfer function. In fact, for two different
expressions with the same dimension, there always exists a similarity transformation
that links them together. For this example,

T = [0, 0, 1; 0, 1, 0; 1, 0, 0],   (4.29)

which gives

[Â, B̂; Ĉ, D̂] = [T⁻¹, 0; 0, I] [A, B; C, D] [T, 0; 0, I].   (4.30)

In summary, for a general nth-order system of

G(s) = (b_{n−1} s^{n−1} + ⋯ + b1 s + b0)/(s^n + a_{n−1} s^{n−1} + ⋯ + a1 s + a0) + d0,   (4.31)

the state-space form given by

(4.32)

is called the controllable canonical form [1]. The observable canonical realization is dual, in the form of (4.28).
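As a numerical illustration of the controllable canonical form (a Python/NumPy sketch; the helper names are assumptions, and the matrix pattern generalizes (4.27) to order n), one can build the realization from the coefficients of (4.31) and check (4.16) at a sample point:

```python
import numpy as np

def controllable_canonical(a, b, d0):
    """Build (A, B, C, D) from (4.31): a = [a0, ..., a_{n-1}], b = [b0, ..., b_{n-1}]."""
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)         # ones on the superdiagonal
    A[-1, :] = -np.asarray(a, float)   # last row: -a0, ..., -a_{n-1}
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = np.asarray(b, float).reshape(1, n)
    D = np.array([[float(d0)]])
    return A, B, C, D

def tf_value(A, B, C, D, s):
    """Evaluate C (sI - A)^{-1} B + D, cf. (4.16)."""
    n = A.shape[0]
    return (C @ np.linalg.inv(s*np.eye(n) - A) @ B + D).item()

# G(s) = (s^2 + 2s + 3)/(s^3 + 4s^2 + 5s + 6), so a = [6, 5, 4], b = [3, 2, 1]
A, B, C, D = controllable_canonical([6.0, 5.0, 4.0], [3.0, 2.0, 1.0], 0.0)
s = 1.0 + 2.0j
assert np.isclose(tf_value(A, B, C, D, s),
                  (s**2 + 2*s + 3)/(s**3 + 4*s**2 + 5*s + 6))
```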

4.3 Examples of Determining LFT Matrices

In order to enhance the understanding of LFT manipulations, this section provides


several examples for exercise. These exercises are aimed at assisting readers in
developing knowledge and skills in selecting the cutoff points in LFT formulations.
Example 4.2 The following shows three different state-space expressions which are derived from the same transfer function G(s) = (3s + 4)/(s² + 3s + 2).

Fig. 4.9 Realization in a canonical form

4.3.1 Canonical Form

The transfer function of the input/output relationship can be written as

G(s) = (3s + 4)/(s² + 3s + 2) = Y(s)/U(s)   (4.33)

or equivalently,

Y(s) = ( (3(1/s) + 4(1/s²)) / (1 + 3(1/s) + 2(1/s²)) ) U(s).   (4.34)

Applying the procedures developed above to obtain (4.24) and (4.25) will yield the
block diagram of Fig. 4.9. Then, the integrators can be separated to obtain a two-port
matrix, of which the state-space realization is

(4.35)

which can be verified by

(4.36)

Fig. 4.10 Realization in a cascade form

4.3.2 Cascade Form

Let

G(s) = (3s + 4)/(s² + 3s + 2) = (1/(s + 1)) (3 − 2/(s + 2)) = ( (1/s)/(1 + (1/s)) ) ( 3 − (2(1/s))/(1 + 2(1/s)) ),   (4.37)

which will result in the block diagram of Fig. 4.10. The corresponding two-port
matrix becomes

(4.38)

where the A-matrix of this expression is lower triangular. It can be seen that

(4.39)

4.3.3 Parallel Form

From

G(s) = (3s + 4)/(s² + 3s + 2) = 1/(s + 1) + 2/(s + 2) = (1/s)/(1 + (1/s)) + (2(1/s))/(1 + 2(1/s)),   (4.40)

Fig. 4.11 Realization in a parallel form

one has the block diagram of Fig. 4.11. Then the corresponding two-port matrix
becomes

(4.41)

where the A-matrix of this realization is diagonal. It can be found that

(4.42)

From this example, it can be seen that the LFT state-space representation of a given transfer function is not unique. However, from (4.35), (4.38), and (4.41),

C1 A1^i B1 = C2 A2^i B2 = C3 A3^i B3,  ∀ i = 0, 1, …, n.   (4.43)

The subscripts 1, 2, and 3 denote the (A, B, C) matrices from (4.35), (4.38), and (4.41), respectively. This implies that their input/output mappings are the same. Different state-space realizations are theoretically equivalent in that they lead to the same transfer function. However, in practice, how to choose an appropriate expression is still important due to numerical stability and sensitivity, as well as manipulation convenience, which always matter in physical implementations. The following summarizes the use of LFT.
1. The LFT structure represents the interconnection of a two-port, open-loop matrix
and a feedback terminator.

Fig. 4.12 Feedback control case

Fig. 4.13 State-space realization of a unity feedback system

2. In determining the LFT form of a feedback connection system, one should break
all the inner feedback loops to obtain the two-port, open-loop matrix.
3. Different cutoffs will lead to a different terminator and different two-port
matrices, but the input/output mappings are the same.
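Point 3 above, together with the equality (4.43), can be illustrated numerically. The sketch below (Python/NumPy) compares realizations of G(s) = (3s + 4)/(s² + 3s + 2); the matrices are read off from Figs. 4.9, 4.10, and 4.11 under the conventions used in this section, so their orderings may differ from (4.35), (4.38), and (4.41) as printed:

```python
import numpy as np

# Canonical form (Fig. 4.9)
A1 = np.array([[0., 1.], [-2., -3.]]); B1 = np.array([[0.], [1.]]); C1 = np.array([[4., 3.]])
# Cascade form (Fig. 4.10): G(s) = (1/(s+1)) * (3 - 2/(s+2))
A2 = np.array([[-1., 0.], [1., -2.]]); B2 = np.array([[1.], [0.]]); C2 = np.array([[3., -2.]])
# Parallel form (Fig. 4.11): G(s) = 1/(s+1) + 2/(s+2)
A3 = np.array([[-1., 0.], [0., -2.]]); B3 = np.array([[1.], [1.]]); C3 = np.array([[1., 2.]])

# The Markov parameters C A^i B agree for all three realizations, cf. (4.43)
for i in range(6):
    m1 = (C1 @ np.linalg.matrix_power(A1, i) @ B1).item()
    m2 = (C2 @ np.linalg.matrix_power(A2, i) @ B2).item()
    m3 = (C3 @ np.linalg.matrix_power(A3, i) @ B3).item()
    assert np.isclose(m1, m2) and np.isclose(m1, m3)
```

Equal Markov parameters confirm that the three realizations share the same input/output mapping even though their state coordinates differ.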
Recall the unity feedback system, depicted again in Fig. 4.12 below, where G(s) is a controlled plant and K(s) is the controller. The closed-loop transfer (2 × 2) matrix of [d; r] ↦ [e; y] can be found as

T = [ KG(I + KG)⁻¹, (I + GK)⁻¹ ; G(I + KG)⁻¹, GK(I + GK)⁻¹ ].   (4.44)

Let G = [Ag, Bg; Cg, Dg] and K = [Ak, Bk; Ck, Dk] denote the state-space realizations. In order to obtain the state-space expression of (4.44) in the full LFT formulation, as depicted in Fig. 4.3a, one may cut off all of the integrators in Fig. 4.13 and also eliminate Dk (or Dg) such that there are no inner loops. Then, the full LFT matrix P can be found directly from the assigned state variables as

Fig. 4.14 LFT forms: (a) (3 × 3) LFT form and (b) (2 × 2) LFT form

(4.45)

Suppose that I + DgDk ≠ 0, which reflects the well-posedness condition of a feedback system [4, 11]. Let the state-space realization of the closed-loop transfer matrix of [d; r] ↦ [e; y] in (4.44) be denoted by

(4.46)

where

(4.47)

Thus, one has T(s) = LFTl(P, Dk) = LFTu([AT, BT; CT, DT], I/s). Obviously, the feedback system is always well-posed for the case of either Dk = 0 or Dg = 0, i.e., the strictly proper case. Then, for Dk = 0, one can find from (4.45) that the state-space realization of the feedback system, as depicted in Fig. 4.14b, is given by

(4.48)

4.4 Relationship Between Mason’s Gain Formulae and LFT

Mason’s approach [7, 8] has been a useful tool to determine the transfer function for
a given system for many years. A substantial number of practicing control engineers
are familiar with it. However, this approach involves a set of rules that can be
difficult for a new learner to remember.
This section reveals a relationship between the classic Mason’s gain formulae
and the LFT approach to characterize the transfer function from input to output. It
is well-known that the Mason’s gain formulae are a set of rules for determining
the transfer function of a single-input and single-output (SISO) system. From
the discussions in previous sections, one knows that the LFT formulation can
generalize the determination of transfer functions in MIMO cases. In this section,
several examples are given to show the differences and similarities between these
two approaches. Through the following examples, one can observe that these two
approaches reveal their individual benefits in determining the transfer function based
on differently structured control systems.
The Mason’s gain formulae [6] are as follows:
X
Mj j
j
1: M D

2: M D transfer function or gain of the system
3: Mj D gain of j th forward path
4: j D an integer representing the forward paths in the system
X
5: D 1  .all different loop gains/
X
C .sum of the gain products of all combinations of 2 nontouching loop gains/
X
 .sum of the gain products of all combinations of 3 nontouching loop gains/

C 

6: j D value of for that part of the block diagram that does not touch the j th
forward path: (4.49)

Fig. 4.15 System block diagram

Example 4.3 Determine the transfer function of the system block diagram depicted
in Fig. 4.15 from r to w using Mason’s gain formulae and the LFT formulation,
respectively.
Firstly, the Mason’s gain formulae (i.e., the following steps (1) to (5)) are applied.
Mason’s Approach
Step 1: Find all of the feedback loops and their gains.
In this example, there are two feedback loops: L1 D  CBE and L2 D  DAB.
Step 2: Find the forward paths and their gains.
A forward path is the path from r to w that does not cross the same point more than
once. In this example, there is only one forward path, i.e., j D 1 and M1 D ABE.
Step 3: Find .
In this example, there are no non-touching loop pairs so that
X
D1 loop gains D 1 C CBE C DAB: (4.50)

Step 4: Find j .
From the block diagram of Fig. 4.15, one can see that L1 D  CBE and L2 D  DAB
touch the forward path M1 D ABE so that 1 D 1.
Step 5: Final solution.
Using the Mason’s gain formulae (4.49), the transfer function from r to w is given by

w ABE
DM D : (4.51)
r 1 C ECB C DAB

LFT Approach
To find the transfer function of a feedback control system by using the LFT
approach, the truncated termination part can always be chosen to be an identity.
Note that the cutting place should break all of the internal closed loops. In this
example, the minimum cutting point is 1. Figure 4.16 shows an alternative cutting
selection. As can be seen in this figure, the number of cutting points is 2. Figure 4.17

Fig. 4.16 System block diagram with two cutting points

Fig. 4.17 LFT representation

illustrates its corresponding LFT formulation. The transfer function from r to w is given by

(4.52)

Now, consider the truncation as depicted in Fig. 4.18, in which there is only one break point. Clearly, one has the terminator K = 1, and the two-port matrix is reduced to

(4.53)

Fig. 4.18 System block diagram with one cutting point

Fig. 4.19 Reduced LFT form

Figure 4.19 illustrates its corresponding LFT formulation. By the definition of LFTl(P, K), the transfer function from r to w here is illustrated by

(4.54)

Apparently, the results shown in Eqs. (4.51), (4.52), and (4.54) are the same. Here, one can observe that (I − P22 K) = 1 + (DA + EC)B in the above example is equal to Δ = 1 + (DA + EC)B in (4.50). The well-posedness condition of the LFT formulation is thus related to the requirement of a nonzero Δ in Mason's gain formulae.
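The agreement between (4.51) and (4.54) can be checked symbolically; the sketch below (Python/SymPy, with the block gains treated as commuting scalar symbols) evaluates the single-cut LFT of Fig. 4.19:

```python
import sympy as sp

A, B, C, D, E = sp.symbols('A B C D E')

# Two-port matrix of Fig. 4.19 with terminator K = 1
P11, P12 = 0, A*B
P21, P22 = E, -(D*A + E*C)*B
K = 1

lft = P11 + P12*K*(1 - P22*K)**-1*P21   # lower LFT formula
mason = A*B*E/(1 + E*C*B + D*A*B)       # Mason's result (4.51)
assert sp.simplify(lft - mason) == 0
```

Note that (1 − P22 K) reproduces exactly the Δ of Mason's formula, consistent with the remark above.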
Example 4.4 Consider the following control block diagram, as shown in Fig. 4.20,
and determine the transfer function from r to w by using Mason’s gain formulae and
LFT, respectively.
Mason’s Approach
Step 1: Find all the loops and their gains.
Individual loops:

L1 D H1 G2 ; L2 D H2 G5 ; L3 D G5 G3 G2 G1 ; L4 D G5 G4 G2 G1 :


(4.55)

Fig. 4.20 System block diagram

A pair of non-touching loops:

Loop 1 and Loop 2 ⇒ H1G2H2G5.   (4.56)

Step 2: Find the forward paths and their gains.
In this example, there are two forward paths, so one has j = 1 and 2, with

M1 = G5G3G2G1 and M2 = G5G4G2G1.   (4.57)

Step 3: Find Δ.

Δ = 1 − Σ loop gains + Σ nontouching loop gains taken two at a time
  = 1 − [(−H1G2) + (−H2G5) + (−G5G3G2G1) + (−G5G4G2G1)] + H1G2H2G5.   (4.58)

Step 4: Find Δj.

Δ1 = 1 and Δ2 = 1.   (4.59)

Step 5: Final solution.
By (4.49),

M = G5(G3 + G4)G2G1 / (1 + H1G2 + H2G5 + G5(G3 + G4)G2G1 + H1G2H2G5).   (4.60)

Fig. 4.21 System block diagram with chosen cutting points

Fig. 4.22 LFT formulation

LFT Approach
Breaking the system as depicted in Fig. 4.21, its corresponding LFT form is as shown in Fig. 4.22, where the two-port matrix is given by

(4.61)

By definition,

LFTl(P, K) = [0 1] [1 + G2H1, G2G1; −G5(G3 + G4), 1 + G5H2]⁻¹ [G2G1; 0]
           = G1G2(G3 + G4)G5 / (1 + H1G2 + H2G5 + G5(G3 + G4)G2G1 + H1G2H2G5).   (4.62)

Fig. 4.23 RC circuit

Fig. 4.24 Block diagram of RC circuit

Consequently, the LFT yields the same result as Mason's gain formula. It can be seen from (4.61) that P22 is 2 × 2 with the non-touching loops in its main diagonal elements; additionally, det(I − P22) = Δ, where

Δ = 1 + H1G2 + H2G5 + G5(G3 + G4)G2G1 + H1G2H2G5.   (4.63)
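The observation det(I − P22) = Δ can likewise be verified symbolically (a sketch; the P22 block is read off from the two-port matrix shown in Fig. 4.22):

```python
import sympy as sp

G1, G2, G3, G4, G5, H1, H2 = sp.symbols('G1 G2 G3 G4 G5 H1 H2')

# P22 block of the two-port matrix in Fig. 4.22
P22 = sp.Matrix([[-G2*H1,        -G2*G1],
                 [G5*(G3 + G4),  -G5*H2]])

# Delta from (4.63)
delta = 1 + H1*G2 + H2*G5 + G5*(G3 + G4)*G2*G1 + H1*G2*H2*G5
assert sp.simplify((sp.eye(2) - P22).det() - delta) == 0
```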

In the following, an engineering circuit problem is utilized for advanced comprehension. Readers can compare the differences among the three analysis methodologies shown below.
Example 4.5 Determine the transfer function from V1 to V3 of the circuit in Fig. 4.23 by using the LFT formulation.
From the given circuit, one can find the gain relationship between V1 and V3, denoted by T(s), using Kirchhoff's laws [3], as

V3/V1 = T(s) = [ (R2 ‖ (R3 + 1/(Cs))) / (R1 + R2 ‖ (R3 + 1/(Cs))) ] · [ (1/(Cs)) / (R3 + 1/(Cs)) ],   (4.64)

where R1 ‖ R2 := R1R2/(R1 + R2) denotes the equivalent impedance of a parallel connection. In the special case that R1 = R2 = R3 = R, then T(s) = 1/(3RCs + 2).
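Equation (4.64) and its special case can be confirmed symbolically (a Python/SymPy sketch; par() is a helper implementing the parallel-impedance definition given above):

```python
import sympy as sp

s, R, C = sp.symbols('s R C', positive=True)

def par(Z1, Z2):
    """Parallel impedance Z1*Z2/(Z1 + Z2)."""
    return Z1*Z2/(Z1 + Z2)

R1 = R2 = R3 = R
Z = par(R2, R3 + 1/(C*s))                        # impedance seen after R1
T = (Z/(R1 + Z)) * ((1/(C*s))/(R3 + 1/(C*s)))    # voltage dividers, cf. (4.64)
assert sp.simplify(T - 1/(3*R*C*s + 2)) == 0
```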
From the viewpoint of feedback control, one can draw a block diagram as shown
in Fig. 4.24 to represent this circuit. By properly choosing the cutting points to break
the two inner feedback loops, as shown in Fig. 4.25, the transfer function from V1
to V3 can be found from an LFT formulation, where matrix P in Fig. 4.26 can be
determined as

Fig. 4.25 Chosen cutting points

Fig. 4.26 LFT formulation of RC circuit

(4.65)

" #
1 0
For the case R1 D R2 D R3 D R, the implementing termination K D yields
0 1

V3
D LFTl .P; K/ D P11 C P12 K .I  P22 K/ 1 P21
V1
  " #!1 " #
1 1 2 R1 1
1
D0C I 1 R
D : (4.66)
C s RC s Cs
1
RC s
0 3RC sC2

Apparently, the LFT approach of Fig. 4.26 yields the same result as that obtained by using Kirchhoff's laws.
It should be noted that the cutting places which break the internal feedback loops are not unique. In this example, one can also choose other cutting points, e.g.,

Fig. 4.27 Another choice of cutting points for the LFT

Fig. 4.28 Signal flow diagram of the RC circuit

as shown in Fig. 4.27, in which one has the termination K = [1, 0; 0, 1/R3] and the corresponding two-port matrix P given by

(4.67)

For the case R1 = R2 = R3 = R, this yields

LFTl(P, K) = P11 + P12K(I − P22K)⁻¹P21 = 1/(3RCs + 2).   (4.68)
Here, one can also exploit Mason's gain formulae for further verification. Convert the system control block diagram into the signal flow graph shown in Fig. 4.28.

T(s) = V3/V1 = ( R2/(R1R3Cs) ) / ( 1 − (L1 + L2 + L3) + L1L3 ),   (4.69)

where

L1 = −R2/R1, L2 = −R2/R3, L3 = −1/(R3Cs).   (4.70)

For the case R1 = R2 = R3 = R, one can obtain

T(s) = 1/(3RCs + 2).   (4.71)

Evidently, the results of (4.64), (4.66), and (4.71) are the same. Enhanced discussions of the relationship between the LFT and Mason's gain formulae will be revealed in the next section.
From the examples above, using the LFT approach to determine the transfer function is much more straightforward than using Mason's gain formulae. The only key point in using the LFT approach is how to choose the places to break all of the feedback loops. To summarize, for determining the closed-loop transfer function of a feedback system, the more loops the control system has, the more complicated Mason's approach becomes. In the LFT approach, by contrast, one only needs to properly choose the breaking points (even for MIMO systems), and the rest of the procedure is carried out systematically. Moreover, some topological behaviors, such as non-touching loop gains, can be observed in the two-port matrix of the LFT formulation.

4.5 LFT Description and Feedback Controllers

Most of the feedback systems can be described in the standard control configuration (SCC) of Fig. 4.1, in which the LFT approach is used to synthesize and analyze the control problem. In this section, feedback control design using the LFT approach is first introduced. The LFT formulation is now mature, and subroutines are available in MATLAB toolboxes. In the SCC of Fig. 4.1, the LFT matrix P denotes a generalized plant that represents an open-loop system with two sets of inputs stacked as [w; u] and two sets of outputs [z; y], where the signals can be vector-valued functions of time. Controller K (to be designed) is to form the closed loop of y ↦ u as a terminator in the SCC. If a given system is "stabilizable" via the feedback controller K, all internally stabilizing controllers can be found by the Youla parameterization [10, 11], in which a generator for the stabilizing controller K is also given by an

Fig. 4.29 Star product

Fig. 4.30 Control system with disturbance and model uncertainty

LFT in K = LFTl(…, Φ). Then, the interconnection of two LFTs will include a product connection, namely, the star product [9], which is very complicated in its representation. Figure 4.29 shows the star product of two LFT matrices.
Model uncertainties are almost inevitable in real-world control systems, and therefore, robustness consideration plays an important role in control system theory and control system design. Model uncertainties usually arise from unavailable information on the plant dynamics or represent part of the dynamics which is purposely left out for the sake of simplicity and ease in system analysis and controller design. As a consequence, the controller, which is designed based on the mathematical model, has to be "robust" in the sense that it must achieve the design specifications for the real plant dynamics containing the unmodeled, uncertain dynamic component. In fact, almost all control systems in practice suffer from disturbances/noises and model uncertainties. Basically, a disturbance is an external signal which is neither controllable nor dependent on internal variables of the plant, such as signal d in Fig. 4.12. The model uncertainty, however, describes inconsistencies in dynamics between the model and the real plant. As shown in Fig. 4.30 below, it is assumed that the actual plant dynamics G is the sum of the nominal model G0 and the model uncertainty ΔG. The so-called additive uncertainty, i.e., G = G0 + ΔG, therefore includes the uncertain dynamics [2]. In fact, uncertainties also arise in model reduction, which simplifies calculations or avoids difficulties due to the high complexity of a complete model. It is only known that some discrepancies exist between the simulated model and the real plant. For instance, the neglect of some high-order terms or some nonlinear phenomena will be summed up in a global model error, and the model error block ΔG can be a full transfer function matrix.

Fig. 4.31 A controller synthesis scheme

Fig. 4.32 LFT form

The synthesis of controller K would have to consider how to design a robust controller against modeling errors while at the same time achieving the engineering requirements. Consider the controller synthesis scheme of Fig. 4.31, where W1, W2, and W3 are weighting functions with stable dynamics that reflect the corresponding design specifications and characterize the uncertainty profile, i.e., ΔG = W3Δ. Then, putting this synthesis scheme into the LFT form as shown in Fig. 4.32, the controller synthesis problem can be represented in LFT form as

(4.72)

Finally, one can employ the LFT matrix P to synthesize controller K based on the specific design goal. For example, in H∞ control, one can determine a satisfactory controller K by minimizing ‖Tzw‖∞, where w = [d; r] and z = [z1; z2]. More will be discussed in the rest of this book.

To sum up, this section introduced the motivation for feedback controller design. The examples were mainly designed to illustrate the presented LFT approach. Under the LFT scheme, readers can take the synthesis target, e.g., controller K, as the terminator. Then, by applying the LFT manipulation, one can determine the solution of K systematically.

4.6 Inner and Co-inner Systems

This section describes inner and co-inner systems. For synthesizing a (sub)optimal control system under the LFT framework, to be elaborated in the later part of this book, the properties of these systems reveal the significance of such concepts.

Definition 4.1 Let P*(s) = Pᵀ(−s) and P*(jω) = Pᵀ(−jω). The transfer function matrix P(s) is called all-pass if P*P = I for all s. An all-pass P(s) is then called inner if P(s) is stable and P*P ≤ I, ∀ Re(s) ≥ 0. Dually, a transfer function matrix P(s) is called co-all-pass if PP* = I for all s. A co-all-pass P(s) is then called co-inner if P(s) is stable and PP* ≤ I, ∀ Re(s) ≥ 0.

Consider the (q1 + q2) × (m1 + m2) two-port system of Fig. 4.1, where [z; y] = P[w; u] with q2 = m1 and q1 ≥ m2. An inner system has the property that the 2-norm of the output signal is equal to that of the input [3, 5]. That is,

‖y(jω)‖2² + ‖z(jω)‖2² = ‖u(jω)‖2² + ‖w(jω)‖2²   (4.73)

for any ω.
Example 4.6 Verify that the one-port transfer function T(s) = (s − 1)/(s + 1) is inner.
From T*(s) = Tᵀ(−s), one has

T*(s) = (−s − 1)/(−s + 1) = (s + 1)/(s − 1),

and then

T*(s)T(s) = ((s + 1)/(s − 1)) · ((s − 1)/(s + 1)) = 1.

Firstly, this shows that T(s) is all-pass. Moreover, one can obtain, for s = σ + jω,

T*T = ((σ − 1)² + ω²)/((σ + 1)² + ω²) ≤ 1, ∀ σ ≥ 0.

Fig. 4.33 Distances between the location of s and the system pole and zero

Evidently, T*(s)T(s) ≤ 1, ∀ Re(s) ≥ 0. By this and T(s) being stable, T(s) is an inner function. Furthermore, the property can also be examined via the ratio of the distances from the location of s in the complex plane to the system zero and pole. It can be found that

(σ − 1)² < (σ + 1)² and P*(s)P(s) < 1, ∀ Re(s) > 0;
(σ − 1)² = (σ + 1)² and P*(s)P(s) = 1, ∀ Re(s) = 0;   (4.74)
(σ − 1)² > (σ + 1)² and P*(s)P(s) > 1, ∀ Re(s) < 0.

Figure 4.33 shows the relationship.


Note that for the case of T D sC1
s1
, T(s) is all-pass but is not inner. Since its pole
is located in the open right-half plane,

s C 1 sC1
T  .s/T .s/ D D1 (4.75)
s  1 s1

is obtained, but T*(s)T(s)  1, 8 Re(s)  0. This means T(s) is not an inner function,
although T (s)T(s) D 1. 
" #
s1
0
Example 4.7 Verify that the two-port system P .s/ D sC1 s2 is inner.
0 sC2
From
2 32 3
.s C 1/ s1
6 .s  1/ 0 76 0 7
P  .s/P .s/ D 6 7 sC1
4 .s C 2/ 5 4 s  2 5 D I;
0 0
.s  2/ sC2
94 4 Linear Fractional Transformations

one concludes that P(s) is all-pass. For s D  C j!, one has


2 32 3
s1 s1
6 0 76 0 7
P P D 4 s C 1 s  2 5 4 s C 1 s  2 5
0 0
sC2 sC2
2 2
3
.  1/ C ! 2
6 0 7
6 . C 1/2 C ! 2 7
D6 2 2 7  I; 8   0:
4 .  2/ C ! 5
0
. C 2/2 C ! 2

Since P(s) is all-pass and P*(s)P(s)  I, 8 Re(s)  0 (i. e.,   0), and due to that all
of its poles are in the open left-half plane, this two-port P(s) is an inner system by
the definition.
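A quick numerical check of these inner-system conditions (a sketch; only a finite set of sample points is tested, which illustrates rather than proves the property):

```python
import numpy as np

def P(s):
    return np.diag([(s - 1)/(s + 1), (s - 2)/(s + 2)])

# All-pass: P*(jw) P(jw) = P(-jw)^T P(jw) = I on the imaginary axis
for w in [0.0, 0.5, 1.0, 10.0]:
    s = 1j*w
    assert np.allclose(P(-s).T @ P(s), np.eye(2))

# Inner additionally requires P(s)^H P(s) <= I in the closed right-half plane
for s in [0.5, 1 + 1j, 3 - 2j, 2j]:
    M = P(s).conj().T @ P(s)
    assert np.linalg.eigvalsh(np.eye(2) - M).min() >= -1e-9
```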
" #
s1
0 sC1
Example 4.8 Verify that P .s/ D s2 is co-inner.
sC2
0
From
2 s  1 32 sC2 3
0 0
6 s C 1 76 s  2 7 D I;
P .s/P  .s/ D 4 54 5
s2 sC1
0 0
sC2 s1

we can gather that P(s) is co-all-pass. Since one can also verify that P(s)P*(s)  I,
8 Re(s)  0 for s D  C j!, and it is stable, P(s) is an co-inner system.
Theorem 4.1 Given an LFT system P(s), if P(s) is inner, then ‖LFTl(P, K)‖∞ < 1 for any K ∈ BH∞.

Proof Since [z; y] = P[w; u] and P = [P11, P12; P21, P22] is inner, one has P*(jω)P(jω) = I and ‖P22‖∞ ≤ 1. The latter is true because P22 is part of P and ‖P‖∞ = 1. Then, by the small gain theorem, one has LFTl(P, K) ∈ RH∞ for any K ∈ BH∞. Since P is inner, one also has

‖y(jω)‖2² + ‖z(jω)‖2² = ‖u(jω)‖2² + ‖w(jω)‖2².   (4.76)

Utilizing u = Ky and K ∈ BH∞ yields

‖u(jω)‖2 = ‖Ky(jω)‖2 < ‖K‖∞ ‖y(jω)‖2 ≤ ‖y(jω)‖2,   (4.77)

which gives

‖u(jω)‖2² − ‖y(jω)‖2² < 0.   (4.78)


Additionally, (4.76) and (4.78) together imply

‖z(jω)‖2² < ‖w(jω)‖2².   (4.79)

Thus, one has

‖z(jω)‖2 / ‖w(jω)‖2 < 1, for all ω,   (4.80)

and further

max_{w≠0} ‖z(jω)‖2 / ‖w(jω)‖2 < 1.   (4.81)

From z = LFTl(P, K)w, one concludes that ‖LFTl(P, K)‖∞ < 1, as

‖LFTl(P, K)‖∞ = max_{w≠0} ‖z‖2 / ‖w‖2.   (4.82)
Example 4.9 Consider the LFT system as illustrated in Fig. 4.1, where P(s) = [(s − 1)/(s + 1), 0; 0, (s − 2)/(s + 2)] ∈ RH∞ and K = 1/(s + 3) ∈ BH∞. Verify that ‖LFTl(P, K)‖∞ < 1.
From (4.7), one has ‖1/(s + 3)‖∞ = 1/3 and LFTl(P, K) = ((s − 1)(s − 2))/((s + 1)(s + 2)(s + 3)). Hence, one can find that ‖LFTl(P, K)‖∞ = 1/3 < 1.
As mentioned in Chap. 2, the LFT system can be transformed into a CSD one. Hence, the properties of all-pass (or co-all-pass) and inner (or co-inner) have their counterparts as J-unitary (or dual J-unitary) and J-lossless (or dual J-lossless) [5]. These properties and relations will be introduced in the next chapter.
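The norm claim in Example 4.9 can be checked by sampling the magnitude of LFTl(P, K) on the imaginary axis (a sketch; a finite frequency grid only approximates the H∞ norm):

```python
import numpy as np

w = np.logspace(-3, 3, 2001)
# LFTl(P, K) from Example 4.9, evaluated at s = jw
G = ((1j*w - 1)*(1j*w - 2)) / ((1j*w + 1)*(1j*w + 2)*(1j*w + 3))
peak = np.abs(G).max()
assert peak < 1.0                        # consistent with Theorem 4.1
assert np.isclose(peak, 1/3, atol=1e-3)  # the peak value is 1/3, near w = 0
```

The all-pass factor (s − 1)(s − 2)/((s + 1)(s + 2)) has unit magnitude on the imaginary axis, so the peak is set entirely by |1/(jω + 3)|, which is largest at ω = 0.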

Exercises
" #
r
1. Let w D LFTl .P; K/ in the block diagram below. Determine P and
d
LFTl (P,K) with the given cutting point.

C d

r − − w w ⎡ r⎤
A K P ⎢d ⎥
⎣ ⎦
− −

D
K
F

2. Consider a brushed DC motor model as given below and derive its corresponding LFT representation of [ω; i] = P[Td; V], where Te = Kt i.

3. Determine the transfer function of the given system below from r to w. (Hint: use the cutoffs indicated in the block diagram.)

4. Find the transfer function of the following system by Mason's gain formulae as well as by the LFT approach, respectively. Compare the results and discuss their relationships.

5. Consider the following block diagram. Determine the transfer function from r to w using Mason's gain formulae and the LFT approach, respectively.

6. Consider the system block diagram below and determine its transfer function from r to w using Mason's gain formulae and LFT, respectively.

References

1. Dorf RC (1995) Modern control systems. Addison-Wesley, New York
2. Doyle JC, Francis BA, Tannenbaum AR (1992) Feedback control theory. Macmillan, New York
3. Franco S (1995) Electric circuits fundamentals. Saunders College Publishing, Orlando
4. Franklin GF, Powell JD, Emami-Naeini A (2009) Feedback control of dynamic systems, 6th edn. Addison Wesley, New York
5. Kimura H (1997) Chain-scattering approach to H∞ control. Birkhäuser, Boston
6. Kuo BC (1976) Automatic control systems, 6th edn. Prentice Hall, Englewood Cliffs
7. Mason SJ (1953) Feedback theory: some properties of signal flow graphs. Proc IRE 41:1144-1156
8. Mason SJ (1956) Feedback theory: further properties of signal flow graphs. Proc IRE 44:920-926
9. Redheffer RM (1960) On a certain linear fractional transformation. J Math Phys 39:269-286
10. Youla DC, Jabr HA, Bongiorno JJ (1976) Modern Wiener-Hopf design of optimal controllers part II: the multivariable case. IEEE Trans Autom Control 21:319-338
11. Zhou K, Doyle JC, Glover K (1996) Robust and optimal control. Prentice Hall, Upper Saddle River
Chapter 5
Chain Scattering Descriptions

The chain scattering-matrix description (CSD) of a two-port network, first introduced in Chap. 2, originated from conventional electrical circuit theory. In contrast to the LFT, the CSD developed in network circuits provides a straightforward interconnection in a cascaded way. The CSD can transform an LFT into a two-port network connection and vice versa. Thus, many known results which have been developed for two-port networks can then be used in control system analysis and synthesis. Due to its benefits in describing a linear system, the CSD was later extended to the design of robust control systems [1, 2], where different structures of the CSD were investigated. In this chapter, the CSD will be formally defined and explored for transformations and the descriptions of state-space realizations. J-lossless and dual J-lossless systems are defined in this chapter. Both play an important role in CSD control system manipulations, analysis, and synthesis. In particular, the properties of J-lossless and dual J-lossless systems are essential in synthesizing H∞ (sub)optimal controllers using the CSD approach.

5.1 CSD Definitions and Manipulations

The chain scattering representation is also called the homographic transformation [2, 3]. Figure 5.1 shows the representations using the right or left CSD matrix, respectively, as defined by the terminator K being on the right or left of the interconnection system. As can be seen in the following, a two-port LFT system can be represented by coupled right or left CSD matrices. In this book, signals always flow from the lower line to the upper one. In a right CSD diagram, signals flow in the anticlockwise direction, and in a left CSD diagram in the clockwise direction. For instance, in Fig. 5.1a, b, the flow follows "w → y → u → z" but travels anticlockwise in (a) and clockwise in (b). Note that the superscript "~" symbolizes the dual description in the "left" case.

M.-C. Tsai and D.-W. Gu, Robust and Optimal Control: A Two-port Framework 99
Approach, Advances in Industrial Control, DOI 10.1007/978-1-4471-6257-5__5,
© Springer-Verlag London 2014

Fig. 5.1 (a) Right and (b) left CSDs [diagram: in (a) the right CSD matrix G is terminated by K on the right; in (b) the left CSD matrix G̃ is terminated by K on the left]

 
Consider the right CSD matrix G = [G11, G12; G21, G22] of Fig. 5.1a, where the feedback connection is given by

    [z; w] = [G11, G12; G21, G22] [u; y],   u = Ky.    (5.1)

Note that G11 denotes the transfer function of u ↦ z when y = 0, and similarly for the rest:

    G11 = (z/u)|_{y=0},  G12 = (z/y)|_{u=0},  G21 = (w/u)|_{y=0},  G22 = (w/y)|_{u=0}.    (5.2)

From

    [z; w] = [G11 K + G12; G21 K + G22] y,    (5.3)

one has, when (G21 K + G22)^{-1} exists,

    z = (G11 K + G12)(G21 K + G22)^{-1} w.    (5.4)

Then, define the right CSD transformation of G and K, denoted by CSDr(G, K), as

    CSDr([G11, G12; G21, G22], K) := (G12 + G11 K)(G22 + G21 K)^{-1}.    (5.5)

The right CSD transformation is said to be well posed if (G22 + G21 K) is invertible. The whole loop transfer function of w ↦ z is given by CSDr(G, K). This definition implies that G22 must be square.
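The algebra in (5.1)-(5.5) can be checked numerically. The following minimal Python sketch (not part of the original text; all numerical values are arbitrary illustrations) solves the coupled equations (5.1) directly for scalar blocks and compares the result with the closed-form CSDr(G, K):

```python
# Numerical check of the right CSD transformation (5.5) with scalar blocks.
# All values below are arbitrary test numbers, not from the text.
G11, G12, G21, G22 = 2.0, 1.0, 0.5, 3.0
K = 4.0

# Closed form: CSDr(G, K) = (G12 + G11*K)/(G22 + G21*K); well posed here
# since G22 + G21*K != 0.
csd_r = (G12 + G11 * K) / (G22 + G21 * K)

# Direct solution of (5.1): pick y freely, then u = K*y and
# z = G11*u + G12*y, w = G21*u + G22*y.
y = 1.0
u = K * y
z = G11 * u + G12 * y
w = G21 * u + G22 * y

print(abs(z / w - csd_r) < 1e-12)  # True: z = CSDr(G, K) * w
```

The same check can be repeated with transfer functions evaluated at any fixed complex frequency, since the manipulation is purely algebraic.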
Example 5.1 Consider the SCC plant shown in Fig. 4.1 and use MATLAB to determine its corresponding right CSD manipulations.

The following MATLAB code is an example for readers’ reference.

clc;
clear;
disp('SCC2CSDr method')
syms P11 P12 P21 P22 K S;
syms p11 p12 p21 p22 k s;
P11 = input('w => u (P11) ?');
P12 = input('w => y (P12) ?');
P21 = input('r => u (P21) ?');
P22 = input('r => y (P22) ?');
K = input('Please input K:');
[row, col] = size(P21);
if (row ~= col)
    disp('There does not exist a right CSD matrix G !')
    disp('Please press Ctrl-C !')
    pause
else
    G11 = simplify(P12 - P11*inv(P21)*P22);
    G12 = simplify(P11*inv(P21));
    G21 = simplify(-inv(P21)*P22);
    G22 = inv(P21);
    G = [G11 G12; G21 G22]
end
if (G21*K + G22 == 0)
    disp('The transfer function is singular !')
    disp('Please press Ctrl-C !')
    pause
else
    Transfer_function_r_to_w = simplify((G11*K + G12)*inv(G21*K + G22))
end

Similarly, the diagram shown in Fig. 5.1b represents the following two sets of equations:

    [u; y] = [G̃11, G̃12; G̃21, G̃22] [z; w],   u = Ky,    (5.6)

where

    G̃11 = (u/z)|_{w=0},  G̃12 = (u/w)|_{z=0},  G̃21 = (y/z)|_{w=0},  G̃22 = (y/w)|_{z=0}.    (5.7)

From

    [u; y] = [G̃11, G̃12; G̃21, G̃22] [z; w]    (5.8)

and

    [I  -K] [K; I] y = 0,    (5.9)

one obtains

    0 = [I  -K] [G̃11, G̃12; G̃21, G̃22] [z; w];    (5.10)

hence, when (G̃11 - K G̃21)^{-1} exists,

    z = -(G̃11 - K G̃21)^{-1} (G̃12 - K G̃22) w.    (5.11)

Then, the left CSD transformation of G̃ and K, denoted by CSDl(G̃, K), is defined as

    CSDl([G̃11, G̃12; G̃21, G̃22], K) := -(G̃11 - K G̃21)^{-1} (G̃12 - K G̃22).    (5.12)

The left CSD transformation is said to be well posed if (G̃11 - K G̃21) is invertible. Dually, this definition implies that G̃11 must be square.
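The dual transformation (5.12) can be checked in the same way. The short Python sketch below (not from the text; scalar values are arbitrary) computes z = CSDl(G̃, K) w and verifies that the resulting signals satisfy the terminal condition u = Ky of (5.6):

```python
# Numerical check of the left CSD transformation (5.12) with scalar blocks.
# All values below are arbitrary test numbers, not from the text.
Gt11, Gt12, Gt21, Gt22 = 2.0, 1.0, 0.5, 3.0
K = 1.0

# Closed form: CSDl(Gt, K) = -(Gt12 - K*Gt22)/(Gt11 - K*Gt21); well posed
# here since Gt11 - K*Gt21 != 0.
csd_l = -(Gt12 - K * Gt22) / (Gt11 - K * Gt21)

# Consistency with (5.6): with z = CSDl(Gt, K)*w, the left terminator
# u = K*y must hold for u = Gt11*z + Gt12*w and y = Gt21*z + Gt22*w.
w = 1.0
z = csd_l * w
u = Gt11 * z + Gt12 * w
y = Gt21 * z + Gt22 * w

print(abs(u - K * y) < 1e-12)  # True: the termination u = K*y is satisfied
```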
Example 5.2 Consider the SCC plant shown in Fig. 4.1 and use MATLAB to determine its corresponding left CSD manipulations.
The following MATLAB code is an example for readers’ reference.

clc;
clear;
disp('SCC2CSDl method')
syms P11 P12 P21 P22 K S;
syms p11 p12 p21 p22 k s;
P11 = input('w => u (P11) ?');
P12 = input('w => y (P12) ?');
P21 = input('r => u (P21) ?');
P22 = input('r => y (P22) ?');
K = input('Please input K:');
[row, col] = size(P12);
if (row ~= col)
    disp('There does not exist a left CSD matrix G !')
    disp('Please press Ctrl-C !')
    pause
else
    G11 = inv(P12);
    G12 = simplify(-inv(P12)*P11);
    G21 = simplify(P22*inv(P12));
    G22 = simplify(P21 - P22*inv(P12)*P11);
    G = [G11 G12; G21 G22]
end
if (G11 - K*G21 == 0)
    disp('The transfer function is singular !')
    disp('Please press Ctrl-C !')
    pause
else
    Transfer_function_r_to_w = simplify(-inv(G11 - K*G21)*(G12 - K*G22))
end

5.2 Cascaded Connection of Two CSD Matrices

In this section, cases of two cascaded CSD subsystems will be investigated.


A comprehensive understanding of the construction and connection of CSD systems
is useful for general control problems. Note that K is employed as a terminator and
can also be truncated in some formulations.
Case (I) Two right CSD matrices
Figure 5.2 shows two cascaded CSD systems with the termination at the right end.
The definition provides
         
    [z1; w1] = G1 [u1; y1] = G1 G2 [u2; y2] = G1 G2 [K; I] y2 = G [K; I] y2    (5.13)

Fig. 5.2 Cascaded two CSD systems [diagram: right CSD matrices G1 and G2 in cascade, terminated by K at the right end]

Fig. 5.3 Cascaded two CSD systems [diagram: left CSD matrices G̃1 and G̃2 in cascade, terminated by K at the left end; the cascade forms G̃]

and

    z1 = CSDr(G1, CSDr(G2, K)) w1 = CSDr(G1 G2, K) w1 = CSDr(G, K) w1,    (5.14)

where the product of the two right CSD matrices follows the usual (block) matrix multiplication rule, i.e., G = G1 G2.
Property 5.1 For the cascaded connection of two right CSD matrices:
1. CSDr(G1, CSDr(G2, K)) = CSDr(G1 G2, K).
2. CSDr(G1 G2, K) = CSDr(I, K) = K if G1 = G2^{-1}.
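The cascade rule of Property 5.1(1) can be illustrated numerically. The following Python sketch (not from the text; entries are arbitrary scalars) compares the nested transformation with the transformation of the matrix product:

```python
# Check of Property 5.1 with scalar entries (arbitrary test values):
# CSDr(G1, CSDr(G2, K)) equals CSDr(G1*G2, K).

def csd_r(G, K):
    """Right CSD transformation (5.5) for a 2x2 matrix of scalars."""
    (g11, g12), (g21, g22) = G
    return (g12 + g11 * K) / (g22 + g21 * K)

def mul2(A, B):
    """Product of two 2x2 matrices given as nested tuples."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

G1 = ((1.0, 2.0), (0.5, 1.5))
G2 = ((2.0, 0.5), (1.0, 3.0))
K = 0.7

nested = csd_r(G1, csd_r(G2, K))
direct = csd_r(mul2(G1, G2), K)
print(abs(nested - direct) < 1e-12)  # True
```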

Case (II) Two left CSD matrices


Figure 5.3 shows two cascaded CSD systems with the termination at the left end.
Dually, from

    [u1; y1] = G̃1 [z1; w1] = G̃1 G̃2 [z2; w2] = G̃ [z2; w2]    (5.15)

and u1 = Ky1,

    z2 = CSDl(G̃, K) w2    (5.16)

can be obtained, where G̃ = G̃1 G̃2. This can also be derived from u2 = z1 and y2 = w1 as

    [u2; y2] = [CSDl(G̃1, K); I] w1 = G̃2 [z2; w2]    (5.17)

Fig. 5.4 Right CSD associated with left CSD [diagram: the right CSD G maps (a, b) to (z, w) on the top; the left CSD G̃ maps (a, b) to (u, y) on the bottom and is terminated by K on the left]

and then

    0 = [I  -CSDl(G̃1, K)] [CSDl(G̃1, K); I] w1 = [I  -CSDl(G̃1, K)] G̃2 [z2; w2],    (5.18)

and hence,

    z2 = CSDl(G̃2, CSDl(G̃1, K)) w2 = CSDl(G̃, K) w2.    (5.19)

Similarly, the product of the two left CSD matrices also follows the usual (block) matrix multiplication rule.
Property 5.2 The cascaded connection of two left CSD matrices gives the following:
1. CSDl(G̃2, CSDl(G̃1, K)) = CSDl(G̃1 G̃2, K).
2. CSDl(G̃1 G̃2, K) = CSDl(I, K) = K if G̃1 = G̃2^{-1}.


Case (III) Connection of right CSD associated with left CSD
Consider a feedback system of Fig. 5.4, which is represented by two coupling CSD matrices, where

    [z; w] = G [a; b],   [u; y] = G̃ [a; b],    (5.20)

and the feedback termination u = Ky. From a = CSDl(G̃, K) b, one has

    [z; w] = G [a; b] = G [CSDl(G̃, K); I] b,    (5.21)

and hence,

    z = CSDr(G, CSDl(G̃, K)) w.    (5.22)

Fig. 5.5 Multiplying M at right terminal [diagram: the configuration of Fig. 5.4 with a square invertible M appended at the right-hand side of both the top and bottom paths]

Fig. 5.6 Left CSD associated with right CSD [diagram: the left CSD G̃ maps (z, w) to (a, b) on the top; the right CSD G maps (u, y) to (a, b) on the bottom and is terminated by K on the right]

       
Also, from (5.20), it is clear that [a; b] = G̃^{-1} [u; y]; hence, [z; w] = G [a; b] = G G̃^{-1} [u; y] and z = CSDr(G, CSDl(G̃, K)) w = CSDr(G G̃^{-1}, K) w. It is trivial to show that CSDr(I, CSDl(I, K)) = K and CSDr(M, CSDl(M, K)) = K, where M is any square and invertible transfer function matrix. Therefore, multiplying a square and invertible M at the right-hand side of both top and bottom paths, as illustrated in Fig. 5.5, will not affect the closed-loop transfer function of w ↦ z.
For ease of description of the connection in Fig. 5.4, the notation CSDr ∘ CSDl is employed to describe such a connection.
Property 5.3 For the connection of a right CSD associated with a left CSD, one has the following:
1. CSDr(G, CSDl(G̃, K)) = CSDr(GM, CSDl(G̃M, K)), where M is invertible.
2. CSDr(G, CSDl(G̃, K)) = CSDr(I, CSDl(I, K)) = K if G = G̃ is invertible.

Case (IV) Connection of left CSD associated with right CSD
Likewise, a feedback system can also be represented by two coupling CSDs as illustrated in Fig. 5.6, where

    [a; b] = G̃ [z; w] = G [u; y],    (5.23)

Fig. 5.7 Multiplying M at left terminal [diagram: the configuration of Fig. 5.6 with a nonsingular M̃ prepended at the left-hand side of both the top and bottom paths]

with the feedback termination u = Ky. From a = CSDr(G, K) b, one has

    [CSDr(G, K); I] b = G̃ [z; w],    (5.24)

and therefore,

    z = CSDl(G̃, CSDr(G, K)) w.    (5.25)

For easier description of the connection in Fig. 5.6, the notation CSDl ∘ CSDr is employed to sketch this connection. It is not difficult to show that CSDl(I, CSDr(I, K)) = K. Additionally, one can multiply any nonsingular M̃ at the left terminal of both top and bottom paths, as illustrated in Fig. 5.7, which does not affect the closed-loop transfer function of w ↦ z.
Property 5.4 For the connection of a left CSD associated with a right CSD, one has the following:
1. CSDl(G̃, CSDr(G, K)) = CSDl(M̃ G̃, CSDr(M̃ G, K)), where M̃ is invertible.
2. CSDl(G̃, CSDr(G, K)) = CSDl(I, CSDr(I, K)) = K if G̃ = G is invertible.

5.3 Transformation from LFT to CSD Matrix

As illustrated in Fig. 5.8, for the case of dim(w) = dim(y) and with an invertible P21, the particular SCC plant P has an "equivalent" right CSD matrix G of [u; y] ↦ [z; w] such that CSDr(G, K) = LFTl(P, K). This can be derived as follows:

Fig. 5.8 LFT to right CSD [diagram: the SCC plant P with terminator K is transformed into the right CSD matrix G of (5.28), terminated by K on the right]

Let

    [z; w] = G1 [u; w]   and   [u; y] = G̃2 [u; w],    (5.26)

where G1 = [P12, P11; 0, I] and G̃2 = [I, 0; P22, P21]. It can be found that
    [z; w] = [P12, P11; 0, I] [I, 0; P22, P21]^{-1} [u; y] = G [u; y],    (5.27)

where

    G = [P12 - P11 P21^{-1} P22,  P11 P21^{-1};  -P21^{-1} P22,  P21^{-1}].    (5.28)

In fact, G can also be obtained directly from Property 5.3 in that

    CSDr(G1, CSDl(G̃2, K)) = CSDr(G1 M, CSDl(G̃2 M, K))
                           = CSDr(G1 M, CSDl(I, K))
                           = CSDr(G1 M, K)
                           = CSDr(G, K),    (5.29)

where M = [I, 0; P22, P21]^{-1} = G̃2^{-1}.
Similarly, for the case of dim(z) = dim(u) and an invertible P12, there exists a left CSD matrix G̃ of [z; w] ↦ [u; y] such that CSDl(G̃, K) = LFTl(P, K). One has, from (5.26), as illustrated in Fig. 5.9, the following:

Fig. 5.9 LFT to left CSD [diagram: the SCC plant P with terminator K is transformed into the left CSD matrix G̃ of (5.32), terminated by K on the left]

    [z′; y′] = [P12, 0; P22, -I] [u; y] = [I, -P11; 0, -P21] [z; w]  ⇒  G2 [u; y] = G̃1 [z; w].    (5.30)

It can be seen that

    [u; y] = [P12, 0; P22, -I]^{-1} [I, -P11; 0, -P21] [z; w] = G̃ [z; w],    (5.31)

where

    G̃ = [P12^{-1},  -P12^{-1} P11;  P22 P12^{-1},  P21 - P22 P12^{-1} P11].    (5.32)

Here, G̃ can also be obtained directly from Property 5.4, giving

    CSDl(G̃1, CSDr(G2, K)) = CSDl(M̃ G̃1, CSDr(M̃ G2, K))
                           = CSDl(M̃ G̃1, CSDr(I, K))
                           = CSDl(M̃ G̃1, K)
                           = CSDl(G̃, K),    (5.33)

where M̃ = [P12, 0; P22, -I]^{-1} = G2^{-1}. The above are summarized in the following lemma.
Lemma 5.1 Given an LFT matrix P, there exists a right CSD matrix G such that CSDr(G, K) = LFTl(P, K) if P21 is invertible, or there exists a left CSD matrix G̃ such that CSDl(G̃, K) = LFTl(P, K) if P12 is invertible.
Furthermore, if both P21 and P12 are invertible, then it can be gathered that

    G G̃ = [P12, P11; 0, I] [I, 0; P22, P21]^{-1} [P12, 0; P22, -I]^{-1} [I, -P11; 0, -P21] = I.    (5.34)

Consequently,

    T = LFTl(P, K) = CSDr(G, K) = CSDl(G̃, K),    (5.35)

and conversely,

    K = CSDr(G̃, T) = CSDl(G, T).    (5.36)
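Lemma 5.1 together with (5.35)-(5.36) can be verified numerically. The Python sketch below (not part of the original text; the scalar P-blocks are arbitrary, with P12 and P21 nonzero so both "inverses" exist) builds G from (5.28) and G̃ from (5.32) and checks both directions of the equivalence:

```python
# Numerical illustration of Lemma 5.1 and (5.35)-(5.36) with scalar blocks.
P11, P12, P21, P22 = 1.0, 2.0, 0.5, 0.25
K = 0.4

# Lower LFT: T = P11 + P12*K*(1 - P22*K)^{-1}*P21.
T = P11 + P12 * K * P21 / (1 - P22 * K)

# Right CSD matrix (5.28) and its transformation.
G = ((P12 - P11 * P22 / P21, P11 / P21),
     (-P22 / P21, 1 / P21))
csdr = (G[0][1] + G[0][0] * K) / (G[1][1] + G[1][0] * K)

# Left CSD matrix (5.32) and its transformation.
Gt = ((1 / P12, -P11 / P12),
      (P22 / P12, P21 - P22 * P11 / P12))
csdl = -(Gt[0][1] - K * Gt[1][1]) / (Gt[0][0] - K * Gt[1][0])

print(abs(T - csdr) < 1e-12, abs(T - csdl) < 1e-12)  # True True

# Conversely, (5.36): K is recovered from T as K = CSDl(G, T).
K_back = -(G[0][1] - T * G[1][1]) / (G[0][0] - T * G[1][0])
print(abs(K_back - K) < 1e-12)  # True
```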

5.4 Transformation from LFT to Cascaded CSDs

For the transformation from LFT to CSD as described in the previous section, there is a condition that P12 or P21 should be invertible. This condition can, however, be relaxed, as explained in this section. For any standard control configuration (SCC) plant P of [w; u] ↦ [z; y] with u = Ky, as depicted in the left-side diagram of Fig. 5.8, it can be represented by two coupling CSDs as stated in the following. From

    [z; y] = [P11, P12; P21, P22] [w; u],    (5.37)

we derive

    z = P12 u + P11 w   and   y = P22 u + P21 w.    (5.38)

The LFT matrix P can then be decomposed as

    [z; w] = [P12, P11; 0, I] [u; w]   and    (5.39)

    [u; y] = [I, 0; P22, P21] [u; w],    (5.40)

which in turn can be represented by a right CSD associated with a left CSD, as illustrated in the right-side diagram of Fig. 5.10. Since u = Ky, one has, from the lower part of (5.40),

Fig. 5.10 Right CSD associated with left CSD [diagram: the SCC plant P with u = Ky is redrawn as the right CSD G1 = [P12, P11; 0, I] cascaded with the left CSD G̃2 = [I, 0; P22, P21], the latter terminated by K]

    
    [K; I] y = [I, 0; P22, P21] [u; w],    (5.41)

and therefore,

    0 = [I  -K] [K; I] y = [I  -K] [I, 0; P22, P21] [u; w] = [I - K P22,  -K P21] [u; w].    (5.42)

Consequently, this gives

    u = (I - K P22)^{-1} K P21 w.    (5.43)

Furthermore, from (5.39), one has

    [z; w] = [P12, P11; 0, I] [u; w] = [P11 + P12 (I - K P22)^{-1} K P21; I] w.    (5.44)

This reveals that the transfer function w ↦ z can be represented by a right CSD matrix associated with a left CSD matrix; below, LFTl stands for the lower linear fractional transformation as previously defined in Chap. 4:

    z = LFTl([P11, P12; P21, P22], K) w = CSDr([P12, P11; 0, I], CSDl([I, 0; P22, P21], K)) w.    (5.45)

Example 5.3 Consider the SCC plant given in (5.37) and use MATLAB to determine its corresponding right CSD associated with a left CSD representation.
The following MATLAB code is an example for readers’ reference.

clc;
clear;
disp('SCC_2_(CSDr associated with CSDl) method')
syms P11 P12 P21 P22 K S;
syms p11 p12 p21 p22 k s;
P11 = input('z => u (P11) ?');
P12 = input('z => y (P12) ?');
P21 = input('w => u (P21) ?');
P22 = input('w => y (P22) ?');
K = input('Please input K:');
G1_11 = P12;
G1_12 = P11;
G1_21 = zeros(size(P12));
G1_22 = eye(size(P11));
G1 = [G1_11 G1_12; G1_21 G1_22]   % upper triangular matrix
G2_11 = eye(size(P22));
G2_12 = zeros(size(P21));
G2_21 = P22;
G2_22 = P21;
G2 = [G2_11 G2_12; G2_21 G2_22]   % lower triangular matrix
if (G2_11 - K*G2_21 == 0)
    disp('The transfer function is singular !')
    disp('Please press Ctrl-C !')
    pause
else
    Transfer_function_w_to_u = simplify(-inv(G2_11 - K*G2_21)*(G2_12 - K*G2_22))
    k = simplify(-inv(G2_11 - K*G2_21)*(G2_12 - K*G2_22));
    Transfer_function_w_to_z = simplify((G1_11*k + G1_12)*inv(G1_21*k + G1_22))
end

Fig. 5.11 Left CSD associated with right CSD [diagram: the SCC plant P with u = Ky is redrawn as the left CSD G̃1 = [I, -P11; 0, -P21] cascaded with the right CSD G2 = [P12, 0; P22, -I], the latter terminated by K, with intermediate signals z′ and y′]

Likewise, (5.37) also gives

    z - P11 w = P12 u   and   -P21 w = P22 u - y.    (5.46)

Let

    [z′; y′] = [I, -P11; 0, -P21] [z; w] = [P12, 0; P22, -I] [u; y].    (5.47)

Then, one can obtain the transfer function w ↦ z, which is represented by a left CSD matrix associated with a right CSD matrix with the latter being terminated by K at the right end, as illustrated in Fig. 5.11, where z′ and y′ are the intermediate signals. From their definition in (5.47), it is clear that the dimensions of z′ and y′ are the same as those of z and y, respectively. It is a common usage in this book that a signal with the prime symbol has the same dimension as the original signal, although, of course, they are different signals.
From (5.47) and u = Ky, one has

    [z′; y′] = [P12, 0; P22, -I] [K; I] y = [P12 K; P22 K - I] y,    (5.48)

and hence,

    z′ = CSDr([P12, 0; P22, -I], K) y′ = -P12 K (I - P22 K)^{-1} y′.    (5.49)

Let

    S = CSDr([P12, 0; P22, -I], K).    (5.50)

Then (5.47) offers

    [z′; y′] = [S; I] y′ = [I, -P11; 0, -P21] [z; w],    (5.51)

and thereby,

    0 = [I  -S] [S; I] y′ = [I  -S] [I, -P11; 0, -P21] [z; w].    (5.52)

This shows that

    0 = [I   -(P11 + P12 K (I - P22 K)^{-1} P21)] [z; w]    (5.53)

and

    z = [P11 + P12 K (I - P22 K)^{-1} P21] w = LFTl(P, K) w.    (5.54)

One concludes that the transfer function w ↦ z can be represented by a left CSD matrix associated with a right CSD matrix as

    z = LFTl([P11, P12; P21, P22], K) w = CSDl([I, -P11; 0, -P21], CSDr([P12, 0; P22, -I], K)) w.    (5.55)

Example 5.4 Consider the SCC plant given in (5.37) and use MATLAB to determine its corresponding left CSD associated with a right CSD representation.
The following MATLAB code is an example for readers’ reference.

clc; clear;
disp('SCC_2_(CSDl associated with CSDr) method')
syms P11 P12 P21 P22 K S;
syms p11 p12 p21 p22 k s;
P11 = input('z => u (P11) ?');
P12 = input('z => y (P12) ?');
P21 = input('w => u (P21) ?');
P22 = input('w => y (P22) ?');
K = input('Please input K:');
G1_11 = eye(size(P11));
G1_12 = -P11;
G1_21 = zeros(size(P21));
G1_22 = -P21;
G1 = [G1_11 G1_12; G1_21 G1_22]   % upper triangular matrix
G2_11 = P12;
G2_12 = zeros(size(P12));
G2_21 = P22;
G2_22 = -eye(size(P22));
G2 = [G2_11 G2_12; G2_21 G2_22]   % lower triangular matrix
if (G2_21*K + G2_22 == 0)
    disp('The transfer function is singular !')
    disp('Please press Ctrl-C !')
    pause
else
    Transfer_function_ye_to_ze = simplify((G2_11*K + G2_12)*inv(G2_21*K + G2_22))
    k = simplify((G2_11*K + G2_12)*inv(G2_21*K + G2_22));
    Transfer_function_w_to_z = simplify(-inv(G1_11 - k*G1_21)*(G1_12 - k*G1_22))
end

5.5 Transformation from CSD to LFT Matrix

Conversely, the aforementioned techniques can be employed to characterize the transformation from a given right or left CSD matrix to its corresponding LFT matrix. First, consider the transformation from a right CSD matrix G of [u; y] ↦ [z; w] to the LFT matrix P of [w; u] ↦ [z; y] in Fig. 5.12, where G22 is assumed invertible.

Fig. 5.12 Right CSD to LFT [diagram: the right CSD matrix G terminated by K is transformed into the LFT matrix P of (5.59), terminated by K]

From

    [z; w] = [G11, G12; G21, G22] [u; y],    (5.56)

one has

    [z; y] = [G11, G12; 0, I] [u; y]   and   [w; u] = [G21, G22; I, 0] [u; y];    (5.57)

hence,

    [z; y] = [G11, G12; 0, I] [u; y] = [G11, G12; 0, I] [G21, G22; I, 0]^{-1} [w; u] = [P11, P12; P21, P22] [w; u],    (5.58)

where

    P = [P11, P12; P21, P22] = [G12 G22^{-1},  G11 - G12 G22^{-1} G21;  G22^{-1},  -G22^{-1} G21].    (5.59)

It can also be verified by direct calculations that

    LFTl(P, K) = P11 + P12 K (I - P22 K)^{-1} P21
               = G12 G22^{-1} + (G11 - G12 G22^{-1} G21) K (I + G22^{-1} G21 K)^{-1} G22^{-1}
               = (G12 + G11 K)(G22 + G21 K)^{-1}
               = CSDr(G, K).    (5.60)
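The identity (5.60) can be confirmed numerically. The Python sketch below (not from the text; scalar values are arbitrary, with G22 nonzero) constructs P from (5.59) and compares the lower LFT with the right CSD transformation:

```python
# Check of (5.59)-(5.60): build P from a scalar right CSD matrix G and
# verify that LFTl(P, K) equals CSDr(G, K). Arbitrary test values.
G11, G12, G21, G22 = 1.5, 2.0, -0.5, 2.0
K = 0.4

csdr = (G12 + G11 * K) / (G22 + G21 * K)

# P from (5.59).
P11 = G12 / G22
P12 = G11 - G12 * G21 / G22
P21 = 1 / G22
P22 = -G21 / G22

lft = P11 + P12 * K * (1 - P22 * K) ** -1 * P21
print(abs(lft - csdr) < 1e-12)  # True
```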

Fig. 5.13 Left CSD to LFT [diagram: the left CSD matrix G̃ terminated by K is transformed into the LFT matrix P of (5.63), terminated by K]

Similarly, one can also derive the transformation from a left CSD matrix G̃ of [z; w] ↦ [u; y] to its LFT matrix P of [w; u] ↦ [z; y], as shown in Fig. 5.13, where G̃11 is invertible. From

    [u; y] = [G̃11, G̃12; G̃21, G̃22] [z; w],    (5.61)

one has

    [z; y] = [I, 0; G̃21, G̃22] [z; w] = [I, 0; G̃21, G̃22] [0, I; G̃11, G̃12]^{-1} [w; u] = [P11, P12; P21, P22] [w; u],    (5.62)

where

    [P11, P12; P21, P22] = [-G̃11^{-1} G̃12,  G̃11^{-1};  G̃22 - G̃21 G̃11^{-1} G̃12,  G̃21 G̃11^{-1}].    (5.63)

This reveals that there exists a transformation from a left CSD G̃ to the LFT matrix P such that LFTl(P, K) = CSDl(G̃, K). Similar to (5.60), one can show that

    LFTl(P, K) = P11 + P12 (I - K P22)^{-1} K P21
               = -G̃11^{-1} G̃12 + G̃11^{-1} (I - K G̃21 G̃11^{-1})^{-1} K (G̃22 - G̃21 G̃11^{-1} G̃12)
               = -(G̃11 - K G̃21)^{-1} (G̃12 - K G̃22)
               = CSDl(G̃, K).    (5.64)

Fig. 5.14 Simple RC circuit [diagram: port (V1, I1), series capacitor C, shunt resistor R, and load RL at port (V2, I2)]

Fig. 5.15 Right CSD to LFT: (a) right CSD representation (5.66) terminated by RL and (b) LFT representation (5.65) terminated by RL

Example 5.5 For the RC circuit depicted in Fig. 5.14, determine a right CSD matrix G of [V2; I2] ↦ [V1; I1] and its corresponding LFT matrix P of [I1; V2] ↦ [V1; I2] such that LFTl(P, RL) = CSDr(G, RL).
Utilizing Kirchhoff's laws, Fig. 5.14 gives V2 = I2 RL and

    V1 = (1/(Cs)) I1 + V2,  I2 = I1 - (1/R) V2  ⇒  [V1; I2] = [P11, P12; P21, P22] [I1; V2] = [1/(Cs), 1; 1, -1/R] [I1; V2].    (5.65)

Rearranging (5.65) yields

    V1 = ((RCs + 1)/(RCs)) V2 + (1/(Cs)) I2,  I1 = (1/R) V2 + I2
    ⇒  [V1; I1] = [(RCs + 1)/(RCs), 1/(Cs); 1/R, 1] [V2; I2].    (5.66)

Both LFT and CSD representations are depicted in Fig. 5.15. Alternatively, the right
CSD matrix G can also be found directly by the transmission parameter matrices
described in Chap. 3 as

Fig. 5.16 RC circuit [diagram: source resistance Rs at port (V1, I1), shunt resistor R, series capacitor C, and port (V2, I2); the two-port is described by the left CSD matrix G̃]

    [V1; I1] = [1, 1/(Cs); 0, 1] [1, 0; 1/R, 1] [V2; I2] = [(RCs + 1)/(RCs), 1/(Cs); 1/R, 1] [V2; I2] = [G11, G12; G21, G22] [V2; I2].    (5.67)

Furthermore, one can find the LFT matrix P using (5.59) with data from (5.66) as

    P = [G12 G22^{-1},  G11 - G12 G22^{-1} G21;  G22^{-1},  -G22^{-1} G21] = [1/(Cs), 1; 1, -1/R].    (5.68)

Now consider the circuit with a load of RL. The equivalent impedance Z = V1/I1 can be derived by Kirchhoff's laws from Fig. 5.14 as

    Z = V1/I1 = 1/(Cs) + R RL/(R + RL).    (5.69)

One can obtain the same result via the CSD or the LFT approach, respectively, as

    Z = V1/I1 = CSDr(G, RL) = 1/(Cs) + R RL/(R + RL),    (5.70)

    Z = V1/I1 = LFTl(P, RL) = 1/(Cs) + RL (1 + RL/R)^{-1} = 1/(Cs) + R RL/(R + RL).    (5.71)

Clearly, (5.69), (5.70), and (5.71) are the same. This shows that

    Z = V1/I1 = LFTl(P, RL) = CSDr(G, RL).    (5.72)
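The circuit identity (5.72) can be checked at any single frequency. The Python sketch below (not from the text; the component values R = 1 kΩ, C = 1 µF, RL = 2 kΩ and the test frequency are assumed for illustration) evaluates (5.69) and the right CSD transformation of (5.66) at s = jω:

```python
# Numerical check of Example 5.5 at one frequency (assumed values).
R, C, RL = 1e3, 1e-6, 2e3
s = 1j * 1000.0

# Closed-form impedance (5.69).
Z = 1 / (C * s) + R * RL / (R + RL)

# Via the right CSD matrix (5.66) terminated by RL, i.e. Z = CSDr(G, RL).
G11 = (R * C * s + 1) / (R * C * s)
G12 = 1 / (C * s)
G21 = 1 / R
G22 = 1.0
Z_csd = (G12 + G11 * RL) / (G22 + G21 * RL)

print(abs(Z - Z_csd) < 1e-9)  # True
```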

Example 5.6 For the RC circuit depicted in Fig. 5.16, determine a left CSD matrix G̃ of [V2; I2] ↦ [V1; I1] and its corresponding LFT matrix P of [I2; V1] ↦ [V2; I1] such that LFTl(P, Rs) = CSDl(G̃, Rs).

Similarly, by Kirchhoff's laws, one has V1 = Rs I1 and

    V2 = (1/(Cs)) I2 + V1,  I1 = I2 - (1/R) V1  ⇒  [V2; I1] = [P11, P12; P21, P22] [I2; V1] = [1/(Cs), 1; 1, -1/R] [I2; V1].    (5.73)

Rearranging (5.73) yields

    V1 = V2 - (1/(Cs)) I2,  I1 = -(1/R) V2 + ((RCs + 1)/(RCs)) I2
    ⇒  [V1; I1] = [G̃11, G̃12; G̃21, G̃22] [V2; I2] = [1, -1/(Cs); -1/R, (RCs + 1)/(RCs)] [V2; I2].    (5.74)

As in Example 5.5, the left CSD matrix G̃ can also be found by the transmission parameter matrices as

    [V1; I1] = [1, 0; -1/R, 1] [1, -1/(Cs); 0, 1] [V2; I2] = [1, -1/(Cs); -1/R, (RCs + 1)/(RCs)] [V2; I2].    (5.75)
   
Similarly, the LFT matrix P of [I2; V1] ↦ [V2; I1] can be obtained from (5.75) such that

    P = [-G̃11^{-1} G̃12,  G̃11^{-1};  G̃22 - G̃21 G̃11^{-1} G̃12,  G̃21 G̃11^{-1}] = [1/(Cs), 1; 1, -1/R].    (5.76)

The LFT representation is depicted in Fig. 5.17. With the termination V1 = Rs I1, the equivalent impedance Z = V2/I2 can be derived by Kirchhoff's laws as

    Z = R ∥ Rs + 1/(Cs) = R Rs/(R + Rs) + 1/(Cs).    (5.77)

One can also obtain the same result via the CSD approach as

    Z = V2/I2 = CSDl(G̃, Rs) = 1/(Cs) + R Rs/(R + Rs),    (5.78)

Fig. 5.17 Left CSD to LFT: (a) left CSD representation (5.74) terminated by Rs and (b) LFT representation (5.73) terminated by Rs


and, via the LFT formula,

    LFTl(P, Rs) = 1/(Cs) + Rs (1 + Rs/R)^{-1} = 1/(Cs) + R Rs/(R + Rs).    (5.79)

Hence, LFTl(P, Rs) = CSDl(G̃, Rs).

5.6 Applications of CSDs in State-Space Realizations

The applications of CSDs (e.g., to state-space realizations, inversions, and similarity transformations) are useful and convenient for several control manipulations. Herein, consider a linear system y = Pu, where the state-space realization is (A, B, C, D). That is,

    [ẋ; y] = [A, B; C, D] [x; u],    (5.80)

and

    P(s) = LFTu([A, B; C, D], I/s).    (5.81)

Note that it has been assumed that all initial states are zero. The state-space realization of P(s) can be represented by using a right CSD matrix associated with a left CSD matrix. From (5.80), one has

    [y; u] = [C, D; 0, I] [x; u]   and   [x; ẋ] = [I, 0; A, B] [x; u],    (5.82)

Fig. 5.18 Upper LFT to CSDr ∘ CSDl [diagram: the realization (5.80) with integrator I/s is redrawn as the right CSD [C, D; 0, I] cascaded with the left CSD [I, 0; A, B] terminated by I/s; the terminated left CSD realizes Φ(s)]

as illustrated in Fig. 5.18. Then, the transfer function u ↦ x is obtained by

    Φ(s) = CSDl([I, 0; A, B], I/s) = (sI - A)^{-1} B.    (5.83)

One can therefore have

    y = CSDr([C, D; 0, I], Φ(s)) u = (D + C (sI - A)^{-1} B) u.    (5.84)

This shows that, using a right CSD associated with a left CSD,

    CSDr([C, D; 0, I], CSDl([I, 0; A, B], I/s)) = LFTu([A, B; C, D], I/s) = D + C (sI - A)^{-1} B.    (5.85)
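The CSD factorization (5.83)-(5.85) of a state-space realization can be checked at a test frequency. The Python sketch below (not from the text; the scalar realization (a, b, c, d) and the frequency are arbitrary) evaluates the inner left CSD and the outer right CSD and compares the result with D + C(sI - A)^{-1}B:

```python
# Scalar check of (5.83)-(5.85) at one test frequency. The realization
# (a, b, c, d) and s0 are arbitrary values with a stable pole a < 0.
a, b, c, d = -2.0, 1.0, 3.0, 0.5
s0 = 1j * 1.5

# Inner left CSD (5.83): Phi(s) = CSDl([1, 0; a, b], 1/s) = b/(s - a).
K = 1 / s0
phi = -(0.0 - K * b) / (1.0 - K * a)

# Outer right CSD (5.84): y/u = CSDr([c, d; 0, 1], Phi) = d + c*Phi.
tf_csd = (d + c * phi) / (1.0 + 0.0 * phi)

tf_direct = d + c * b / (s0 - a)
print(abs(tf_csd - tf_direct) < 1e-12)  # True
```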

One can now consider the state feedback case. Let u = Fx + Wu′, where W is nonsingular. Then,

    [x; u] = [I, 0; F, W] [x; u′].    (5.86)

Fig. 5.19 CSD representations [diagram: the factors of Fig. 5.18 post-multiplied by [I, 0; F, W] on both the top and bottom paths]

Fig. 5.20 CSD representations [diagram: top factor [C + DF, DW; F, W], bottom factor [I, 0; A + BF, BW] terminated by I/s]

and

    [C, D; 0, I] [I, 0; F, W] = [C + DF, DW; F, W],   [I, 0; A, B] [I, 0; F, W] = [I, 0; A + BF, BW].    (5.87)

Note that by Property 5.3,

    CSDr([I, 0; F, W], CSDl([I, 0; F, W], Φ(s))) = Φ(s).    (5.88)

This concludes that the representations of Figs. 5.18 and 5.19 are equivalent, i.e., multiplying any [I, 0; F, W] on the right of both the top and bottom paths does not change the transfer function of u ↦ y. Hence, Fig. 5.19 is equivalent to Fig. 5.20. Then, the block diagram of Fig. 5.19 gives the state-space realization of Fig. 5.21.
Furthermore, from (5.80), one has

Fig. 5.21 State-space control block diagram [diagram: u′ through W, B, the integrator I/s, and C to y, with direct feedthrough D]

    -Cx = -y + Du   and   -Ax + ẋ = Bu,    (5.89)

or

    [-I, D; 0, B] [y; u] = [-C, 0; -A, I] [x; ẋ].    (5.90)

A left CSD associated with a right CSD is then given as illustrated in Fig. 5.22. The transfer function of u ↦ y is then given by

    y = CSDl([-I, D; 0, B], CSDr([-C, 0; -A, I], I/s)) u = (D + C (sI - A)^{-1} B) u.    (5.91)
(5.91)

From Fig. 5.23, one can obtain Fig. 5.24. Similarly, Property 5.4 gives

    Ω(s) = CSDl([W̃, 0; H, I], CSDr([W̃, 0; H, I], Ω(s))).    (5.92)

This concludes that the representations of Figs. 5.22 and 5.24 are equivalent, i.e., multiplying any [W̃, 0; H, I] on the left of both the top and bottom paths does not change the transfer function of u ↦ y. The block diagram of Fig. 5.23 gives the state-space realization of Fig. 5.25.

Fig. 5.22 Upper LFT to CSDl ∘ CSDr [diagram: top factor [-I, D; 0, B], bottom factor [-C, 0; -A, I] terminated by I/s on the right; the terminated bottom factor realizes Ω(s)]

Fig. 5.23 CSD representations [diagram: the factors of Fig. 5.22 pre-multiplied by [W̃, 0; H, I] on both the top and bottom paths]

Fig. 5.24 CSD representations [diagram: top factor [-W̃, W̃D; -H, HD + B], bottom factor [-W̃C, 0; -(A + HC), I]]

Fig. 5.25 State-space control block diagram [diagram: u through B, the integrator 1/s, C, and W̃ to y, with feedthrough D]



Fig. 5.26 CSD connections [diagram: the factors of Fig. 5.19 with F = -D^{-1}C and W = D^{-1}, pre-multiplied by [0, I; I, 0] at the input terminal so that (y, u) becomes the external pair]

Fig. 5.27 CSD representations: (a) left CSD representation with top factor [-D^{-1}C, D^{-1}; 0, I] and bottom factor [I, 0; A - BD^{-1}C, BD^{-1}] terminated by I/s, and (b) LFT representation [A - BD^{-1}C, BD^{-1}; -D^{-1}C, D^{-1}]

   
Herein, [I, 0; F, W] in Fig. 5.19 and [W̃, 0; H, I] in Fig. 5.23 play important roles in the state-space representations of coprime factorizations, for which F and H can be chosen such that A + BF and A + HC are Hurwitz. Coprime factorization will be discussed in the next chapter. Apparently, as shown in Figs. 5.19 and 5.23, the coprime factorization in state-space form can easily be generated by CSD connections.

For a realization (A, B, C, D) with D nonsingular, the CSD representation is also applicable to determining the inversion u = P(s)^{-1} y. For this case, one can choose F = -D^{-1}C and W = D^{-1} in Fig. 5.19 and multiply by [0, I; I, 0] at the input terminal to obtain the CSD representation of [y; u] as depicted in Fig. 5.26. Further, rearranging Fig. 5.26 yields Fig. 5.27a.
Further, rearranging Fig. 5.26 will yield Fig. 5.27a.
One can transform the CSDs of Fig. 5.27a back to the LFT as shown in Fig. 5.27b.
Then, this gives that

Fig. 5.28 State-space representations [diagram: y through D^{-1} to u; u through B, the integrator I/s, and C, with feedback -D^{-1}C]


    [ẋ; u] = [A - BD^{-1}C, BD^{-1}; -D^{-1}C, D^{-1}] [x; y],    (5.93)

i.e., u = P(s)^{-1} y, where P(s)^{-1} has the state-space realization

    P(s)^{-1} = [A - BD^{-1}C, BD^{-1}; -D^{-1}C, D^{-1}].    (5.94)

One can verify that the state-space representation from y to u is given by Fig. 5.28.
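The inverse realization (5.93)-(5.94) can be verified numerically for a scalar system. The Python sketch below (not from the text; the realization values are arbitrary, with d nonzero) evaluates P(s) and the constructed P(s)^{-1} at a test frequency and checks that their product is 1:

```python
# Scalar check of the inversion formula (5.93)-(5.94): for (a, b, c, d)
# with d != 0, the realization (a - b*c/d, b/d, -c/d, 1/d) gives P(s)^{-1}.
a, b, c, d = -2.0, 1.0, 3.0, 0.5
s0 = 1j * 1.5

P = d + c * b / (s0 - a)

# Inverse realization per (5.94).
ai, bi, ci, di = a - b * c / d, b / d, -c / d, 1 / d
P_inv = di + ci * bi / (s0 - ai)

print(abs(P * P_inv - 1.0) < 1e-12)  # True
```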

5.7 An Application of CSDs to Similarity Transformations

The similarity transformation of a state-space realization can be represented by the CSD approach. Let

    x = T x′   and   ẋ = T ẋ′.    (5.95)

In further explication, one can use "s," the Laplace transform symbol, to represent the differentiation operation, i.e., x = (I/s) T ẋ′, with a slight abuse of notation. Figure 5.18 shows the LFT and CSD representations, respectively, in terms of state-space realization matrices. By inserting some CSDs, Fig. 5.18 is equivalent to Fig. 5.29. It can be found from Fig. 5.30 that the transfer function u ↦ x′ is given by

    Ψ(s) = CSDl([I, 0; T^{-1}AT, T^{-1}B], I/s) = (sI - T^{-1}AT)^{-1} T^{-1}B.    (5.96)

Therefore, this concludes that



Fig. 5.29 CSD representations [diagram: the factors of Fig. 5.18 with [T, 0; 0, I] and its inverse inserted on the top and bottom paths]

Fig. 5.30 CSD representations [diagram: top factor [CT, D; 0, I], bottom factor [I, 0; T^{-1}AT, T^{-1}B] terminated by I/s; the terminated bottom factor realizes Ψ(s)]
    y = CSDr([CT, D; 0, I], Ψ(s)) u = (D + C T Ψ(s)) u = (D + C T (sI - T^{-1}AT)^{-1} T^{-1}B) u = (D + C (sI - A)^{-1} B) u.    (5.97)
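The invariance stated in (5.96)-(5.97) can be checked numerically for a nontrivial (matrix) state. The Python sketch below (not from the text; the 2x2 realization and the transformation T are arbitrary) uses plain-Python 2x2 helpers to compare the transfer functions of the original and transformed realizations at a test frequency:

```python
# Check of (5.96)-(5.97): the transfer function is unchanged under x = T x'.

def inv2(M):
    """Inverse of a 2x2 matrix given as nested tuples."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

def mul2(A, B):
    """Product of two 2x2 matrices given as nested tuples."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def tf(A, B, C, D, s):
    """D + C (sI - A)^{-1} B for 2x2 A, 2x1 B, 1x2 C, scalar D."""
    R = inv2(((s - A[0][0], -A[0][1]), (-A[1][0], s - A[1][1])))
    v = (R[0][0] * B[0] + R[0][1] * B[1], R[1][0] * B[0] + R[1][1] * B[1])
    return D + C[0] * v[0] + C[1] * v[1]

A = ((0.0, 1.0), (-2.0, -3.0))
B = (0.0, 1.0)
C = (1.0, 0.5)
D = 0.2
T = ((1.0, 1.0), (0.0, 2.0))
s0 = 1j * 1.5

# Transformed realization (T^{-1} A T, T^{-1} B, C T, D).
Ti = inv2(T)
At = mul2(mul2(Ti, A), T)
Bt = (Ti[0][0] * B[0] + Ti[0][1] * B[1], Ti[1][0] * B[0] + Ti[1][1] * B[1])
Ct = (C[0] * T[0][0] + C[1] * T[1][0], C[0] * T[0][1] + C[1] * T[1][1])

print(abs(tf(A, B, C, D, s0) - tf(At, Bt, Ct, D, s0)) < 1e-9)  # True
```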

5.8 State-Space Formulae of CSD Matrix Transformed from LFT Matrix

Given an LFT matrix P(s) with proper dynamics,

    P(s) = [A, B1, B2; C1, D11, D12; C2, D21, D22],    (5.98)

i.e., ẋ = Ax + B1 w + B2 u, z = C1 x + D11 w + D12 u, and y = C2 x + D21 w + D22 u. Here, the solid lines in such a matrix indicate that it is a compact expression for a transfer function matrix, while the dotted lines indicate the usual matrix partition for the sake of clarity. From (5.98), one can derive

    [ẋ; z; w] = [A, B1, B2; C1, D11, D12; 0, I, 0] [x; w; u],   [x; u; y] = [I, 0, 0; 0, 0, I; C2, D21, D22] [x; w; u].    (5.99)

Assume that D21 is invertible. Then,

    [ẋ; z; w] = [A, B1, B2; C1, D11, D12; 0, I, 0] [I, 0, 0; 0, 0, I; C2, D21, D22]^{-1} [x; u; y].    (5.100)

One then obtains

    [ẋ; z; w] = [A - B1 D21^{-1} C2, B2 - B1 D21^{-1} D22, B1 D21^{-1};
                 C1 - D11 D21^{-1} C2, D12 - D11 D21^{-1} D22, D11 D21^{-1};
                 -D21^{-1} C2, -D21^{-1} D22, D21^{-1}] [x; u; y].    (5.101)
   
Therefore, for the state-space representation of [z; w] = G(s) [u; y], one has

    G(s) = [A - B1 D21^{-1} C2, B2 - B1 D21^{-1} D22, B1 D21^{-1};
            C1 - D11 D21^{-1} C2, D12 - D11 D21^{-1} D22, D11 D21^{-1};
            -D21^{-1} C2, -D21^{-1} D22, D21^{-1}].    (5.102)

Note that if P21 is strictly proper, then the state-space representation of G does not exist.
Dually, from

    [ẋ; u; y] = [A, B1, B2; 0, 0, I; C2, D21, D22] [x; w; u],   [x; z; w] = [I, 0, 0; C1, D11, D12; 0, I, 0] [x; w; u],    (5.103)

Fig. 5.31 CSDr ∘ CSDl and CSDl ∘ CSDr: (a) right CSD associated with left CSD (top factor G1 = [P12, P11; 0, I], bottom factor G̃2 = [I, 0; P22, P21] terminated by K) and (b) left CSD associated with right CSD (top factor G̃1 = [I, -P11; 0, -P21], bottom factor G2 = [P12, 0; P22, -I] terminated by K)

one has

    [ẋ; u; y] = [A, B1, B2; 0, 0, I; C2, D21, D22] [I, 0, 0; C1, D11, D12; 0, I, 0]^{-1} [x; z; w].    (5.104)

Here, D12 is assumed to be invertible. One then obtains

    [ẋ; u; y] = [A - B2 D12^{-1} C1, B2 D12^{-1}, B1 - B2 D12^{-1} D11;
                 -D12^{-1} C1, D12^{-1}, -D12^{-1} D11;
                 C2 - D22 D12^{-1} C1, D22 D12^{-1}, D21 - D22 D12^{-1} D11] [x; z; w].    (5.105)
   
Hence, for the state-space representation of [u; y] = G̃(s) [z; w], one has

    G̃(s) = [A - B2 D12^{-1} C1, B2 D12^{-1}, B1 - B2 D12^{-1} D11;
            -D12^{-1} C1, D12^{-1}, -D12^{-1} D11;
            C2 - D22 D12^{-1} C1, D22 D12^{-1}, D21 - D22 D12^{-1} D11].    (5.106)

Note that if P12 is strictly proper, then the state-space representation of G̃ does not exist.
In Sect. 5.4, it was shown that a general LFT system can be transformed into a right CSD G1(s) associated with a left CSD G̃2(s), or into a left CSD G̃1(s) associated with a right CSD G2(s). For the case of Fig. 5.31a, where a right CSD G1(s) is associated with a left CSD G̃2(s), the corresponding state-space realizations can be obtained, assuming P(s) is in the state-space form of (5.98), as

    G1(s) = [A, B2, B1; C1, D12, D11; 0, 0, I],   G̃2(s) = [A, B2, B1; 0, I, 0; C2, D22, D21].    (5.107)

For the case of Fig. 5.31b, where a left CSD G̃1(s) is associated with a right CSD G2(s), the corresponding state-space realizations can be obtained as

    G̃1(s) = [A, 0, B1; -C1, I, -D11; -C2, 0, -D21],   G2(s) = [A, B2, 0; C1, D12, 0; C2, D22, -I].    (5.108)

This chapter introduces the definitions of the CSD and its manipulations. These fundamentals are essential for the determination of a robust controller, which will be presented in the following chapters. The structures of CSDr ∘ CSDl and CSDl ∘ CSDr, which circumvent the difficulty of inversion, were first investigated by the authors of this book and coresearchers. These unique structures offer a unified approach for the robust controller synthesis problem, which will be discussed in detail later in the book.

5.9 State-Space Formulae of LFT Matrix Transformed from CSD Matrix

In the following, the state-space formulae for the transformation between CSD and
LFT will be discussed. Let
$$\begin{bmatrix} z \\ w \end{bmatrix} = \begin{bmatrix} G_{11}(s) & G_{12}(s) \\ G_{21}(s) & G_{22}(s) \end{bmatrix}\begin{bmatrix} u \\ y \end{bmatrix} \quad\text{and}\quad u = Ky, \tag{5.109}$$

where

$$G(s) = \left[\begin{array}{c|cc} A & B_1 & B_2 \\ \hline C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & D_{22} \end{array}\right]. \tag{5.110}$$

Assume that D22 is invertible. The problem here is to find a state-space realization
of P(s) such that

$$z = \mathrm{CSD}_r(G, K)\,w = \mathrm{LFT}_l(P, K)\,w. \tag{5.111}$$


132 5 Chain Scattering Descriptions

From (5.110), one has

$$\begin{bmatrix} \dot x \\ z \\ w \end{bmatrix} = \begin{bmatrix} A & B_1 & B_2 \\ C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & D_{22} \end{bmatrix}\begin{bmatrix} x \\ u \\ y \end{bmatrix}. \tag{5.112}$$

Rearranging the rows according to the required input-output relationship gives

$$\begin{bmatrix} \dot x \\ z \\ y \end{bmatrix} = \begin{bmatrix} A & B_1 & B_2 \\ C_1 & D_{11} & D_{12} \\ 0 & 0 & I \end{bmatrix}\begin{bmatrix} x \\ u \\ y \end{bmatrix}, \qquad \begin{bmatrix} x \\ w \\ u \end{bmatrix} = \begin{bmatrix} I & 0 & 0 \\ C_2 & D_{21} & D_{22} \\ 0 & I & 0 \end{bmatrix}\begin{bmatrix} x \\ u \\ y \end{bmatrix}. \tag{5.113}$$

Hence, one has

$$\begin{bmatrix} \dot x \\ z \\ y \end{bmatrix} = \begin{bmatrix} A & B_1 & B_2 \\ C_1 & D_{11} & D_{12} \\ 0 & 0 & I \end{bmatrix}\begin{bmatrix} I & 0 & 0 \\ C_2 & D_{21} & D_{22} \\ 0 & I & 0 \end{bmatrix}^{-1}\begin{bmatrix} x \\ w \\ u \end{bmatrix} = \begin{bmatrix} A - B_2D_{22}^{-1}C_2 & B_2D_{22}^{-1} & B_1 - B_2D_{22}^{-1}D_{21} \\ C_1 - D_{12}D_{22}^{-1}C_2 & D_{12}D_{22}^{-1} & D_{11} - D_{12}D_{22}^{-1}D_{21} \\ -D_{22}^{-1}C_2 & D_{22}^{-1} & -D_{22}^{-1}D_{21} \end{bmatrix}\begin{bmatrix} x \\ w \\ u \end{bmatrix}. \tag{5.114}$$

Therefore,

$$P(s) = \left[\begin{array}{c|cc} A - B_2D_{22}^{-1}C_2 & B_2D_{22}^{-1} & B_1 - B_2D_{22}^{-1}D_{21} \\ \hline C_1 - D_{12}D_{22}^{-1}C_2 & D_{12}D_{22}^{-1} & D_{11} - D_{12}D_{22}^{-1}D_{21} \\ -D_{22}^{-1}C_2 & D_{22}^{-1} & -D_{22}^{-1}D_{21} \end{array}\right]. \tag{5.115}$$

Note that if G22 is strictly proper, then the state-space representation of P does not
exist.
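As a quick numerical sanity check of CSDr(G, K) = LFTl(P, K), the LFT plant can also be obtained at the transfer-matrix level as P = [G11 G12; 0 I][G21 G22; I 0]⁻¹ (this stacked-factor form is derived later, in Lemma 5.2). The sketch below uses hypothetical constant scalar-block data, so it is an illustration rather than the book's worked example:

```python
def inv2(M):
    # inverse of a 2x2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mul2(A, B):
    # product of two 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# hypothetical constant right-CSD data with the (2,2) block invertible
G11, G12, G21, G22 = 2.0, 1.0, 0.5, 1.0
K = 0.3

# P = [G11 G12; 0 I] * inv([G21 G22; I 0])
P = mul2([[G11, G12], [0.0, 1.0]], inv2([[G21, G22], [1.0, 0.0]]))

csd = (G11 * K + G12) / (G21 * K + G22)                    # z = CSDr(G, K) w
lft = P[0][0] + P[0][1] * K / (1 - P[1][1] * K) * P[1][0]  # z = LFTl(P, K) w
assert abs(csd - lft) < 1e-12
```

Both closed-loop maps agree exactly, confirming that the constant-matrix transformation preserves the w-to-z relation.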

In the dual case of left CSD, let


$$\begin{bmatrix} u \\ y \end{bmatrix} = \begin{bmatrix} \tilde G_{11}(s) & \tilde G_{12}(s) \\ \tilde G_{21}(s) & \tilde G_{22}(s) \end{bmatrix}\begin{bmatrix} z \\ w \end{bmatrix} \quad\text{and}\quad u = Ky, \tag{5.116}$$

where

$$\tilde G(s) = \left[\begin{array}{c|cc} A & B_1 & B_2 \\ \hline C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & D_{22} \end{array}\right]. \tag{5.117}$$

If D11 is invertible, the problem is to find a state-space realization of P(s) such that

$$z = \mathrm{CSD}_l\left(\tilde G, K\right)w = \mathrm{LFT}_l(P, K)\,w. \tag{5.118}$$

From (5.117), one has

$$\begin{bmatrix} \dot x \\ u \\ y \end{bmatrix} = \begin{bmatrix} A & B_1 & B_2 \\ C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & D_{22} \end{bmatrix}\begin{bmatrix} x \\ z \\ w \end{bmatrix}, \tag{5.119}$$

and hence,

$$\begin{bmatrix} \dot x \\ z \\ y \end{bmatrix} = \begin{bmatrix} A & B_1 & B_2 \\ 0 & I & 0 \\ C_2 & D_{21} & D_{22} \end{bmatrix}\begin{bmatrix} x \\ z \\ w \end{bmatrix}, \qquad \begin{bmatrix} x \\ w \\ u \end{bmatrix} = \begin{bmatrix} I & 0 & 0 \\ 0 & 0 & I \\ C_1 & D_{11} & D_{12} \end{bmatrix}\begin{bmatrix} x \\ z \\ w \end{bmatrix}. \tag{5.120}$$

This gives

$$\begin{bmatrix} \dot x \\ z \\ y \end{bmatrix} = \begin{bmatrix} A & B_1 & B_2 \\ 0 & I & 0 \\ C_2 & D_{21} & D_{22} \end{bmatrix}\begin{bmatrix} I & 0 & 0 \\ 0 & 0 & I \\ C_1 & D_{11} & D_{12} \end{bmatrix}^{-1}\begin{bmatrix} x \\ w \\ u \end{bmatrix}. \tag{5.121}$$

One then obtains

$$\begin{bmatrix} \dot x \\ z \\ y \end{bmatrix} = \begin{bmatrix} A - B_1D_{11}^{-1}C_1 & B_2 - B_1D_{11}^{-1}D_{12} & B_1D_{11}^{-1} \\ -D_{11}^{-1}C_1 & -D_{11}^{-1}D_{12} & D_{11}^{-1} \\ C_2 - D_{21}D_{11}^{-1}C_1 & D_{22} - D_{21}D_{11}^{-1}D_{12} & D_{21}D_{11}^{-1} \end{bmatrix}\begin{bmatrix} x \\ w \\ u \end{bmatrix}, \tag{5.122}$$

and therefore,

$$P(s) = \left[\begin{array}{c|cc} A - B_1D_{11}^{-1}C_1 & B_2 - B_1D_{11}^{-1}D_{12} & B_1D_{11}^{-1} \\ \hline -D_{11}^{-1}C_1 & -D_{11}^{-1}D_{12} & D_{11}^{-1} \\ C_2 - D_{21}D_{11}^{-1}C_1 & D_{22} - D_{21}D_{11}^{-1}D_{12} & D_{21}D_{11}^{-1} \end{array}\right]. \tag{5.123}$$

Note that if $\tilde G_{11}$ is strictly proper, then the state-space representation of P does not exist.
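A dual sanity check for the left CSD case: eliminating z from u = G̃11 z + G̃12 w and y = G̃21 z + G̃22 w under u = Ky gives z = (G̃11 − K G̃21)⁻¹(K G̃22 − G̃12) w, which must agree with LFTl(P, K). The stacked-factor expression for P used below is one consistent choice (an assumption of this sketch, with hypothetical scalar data and the G̃11 block invertible):

```python
def inv2(M):
    # inverse of a 2x2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mul2(A, B):
    # product of two 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# hypothetical constant left-CSD data
g11, g12, g21, g22 = 1.0, 0.5, 0.2, 1.0
K = 0.3

# z = CSDl(G~, K) w, obtained by eliminating z from u = Ky
csd = (K * g22 - g12) / (g11 - K * g21)

# one consistent stacked-factor choice: P = [g11 0; g21 -1]^-1 [-g12 1; -g22 0]
P = mul2(inv2([[g11, 0.0], [g21, -1.0]]), [[-g12, 1.0], [-g22, 0.0]])
lft = P[0][0] + P[0][1] * K / (1 - P[1][1] * K) * P[1][0]
assert abs(csd - lft) < 1e-12
```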

5.10 Star Connection

In this section, the relations between the LFT (SCC) and the CSD will be further investigated. The CSD originates from two-port networks and has benefits for cascading multiple systems. Unlike the CSDs, the interconnections of two LFT matrices, namely, the star product [3], look much more complicated in their representation. Figure 4.29 in Chap. 4 shows the star product of two LFT matrices, where
   
P11 P12 L11 L12
z D LFTl ; LFTl ;ˆ w: (5.124)
P21 P22 L21 L22

Next, how to transform the star product into its equivalent CSDs will be expounded.
Figure 4.29 can be converted into CSDs with the termination Φ connected at the right port, as shown in Fig. 5.32. When L12 is invertible, one can insert the identity factor $\begin{bmatrix} L_{12} & 0 \\ L_{22} & -I \end{bmatrix}^{-1}\begin{bmatrix} L_{12} & 0 \\ L_{22} & -I \end{bmatrix}$ into the configuration as in Fig. 5.33, which does not change the input/output relation.
input/output relation. In Fig. 5.33, the LFT matrix P is first expressed as a right CSD
followed by a left CSD, which in turn is connected to two left CSDs and another
two right CSDs. By rearranging the middle part of the CSDs, one can obtain

Fig. 5.32 CSD formulation of star product

Fig. 5.33 CSD formulation of star product

$$z = \mathrm{CSD}_r\!\left(\begin{bmatrix} P_{12} & P_{11} \\ 0 & I \end{bmatrix}, \mathrm{CSD}_l\!\left(\begin{bmatrix} I & 0 \\ P_{22} & P_{21} \end{bmatrix}, \mathrm{CSD}_l\!\left(\begin{bmatrix} I & -L_{11} \\ 0 & -L_{21} \end{bmatrix}, \mathrm{CSD}_l\!\left(\begin{bmatrix} L_{12} & 0 \\ L_{22} & -I \end{bmatrix}, \Phi\right)\right)\right)\right)w.$$

Controller K could be rewritten as

$$K = \mathrm{CSD}_l\left(\tilde\Pi, \Phi\right), \tag{5.125}$$

Fig. 5.34 Equivalent representations

where

$$\tilde\Pi = \begin{bmatrix} \tilde\Pi_{11} & \tilde\Pi_{12} \\ \tilde\Pi_{21} & \tilde\Pi_{22} \end{bmatrix} = \begin{bmatrix} L_{12}^{-1} & -L_{12}^{-1}L_{11} \\ L_{22}L_{12}^{-1} & L_{21} - L_{22}L_{12}^{-1}L_{11} \end{bmatrix}. \tag{5.126}$$

As can be seen in Fig. 5.34, the star product of the LFT can be formulated into CSDr-CSDl.
For the dual case, if L21 is invertible, one can convert Fig. 4.29 into CSDs with the termination Φ connected at the left port, as shown in Fig. 5.35. Then, by rearranging the middle CSDs, one can acquire Fig. 5.36. Similarly, controller K could be rewritten as

$$K = \mathrm{CSD}_r(\Pi, \Phi), \tag{5.127}$$

where

$$\Pi = \begin{bmatrix} \Pi_{11} & \Pi_{12} \\ \Pi_{21} & \Pi_{22} \end{bmatrix} = \begin{bmatrix} L_{12} - L_{11}L_{21}^{-1}L_{22} & L_{11}L_{21}^{-1} \\ -L_{21}^{-1}L_{22} & L_{21}^{-1} \end{bmatrix}. \tag{5.128}$$

As can be seen in Fig. 5.36, the star product of the LFT can be formulated into CSDl-CSDr. Evidently, Figs. 5.34 and 5.36 are equivalent.

5.11 J-Lossless and Dual J-Lossless Systems

The properties of lossless two-port networks from the viewpoint of power wave propagation were discussed in Chap. 3. The lossless (inner) systems in the described LFT system were defined in Chap. 4. In this section, the J-lossless and dual J-lossless systems will be further investigated for general control problems. They

Fig. 5.35 Equivalent CSD formulations

Fig. 5.36 Equivalent CSD formulations

represent the energy balance of the two-port networks between the left and right
ports. Consider the system as illustrated in Fig. 5.1; this means

$$\|z(j\omega)\|_2^2 - \|w(j\omega)\|_2^2 = \|u(j\omega)\|_2^2 - \|y(j\omega)\|_2^2. \tag{5.129}$$


Definition 5.1 Let $J_{n,k} = \begin{bmatrix} I_n & 0 \\ 0 & -I_k \end{bmatrix}$. A $(n_1+n_2)\times(k_1+k_2)$ right CSD matrix G(s) is called J-unitary if $G^\sim J_{n_1,n_2}G = J_{k_1,k_2}$, where $n_2 = k_2$. A J-unitary G is then called J-lossless if $G^*(s)J_{n_1,n_2}G(s) \le J_{k_1,k_2}$, $\forall\,\mathrm{Re}(s) \ge 0$. A $(n_1+n_2)\times(k_1+k_2)$ left CSD matrix $\tilde G(s)$ is called dual J-unitary if $\tilde GJ_{k_1,k_2}\tilde G^\sim = J_{n_1,n_2}$, where $n_1 = k_1$. A dual J-unitary $\tilde G$ is then called dual J-lossless if $\tilde G(s)J_{k_1,k_2}\tilde G^*(s) \le J_{n_1,n_2}$, $\forall\,\mathrm{Re}(s) \ge 0$.

Example 5.7 Verify that $G(s) = \begin{bmatrix} \frac{s-1}{s+1} & 0 \\ 0 & \frac{s+1}{s-1} \end{bmatrix}$ is J-unitary and J-lossless.
From

$$G^\sim(s)JG(s) = \begin{bmatrix} \dfrac{-s-1}{-s+1} & 0 \\ 0 & \dfrac{-s+1}{-s-1} \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} \dfrac{s-1}{s+1} & 0 \\ 0 & \dfrac{s+1}{s-1} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} = J, \tag{5.130}$$

it concludes that G(s) is a J-unitary system. From (5.130) and $s = \sigma + j\omega$, $\forall\,\sigma \ge 0$,

$$G^*(s)JG(s) = \begin{bmatrix} \dfrac{(\sigma-1)^2+\omega^2}{(\sigma+1)^2+\omega^2} & 0 \\ 0 & -\dfrac{(\sigma+1)^2+\omega^2}{(\sigma-1)^2+\omega^2} \end{bmatrix} \le \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}.$$

It concludes that G(s) is a J-lossless system. It should be noted that an inner (lossless) LFT matrix P(s) of $\begin{bmatrix} w \\ u \end{bmatrix} \mapsto \begin{bmatrix} z \\ y \end{bmatrix}$ must be stable; however, a J-lossless CSD matrix G(s) of $\begin{bmatrix} u \\ y \end{bmatrix} \mapsto \begin{bmatrix} z \\ w \end{bmatrix}$ is not always stable. It can be verified that the above J-lossless matrix G(s) is not a stable system; however, its LFT matrix P(s) is lossless (inner) and stable as

$$P(s) = \begin{bmatrix} \dfrac{s-1}{s+1} & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 0 & \dfrac{s+1}{s-1} \\ 1 & 0 \end{bmatrix}^{-1} = \begin{bmatrix} 0 & \dfrac{s-1}{s+1} \\ \dfrac{s-1}{s+1} & 0 \end{bmatrix}.$$
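The J-unitary and J-lossless properties of this diagonal G(s) can be checked pointwise in Python, reading $G^*(s)$ as the conjugate transpose at the sample point (a numerical sketch over a few sample points, not a proof):

```python
G11 = lambda s: (s - 1) / (s + 1)
G22 = lambda s: (s + 1) / (s - 1)

# J-unitary on the imaginary axis: the diagonal of G(jw)^* J G(jw) equals (1, -1)
for w in (0.0, 0.7, 3.0):
    s = 1j * w
    assert abs(abs(G11(s))**2 - 1) < 1e-12       # (1,1) entry of G^* J G
    assert abs(-abs(G22(s))**2 + 1) < 1e-12      # (2,2) entry equals -1

# J-lossless: G(s)^* J G(s) <= J pointwise in the open right half-plane
for s in (0.5 + 1j, 2.0 + 0j, 1 + 3j):
    assert abs(G11(s))**2 <= 1.0
    assert -abs(G22(s))**2 <= -1.0
```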

As mentioned in Sect. 4.6, the J-lossless and dual J-lossless properties are
counterparts of inner and co-inner. Hence, the relationship is maintained during
the transformation between LFT and CSD. These properties are discussed in the
following.

Lemma 5.2 A right CSD matrix G(s) is J-lossless if and only if its corresponding LFT matrix P(s) which makes CSDr(G, K) = LFTl(P, K) is inner.

Proof Let $G(s) = \begin{bmatrix} G_{11}(s) & G_{12}(s) \\ G_{21}(s) & G_{22}(s) \end{bmatrix}$ be a right CSD matrix of $\begin{bmatrix} u \\ y \end{bmatrix} \mapsto \begin{bmatrix} z \\ w \end{bmatrix}$ with dim(y) = dim(w). By (5.58), one can obtain

$$P = \begin{bmatrix} G_{11} & G_{12} \\ 0 & I \end{bmatrix}\begin{bmatrix} G_{21} & G_{22} \\ I & 0 \end{bmatrix}^{-1} =: G_1G_2^{-1}, \tag{5.131}$$

which makes CSDr(G, K) = LFTl(P, K). Then, one has

$$G^\sim J_{z,w}G = \begin{bmatrix} G_{11}^\sim & G_{21}^\sim \\ G_{12}^\sim & G_{22}^\sim \end{bmatrix}\begin{bmatrix} I & 0 \\ 0 & -I \end{bmatrix}\begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix} = \begin{bmatrix} G_{11}^\sim G_{11} - G_{21}^\sim G_{21} & G_{11}^\sim G_{12} - G_{21}^\sim G_{22} \\ G_{12}^\sim G_{11} - G_{22}^\sim G_{21} & G_{12}^\sim G_{12} - G_{22}^\sim G_{22} \end{bmatrix},$$

and on the other hand,

$$G_2^\sim G_2 - G_1^\sim G_1 = \begin{bmatrix} G_{21}^\sim & I \\ G_{22}^\sim & 0 \end{bmatrix}\begin{bmatrix} G_{21} & G_{22} \\ I & 0 \end{bmatrix} - \begin{bmatrix} G_{11}^\sim & 0 \\ G_{12}^\sim & I \end{bmatrix}\begin{bmatrix} G_{11} & G_{12} \\ 0 & I \end{bmatrix} = \begin{bmatrix} G_{21}^\sim G_{21} - G_{11}^\sim G_{11} + I & G_{21}^\sim G_{22} - G_{11}^\sim G_{12} \\ G_{22}^\sim G_{21} - G_{12}^\sim G_{11} & G_{22}^\sim G_{22} - G_{12}^\sim G_{12} - I \end{bmatrix} = J_{u,y} - G^\sim J_{z,w}G.$$

This implies that $G^\sim J_{z,w}G = J_{u,y}$ if and only if $G_2^\sim G_2 = G_1^\sim G_1$ and, further,

$$P^\sim P = \left(G_1G_2^{-1}\right)^\sim\left(G_1G_2^{-1}\right) = G_2^{-\sim}G_1^\sim G_1G_2^{-1} = G_2^{-\sim}G_2^\sim G_2G_2^{-1} = I. \tag{5.132}$$

This concludes that $P^\sim P = I$ if and only if $G^\sim J_{z,w}G = J_{u,y}$. That is, G(s) is J-unitary if and only if P(s) is all-pass. Furthermore, when G(s) is J-lossless, one has $G_1^*(s)G_1(s) \le G_2^*(s)G_2(s)$, $\forall\,\mathrm{Re}(s) \ge 0$, which implies that

$$P^*(s)P(s) = \left(G_1(s)G_2^{-1}(s)\right)^*G_1(s)G_2^{-1}(s) = G_2^{-*}(s)G_1^*(s)G_1(s)G_2^{-1}(s) \le I, \quad \forall\,\mathrm{Re}(s) \ge 0. \tag{5.133}$$

Hence, it concludes that G(s) is J-lossless if and only if P(s) is inner (or lossless). ∎

Lemma 5.3 A left CSD matrix $\tilde G(s)$ is dual J-lossless if and only if its corresponding LFT matrix P(s) which makes $\mathrm{CSD}_l\left(\tilde G, K\right) = \mathrm{LFT}_l(P, K)$ is co-inner.

Proof Let $\tilde G(s) = \begin{bmatrix} \tilde G_{11}(s) & \tilde G_{12}(s) \\ \tilde G_{21}(s) & \tilde G_{22}(s) \end{bmatrix}$ be a left CSD matrix of $\begin{bmatrix} z \\ w \end{bmatrix} \mapsto \begin{bmatrix} u \\ y \end{bmatrix}$ with dim(y) = dim(w). From (5.62), one has

$$P = \begin{bmatrix} \tilde G_{11} & 0 \\ \tilde G_{21} & -I \end{bmatrix}^{-1}\begin{bmatrix} -\tilde G_{12} & I \\ -\tilde G_{22} & 0 \end{bmatrix} =: \tilde G_1^{-1}\tilde G_2. \tag{5.134}$$

Now, from

$$\tilde G_2\tilde G_2^\sim - \tilde G_1\tilde G_1^\sim = \begin{bmatrix} -\tilde G_{12} & I \\ -\tilde G_{22} & 0 \end{bmatrix}\begin{bmatrix} -\tilde G_{12}^\sim & -\tilde G_{22}^\sim \\ I & 0 \end{bmatrix} - \begin{bmatrix} \tilde G_{11} & 0 \\ \tilde G_{21} & -I \end{bmatrix}\begin{bmatrix} \tilde G_{11}^\sim & \tilde G_{21}^\sim \\ 0 & -I \end{bmatrix} = \begin{bmatrix} \tilde G_{12}\tilde G_{12}^\sim - \tilde G_{11}\tilde G_{11}^\sim + I & \tilde G_{12}\tilde G_{22}^\sim - \tilde G_{11}\tilde G_{21}^\sim \\ \tilde G_{22}\tilde G_{12}^\sim - \tilde G_{21}\tilde G_{11}^\sim & \tilde G_{22}\tilde G_{22}^\sim - \tilde G_{21}\tilde G_{21}^\sim - I \end{bmatrix} = J_{u,y} - \tilde GJ_{z,w}\tilde G^\sim, \tag{5.135}$$

one can obtain that $J_{u,y} - \tilde GJ_{z,w}\tilde G^\sim = 0$ if and only if $PP^\sim = I$. Furthermore, $\tilde G(s)J_{z,w}\tilde G^*(s) \le J_{u,y}$, $\forall\,\mathrm{Re}(s) \ge 0$, implies that

$$P(s)P^*(s) = \tilde G_1^{-1}(s)\tilde G_2(s)\left(\tilde G_1^{-1}(s)\tilde G_2(s)\right)^* = \tilde G_1^{-1}(s)\tilde G_2(s)\tilde G_2^*(s)\tilde G_1^{-*}(s) \le I, \quad \forall\,\mathrm{Re}(s) \ge 0. \tag{5.136}$$

Hence, it concludes that $\tilde G(s)$ is dual J-lossless if and only if P(s) is co-inner. ∎
As illustrated in Fig. 5.1, the CSD matrix (G or $\tilde G$) has relations to its terminator K, especially when the CSD matrix is J-lossless (or dual J-lossless). These properties are introduced in the following lemmas.
Lemma 5.4 If a right CSD matrix G(s) is J-lossless, then CSDr(G, Φ) ∈ BH∞ for all Φ ∈ BH∞.
Proof Let u = Φy, where

$$\|u(j\omega)\|_2 = \|\Phi y(j\omega)\|_2 \le \|\Phi\|_\infty\|y(j\omega)\|_2 < \|y(j\omega)\|_2. \tag{5.137}$$

From z = CSDr(G, Φ)w and (5.129),

$$\|z(j\omega)\|_2^2 - \|w(j\omega)\|_2^2 = \|u(j\omega)\|_2^2 - \|y(j\omega)\|_2^2, \tag{5.138}$$

and because

$$\|u(j\omega)\|_2^2 - \|y(j\omega)\|_2^2 < 0, \tag{5.139}$$

one has $\|z(j\omega)\|_2^2 - \|w(j\omega)\|_2^2 < 0$. Hence, CSDr(G, Φ) ∈ BH∞ for all Φ ∈ BH∞. ∎

Example 5.8 Given a J-lossless $G(s) = \begin{bmatrix} \frac{s-1}{s+1} & 0 \\ 0 & \frac{s+1}{s-1} \end{bmatrix}$ and $\Phi = \frac{1}{s+2} \in BH_\infty$, verify that CSDr(G, Φ) ∈ BH∞.
From (5.5), one has

$$\mathrm{CSD}_r(G, \Phi) = \frac{(s-1)^2}{(s+1)^2(s+2)} \in BH_\infty.$$
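Example 5.8 can be replayed numerically: with u = Φy, the closed right-port map is z = (G11Φ + G12)(G21Φ + G22)⁻¹w for scalar blocks, and its gain on the imaginary axis stays below one, as Lemma 5.4 predicts (a small sketch):

```python
def csd_r(G11, G12, G21, G22, Phi, s):
    # right chain-scattering transformation with scalar blocks
    return (G11(s) * Phi(s) + G12(s)) / (G21(s) * Phi(s) + G22(s))

G11 = lambda s: (s - 1) / (s + 1)
G12 = lambda s: 0.0
G21 = lambda s: 0.0
G22 = lambda s: (s + 1) / (s - 1)
Phi = lambda s: 1 / (s + 2)

for w in (0.0, 1.0, 10.0):
    s = 1j * w
    val = csd_r(G11, G12, G21, G22, Phi, s)
    expect = (s - 1)**2 / ((s + 1)**2 * (s + 2))
    assert abs(val - expect) < 1e-12
    assert abs(val) < 1.0     # contractive on the jw-axis
```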

Lemma 5.5 Dually, if a left CSD system $\tilde G(s)$ is dual J-lossless, then $\mathrm{CSD}_l\left(\tilde G, \Phi\right) \in BH_\infty$ for all Φ ∈ BH∞.
Until now, we have introduced the inner and co-inner LFT systems in Chap. 4 and the J-lossless and dual J-lossless CSD systems in this chapter. In the next chapter, the coprime factorization of LFT and CSD systems will be discussed. These factorization approaches offer a straightforward way to describe a control system using RH∞ functions.

Exercises
1. Let P be an SCC plant shown below, where $P(s) = \begin{bmatrix} \frac{1}{s+1} & \frac{1}{s(s+1)} \\ 1 & \frac{1}{s} \end{bmatrix}$ and K = 3.

[Block diagram: SCC plant P with controller K]

(a) Find the transfer matrix of LFTl(P, K).
(b) Transform the SCC plant P into a right CSD matrix G, and calculate the transfer function of CSDr(G, K).
(c) Let P be represented by a right CSD matrix P1*, associated with a left CSD matrix P2*, as in the following figure. Find P1* and P2*, and determine CSDr(P1*, CSDl(P2*, K)).

[Block diagram: right CSD P1* cascaded with left CSD P2*, terminated by K]
2. Determine the interconnected matrix P in the SCC plant for the following system, where w = d, $z = \begin{bmatrix} v \\ u_f \end{bmatrix}$, and transform it into a CSD form.

[Block diagram: controller K driving plant G1 with feedback G2, weights W1 (on d) and W2 (on v), and control signal uf]

3. Find the transfer function V2/V1 of the network circuit given below, using the LFT and CSD approaches, respectively, where R1 = 9, R2 = 1, and C = 1/9.
[Circuit diagram: input V1, series branch R1 in parallel with C, shunt resistor R2 across the output V2]

4. Use the LFT method to obtain the transfer function of w ↦ z for the block diagram shown below and then derive its left CSD form.
[Block diagram: series blocks G1, G2, G3 from w to z, with negative feedback paths H1 and H2]

5. Use the techniques presented in this chapter to find the transfer function of the following:
(a) LFTl(P, K)
(b) CSDr(G, K)
(c) CSDl($\tilde G$, K)
(d) CSDr(G1, CSDl($\tilde G_2$, K))
(e) CSDl($\tilde G_1$, CSDr(G2, K)),
where $P(s) = \begin{bmatrix} \frac{1}{s+1} & \frac{1}{s(s+1)} \\ 1 & \frac{1}{s} \end{bmatrix}$, and K is stabilizing.
[Block diagram: SCC plant P with controller K]

References

1. Green M (1992) H∞ controller synthesis by J-lossless coprime factorization. SIAM J Control Optim 30:522–547
2. Kimura H (1997) Chain-scattering approach to H∞ control. Birkhäuser, Boston
3. Redheffer RM (1960) On a certain linear fractional transformation. J Math Phys 39:269–286
Chapter 6
Coprime Factorizations

Coprime factorization originates from algebra studied by the French mathematician E. Bezout [1]. In recent years, it has been used to describe dynamic systems [6]. Coprime factorization can be applied in controller synthesis for a given dynamic system with uncertainties [7, 8]. The factorizations can be further employed to construct the set of all stabilizing controllers for the system and to represent a simple parameterization of all stabilized closed-loop transfer functions. In addition, the normalized coprime factorization, which will be introduced in this chapter, has a strong link to the H∞ loop-shaping problem [4]. It is also relevant to spectral factorizations and internal stability.

6.1 Coprimeness and Coprime Factorization

One can start with the simplest case of real numbers. Consider a real rational number r = n/d, where d and n are two integers. If the greatest common divisor (g.c.d.) of the pair of integers (d, n) is 1, then d and n are called coprime, and r = n/d is called the coprime factorization of r over the integers. It is well known that if a pair of integers (d, n) is coprime, there exists a row vector of two integers $\begin{bmatrix} \tilde x & \tilde y \end{bmatrix}$ such that

$$\begin{bmatrix} \tilde x & \tilde y \end{bmatrix}\begin{bmatrix} d \\ n \end{bmatrix} = 1, \tag{6.1}$$

i.e., $\tilde xd + \tilde yn = 1$, where $\begin{bmatrix} \tilde x & \tilde y \end{bmatrix}$ is called the left inverse of $\begin{bmatrix} d \\ n \end{bmatrix}$. For the example r = 3/2, it can be found that

$$\begin{bmatrix} -1 & 1 \end{bmatrix}\begin{bmatrix} 2 \\ 3 \end{bmatrix} = 1.$$

M.-C. Tsai and D.-W. Gu, Robust and Optimal Control: A Two-port Framework Approach, Advances in Industrial Control, DOI 10.1007/978-1-4471-6257-5_6, © Springer-Verlag London 2014

Clearly, 3 and 2 are coprime, and 1:5 D 32 is the coprime factorization of the real
rational number 1.5. In fact, the factorization 64 is also equal to 1.5. However, 6 and
4 are
" # not coprime, since it is evident that the g.c.d. of (4,6) is 2, and the vector
4
does not have a left inverse which only has integer elements. This reveals that
6
1:5 D 64 is not a coprime factorization.
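The left inverse in (6.1) is exactly a pair of Bezout coefficients, and these are computable by the extended Euclidean algorithm; a small Python sketch:

```python
def ext_gcd(a, b):
    # returns (g, x, y) with x*a + y*b == g == gcd(a, b)
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

g, x, y = ext_gcd(2, 3)
assert g == 1 and x * 2 + y * 3 == 1    # (2, 3) coprime; here [x y] = [-1 1]
assert ext_gcd(4, 6)[0] == 2            # (4, 6) share the factor 2: no integer left inverse
```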
Proceed forward and consider the coprimeness over a ring of polynomials with real coefficients. Two polynomials are called coprime if they do not share common zeros. Let d(s) be a polynomial with real coefficients denoted as

$$d(s) = d_ns^n + d_{n-1}s^{n-1} + \cdots + d_1s + d_0. \tag{6.2}$$

The polynomial d(s) is said to be of degree n if $d_n \ne 0$. If the leading coefficient $d_n$ is equal to 1, the polynomial is called monic. Consider two polynomials d(s) and n(s) with d(s) ≠ 0. Then, it can be proved [2] that the pair of polynomials (d(s), n(s)) is coprime over the polynomial ring if and only if there exist two polynomials (x(s), y(s)) such that

$$xd - yn = 1. \tag{6.3}$$

Consider a simple, illustrative example $F(s) = \frac{n(s)}{d(s)}$ with d(s) = s + 1 and n(s) = s + 2. Trivially, this is a coprime factorization over the polynomial ring, since these two polynomials do not share a common zero and

$$\begin{bmatrix} -1 & 1 \end{bmatrix}\begin{bmatrix} s+1 \\ s+2 \end{bmatrix} = 1.$$

For an alternative factorization $F(s) = \frac{n(s)}{d(s)}$ with d(s) = s² + 4s + 3 and n(s) = s² + 5s + 6, apparently, the pair of (n(s), d(s)) is not coprime since s = −3 is a common zero. It can also be seen that this factorization is reducible as

$$F(s) = \frac{s^2+5s+6}{s^2+4s+3} = \frac{(s+2)(s+3)}{(s+1)(s+3)} = \frac{s+2}{s+1}. \tag{6.4}$$

If two polynomials d(s) and n(s) are coprime over the polynomial ring, the rational function $\frac{n(s)}{d(s)}$ is irreducible over the polynomial ring. Note that the coprime factorization is unique only up to a unit (i.e., a nonzero real number) in the polynomial ring. For instance, one can check that

$$F(s) = \frac{s+2}{s+1} = \frac{2s+4}{2s+2}, \tag{6.5}$$

where [(2s + 2), (2s + 4)] is also coprime, since

$$\begin{bmatrix} s + \frac{3}{2} & -\left(s + \frac{1}{2}\right) \end{bmatrix}\begin{bmatrix} 2s+2 \\ 2s+4 \end{bmatrix} = 1. \tag{6.6}$$
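Coprimeness over the polynomial ring can be tested with the Euclidean algorithm on coefficient lists; the sketch below confirms that (s+1, s+2) are coprime while (s²+4s+3, s²+5s+6) share the factor s+3:

```python
def poly_mod(a, b):
    # remainder of a divided by b; coefficients listed highest degree first
    a = a[:]
    while len(a) >= len(b):
        c = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= c * b[i]
        a.pop(0)                  # leading coefficient is now zero
    return a

def poly_gcd(a, b):
    while b and max(abs(c) for c in b) > 1e-9:
        a, b = b, poly_mod(a, b)
    return [c / a[0] for c in a]  # normalize to a monic polynomial

assert poly_gcd([1, 1], [1, 2]) == [1.0]     # gcd = 1: coprime
g = poly_gcd([1, 4, 3], [1, 5, 6])           # gcd = s + 3: not coprime
assert abs(g[0] - 1) < 1e-9 and abs(g[1] - 3) < 1e-9
```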

The (coprime) factorization of a rational function needs to be expanded when the stability issue of a transfer function is to be considered in control system analysis and synthesis. A transfer function is a rational, fractional function with real coefficients. This problem can naturally be solved if the coprime factors belong to the ring of stable rational functions. That is, one can consider the coprime factorization of a real rational function T(s) over stable proper rational functions (i.e., the Hardy space RH∞). It is defined that two stable rational functions M(s) and N(s) are coprime if there exist two stable rational functions $\left(\tilde X(s), \tilde Y(s)\right)$ such that

$$\begin{bmatrix} \tilde X(s) & \tilde Y(s) \end{bmatrix}\begin{bmatrix} M(s) \\ N(s) \end{bmatrix} = I. \tag{6.7}$$

For instance, consider an unstable transfer function $T(s) = \frac{s-2}{s-3}$. One can find a coprime factorization of T(s) over stable rational functions as $T(s) = \frac{N(s)}{M(s)}$, where $M(s) = \frac{s-3}{s+1}$ and $N(s) = \frac{s-2}{s+1}$. It can be seen that M(s) and N(s) are both stable and they are coprime over the stable transfer functions, since there exists a left inverse in the following:

$$\begin{bmatrix} -\dfrac{4s+1}{s+1} & \dfrac{5s+1}{s+1} \end{bmatrix}\begin{bmatrix} \dfrac{s-3}{s+1} \\[2mm] \dfrac{s-2}{s+1} \end{bmatrix} = 1. \tag{6.8}$$

Note that the coprime factorization is unique only up to a unit in RH∞ (i.e., an outer (bistable) rational function). In the above example, one can easily find another coprime factorization, such as $M(s) = \frac{(s-3)(s+6)}{(s+1)(s+5)}$ and $N(s) = \frac{(s-2)(s+6)}{(s+1)(s+5)}$, which are coprime factors of T(s), since $T(s) = \frac{N(s)}{M(s)} = \frac{s-2}{s-3}$. It can be verified that M(s) and N(s) are coprime to each other, because there is a left inverse as

$$\begin{bmatrix} -\dfrac{(4s+1)(s+5)}{(s+1)(s+6)} & \dfrac{(5s+1)(s+5)}{(s+1)(s+6)} \end{bmatrix}\begin{bmatrix} \dfrac{(s-3)(s+6)}{(s+1)(s+5)} \\[2mm] \dfrac{(s-2)(s+6)}{(s+1)(s+5)} \end{bmatrix} = 1. \tag{6.9}$$
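The identity (6.8) can be spot-checked numerically at arbitrary sample points, together with T = N/M (a quick sketch):

```python
M = lambda s: (s - 3) / (s + 1)
N = lambda s: (s - 2) / (s + 1)
X = lambda s: -(4 * s + 1) / (s + 1)   # left-inverse entries from (6.8)
Y = lambda s: (5 * s + 1) / (s + 1)

for s in (0.7j, 1 + 2j, 5.0 + 0j):
    assert abs(X(s) * M(s) + Y(s) * N(s) - 1) < 1e-12   # [X Y][M; N] = 1
    assert abs(N(s) / M(s) - (s - 2) / (s - 3)) < 1e-12 # recovers T(s)
```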

Until now, the coprimeness of SISO systems over integers, polynomials, and stable rational functions, respectively, has been discussed. In the following, coprimeness over stable rational transfer function matrices will be introduced for general MIMO cases in the development of control system analysis and synthesis.

6.2 Coprime Factorization over RH∞

Given a transfer function matrix T(s), a basic problem is to find four transfer
function matrices N(s), M(s), MQ .s/, and NQ .s/ in RH1 such that

T .s/ D N.s/M 1 .s/ D MQ 1 .s/NQ .s/; (6.10)


˚ 
where the pair of fM(s), N(s)g is the right coprime factors and MQ .s/; NQ .s/ the
left coprime factors. Such coprime factors
" actually always
# exist. For instance,
sC2
2 0
for a transfer function matrix T .s/ D .sC3/
1 1
, there is a right coprime
sC1 sC1
factorization of T(s) D N(s)M(s) 1 , where
2 3 2 3
sC3 1
6 sC2 0 7 6 sC3 0 7
M.s/ D 6
4
7 2 RH1 ;
5 N.s/ D 6
4
7 2 RH1 ;
5
sC3 1
 1 0
sC2 sC1
(6.11)
 
    M
since one can find a stable XQ YQ such that XQ YQ D I , with
N
2 3
sC2 2 3
6 sC3 0 7 0 0
Q
X.s/ D6
4 sC2
7 2 RH1 and  YQ .s/ D 4 s C 1 5 2 RH1 :
sC25 0
sC3 sC3 sC3
(6.12)

Moreover, there is a left coprime factorization of T .s/ D MQ .s/1 NQ .s/, where


2 sC3 3
0
6 sC2 7
MQ .s/ D 6
4 2
7 2 RH1 ;
5
.s C 3/
 1
.s C 1/ .s C 2/
2 1 3
0
6 sC3 7
NQ .s/ D 4 2 RH1 ;
1 5
(6.13)
0
sC1
   
X   X
since one can verify a stable such that MQ NQ D I where
Y Y
6.2 Coprime Factorization over RH1 149

2 3
1 0
6 7
X.s/ D 4 .s C 3/2 s C 3 5 2 RH1 and
.s C 1/ .s C 2/ sC2
2 sC3 3
 0
6 sC2 7
Y.s/ D 4 2 RH1 :
sC15
(6.14)
0 
sC2

Definition 6.1 Two matrices M(s) and N(s) in RH1 are right coprime over RH1 if
Q
they have the same number of columns and if there exist matrices X.s/ and YQ .s/ in
RH1 such that

Q
X.s/M.s/  YQ .s/N.s/ D I: (6.15)

Similarly, two matrices MQ .s/ and NQ .s/ in RH1 are left coprime over RH1 if they
have the same number of rows and if there exist matrices X(s) and Y(s) in RH1 such
that

MQ .s/X.s/  NQ .s/Y.s/ D I: (6.16)


 
M
This is equivalent to stating that the stacked matrix is left invertible in RH1 ,
N
 
and MQ NQ is right invertible in RH1 .
For readers who are more familiar with the block diagram manipulations,
the following may help to understand the relationships. Assume that T .s/ D
N.s/M 1 .s/ D MQ 1 .s/NQ .s/ are the right and left coprime factorizations, respec-
tively [3]. Let y D Tu and u D Mu0 , where u0 is a virtual signal. Then, from
     
u I M
D uD u0 ; (6.17)
y T TM

one has
   
u M 0
D u: (6.18)
y N

Figure 6.1 gives a representation of y D Tu D NM 1 u.


Dually, from
 
  u
0D T I (6.19)
y
150 6 Coprime Factorizations

T
u M
u′
u M −1

y N y N

Fig. 6.1 Right coprime factorization

T
N u
N u
0 ⇒
− M −1 y
M y

Fig. 6.2 Left coprime factorization

u
N M

0 u′

− y
M N

Fig. 6.3 Double coprime factors

and multiplying by MQ , one obtains


   
  u   u
0 D MQ P MQ D NQ MQ : (6.20)
y y

Figure 6.2 shows another representation of y D T u D MQ 1 NQ u.


Combining (6.18) and (6.20) will yield
 
  M.s/ 0
NQ .s/ MQ .s/ u D 0; (6.21)
N.s/
 
  M.s/
which holds for any u which implies NQ .s/ MQ .s/
0
D 0. The control
N.s/
block diagram of Fig. 6.3 gives a cascaded representation.
The following lemma shows the coprimeness of the constructed state-space
 real-

I 0
izations using this approach. In the proof of the lemma, by multiplying
F W
6.2 Coprime Factorization over RH1 151

 
I H
and , respectively, state-space realizations are derived, similar to Figs. 5.21
0 WQ
and 5.25 in Sect. 5.6.

Lemma 6.1 Let be a proper real-rational transfer function

matrix with (A, B) stabilizable and (C, A) detectable. A double coprime factorization
of T .s/ D N.s/M 1 .s/ D MQ 1 .s/NQ .s/ in the state-space form is given by

(6.22)

(6.23)

where W and WQ are any square constant matrices which are nonsingular and F and
H are chosen such that both A C BF and A C HC are Hurwitz.
Proof Let
    
xP A B x
D ; (6.24)
y C D u

which is not required to be in minimal realization. Since (A, B) is stabilizable,


  a
x
gain matrix F can be introduced such that A C BF is Hurwitz. Then, let D
u
  
I 0 x
, where W is invertible. Hence, (6.24) can be expanded to include
F W u0
the signal u that leads to
2 3 2 3 2 3
xP A B   A B   
4y5 D 4C D5 x D 4C D5 I 0 x
: (6.25)
u F W u0
u 0 I 0 I

Then,
2 3 2 3 2 3 2 3
xP A C BF BW   xP A C BF BW  
4 y 5 D 4 C C DF 5 x 4 5 4 x
DW ) u D F W 5 0 :
u0 u
u F W y C C DF DW
(6.26)
152 6 Coprime Factorizations

By (6.18), one can conclude that

(6.27)

where T(s) D N(s)M 1 (s).


Dually, the left coprime factorization gives
2 3
    x
xP A B 0 4
D u 5: (6.28)
ye C D I
y
    
xPO I H xP
Let D . Dual to (6.25), this is equivalent to multiplying
y0 0 WQ ye
 
I H
by the left of (6.28). Then, the left factorization can yield
0 WQ
2 3 2 3
     x   x
PxO I H A B 0 4 A C H C B C HD H 4
D u 5D u 5;
y0 0 WQ C D I WQ C WQ D WQ
y y
(6.29)

where WQ is any nonsingular matrix. Since (C, A) is detectable, the injected gain
matrix H is chosen such that A C HC is Hurwitz. This structure is in fact a closed-
loop state observer where u and y are the inputs. Then, by comparing (6.20) and
(6.29), one can obtain

(6.30)

where T .s/ D MQ 1 .s/NQ .s/.


 Toensure that the state-space realization (6.22) is a pair of right coprime factors
M Q
, one now needs Definition 6.1 to construct a left inverse such that XM 
N
   1     
x I 0 x I 0 x
YQ N D I . From 0 D D 1 1 , one
u F W u W F W u
0 1 1
 u D W Fx C W u. Then, as illustratedin Fig. 6.4, a state-space
has  realization
Q Q
X Y 2 RH1 can be constructed based on NQ .s/ MQ .s/ in (6.29) as
6.2 Coprime Factorization over RH1 153

∼ ∼

Fig. 6.4 State-space realization u0 7! u0

(6.31)

For xO D x and applying the state-space algebra of (2.72) will yield

(6.32)
154 6 Coprime Factorizations

This gives

(6.33)

Then XQ M  YQ N D I can be determined by a similarity transformation


I 0
T D as
I I

(6.34)

 
M
This concludes that given by (6.22) is left invertible in RH1 , and then
N
T D NM 1 is a right coprime factorization.
Likewise, to ensure that the state-space realization
  (6.23) is of coprime factors,
Y
one now needs to construct a right inverse 2 RH1 in the state-space form
X
   1  
xP I H xPO
such that MQ X  NQ Y D I . From D Q and based on (6.26),
ye 0 W y0
one has, from Fig. 6.5,

(6.35)

For xO D x, applying the state-space algebra of (2.72) will yield

(6.36)
6.2 Coprime Factorization over RH1 155

∼−

∼ ∼

Fig. 6.5 State-space realization y0 7! y0

where
2 3
A C BF 0 BW H WQ 1
4 BF  H C A C H C BW H WQ 1 5
 WQ C WQ C 0 I
2 3
2 3 A C BF 0 BW H WQ 1
I 0 0 0 6
0 I 0 0 7
D 4 0 A C H C B C HD H 56 4
7:
F 0 W 0 5
0 WQ C WQ D WQ
 .C C DF / 0 DW WQ 1
(6.37)

Then NQ Y C MQ X D I can be determined by a similarity transformation T D



I 0
as
I I
156 6 Coprime Factorizations

M p −1 N p Np M p −1
z −1 z′ ω z ω′ −1
ω
⎡ M 11 M 12 ⎤ ⎡ N 11 N 12 ⎤ ⎡ N11 N12 ⎤ ⎡ M 11 M 12 ⎤
⎢   ⎥ ⎢ ⎥ ⎢N ⎢M ⎥
y M
⎣ 21 M 22 ⎦
y′
⎣ N 21 N 22 ⎦ u y
⎣ 21 N 22 ⎥⎦ u′ ⎣ 21 M 22 ⎦
u

K K

Fig. 6.6 SCC in right or left coprime forms

(6.38)

 
This concludes that NQ .s/ MQ .s/ given by (6.23) is right invertible in RH1 , and
then T D MQ 1 NQ is a left coprime factorization. 
Furthermore, it can be verified from (6.31) and (6.35) that

(6.39)

Hence, (6.33) and (6.36) will form the Bezout identity as summarized in the
following lemma.
Lemma 6.2 For any proper real-rational matrix T(s), there always exists a double
(left and right) coprime factorization given by (6.10), where N(s), M(s), NQ .s/, and
MQ .s/ are in RH1 , respectively. For the double coprime factorization, there exist
Q
RH1 transfer matrices X(s), Y(s), X.s/, and YQ .s/ satisfying the Bezout identity
    
Q
X.s/ YQ .s/ M.s/ Y.s/ I 0
D : (6.40)
NQ .s/ MQ .s/  N.s/ X.s/ 0 I

As stated in Chap. 5, the coprime factorization can also arise in the two-port
representation. Referring to the chain scattering approach proposed by Tsai [5], the
coprime factorization of P D Np M  Q 1 Q
p (or P D Mp Np ) as illustrated in Fig. 6.6 is
1

utilized.
6.2 Coprime Factorization over RH1 157

As shown in Fig. 6.6, let


   
N11 N12 M11 M12
Np D 2 RH1 ; Mp D 2 RH1 ; (6.41)
N21 N22 M21 M22

and
   
NQ 11 NQ 12 MQ 11 MQ 12
NQ p D 2 RH1 ; MQ p D 2 RH1 : (6.42)
NQ 21 NQ 22 MQ 21 MQ 22

Recall from the SCC of Fig. 5.8. Let


      
z P11 P21 w w
D D Np Mp1 ; (6.43)
y P12 P22 u u

where
   0   0
z w N11 N12 w
D Np 0 D ; (6.44)
y u N21 N22 u0

and
   0   0
w w M11 M12 w
D Mp D : (6.45)
u u0 M21 M22 u0

Similarly, the left coprime factorization, as shown in Fig. 6.6, can be found in the
dual way. From the SCC of Fig. 5.8, one has
      
z P11 P21 w Q 1 Q w
D D Mp Np ; (6.46)
y P12 P22 u u

where
      
z0 w NQ 11 NQ 12 w
D NQ p D Q ; (6.47)
y0 u N21 NQ 22 u

and
      
z0 z MQ 11 MQ 12 z
D MQ p D : (6.48)
y0 y MQ 21 MQ 22 y
158 6 Coprime Factorizations

Lemma 6.3 Let be a proper real-rational matrix with

(A,B2 ) stabilizable and (C2 ,A) detectable. A double coprime factorization of P .s/ D
Np .s/Mp1 .s/ D MQ p1 .s/NQ p .s/ in the state-space form is given by

(6.49)

and

(6.50)


Fu
where Wuu , Www , WQ zz , and WQ yy are nonsingular and F D and H D [Hz Hy ]
Fw
are chosen such that both A C B1 Fw C B2 Fu and A C Hz C1 C Hy C2 are Hurwitz.
Proof Let
2 3 2 3
xP A B1 B2
6 z7 6C D 72 x 3
6 7 6 1 11 D12 7
6 7 6 7
6 y 7 D 6 C2 D21 0 7 4 w 5 : (6.51)
6 7 6 7
4 w5 4 0 I 0 5 u
u 0 0 I
2 3 2 32 3
x I 0 0 x  
4 5 4 5 4 0 5 Fu
Additionally, let w D Fw Www Wwu w , where F D is
Fw
u Fu 0 Wuu u0
chosen such that A C B1 Fw C B2 Fu is Hurwitz and Wuu and Www are nonsingular.
6.2 Coprime Factorization over RH1 159

2 3
I 0 0
This is equivalent to multiplying 4 Fw Www Wwu 5 in the right-hand side of the
Fu 0 Wuu
above formulation, yielding
2 3 2 3
xP A B1 B2
6 z7 6C D 72 I 32 3
6 7 6 1 11 D12 7 0 0 x
6 7 6 7
6 y 7 D 6 C2 D21 0 7 4 Fw Www Wwu 5 4 w0 5
6 7 6 7
4 w5 4 0 I 0 5 Fu 0 Wuu u0
u 0 0 I
2 3
A C B1 Fw C B2 Fu B1 Www B1 Wwu C B2 Wuu
6C CD F CD F D W 72 x 3
6 1 11 w 12 u 11 ww D11 Wwu C D12 Wuu 7
6 74 05
D6 C2 C D21 Fw D21 Www D21 Wwu 7 w :
6 7
4 Fw Www Wwu 5 u0
Fu 0 Wuu
(6.52)

From (6.44) and (6.45), one concludes that

(6.53)

Furthermore, from
2 3
x
2 3 2 36
xP A B1 B2 0 0 6 w 77
4 ze 5 D 4 C1 D11 D12 6 7
I 056 u 7; (6.54)
6 7
ye C2 D21 0 0 I 4 z5
y
160 6 Coprime Factorizations

2 P3 2 32 3
xO I Hz Hy xP
4
let z 0 5 D 4 Q
0 Wzz 0 5 4 ze 5, where H D [Hz Hy ] is chosen such that
y 0 Q
0 Wyz Wyy Q ye
A C Hz C1 C Hy C2 is Hurwitz and WQ zz and WQ yy are nonsingular. This is equivalent
2 3
I Hz Hy
to multiplying 4 0 WQ zz 0 5 by the left-hand side of the above formulation,
0 WQ yz WQ yy
and then one has
2 3
x
2P 3 2 32 3
xO I Hz Hy A B1 B2 0 0 6 6 w 7
7
4 z 5 D 4 0 WQ zz 6 7
0
0 5 4 C1 D11 D12 I 0 5 6 u 7
6 7
y0 0 WQ yz WQ yy C2 D21 0 0 I 4  z 5
y
2 3
A C Hz C1 C Hy C2 B1 C Hz D11 C Hy D21 B2 C Hz D12 Hz Hy
D4 WQ zz C1 WQ zz D11 WQ zz D12 WQ zz 0 5
WQ yz C1 C WQ yy C2 WQ yz D11 C WQ yy D21 WQ yz D12 WQ yz WQ yy
2 3
x
6 w 7
6 7
6 7
 6 u 7:
6 7
4 z5
y (6.55)

From (6.47) and (6.48), one concludes

(6.56)

To ensure that (6.53) is a pair of right coprime factors, one needs Definition 6.1 to
Q p  YQ Np D I: From
construct a left inverse based on (6.56) in RH1 , i.e., XM

2 3 2 31 2 3
x I 0 0 x
4 w0 5 D 4 Fw Www Wwu 5 4 w 5
u0 Fu 0 Wuu u
2 32 3
I 0 0 x
6  W 1 Fw C W 1 Wwu W 1 Fu 1 1
Www Wwu Wuu1 7
D4 ww ww uu Www 54 w5;
Wuu1 Fu 0 W 1 uu
u
(6.57)
6.2 Coprime Factorization over RH1 161

one has

(6.58)

Then, one has from (6.53) and (6.58)

(6.59)

 
I 0
The similarity transformation T D will yield
I I

(6.60)

This concludes that (6.49) is left invertible in RH1 such that P(s) D Np (s)M  1
p (s)
is a coprime factorization.
Analogously, to ensure (6.50) consists of coprime factors, one needs to construct,
based on (6.52), a right inverse in RH1 . From
2 P3 2 31 2 P 3
xO I Hz Hy xO
4 z0 5 D 4 0 WQ zz 0 5 4 ze 5
y 0 Q
0 Wyz WyyQ ye
2 3
1 2 P 3
I Hz WQ zz1 C Hy WQ yy1 Q
Wyz WQ zz1 Hy WQ yy xO
6 Q 1 74 5
D40 Wzz 0 5 ze ; (6.61)
0 Q 1 Q
Wyy Wyz Wzz Q 1 Q
Wyy 1
ye
162 6 Coprime Factorizations

let

(6.62)

Then one has, from (6.56) and (6.62),

(6.63)

 
I 0
By the state similarity transformation T D , it yields NQ p Y C MQ p X D I ,
I I
since

(6.64)

This concludes that (6.50) is right invertible in RH1 such that P .s/ D
MQ p1 .s/NQ p .s/ is a coprime factorization. 
6.2 Coprime Factorization over RH1 163

Equivalently, the Bezout identity as given in Lemma 6.2 can also be checked
from
    
Q
X.s/ YQ .s/ Mp .s/ Y.s/ I 0
D ; (6.65)
NQ p .s/ MQ p .s/  Np .s/ X.s/ 0 I

where

and

Consequently, the Bezout identity is also held in the two-port transfer matrix.
Further on, the coprime factorizations for the configurations of CSDr  CSDl
and CSDl  CSDr are discussed in the following. The two-port SCC plant can be
transformed into the description CSDr  CSDl , as illustrated in Fig. 6.7. Note that
multiplication by M* does not change the overall transfer function from w to z.

Then, a right coprime factorization of can be found by

applying Lemma 6.3 such that


164 6 Coprime Factorizations

G1
P1* M*
z u u′
⎡ P12 P11 ⎤ ⎡ M 11 M 12 ⎤
z w w ⎢0 I ⎥⎦ w ⎢M M 22 ⎥⎦
⎡ P11 P12 ⎤ ⎣ ⎣ 21 w′
y ⎢P P22 ⎥⎦ u
⎣ 21
P2* M*
u u u′
K ⎡ I 0⎤ ⎡ M 11 M 12 ⎤
K y ⎢P P21 ⎥⎦ w ⎢M M 22 ⎥⎦ w′
⎣ 22 ⎣ 21

G 2

Fig. 6.7 Multiplying by M* at right terminals

(6.66)

Then, one has

(6.67)

(6.68)
6.2 Coprime Factorization over RH1 165

and

(6.69)

 
Fu
where Wuu and Www are nonsingular and F D is chosen such that
Fw
A C B1 Fw C B2 Fu is Hurwitz. The stable left inverse that satisfies

is given by

(6.70)

It should be emphasized that from (6.49) and (6.66), one can assume

. This concludes that is left invertible

 
G1
in RH1 such that M1 is a coprime factorization. Additionally, by Property
GQ 2
5.3, one has

LFTl .P; K/ D CSDr .P1 ; CSDl .P2 ; K//


D CSDr .P1 M ; CSDl .P2 M ; K//

D CSDr G1 ; CSDl GQ 2 ; K : (6.71)

On the other hand, the two-port SCC plant can be transformed into the description
of CSDl  CSDr , as illustrated in Fig. 6.8. As mentioned before, multiplication by
MQ  does not change the overall transfer function from w to z.

Fig. 6.8 Multiplying by $\tilde{M}_*$ at the left terminals ($\tilde{G}_1 = \tilde{M}_* P_{*1}$, $G_2 = \tilde{M}_* P_{*2}$)

Then, a left coprime factorization can be


found by applying Lemma 6.3 such that

(6.72)

Then, one has

(6.73)

(6.74)

(6.75)

 
where $\tilde{W}_{zz}$ and $\tilde{W}_{yy}$ are nonsingular and $H = \begin{bmatrix} H_z & H_y \end{bmatrix}$ is chosen such that $A + H_z C_1 + H_y C_2$ is Hurwitz. The stable left inverse that satisfies

is given by

(6.76)

Dually, from (6.50) and (6.72), one has

This concludes that  is right invertible in $RH_\infty$ such that

is a coprime factorization. Moreover, by Property 5.4, one has

\[
LFT_l(P, K) = CSD_l\!\left(P_{*1},\, CSD_r(P_{*2}, K)\right)
= CSD_l\!\left(\tilde{M}_* P_{*1},\, CSD_r(\tilde{M}_* P_{*2}, K)\right)
= CSD_l\!\left(\tilde{G}_1,\, CSD_r(G_2, K)\right). \qquad (6.77)
\]



6.3 Normalized Coprime Factorization

As mentioned in the previous section, the coprime factorization of a given plant is not unique. There are a number of advantages in representing a possibly unstable transfer function in terms of two stable factors in this manner; these will be exploited further in later chapters. There is, however, a special coprime factorization, namely the normalized coprime factorization [4], and the normalized left and right coprime factors $\tilde{N}$, $\tilde{M}$, N, and M of a transfer function $P \in RH_\infty$ are unique [6]. Its properties are useful in $H_\infty$ loop-shaping control.
A normalized right coprime factorization is defined as one whose right coprime factors satisfy
\[
N^*(s)N(s) + M^*(s)M(s) = I, \qquad (6.78)
\]
i.e., $\begin{bmatrix} M(s) \\ N(s) \end{bmatrix}$ is inner. Similarly, its left counterpart is defined by
\[
\tilde{N}(s)\tilde{N}^*(s) + \tilde{M}(s)\tilde{M}^*(s) = I, \qquad (6.79)
\]
i.e., $\begin{bmatrix} \tilde{M}(s) & \tilde{N}(s) \end{bmatrix}$ is co-inner.

Example 6.1 As shown in Fig. 6.9, given a controlled plant ,
determine a normalized right coprime factorization $G(s) = N(s)M^{-1}(s)$ in the state-space form.
Taking the LFT form as shown in Fig. 6.10, one has an augmented plant (i.e., the transfer function from r′ to u, y)

(6.80)

Fig. 6.9 State-space model of a control plant



Fig. 6.10 LFT formulation

The state-feedback gain F stabilizes the controlled plant such that $A + BF$ is Hurwitz. By definition, the normalized right coprime factorization requires the augmented plant to be inner. Hence, it yields

(6.81)

 
By the similarity transformation $T = \begin{bmatrix} I & X \\ 0 & I \end{bmatrix}$, it yields

(6.82)

Therefore,
\[
\begin{cases}
W = I \\
C^T C + F^T F + (A+BF)^T X + X(A+BF) = 0 \\
F = -B^T X.
\end{cases} \qquad (6.83)
\]
Or equivalently, the state-feedback gain F can be determined from the solution of the following algebraic Riccati equation:
\[
A^T X + XA - XBB^T X + C^T C = 0. \qquad (6.84)
\]

In Example 6.1, one can find that the normalized right coprime factorization requires a state-feedback gain F such that $A + BF$ is Hurwitz and $\begin{bmatrix} M(s) \\ N(s) \end{bmatrix}$ is inner.

The algebraic Riccati equation (6.84) needs to be solved to determine F. In the next chapter, properties of algebraic Riccati equations, their solutions, and applications will be further discussed. The state-space properties of the normalized coprime factorization will then be introduced therein.
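The construction in Example 6.1 can be sketched numerically. The following is a minimal sketch using SciPy (the plant data at the bottom are hypothetical): it solves (6.84) for X, sets F = −BᵀX and W = I per (6.83), and checks that [M; N] is inner on the imaginary axis.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def normalized_rcf(A, B, C):
    """Normalized right coprime factorization G = N M^{-1} per (6.83)-(6.84):
    solve A^T X + XA - XBB^T X + C^T C = 0, then F = -B^T X, W = I.
    Returns (A+BF, B, F, C); M = (A+BF, B, F, I) and N = (A+BF, B, C, 0)."""
    X = solve_continuous_are(A, B, C.T @ C, np.eye(B.shape[1]))
    F = -B.T @ X
    return A + B @ F, B, F, C

def freq_resp(Af, Bf, Cf, Df, w):
    """Frequency response Cf (jw I - Af)^{-1} Bf + Df."""
    n = Af.shape[0]
    return Cf @ np.linalg.solve(1j * w * np.eye(n) - Af, Bf) + Df

# hypothetical unstable SISO plant: x' = x + u, y = x
A, B, C = np.array([[1.0]]), np.array([[1.0]]), np.array([[1.0]])
AF, B, F, C = normalized_rcf(A, B, C)
assert np.linalg.eigvals(AF).real.max() < 0          # A + BF Hurwitz
for w in (0.0, 1.0, 10.0):
    M = freq_resp(AF, B, F, np.eye(1), w)
    N = freq_resp(AF, B, C, np.zeros((1, 1)), w)
    # inner: M*(jw)M(jw) + N*(jw)N(jw) = I for all w, per (6.78)
    assert np.allclose(M.conj().T @ M + N.conj().T @ N, np.eye(1))
```

For this scalar plant, X = 1 + √2, so M(s) = (s − 1)/(s + √2) and N(s) = 1/(s + √2), and |M(jω)|² + |N(jω)|² = 1 holds identically.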

Exercises

1. Compute a coprime factorization of the following system with an inner denominator .
2. Let $G(s) = \dfrac{(s-1)(s+2)}{(s+3)(s-4)}$. Find a stable coprime factorization $G(s) = N(s)M^{-1}(s)$ and $X(s), Y(s) \in RH_\infty$ such that $X(s)N(s) + Y(s)M(s) = 1$.

References

1. Bézout É (1764) Cours de mathématiques: à l’usage des Gardes du Pavillon et de la Marine.


avec un traité de navigation, Paris
2. Chen G, de Figueiredo RJP (1990) Construction of the left coprime fractional representation for
a class of nonlinear control systems. Syst Control Lett 14:353–361
3. Maciejowski JM (1989) Multivariable feedback design. Addison-Wesley, Berkshire
4. McFarlane D, Glover K (1990) Robust controller design using normalized coprime factor plant
descriptions. Springer, London
5. Tsai MC, Tsai CS (1993) A chain scattering matrix description approach to H∞ control. IEEE
Trans Autom Control 38:1416–1421
6. Vidyasagar M (1985) Control systems synthesis: a factorization approach. MIT Press, Cam-
bridge, MA
7. Zhou K, Doyle JC, Glover K (1996) Robust and optimal control. Prentice Hall, Upper Saddle
River
8. Zhou K, Doyle JC (1998) Essentials of robust control. Prentice Hall, Upper Saddle River
Chapter 7
Algebraic Riccati Equations and Spectral Factorizations

In the last chapter, it was discussed that the algebraic Riccati equation (ARE) needs to be solved in order to obtain the state-space solutions of the normalized coprime factorizations. In Chap. 2, the Lyapunov equation was employed to determine the
controllability and observability gramians of a system. Both the algebraic Riccati
and Lyapunov equations play prominent roles in the synthesis of robust and optimal
control as well as in the stability analysis of control systems. In fact, the Lyapunov
equation is a special case of the ARE. The ARE indeed has wide applications in
control system analysis and synthesis. For example, the state-space formulation for
particular coprime factorizations with a J-lossless (or dual J-lossless) numerator
requires solving an ARE; in turn, the J-lossless and dual J-lossless systems are
essential in the synthesis of robust controllers using the CSD approach. In this
chapter, the ARE will be formally introduced. Solution procedures to AREs and
their various properties will be discussed. Towards the end of this chapter, the
coprime factorization approach to solve several spectral factorization problems is
to be considered.

7.1 Algebraic Riccati Equations

The algebraic Riccati equation is useful for solving control synthesis problems such as the $H_2$/$H_\infty$ (sub)optimal control problems [6, 10]. Let A, R, and Q be $n \times n$ real matrices with R and Q being symmetric. The following matrix equation is called an algebraic Riccati equation (ARE):
\[
A^T X + XA + XRX + Q = 0. \qquad (7.1)
\]
The matrix H defined in (7.2) is called a Hamiltonian matrix,
\[
H := \begin{bmatrix} A & R \\ -Q & -A^T \end{bmatrix} \in \mathbb{R}^{2n \times 2n}. \qquad (7.2)
\]

M.-C. Tsai and D.-W. Gu, Robust and Optimal Control: A Two-port Framework 171
Approach, Advances in Industrial Control, DOI 10.1007/978-1-4471-6257-5__7,
© Springer-Verlag London 2014

A Hamiltonian matrix H in this context is a $2n \times 2n$ real matrix and satisfies
\[
J^{-1}HJ = -H^T, \qquad (7.3)
\]
where $J = \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix}$. Note that
\[
J^2 = -I_{2n} \quad \text{and} \quad J^T = -J = J^{-1}. \qquad (7.4)
\]
It is evident that a solution X to the ARE in (7.1) can be determined by the corresponding Hamiltonian matrix H in (7.2). In the context of control system analysis and synthesis, ARE solutions are further required to make $A + RX$ Hurwitz. Define an operator Ric: $\mathbb{R}^{2n \times 2n} \to \mathbb{R}^{n \times n}$ that maps a Hamiltonian matrix H to such an ARE solution X, i.e., X = Ric(H). The domain of Ric is symbolized as dom(Ric). It can be directly verified that if $\lambda$ is an eigenvalue of H, then $-\lambda$ is also an eigenvalue of H [8]. Also, the set of eigenvalues (the spectrum) of H is exactly the union of the eigenvalues of $A + RX$ and those of $-(A + RX)$. Hence, a necessary condition for $H \in \mathrm{dom(Ric)}$ is that H does not have any eigenvalues on the imaginary axis.
In summary, if $H \in \mathrm{dom(Ric)}$ and X = Ric(H), then
1. X is real symmetric.
2. X satisfies $XA + A^T X + XRX + Q = 0$.
3. $A + RX$ is Hurwitz.
4. H does not have pure imaginary eigenvalues.
Given an ARE as in (7.1), the Schur decomposition [7] offers a solution procedure to determine X. By the real Schur decomposition of H, there exists an orthonormal matrix $U \in \mathbb{R}^{2n \times 2n}$ (i.e., $U^T U = I$) such that
\[
H = UVU^T = \begin{bmatrix} U_1 & U_2 \end{bmatrix}
\begin{bmatrix} V_{11} & V_{12} \\ 0 & V_{22} \end{bmatrix}
\begin{bmatrix} U_1 & U_2 \end{bmatrix}^T, \qquad (7.5)
\]
where $V_{11}$ contains all stable eigenvalues of H, and $V_{22}$ contains all anti-stable eigenvalues. Multiplying $U_1$ from the right on both sides of (7.5) yields
\[
HU_1 = \begin{bmatrix} U_1 & U_2 \end{bmatrix}
\begin{bmatrix} V_{11} & V_{12} \\ 0 & V_{22} \end{bmatrix}
\begin{bmatrix} I \\ 0 \end{bmatrix} = U_1 V_{11}. \qquad (7.6)
\]
Let
\[
U_1 = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}
= \begin{bmatrix} I \\ X_2 X_1^{-1} \end{bmatrix} X_1
= \begin{bmatrix} I \\ X \end{bmatrix} X_1, \qquad (7.7)
\]
where $X_1$ is assumed nonsingular and $X = X_2 X_1^{-1}$. Then, from (7.6) and (7.7),

    
\[
HU_1 = \begin{bmatrix} A & R \\ -Q & -A^T \end{bmatrix}
\begin{bmatrix} I \\ X \end{bmatrix} X_1
= \begin{bmatrix} I \\ X \end{bmatrix} X_1 V_{11}, \qquad (7.8)
\]
and therefore,
\[
\begin{bmatrix} A & R \\ -Q & -A^T \end{bmatrix}
\begin{bmatrix} I \\ X \end{bmatrix}
= \begin{bmatrix} I \\ X \end{bmatrix} X_1 V_{11} X_1^{-1}. \qquad (7.9)
\]
This gives that, by multiplying $\begin{bmatrix} -X & I \end{bmatrix}$ from the left,
\[
\begin{bmatrix} -X & I \end{bmatrix}
\begin{bmatrix} A & R \\ -Q & -A^T \end{bmatrix}
\begin{bmatrix} I \\ X \end{bmatrix}
= \begin{bmatrix} -X & I \end{bmatrix}
\begin{bmatrix} I \\ X \end{bmatrix} X_1 V_{11} X_1^{-1} = 0. \qquad (7.10)
\]
The left-hand side of (7.10) is
\[
\begin{bmatrix} -X & I \end{bmatrix}
\begin{bmatrix} A + RX \\ -Q - A^T X \end{bmatrix}
= -\left( A^T X + XA + XRX + Q \right), \qquad (7.11)
\]
which shows that such defined X solves the ARE (7.1). Furthermore, from (7.9),
\[
\begin{bmatrix} A + RX \\ -Q - A^T X \end{bmatrix}
= \begin{bmatrix} X_1 V_{11} X_1^{-1} \\ X_2 V_{11} X_1^{-1} \end{bmatrix}. \qquad (7.12)
\]
Note that $A + RX = X_1 V_{11} X_1^{-1}$, which indicates that $A + RX$ shares the same eigenvalues of $V_{11}$ and, hence, is Hurwitz. It can be proved that $X_1^T X_2$ is a Hermitian matrix [9, 11] (i.e., $M = M^T$ in the real form or $M = M^*$ in the complex form) such that
\[
X = X_2 X_1^{-1} = X_1^{-T}\left( X_1^T X_2 \right) X_1^{-1} \qquad (7.13)
\]
is Hermitian. By the following similarity transformation involving X, H is transformed into
     
\[
\begin{bmatrix} I & 0 \\ -X & I \end{bmatrix}
\begin{bmatrix} A & R \\ -Q & -A^T \end{bmatrix}
\begin{bmatrix} I & 0 \\ X & I \end{bmatrix}
= \begin{bmatrix} A + RX & R \\ 0 & -(A + RX)^T \end{bmatrix}. \qquad (7.14)
\]
This shows that these two matrices,
\[
\begin{bmatrix} A & R \\ -Q & -A^T \end{bmatrix}
\quad \text{and} \quad
\begin{bmatrix} A + RX & R \\ 0 & -(A + RX)^T \end{bmatrix}, \qquad (7.15)
\]
have the same eigenvalues. Note that in most control synthesis problems, the matrix $A + RX$ becomes the state matrix of the closed-loop system, which explains why $A + RX$ must be Hurwitz.
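The Schur-based procedure of (7.5)–(7.13) can be sketched in Python. This is a minimal sketch using SciPy's ordered real Schur form; the test data at the bottom are hypothetical.

```python
import numpy as np
from scipy.linalg import schur

def ric(A, R, Q):
    """Stabilizing solution X = Ric(H) of A^T X + XA + XRX + Q = 0.
    Per (7.5)-(7.7): order the real Schur form of H = [[A, R], [-Q, -A^T]]
    so that V11 carries the stable eigenvalues, then X = X2 X1^{-1}."""
    n = A.shape[0]
    H = np.block([[A, R], [-Q, -A.T]])
    V, U, sdim = schur(H, sort='lhp')   # stable eigenvalues ordered first
    if sdim != n:
        raise ValueError("H has imaginary-axis eigenvalues: H not in dom(Ric)")
    U1 = U[:, :n]
    X1, X2 = U1[:n, :], U1[n:, :]
    X = X2 @ np.linalg.inv(X1)
    return (X + X.T) / 2                # symmetrize against round-off

# scalar check: 4x^2 - 4 = 0 (a = 0, r = 4, q = -4) gives the
# stabilizing solution x = -1 with a + rx = -4
X = ric(np.array([[0.0]]), np.array([[4.0]]), np.array([[-4.0]]))
assert np.allclose(X, [[-1.0]])
```

The same routine handles matrix AREs, e.g. the LQR form obtained with R = −BBᵀ.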

In general, if there exists a Hermitian matrix X and a square matrix W such that the following expression
\[
\begin{bmatrix} A & R \\ -Q & -A^T \end{bmatrix}
\begin{bmatrix} I \\ X \end{bmatrix}
= \begin{bmatrix} I \\ X \end{bmatrix} W \qquad (7.16)
\]
holds for a Hamiltonian matrix H as in (7.2), then $X = \mathrm{Ric}(H)$ if the spectrum of W coincides with the set of stable eigenvalues of H.
W coincides with the set of stable eigenvalues of H.
In order to better understand the solution of an ARE, its scalar case (a quadratic equation) is discussed in detail [5] in the following. Consider
\[
rx^2 + 2ax + q = 0. \qquad (7.17)
\]
The corresponding Hamiltonian matrix is $H = \begin{bmatrix} a & r \\ -q & -a \end{bmatrix}$. Define the discriminant of (7.17) as
\[
\Delta = 4\left(a^2 - qr\right). \qquad (7.18)
\]

If $\Delta > 0$, then Eq. (7.17) has two distinct real roots,
\[
x_{1,2} = \frac{-a \pm \sqrt{a^2 - qr}}{r}, \qquad (7.19)
\]
which means that its graph, i.e., $y = rx^2 + 2ax + q$, will cross the x-axis twice. If $\Delta = 0$, (7.17) has two coincident real roots,
\[
x_1 = x_2 = -\frac{a}{r}, \qquad (7.20)
\]
and its graph is tangent to the x-axis at $x = -a/r$. If $\Delta < 0$, there is no intersection point on the x-axis, producing a graph which lies strictly either above or below the x-axis, i.e., no real roots. In addition, one can find that the determinant of the Hamiltonian matrix is related to the discriminant as
\[
\Delta = 4\left(a^2 - qr\right) = -4\det(H). \qquad (7.21)
\]
Recall that if $r > 0$, the parabolic curve opens upward and attains its minimum value $\det(H)/r$ at the vertex $\left(-\frac{a}{r}, \frac{\det(H)}{r}\right)$. The variable r controls the speed of increase/decrease of the quadratic function values from/to the vertex. A bigger positive r makes the function increase/decrease faster, and thus the opening of the graph appears narrower. The variables a and r together indicate the x-coordinate of the vertex, at which the axis of symmetry of the parabola lies. The variable 2a alone is the slope of the parabola as it crosses the y-axis. The variable q controls the “height”

Fig. 7.1 Plot of quadratic curve and ARE solution (a) for r > 0 (b) for r < 0

of the parabola. More precisely, it is the point where the parabola crosses the y-axis. Similar discussions can be conducted for the case $r < 0$, while the function graph would be a straight line when $r = 0$.
To compare the quadratic equation with the Hamiltonian matrix, one can find from (7.18), for a solution x, that $\det(H) = -(a + rx)^2 \le 0$ in the scalar case. If $\Delta > 0$, the eigenvalues of H are given by
\[
\lambda = \pm (a + rx) = \pm\sqrt{-\det(H)}, \qquad (7.22)
\]
and the distance d between the two intersection points on the x-axis is equal to
\[
d = \frac{2\sqrt{-\det(H)}}{|r|}. \qquad (7.23)
\]
When $\Delta > 0$, this indicates that H does not have any eigenvalues on the imaginary axis, i.e., $H \in \mathrm{dom(Ric)}$. Hence, there exists a required solution to the ARE. From (7.19), there are two solutions of the quadratic equation (7.17). For the purpose of a Hurwitz (stable) $A + RX$ (a negative number in the scalar case), the solution $x_2 = \frac{-a - \sqrt{a^2 - qr}}{r}$ in (7.19) should be chosen, because it gives $a + rx = -\sqrt{a^2 - qr} < 0$. For the case $r > 0$, this required solution $x_{Ric}$ is the negative (left) root in Fig. 7.1, and for $r < 0$, the positive (right) root. In fact, observation of Fig. 7.1 reveals that the ARE solution $x_{Ric}$ is located on the branch of the parabolic curve which has a negative slope. This can be simply proven in the following. From (7.12), $a + rx = x_1 V_{11} x_1^{-1} = V_{11}$ for scalar $x_1$; therefore, $x = \frac{V_{11} - a}{r}$. Now, the derivative of $y = rx^2 + 2ax + q$ is given by
\[
y' = 2rx + 2a. \qquad (7.24)
\]
Then, the slope at the point $x = \frac{V_{11} - a}{r}$ is $y' = 2V_{11}$; because $V_{11} < 0$ is required, $y'$ has to be negative. This concludes that the solution $x_{Ric}$ to the ARE is always located on the branch of the parabolic curve which has a negative slope.
Although the above discussion is limited to the scalar case, there is some similarity in the matrix case. When the coefficient matrices R and Q are sign definite and both of the same sign, the required ARE solution X, if it exists, will be nonpositive definite, i.e., $X \le 0$. On the other hand, if R and Q have opposite signs, then X is nonnegative definite, i.e., $X \ge 0$.

Fig. 7.2 Parabolic curve of $y = 4x^2 - 4$
Example 7.1 Find the required solution of the quadratic equation (ARE) $4x^2 - 4 = 0$.
It can be calculated that $\det(H) = -16 < 0$, where the corresponding Hamiltonian matrix is $H = \begin{bmatrix} a & r \\ -q & -a \end{bmatrix} = \begin{bmatrix} 0 & 4 \\ 4 & 0 \end{bmatrix}$. From the discussion above, one can conclude that $H \in \mathrm{dom(Ric)}$ with $\mathrm{eig}(H) = \pm 4$. Figure 7.2 shows the corresponding upward parabolic curve $y = 4x^2 - 4$ ($r = 4 > 0$), and clearly, two solutions satisfying $4x^2 - 4 = 0$ can be found at $x = \pm 1$. Since the curve has a negative slope at the point $x = -1$, as depicted in Fig. 7.2, $x = -1 < 0$ is the ARE solution. The following shows how to find the ARE solution step by step.
By the real Schur decomposition as in (7.5), one has
\[
H = \begin{bmatrix} 0 & 4 \\ 4 & 0 \end{bmatrix}
= \begin{bmatrix} \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \\ -\tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \end{bmatrix}
\begin{bmatrix} -4 & 0 \\ 0 & 4 \end{bmatrix}
\begin{bmatrix} \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \\ -\tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \end{bmatrix}^T
= \begin{bmatrix} U_1 & U_2 \end{bmatrix}
\begin{bmatrix} V_{11} & V_{12} \\ 0 & V_{22} \end{bmatrix}
\begin{bmatrix} U_1 & U_2 \end{bmatrix}^T,
\]
where
\[
U_1 = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= \begin{bmatrix} I \\ x_2 x_1^{-1} \end{bmatrix} x_1
\;\Rightarrow\;
\begin{bmatrix} \tfrac{1}{\sqrt{2}} \\ -\tfrac{1}{\sqrt{2}} \end{bmatrix}
= \begin{bmatrix} 1 \\ -1 \end{bmatrix} \tfrac{1}{\sqrt{2}}.
\]
Therefore, this obtains the solution $x = -1$. Furthermore, one gathers that
\[
HU_1 = \begin{bmatrix} I \\ x \end{bmatrix} x_1 V_{11}
\;\Rightarrow\;
\begin{bmatrix} 0 & 4 \\ 4 & 0 \end{bmatrix}
\begin{bmatrix} 1 \\ -1 \end{bmatrix} \tfrac{1}{\sqrt{2}}
= \begin{bmatrix} 1 \\ -1 \end{bmatrix} \tfrac{1}{\sqrt{2}}\,(-4),
\]
and then
\[
\begin{bmatrix} -x & I \end{bmatrix} H \begin{bmatrix} I \\ x \end{bmatrix}
= \begin{bmatrix} 1 & 1 \end{bmatrix}
\begin{bmatrix} 0 & 4 \\ 4 & 0 \end{bmatrix}
\begin{bmatrix} 1 \\ -1 \end{bmatrix} = 0.
\]
One concludes that
\[
\begin{bmatrix} a + rx \\ -q - ax \end{bmatrix}
= \begin{bmatrix} x_1 V_{11} x_1^{-1} \\ x_2 V_{11} x_1^{-1} \end{bmatrix}
= \begin{bmatrix} -4 \\ 4 \end{bmatrix}.
\]
This shows that $x = -1 < 0$ is the ARE solution, as depicted in Fig. 7.2, such that $V_{11} = -4$ and $a + rx = -4$. Note that $x = 1$ is a solution of $4x^2 - 4 = 0$, but it is not the ARE solution since $a + rx = 4$. □

Fig. 7.3 Parabolic curve of $y = -x^2 + 4x - 3$ (vertex at (2, 1))
Example 7.2 Find the required solution of the quadratic equation (ARE) $-x^2 + 4x - 3 = 0$.
It can be calculated that $\det(H) = -1 < 0$ and $\mathrm{eig}(H) = \pm 1$, where the corresponding Hamiltonian matrix is $H = \begin{bmatrix} a & r \\ -q & -a \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ 3 & -2 \end{bmatrix}$. One can deduce that $H \in \mathrm{dom(Ric)}$. Figure 7.3 shows the corresponding downward parabolic curve $y = -x^2 + 4x - 3$ ($r = -1 < 0$), and evidently, two solutions satisfying $-x^2 + 4x - 3 = 0$ can be found at $x = 1$ and $x = 3$. Since the curve has a negative slope at the point $x = 3$, as depicted in Fig. 7.3, $x = 3$ is the ARE solution. In fact, one can follow the same steps as in Example 7.1 to obtain the ARE solution $x = 3$ such that $V_{11} = -1$ and $a + rx = -1$. Note that $x = 1 > 0$ is a solution of $-x^2 + 4x - 3 = 0$, but it is not the ARE solution since $a + rx = 2 + (-1) \cdot 1 = 1$.
The concept of the discriminant is useful in the scalar case to find the ARE
solution of a quadratic equation. However, this property, unfortunately, cannot be
directly extended to the general matrix ARE cases.
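Examples 7.1 and 7.2 can be checked with a small sketch (a hypothetical helper in plain NumPy): among the two roots (7.19), the ARE solution is the one giving a + rx < 0, i.e., the negative-slope branch.

```python
import numpy as np

def scalar_ric(a, r, q):
    """Stabilizing root of r x^2 + 2 a x + q = 0: of the two roots in
    (7.19), return the one with a + r x < 0 (negative-slope branch)."""
    disc = a * a - q * r                      # Delta / 4 = -det(H)
    if disc <= 0:
        raise ValueError("H has eigenvalues on the imaginary axis")
    roots = ((-a + np.sqrt(disc)) / r, (-a - np.sqrt(disc)) / r)
    return next(x for x in roots if a + r * x < 0)

assert scalar_ric(0.0, 4.0, -4.0) == -1.0     # Example 7.1
assert scalar_ric(2.0, -1.0, -3.0) == 3.0     # Example 7.2
```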

7.2 Similarity Transformation of Hamiltonian Matrices

Recall that the similarity transformation of a square matrix M is $M_T = TMT^{-1}$, where T is any nonsingular matrix. The similarity transformation preserves the eigenvalues of the matrix, i.e., {eigenvalues of M} = {eigenvalues of $M_T$}, the two matrices sharing the same spectrum. It would be interesting and useful to see if there is any invariance property of a Hamiltonian matrix under similarity transformations. Let a Hamiltonian matrix $H := \begin{bmatrix} A & R \\ -Q & -A^T \end{bmatrix}$ and X = Ric(H). Given a nonsingular $2n \times 2n$ matrix T, would the matrix after the similarity transformation, $H_T = THT^{-1}$, still be Hamiltonian? If so, what kind of relations exist between the solutions Ric(H) and Ric($H_T$), in terms of T? These questions will be answered in the following discussions.
By its definition, a Hamiltonian matrix H satisfies $J^{-1}HJ = -H^T$. Hence, a sufficient and necessary condition for H to be Hamiltonian is
\[
HJ + JH^T = 0. \qquad (7.25)
\]
 
Let $T = \begin{bmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{bmatrix}$ be a $2n \times 2n$ nonsingular matrix, and its inverse
\[
T^{-1} = \begin{bmatrix} \tilde{T}_{11} & \tilde{T}_{12} \\ \tilde{T}_{21} & \tilde{T}_{22} \end{bmatrix}. \qquad (7.26)
\]

For $H_T = THT^{-1}$ to be Hamiltonian, one has to show that $H_T J + JH_T^T = 0$. That is,
\[
THT^{-1}J + JT^{-T}H^T T^T = 0,
\]
or
\[
HT^{-1}JT^{-T} + T^{-1}JT^{-T}H^T = 0. \qquad (7.27)
\]
One can easily see that $H_T$ is Hamiltonian if and only if the matrix $HT^{-1}JT^{-T}$ is symmetric. For the material presented in this book, however, just a sufficient condition is needed, which appears in the following lemma.
Lemma 7.1 A Hamiltonian matrix H remains Hamiltonian under a similarity transformation if the transformation matrix T satisfies the following conditions:
(1) $\tilde{T}_{11}\tilde{T}_{12}^T$ and $\tilde{T}_{22}\tilde{T}_{21}^T$ are both symmetric, where $\tilde{T}_{ij}$ (i, j = 1, 2) are defined in (7.26).
(2) $\tilde{T}_{22}\tilde{T}_{11}^T - \tilde{T}_{21}\tilde{T}_{12}^T = \alpha I$, where $\alpha$ is a scalar constant.

Proof When $T^{-1}JT^{-T} = \alpha J$, (7.27) holds because H is Hamiltonian. Consequently, $H_T$ is Hamiltonian from (7.27). Conditions (1) and (2) can be directly derived by block matrix manipulations in $T^{-1}JT^{-T} = \alpha J$. □
With Lemma 7.1, the following four cases are to be discussed, where H is assumed in dom(Ric) and X = Ric(H).

Case (I) $T = \begin{bmatrix} I & 0 \\ L & I \end{bmatrix}$ and $L = L^T$.

In this case, $T^{-1} = \begin{bmatrix} I & 0 \\ -L & I \end{bmatrix}$. The two conditions in Lemma 7.1 hold by direct verification. Hence, $H_T = THT^{-1}$ is a Hamiltonian matrix, and $H_T T = TH$. Let X = Ric(H). By (7.14),
\[
H_T T \begin{bmatrix} I & 0 \\ X & I \end{bmatrix}
= TH \begin{bmatrix} I & 0 \\ X & I \end{bmatrix}
= T \begin{bmatrix} I & 0 \\ X & I \end{bmatrix}
\begin{bmatrix} A+RX & R \\ 0 & -(A+RX)^T \end{bmatrix}.
\]
The left-hand side is
\[
H_T T \begin{bmatrix} I & 0 \\ X & I \end{bmatrix}
= H_T \begin{bmatrix} I & 0 \\ L & I \end{bmatrix}
\begin{bmatrix} I & 0 \\ X & I \end{bmatrix}
= H_T \begin{bmatrix} I & 0 \\ L+X & I \end{bmatrix},
\]
and the right-hand side is
\[
T \begin{bmatrix} I & 0 \\ X & I \end{bmatrix}
\begin{bmatrix} A+RX & R \\ 0 & -(A+RX)^T \end{bmatrix}
= \begin{bmatrix} I & 0 \\ L+X & I \end{bmatrix}
\begin{bmatrix} A+RX & R \\ 0 & -(A+RX)^T \end{bmatrix}
= \begin{bmatrix} A+RX & R \\ (L+X)(A+RX) & (L+X)R - (A+RX)^T \end{bmatrix}.
\]
Quoting the first block column yields
\[
H_T \begin{bmatrix} I \\ L+X \end{bmatrix}
= \begin{bmatrix} A+RX \\ (L+X)(A+RX) \end{bmatrix}
= \begin{bmatrix} I \\ L+X \end{bmatrix} (A+RX). \qquad (7.28)
\]
From (7.16), $\mathrm{Ric}(H_T) = L+X$.

Case (II) $T = \begin{bmatrix} I & 0 \\ 0 & \gamma^{-1}I \end{bmatrix}$.

In this case, $T^{-1} = \begin{bmatrix} I & 0 \\ 0 & \gamma I \end{bmatrix}$. Again, the conditions in Lemma 7.1 hold, and $H_T$ is Hamiltonian. Similarly,
\[
H_T T \begin{bmatrix} I & 0 \\ X & I \end{bmatrix}
= T \begin{bmatrix} I & 0 \\ X & I \end{bmatrix}
\begin{bmatrix} A+RX & R \\ 0 & -(A+RX)^T \end{bmatrix},
\]
i.e.,
\[
H_T \begin{bmatrix} I \\ \gamma^{-1}X \end{bmatrix}
= \begin{bmatrix} I \\ \gamma^{-1}X \end{bmatrix} (A+RX).
\]
Hence, $\mathrm{Ric}(H_T) = \gamma^{-1}X$.

Case (III) $T = \begin{bmatrix} U^T & 0 \\ 0 & U^T \end{bmatrix}$ and $U^T = U^{-1}$, i.e., U is orthonormal.

Here, $T^{-1} = \begin{bmatrix} U & 0 \\ 0 & U \end{bmatrix}$. Straightforward manipulations show that the conditions in Lemma 7.1 are satisfied, and thus, $H_T = THT^{-1}$ is a Hamiltonian matrix. One obtains accordingly that
\[
H_T \begin{bmatrix} U^T \\ U^T X \end{bmatrix}
= \begin{bmatrix} U^T \\ U^T X \end{bmatrix} (A+RX). \qquad (7.29)
\]
By further multiplying U from the right on both sides, it yields
\[
H_T \begin{bmatrix} I \\ U^T X U \end{bmatrix}
= \begin{bmatrix} I \\ U^T X U \end{bmatrix} U^{-1}(A+RX)U. \qquad (7.30)
\]
This concludes that $\mathrm{Ric}(H_T) = U^T X U$ for $T = \begin{bmatrix} U^T & 0 \\ 0 & U^T \end{bmatrix}$.

Case (IV) $T = \begin{bmatrix} I & \tilde{L} \\ 0 & I \end{bmatrix}$ and $\tilde{L} = \tilde{L}^T$.

It can be verified that $H_T = THT^{-1}$ is a Hamiltonian matrix, by using Lemma 7.1. One can obtain that
\[
H_T \begin{bmatrix} I \\ X(I+\tilde{L}X)^{-1} \end{bmatrix}
= \begin{bmatrix} I \\ X(I+\tilde{L}X)^{-1} \end{bmatrix}
(I+\tilde{L}X)(A+RX)(I+\tilde{L}X)^{-1}. \qquad (7.31)
\]
This concludes that $\mathrm{Ric}(H_T) = \mathrm{Ric}(THT^{-1}) = X(I+\tilde{L}X)^{-1}$. Table 7.1 summarizes these four cases.

Table 7.1 Four cases of similarity transformations $H_T = THT^{-1}$
Case (I): $T = \begin{bmatrix} I & 0 \\ L & I \end{bmatrix}$, $L = L^T$; $\mathrm{Ric}(H_T) = L+X$
Case (II): $T = \begin{bmatrix} I & 0 \\ 0 & \gamma^{-1}I \end{bmatrix}$; $\mathrm{Ric}(H_T) = \gamma^{-1}X$
Case (III): $T = \begin{bmatrix} U^T & 0 \\ 0 & U^T \end{bmatrix}$, $U^T = U^{-1}$; $\mathrm{Ric}(H_T) = U^T X U$
Case (IV): $T = \begin{bmatrix} I & \tilde{L} \\ 0 & I \end{bmatrix}$, $\tilde{L} = \tilde{L}^T$; $\mathrm{Ric}(H_T) = X(I+\tilde{L}X)^{-1}$
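Case (I) of Table 7.1 can be verified numerically, as a minimal sketch with hypothetical random data: X is obtained from an LQR-type ARE (R = −BBᵀ, Q = I in (7.1)), and one checks that the transformed matrix is still Hamiltonian and that [I; L+X] spans its stable invariant subspace, per (7.28).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
R, Q = -B @ B.T, np.eye(n)
X = solve_continuous_are(A, B, Q, np.eye(1))   # A^T X + XA + XRX + Q = 0

H = np.block([[A, R], [-Q, -A.T]])
L = rng.standard_normal((n, n))
L = L + L.T                                    # symmetric L, as required
T = np.block([[np.eye(n), np.zeros((n, n))], [L, np.eye(n)]])
HT = T @ H @ np.linalg.inv(T)

J = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
assert np.allclose(HT @ J + J @ HT.T, 0, atol=1e-8)   # H_T Hamiltonian, (7.25)

# (7.28): H_T [I; L+X] = [I; L+X](A + RX), hence Ric(H_T) = L + X
V = np.vstack([np.eye(n), L + X])
assert np.allclose(HT @ V, V @ (A + R @ X), atol=1e-6)
```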
It should be noted that in the above discussion, neither sign definiteness nor nonsingularity of the ARE solution X was required, for the sake of generality. In some applications, an ARE of the following form is considered:
\[
A^T X + XA - XRX + Q = 0, \qquad (7.32)
\]
where the assumptions $R \ge 0$ and $Q \ge 0$ are made. The solution $X = \mathrm{Ric}(H_X)$ is required to be nonnegative definite, and $A - RX$ should be made Hurwitz, where
\[
H_X = \begin{bmatrix} A & -R \\ -Q & -A^T \end{bmatrix}. \qquad (7.33)
\]
Such an ARE has a dual form equation:
\[
AY + YA^T - YQY + R = 0, \qquad (7.34)
\]
where $Y = \mathrm{Ric}(H_Y) \ge 0$ and
\[
H_Y = \begin{bmatrix} A^T & -Q \\ -R & -A \end{bmatrix}. \qquad (7.35)
\]
It is clear that $H_Y = H_X^T$. From
\[
\begin{bmatrix} I & 0 \\ -X & I \end{bmatrix} H_X \begin{bmatrix} I & 0 \\ X & I \end{bmatrix}
= \begin{bmatrix} A-RX & -R \\ 0 & -(A-RX)^T \end{bmatrix}
\;\Rightarrow\;
H_X = \begin{bmatrix} I & 0 \\ X & I \end{bmatrix}
\begin{bmatrix} A-RX & -R \\ 0 & -(A-RX)^T \end{bmatrix}
\begin{bmatrix} I & 0 \\ -X & I \end{bmatrix}
\]
and
\[
\begin{bmatrix} I & 0 \\ -Y & I \end{bmatrix} H_Y \begin{bmatrix} I & 0 \\ Y & I \end{bmatrix}
= \begin{bmatrix} (A-YQ)^T & -Q \\ 0 & -(A-YQ) \end{bmatrix}
\;\Rightarrow\;
H_Y = \begin{bmatrix} I & 0 \\ Y & I \end{bmatrix}
\begin{bmatrix} (A-YQ)^T & -Q \\ 0 & -(A-YQ) \end{bmatrix}
\begin{bmatrix} I & 0 \\ -Y & I \end{bmatrix},
\]
one has, from $H_X = H_Y^T$,
\[
\begin{bmatrix} I & 0 \\ X & I \end{bmatrix}
\begin{bmatrix} A-RX & -R \\ 0 & -(A-RX)^T \end{bmatrix}
\begin{bmatrix} I & 0 \\ -X & I \end{bmatrix}
= \begin{bmatrix} I & -Y \\ 0 & I \end{bmatrix}
\begin{bmatrix} A-YQ & 0 \\ -Q & -(A-YQ)^T \end{bmatrix}
\begin{bmatrix} I & Y \\ 0 & I \end{bmatrix}.
\]
Hence,
\[
\begin{bmatrix} A-YQ & 0 \\ -Q & -(A-YQ)^T \end{bmatrix}
\begin{bmatrix} I & Y \\ 0 & I \end{bmatrix}
\begin{bmatrix} I & 0 \\ X & I \end{bmatrix}
= \begin{bmatrix} I & Y \\ 0 & I \end{bmatrix}
\begin{bmatrix} I & 0 \\ X & I \end{bmatrix}
\begin{bmatrix} A-RX & -R \\ 0 & -(A-RX)^T \end{bmatrix}.
\]
That is,
\[
\begin{bmatrix} A-YQ & 0 \\ -Q & -(A-YQ)^T \end{bmatrix}
\begin{bmatrix} I+YX & Y \\ X & I \end{bmatrix}
= \begin{bmatrix} I+YX & Y \\ X & I \end{bmatrix}
\begin{bmatrix} A-RX & -R \\ 0 & -(A-RX)^T \end{bmatrix}.
\]
Considering the first block column of the product matrices on both sides yields
\[
\begin{bmatrix} A-YQ & 0 \\ -Q & -(A-YQ)^T \end{bmatrix}
\begin{bmatrix} I+YX \\ X \end{bmatrix}
= \begin{bmatrix} I+YX & Y \\ X & I \end{bmatrix}
\begin{bmatrix} A-RX \\ 0 \end{bmatrix}.
\]
Comparing the first row on either side leads to
\[
(A-YQ)(I+YX) = (I+YX)(A-RX).
\]
Therefore,
\[
A-YQ = (I+YX)(A-RX)(I+YX)^{-1}. \qquad (7.36)
\]
Equation (7.36) shows the similarity transformation relationship between $A-RX$ and $A-YQ$, which is useful for the solutions to $H_\infty$ control.
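Relation (7.36) can be checked numerically, as a minimal SciPy-based sketch with hypothetical random data, taking R = BBᵀ and Q = CᵀC so that (7.32) and (7.34) become a standard pair of dual AREs.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 2))
C = rng.standard_normal((2, n))
R, Q = B @ B.T, C.T @ C                  # R >= 0, Q >= 0 as in (7.32)

# X solves A^T X + XA - XRX + Q = 0; Y solves AY + YA^T - YQY + R = 0
X = solve_continuous_are(A, B, Q, np.eye(2))
Y = solve_continuous_are(A.T, C.T, R, np.eye(2))

# (7.36): A - YQ = (I + YX)(A - RX)(I + YX)^{-1}
T = np.eye(n) + Y @ X
lhs = A - Y @ Q
rhs = T @ (A - R @ X) @ np.linalg.inv(T)
assert np.allclose(lhs, rhs, atol=1e-6)
```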

7.3 Lyapunov Equation

Assuming that A is Hurwitz and P and Q are symmetric,
\[
PA + A^T P = -Q \qquad (7.37)
\]
is called a Lyapunov equation. When A is Hurwitz and Q is positive semidefinite, i.e., $Q = Q^T \ge 0$, one can have that the solution P is positive definite, P > 0. On the other hand, if $Q = Q^T \ge 0$ and P > 0, then matrix A is Hurwitz. These two results will be shown in a lemma below. In theory, a solution to the Lyapunov equation (7.37) can be found as
\[
P = \int_0^\infty e^{A^T t} Q e^{At}\, dt. \qquad (7.38)
\]

It can be shown that such a P indeed solves (7.37) by
\[
PA + A^T P = \int_0^\infty e^{A^T t} Q e^{At} A\, dt + \int_0^\infty A^T e^{A^T t} Q e^{At}\, dt
= \int_0^\infty \frac{d}{dt}\!\left( e^{A^T t} Q e^{At} \right) dt
= \left[ e^{A^T t} Q e^{At} \right]_0^\infty
= -Q. \qquad (7.39)
\]
The last equation comes from $e^{At} \to 0$ as $t \to \infty$, due to A being Hurwitz. Note that a Lyapunov equation is a special case of the ARE (i.e., R = 0), $A^T X + XA + Q = 0$, and
\[
X = \mathrm{Ric}\begin{bmatrix} A & 0 \\ -Q & -A^T \end{bmatrix}.
\]

The observability and controllability gramians were previously mentioned in Chap. 2. In this section, they are defined in detail as follows. Given a control system, let the observability gramian of (C, A) be defined by
\[
P_o := \int_0^\infty e^{A^T t} C^T C e^{At}\, dt, \qquad (7.40)
\]
which can be shown, similarly as in the deduction of (7.39), to satisfy
\[
A^T P_o + P_o A = -C^T C. \qquad (7.41)
\]

The observability gramian $P_o$ determines the total energy in the system output driven by a given initial state in the case of identically zero input.
In control system analysis and synthesis, the constant term on the right-hand side of a Lyapunov equation is usually not negative definite; rather, it is nonpositive. For such cases, one has the following result.
Lemma 7.2 Let (C, A) be observable. Then $A^T P_o + P_o A = -C^T C$ has a positive definite solution $P_o$ if and only if A is Hurwitz.
Proof (Sufficiency) Construct matrix $P_o$ as in (7.40). If A is Hurwitz, it can be shown as in the deduction of (7.39) that such a $P_o$ is indeed a solution to the Lyapunov equation. $P_o$ is obviously nonnegative. Suppose that $P_o$ is rank deficient. Let $\mathcal{N}$ be the null space of $P_o$ and $N_p$ be its matrix representation, i.e., $N_p = [\xi_1, \ldots, \xi_l]$, $1 \le l < n$, where n is the order of the system (the dimension of A and thus of $P_o$), and $P_o N_p = O_{n \times l}$. Multiplying $N_p^T$ and $N_p$ from the left and right, respectively, on both sides of (7.41) concludes that $CN_p = O$. Then, multiplying $N_p$ from the right on (7.41) leads to $P_o A N_p = O$. Hence, $AN_p$ falls into $\mathcal{N}$, and there exists a matrix L of $l \times l$ dimension such that
\[
AN_p = N_p L. \qquad (7.42)
\]
From $CN_p = O$ and $CA^j N_p = CN_p L^j = O$, $j = 1, \ldots, n-1$, one concludes that $\begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}$ is not of full rank, and therefore, (C, A) is not completely observable, which contradicts the assumption. Hence, $P_o$ is of full rank, i.e., $P_o > 0$.
(Necessity) If A is not Hurwitz, let $\lambda = \alpha + j\beta$ be an unstable eigenvalue of A and the corresponding eigenvector be $\xi$ ($\ne 0$), i.e., $\alpha \ge 0$ and $A\xi = \lambda\xi$. Multiplying $\xi^*$ and $\xi$ from the left and right, respectively, on both sides of (7.41) yields
\[
\xi^* A^T P_0 \xi + \xi^* P_0 A \xi = -\xi^* C^T C \xi
\]
\[
\bar{\lambda}\, \xi^* P_0 \xi + \lambda\, \xi^* P_0 \xi = -\xi^* C^T C \xi
\]
\[
\left( \bar{\lambda} + \lambda \right) \xi^* P_0 \xi = -(C\xi)^*(C\xi)
\]
\[
2\alpha\, \xi^* P_0 \xi = -(C\xi)^*(C\xi). \qquad (7.43)
\]
For a positive definite $P_0$, the left-hand side of (7.43) is nonnegative while its right-hand side is nonpositive. Hence, $C\xi = 0$. Considering that $\xi$ is also such that $A\xi = \lambda\xi$, this contradicts the assumption of (C, A) being completely observable. Hence, A is Hurwitz. □

This property can be easily seen in the scalar case, when (7.41) becomes the linear equation $2ap = -c^2$. Then, one obtains $p = -\frac{c^2}{2a}$. It shows that p > 0 (i.e., a positive definite solution) if and only if a < 0 (i.e., Hurwitz).
Similarly, define the controllability gramian as
\[
P_c := \int_0^\infty e^{At} BB^T e^{A^T t}\, dt, \qquad (7.44)
\]
which satisfies
\[
AP_c + P_c A^T = -BB^T. \qquad (7.45)
\]

In a physical engineering system with an impulse input and zero initial states, the controllability gramian $P_c$ determines the total energy in the states generated. A result dual to Lemma 7.2 is also available, under the condition of complete controllability of (A, B).
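Both gramians can be computed directly from the Lyapunov equations (7.41) and (7.45). The following is a minimal sketch using SciPy with hypothetical state-space data; note the sign convention of `solve_continuous_lyapunov`, which solves AX + XAᴴ = Q.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# hypothetical Hurwitz system: eigenvalues of A are -1 and -2
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

Po = solve_continuous_lyapunov(A.T, -C.T @ C)   # A^T Po + Po A = -C^T C, (7.41)
Pc = solve_continuous_lyapunov(A, -B @ B.T)     # A Pc + Pc A^T = -B B^T, (7.45)

assert np.allclose(A.T @ Po + Po @ A, -C.T @ C)
assert np.allclose(A @ Pc + Pc @ A.T, -B @ B.T)
# (C, A) observable and (A, B) controllable here, so both gramians are
# positive definite, in line with Lemma 7.2 and its dual
assert np.all(np.linalg.eigvalsh(Po) > 0)
assert np.all(np.linalg.eigvalsh(Pc) > 0)
```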

7.4 State-Space Formulae for Spectral Factorizations Using the Coprime Factorization Approach

In this section, three cases of spectral factorizations in the state-space form are to be introduced. They are obtained via a unified procedure, i.e., by employing coprime factorizations. State-space formulae are derived to find certain coprime factors for each spectral factorization. The factorization procedure is characterized by the so-called weighted all-pass function, which is a generalization of the all-pass functions introduced earlier in Chap. 6 and will be formally defined next.
Definition 7.1 Let $\Sigma = \Sigma^T$ and $\hat{\Sigma} = \hat{\Sigma}^T$ be constant matrices with compatible dimensions. Then, $P(s) \in RL_\infty$ satisfying
\[
P^*(s)\,\Sigma\, P(s) = \hat{\Sigma} \qquad (7.46)
\]
is defined as weighted all-pass. Dually, the weighted co-all-pass is defined as
\[
P(s)\,\Sigma\, P^*(s) = \hat{\Sigma}. \qquad (7.47)
\]

The following lemma gives conditions for a transfer function matrix P(s) to be weighted all-pass.
Lemma 7.3 Let . Given $\Sigma = \Sigma^T$ and $\hat{\Sigma} = \hat{\Sigma}^T$, if
\[
D^T \Sigma D = \hat{\Sigma}, \qquad (7.48)
\]
and there exists a matrix $X = X^T \ge 0$ such that
\[
XB + C^T \Sigma D = 0, \qquad (7.49)
\]
\[
A^T X + XA + C^T \Sigma C = 0, \qquad (7.50)
\]
then $P^*(s)\Sigma P(s) = \hat{\Sigma}$. Dually, if
\[
D \Sigma D^T = \hat{\Sigma}, \qquad (7.51)
\]
and there exists a matrix $Y = Y^T \ge 0$ such that
\[
YC^T + B \Sigma D^T = 0, \qquad (7.52)
\]
\[
AY + YA^T + B \Sigma B^T = 0, \qquad (7.53)
\]
then $P(s)\Sigma P^*(s) = \hat{\Sigma}$.
Proof The weighted all-pass (and co-all-pass) proof follows the proof procedure of a corresponding result on standard inner (and co-inner) functions in [1, 3] and, therefore, is omitted here. □
Note that for the case $P(s) \in RH_\infty^{n \times k}$, $\Sigma = I_n$, and $\hat{\Sigma} = I_k$, the weighted all-pass (or weighted co-all-pass) system P(s) becomes an inner (or co-inner) function. Additionally, for the case that $P(s) \in RH_\infty^{(n_1+n_2)\times(k_1+k_2)}$, $\Sigma = \begin{bmatrix} I_{n_1} & 0 \\ 0 & -I_{n_2} \end{bmatrix}$, and $\hat{\Sigma} = \begin{bmatrix} I_{k_1} & 0 \\ 0 & -I_{k_2} \end{bmatrix}$, the weighted all-pass P(s) becomes a J-lossless function. The properties of lossless two-port networks from the viewpoint of power wave propagation were discussed in Chap. 3. Recall that Chap. 5 demonstrated that the J-lossless and dual J-lossless properties both play an important role in CSD control systems.
For certain engineering systems, spectral factorization is a useful tool. Spectral factorization separates the causal and minimum-phase component from the rest, and this in turn reveals the energy transformation involved with the system. In some sense, a spectral factor shows the magnitude of the system. Next, the formal definition of the standard spectral factorization is given first, and subsequently, several spectral factorizations that are often used in control system synthesis and analysis will be introduced. State-space formulae of these spectral factorizations will be described;

they are all obtained via weighted all-pass functions, which are constructed in a unified framework of coprime factorizations.
Definition 7.2 [2] Consider a square matrix $\Lambda(s)$ having the properties
\[
\Lambda \in RL_\infty, \quad \Lambda^{-1} \in RL_\infty, \quad \Lambda^* = \Lambda, \quad \text{and} \quad \Lambda(\infty) > 0. \qquad (7.54)
\]
Then,
\[
\Lambda = \Phi^* \Phi \qquad (7.55)
\]
is called a spectral factorization of $\Lambda$, where $\Phi$ is a spectral factor and
\[
\Phi \in GH_\infty, \qquad (7.56)
\]
i.e., both $\Phi$ and its inverse are stable.
Note that such a matrix $\Lambda(s)$ has poles and zeros in symmetry about the imaginary axis, and such a spectral factor $\Phi(s)$ is also called outer (stable with minimum phase).
Let  and $P(s) = N(s)M^{-1}(s) = \tilde{M}^{-1}(s)\tilde{N}(s)$ be the right and left coprime factorizations, respectively. By Lemma 6.1, state-space formulae of the coprime factors are given by, for purposely chosen F and H,

(7.57)

(7.58)

Since the matrices F, W, H, and $\tilde{W}$ are free parameters, one can choose them to suit various requirements on the coprime factors in addition to stability. In the following, it is shown how to find these matrices for the required spectral factorizations, where  is assumed. Note that for a general $P(s) \in RL_\infty$, an extra coprime factorization can be applied first in order to obtain spectral factorizations.

7.4.1 Spectral Factorization Case I

The first type of spectral factorization is to find an outer matrix $\Phi(s)$ such that
\[
R + P^*(s) Q P(s) = \Phi^*(s)\Phi(s), \qquad (7.59)
\]
where $Q = Q^T$ and $R = R^T$ are given constant matrices. Let $P(s) = N(s)M^{-1}(s)$ be a right coprime factorization. The left-hand side of (7.59) can be rewritten as
\[
R + P^*(s) Q P(s) = R + \left(M^{-1}(s)\right)^* N^*(s) Q N(s) M^{-1}(s)
= \left(M^{-1}(s)\right)^* \left( M^*(s) R M(s) + N^*(s) Q N(s) \right) M^{-1}(s)
= \left(M^{-1}(s)\right)^* \begin{bmatrix} M(s) \\ N(s) \end{bmatrix}^* \begin{bmatrix} R & 0 \\ 0 & Q \end{bmatrix} \begin{bmatrix} M(s) \\ N(s) \end{bmatrix} M^{-1}(s).
\]
By defining $\Sigma = \begin{bmatrix} R & 0 \\ 0 & Q \end{bmatrix}$, $\hat{\Sigma} = I$, one may see that if the coprime factorization $P = NM^{-1}$ is such that $P_s = \begin{bmatrix} M \\ N \end{bmatrix}$ is a weighted all-pass function, i.e., the following equation holds,
\[
\begin{bmatrix} M(s) \\ N(s) \end{bmatrix}^* \begin{bmatrix} R & 0 \\ 0 & Q \end{bmatrix} \begin{bmatrix} M(s) \\ N(s) \end{bmatrix} = I, \qquad (7.60)
\]
then $M^{-1}(s)$ would be the required spectral factor $\Phi(s)$, provided that M(s) is also outer. Next, it is shown how to choose F and W in the coprime factorization of P(s) to make $\begin{bmatrix} M(s) \\ N(s) \end{bmatrix}$ weighted all-pass, i.e., to satisfy (7.60), and to ensure M(s) outer.
 
Herein, substituting the state-space realization of (7.57) and $\Sigma = \begin{bmatrix} R & 0 \\ 0 & Q \end{bmatrix}$, $\hat{\Sigma} = I$ into (7.48), (7.49), and (7.50) will obtain
\[
W^T \left( R + D^T Q D \right) W = I, \qquad (7.61)
\]
\[
XB + C^T Q D + F^T \left( R + D^T Q D \right) = 0, \qquad (7.62)
\]
\[
(A + BF)^T X + X(A + BF) + F^T R F + (C + DF)^T Q (C + DF) = 0. \qquad (7.63)
\]
If one has $R + P^*(s) Q P(s) > 0$ for all $s = j\omega$, then $R_x = \left(R + D^T Q D\right) > 0$. Hence, define

\[
W = R_x^{-1/2}, \qquad (7.64)
\]
\[
F = -R_x^{-1}\left( B^T X + D^T Q C \right), \qquad (7.65)
\]
where X is the solution to the following ARE,
\[
\left( A - BR_x^{-1} D^T Q C \right)^T X + X \left( A - BR_x^{-1} D^T Q C \right) - XBR_x^{-1} B^T X + C^T \left( Q - Q D R_x^{-1} D^T Q \right) C = 0. \qquad (7.66)
\]
That is,
\[
X = \mathrm{Ric}\begin{bmatrix} A - BR_x^{-1} D^T Q C & -BR_x^{-1} B^T \\ -C^T \left( Q - Q D R_x^{-1} D^T Q \right) C & -\left( A - BR_x^{-1} D^T Q C \right)^T \end{bmatrix} \ge 0. \qquad (7.67)
\]
Note that X is such that $A + BF$ is Hurwitz. By (7.57), the "denominator" is given by

(7.68)

Note that . Hence, $\Phi(s) = M^{-1}(s) \in GH_\infty$ is required.
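The construction above can be sketched numerically. This is a minimal SciPy-based sketch with hypothetical stable plant data; it solves (7.66) for X (the cross term is handled by the `s` argument of `solve_continuous_are`), forms F and W by (7.64)–(7.65), and — assuming the standard coprime-factor realization M = (A+BF, BW, F, W), so that M⁻¹ = (A, B, −W⁻¹F, W⁻¹) — checks R + P*(jω)QP(jω) = Φ*(jω)Φ(jω) at a few frequencies.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, sqrtm

def case1_spectral_factor(A, B, C, D, Q, R):
    """Outer Phi with R + P~ Q P = Phi~ Phi for stable P = (A, B, C, D),
    per (7.64)-(7.67); returns Phi = M^{-1} = (A, B, -W^{-1}F, W^{-1})."""
    Rx = R + D.T @ Q @ D
    # (7.66) written as an ARE with cross term S = C^T Q D
    X = solve_continuous_are(A, B, C.T @ Q @ C, Rx, s=C.T @ Q @ D)
    F = -np.linalg.solve(Rx, B.T @ X + D.T @ Q @ C)
    Rx_half = np.real(sqrtm(Rx))               # W^{-1} = Rx^{1/2}
    return A, B, -Rx_half @ F, Rx_half

def fr(Af, Bf, Cf, Df, w):
    n = Af.shape[0]
    return Cf @ np.linalg.solve(1j * w * np.eye(n) - Af, Bf) + Df

# hypothetical stable SISO data
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.5]])
Q, R = np.eye(1), np.eye(1)

Ap, Bp, Cp, Dp = case1_spectral_factor(A, B, C, D, Q, R)
for w in (0.0, 1.0, 5.0):
    P = fr(A, B, C, D, w)
    Phi = fr(Ap, Bp, Cp, Dp, w)
    assert np.allclose(R + P.conj().T @ Q @ P, Phi.conj().T @ Phi, atol=1e-8)
```

Since Φ shares the state matrix A of the stable plant and A + BF is Hurwitz, both Φ and Φ⁻¹ are stable, as Definition 7.2 requires.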

Dually, the spectral factorization of Case (I) is to find an outer matrix Φ(s) such that

$$R + P(s)\,Q\,P^{\sim}(s) = \Phi(s)\Phi^{\sim}(s), \qquad (7.69)$$

where Q = Q^T and R = R^T. Similarly, a dual procedure can be followed starting from $P(s) = \tilde{M}^{-1}(s)\tilde{N}(s)$, $\Sigma = \begin{bmatrix} R & 0\\ 0 & Q\end{bmatrix}$, and $\hat{\Sigma} = I$. Define R_y = (R + DQD^T) > 0, and

$$\tilde{W} = R_y^{-1/2}, \qquad (7.70)$$

$$H = -\big(YC^{T} + BQD^{T}\big)R_y^{-1}, \qquad (7.71)$$

where Y solves

$$\big(A - BQD^{T}R_y^{-1}C\big)Y + Y\big(A - BQD^{T}R_y^{-1}C\big)^{T} - YC^{T}R_y^{-1}CY + B\big(Q - QD^{T}R_y^{-1}DQ\big)B^{T} = 0. \qquad (7.72)$$



That is,

$$Y = \mathrm{Ric}\begin{bmatrix} \big(A - BQD^{T}R_y^{-1}C\big)^{T} & -C^{T}R_y^{-1}C\\ -B\big(Q - QD^{T}R_y^{-1}DQ\big)B^{T} & -\big(A - BQD^{T}R_y^{-1}C\big)\end{bmatrix} \geq 0. \qquad (7.73)$$

Note that Y is chosen such that A + HC is Hurwitz. By (7.58), $\tilde{M}(s)$ is given by

(7.74)

Note that $\tilde{M}(s)$ is outer, and $\Phi(s) = \tilde{M}^{-1}(s) \in GH_{\infty}$, as required.

The above is summarized in the lemma below.

Lemma 7.4 Let Q = Q^T and R = R^T. There exists a right coprime factorization of P(s) = N(s)M^{-1}(s) given by (7.57), where

$$W = R_x^{-1/2}, \qquad (7.75)$$

$$R_x = R + D^{T}QD, \qquad (7.76)$$

$$F = -R_x^{-1}\big(B^{T}X + D^{T}QC\big), \qquad (7.77)$$

$$X = \mathrm{Ric}\begin{bmatrix} A - BR_x^{-1}D^{T}QC & -BR_x^{-1}B^{T}\\ -C^{T}\big(Q - QDR_x^{-1}D^{T}Q\big)C & -\big(A - BR_x^{-1}D^{T}QC\big)^{T}\end{bmatrix}. \qquad (7.78)$$

Additionally, an outer function satisfying R + P^∼(s)QP(s) = Φ^∼(s)Φ(s) is given by

(7.79)

Moreover, there exists a left coprime factorization of $P(s) = \tilde{M}^{-1}(s)\tilde{N}(s)$ given by (7.58), where

$$\tilde{W} = R_y^{-1/2}, \qquad (7.80)$$

$$R_y = R + DQD^{T}, \qquad (7.81)$$

$$H = -\big(YC^{T} + BQD^{T}\big)R_y^{-1}, \qquad (7.82)$$

$$Y = \mathrm{Ric}\begin{bmatrix} \big(A - BQD^{T}R_y^{-1}C\big)^{T} & -C^{T}R_y^{-1}C\\ -B\big(Q - QD^{T}R_y^{-1}DQ\big)B^{T} & -\big(A - BQD^{T}R_y^{-1}C\big)\end{bmatrix}. \qquad (7.83)$$

Furthermore, an outer function such that R + P(s)QP^∼(s) = Φ(s)Φ^∼(s) is given by

(7.84)


In Case (I), different R and Q correspond to different applications. Four
applications of Case (I) are noted in the following.

7.4.1.1 Normalized Coprime Factorization


Let $P(s) = N(s)M^{-1}(s) = \tilde{M}^{-1}(s)\tilde{N}(s)$ be a right (left) coprime factorization. Here it should be noted that P(s) is general, not assumed to be stable. One needs specific coprime factors which satisfy

$$M^{\sim}(s)M(s) + N^{\sim}(s)N(s) = I \qquad (7.85)$$

or

$$\tilde{M}(s)\tilde{M}^{\sim}(s) + \tilde{N}(s)\tilde{N}^{\sim}(s) = I. \qquad (7.86)$$

Such factorizations are called normalized coprime factorizations. This is equivalent to Case (I) by setting R = I and Q = I, i.e., $\Sigma = \begin{bmatrix} I & 0\\ 0 & I\end{bmatrix}$ and $\hat{\Sigma} = I$. By Lemma 7.4, there then exists a normalized right coprime factorization of P(s) = N(s)M^{-1}(s) given by (7.57), where

$$F = -R_x^{-1}\big(B^{T}X + D^{T}C\big), \qquad (7.87)$$

$$R_x = I + D^{T}D, \qquad (7.88)$$

$$W^{T}\big(I + D^{T}D\big)W = I, \qquad (7.89)$$

and X ≥ 0 is the solution of the following ARE:

$$\big(A - BR_x^{-1}D^{T}C\big)^{T}X + X\big(A - BR_x^{-1}D^{T}C\big) - XBR_x^{-1}B^{T}X + C^{T}\big(I - DR_x^{-1}D^{T}\big)C = 0. \qquad (7.90)$$

In this case, the inverse of M(s) is not necessarily stable for an unstable P(s), which is not required in this application.
In addition, a normalized left coprime factorization $P(s) = \tilde{M}^{-1}(s)\tilde{N}(s)$ can be found by (7.58), where

$$H = -\big(YC^{T} + BD^{T}\big)R_y^{-1}, \qquad (7.91)$$

$$R_y = I + DD^{T}, \qquad (7.92)$$

$$\tilde{W}^{T}\big(I + DD^{T}\big)\tilde{W} = I, \qquad (7.93)$$

and Y ≥ 0 is the solution of the following ARE:

$$\big(A - BD^{T}R_y^{-1}C\big)Y + Y\big(A - BD^{T}R_y^{-1}C\big)^{T} - YC^{T}R_y^{-1}CY + B\big(I - D^{T}R_y^{-1}D\big)B^{T} = 0. \qquad (7.94)$$

For the case D = 0, the ARE of (7.90) becomes

$$A^{T}X + XA - XBB^{T}X + C^{T}C = 0. \qquad (7.95)$$

Dually, the ARE of (7.94) becomes

$$YA^{T} + AY - YC^{T}CY + BB^{T} = 0. \qquad (7.96)$$
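For the D = 0 case, the computation can be sketched numerically. The snippet below is an illustrative check (not from the book, using SciPy in place of the book's MATLAB): it solves (7.95) for the unstable plant P(s) = 1/(s−1) and verifies the normalization (7.85) on the imaginary axis.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Unstable plant P(s) = 1/(s-1):  A = 1, B = 1, C = 1, D = 0
A = np.array([[1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])

# ARE (7.95): A'X + XA - XBB'X + C'C = 0
X = solve_continuous_are(A, B, C.T @ C, np.eye(1))
F = -(B.T @ X)                  # (7.87) with D = 0, R_x = I
Acl = A + B @ F                 # must be Hurwitz

# Frequency response of a realization [Ad, Bd; Cd, Dd]
tf = lambda Ad, Bd, Cd, Dd, s: (Cd @ np.linalg.solve(s*np.eye(len(Ad)) - Ad, Bd) + Dd)[0, 0]
for w in (0.0, 1.0, 5.0):
    s = 1j*w
    M = tf(Acl, B, F, np.eye(1), s)           # M = [A+BF, B; F, I]
    N = tf(Acl, B, C, np.zeros((1, 1)), s)    # N = [A+BF, B; C, 0]
    assert abs(abs(M)**2 + abs(N)**2 - 1) < 1e-10   # (7.85) on s = jw
```

Here X = 1 + √2, so A + BF = −√2 and the factors are M(s) = (s−1)/(s+√2), N(s) = 1/(s+√2).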

Example 7.3 Consider P(s) = (s−1)/(s+3); use spectral factorization to find the normalized right coprime factors of P(s).

Applying the method discussed in Case (I), P(s) = 1 − 4/(s+3) has the realization A = −3, B = 1, C = −4, D = 1, with R = Q = 1. Hence,

$$R_x = R + D^{T}QD = 2, \quad\text{and}\quad W = R_x^{-1/2} = \frac{1}{\sqrt{2}}.$$

From

$$\big(A - BR_x^{-1}D^{T}QC\big)^{T}X + X\big(A - BR_x^{-1}D^{T}QC\big) - XBR_x^{-1}B^{T}X + C^{T}\big(Q - QDR_x^{-1}D^{T}Q\big)C = 0,$$

one has X = 2.4721 and F = −R_x^{-1}(B^TX + D^TQC) = 0.7639. Then,

$$A + BF = -2.2361 < 0$$

is Hurwitz. Consequently, the coprime factors of (7.57) yield M(s) = (s+3)/(√2(s+√5)) and N(s) = (s−1)/(√2(s+√5)). It can be seen that M^∼(s)M(s) + N^∼(s)N(s) = 1. □
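The numbers in Example 7.3 can be reproduced with a few lines of Python (an illustrative check using SciPy rather than the book's MATLAB; the realization A = −3, B = 1, C = −4, D = 1 follows from P(s) = 1 − 4/(s+3)):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A, B, C, D = -3.0, 1.0, -4.0, 1.0   # P(s) = (s-1)/(s+3)
R, Q = 1.0, 1.0
Rx = R + D*Q*D                       # = 2
Abar = A - B*(1/Rx)*D*Q*C            # A - B Rx^{-1} D'QC
Qbar = C*(Q - Q*D*(1/Rx)*D*Q)*C      # C'(Q - Q D Rx^{-1} D'Q) C
X = solve_continuous_are(np.array([[Abar]]), np.array([[B]]),
                         np.array([[Qbar]]), np.array([[Rx]]))[0, 0]
F = -(1/Rx)*(B*X + D*Q*C)
print(round(X, 4), round(F, 4), round(A + B*F, 4))  # 2.4721 0.7639 -2.2361
```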


Example 7.4 Using MATLAB, determine the state-space realization of the normal-

ized coprime factorizations of a given plant

The following MATLAB code is an example for readers’ reference.

clear all; clc;
disp('Normalized Coprime Factorization')
A = input('A:');
B = input('B:');
C = input('C:');
D = input('D:');
R = 1;
Q = 1;
sys = ss(A,B,C,D);
% Right coprime factorization
Rx = (R + D'*Q*D);
W = Rx^(-1/2);
Hx_A = (A - B*Rx^(-1)*D'*Q*C);
Hx_B = B;
Hx_C = (C'*(Q - Q*D*Rx^(-1)*D'*Q)*C);
[x,l,g] = care(Hx_A, Hx_B, Hx_C, Rx);   % solves the ARE (7.90)
F = -Rx^(-1)*(B'*x + D'*Q*C);
disp('eigenvalues of A+B*F')
eig(A + B*F)
disp('Value of X')
x
M_inv = ss(A, B, Rx^(-1/2)*(B'*x + D'*Q*C), Rx^(1/2));
disp('State-space of M')
M = inv(M_inv)
disp('State-space of N')
N = ss(A + B*F, B*W, C + D*F, D*W)
% Left coprime factorization
Ry = (R + D*Q*D');
W_w = Ry^(-1/2);
Hy_A = (A - B*Q*D'*Ry^(-1)*C)';          % transposed for the dual (filtering) ARE
Hy_B = C';
Hy_C = B*(Q - Q*D'*Ry^(-1)*D*Q)*B';
[y,ll,gg] = care(Hy_A, Hy_B, Hy_C, Ry);  % solves the ARE (7.94)
H = -(y*C' + B*Q*D')*inv(Ry);
Mw_inv = ss(A, (y*C' + B*Q*D')*Ry^(-1/2), C, Ry^(1/2));
disp('State-space of Mw')
Mw = inv(Mw_inv)
disp('State-space of Nw')
Nw = ss(A + H*C, B + H*D, W_w*C, W_w*D)  □

Fig. 7.4 LQR problem with D = 0

7.4.1.2 Optimal Linear Quadratic Regulation

As shown in Fig. 7.4, the linear quadratic regulation (LQR) problem is to find a stabilizing state feedback gain F that minimizes the deterministic cost function

$$J_{lqr} = \int_0^{\infty}\big(y^{T}(t)Qy(t) + u^{T}(t)Ru(t)\big)\,dt, \qquad (7.97)$$

where R > 0 and Q ≥ 0 are the weights. From Fig. 7.4, the right coprime factorization $\begin{bmatrix} u\\ y\end{bmatrix} = \begin{bmatrix} M\\ N\end{bmatrix}u'$ is obtained. Let

$$\begin{bmatrix} u_{lqr}\\ y_{lqr}\end{bmatrix} = \begin{bmatrix} R^{1/2}M\\ Q^{1/2}N\end{bmatrix}u'. \qquad (7.98)$$

It can be seen in Chap. 8 that the optimal LQR control problem is equivalent to solving for particular coprime factors such that

$$M^{\sim}(s)RM(s) + N^{\sim}(s)QN(s) = I. \qquad (7.99)$$

Thus, the optimal state feedback gain F and feedforward gain W, which minimize the cost function, can be obtained such that $\begin{bmatrix} R^{1/2}M\\ Q^{1/2}N\end{bmatrix}$ is inner. Actually, (7.99) is equivalent to forming the weighted all-pass function (7.46) by setting

 
$\Sigma = \begin{bmatrix} R & 0\\ 0 & Q\end{bmatrix}$ and $\hat{\Sigma} = I$. This optimal LQR problem for any initial state is thus, in fact, an application of Lemma 7.4. Again, P(s) is general and not assumed to be stable.
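As a concrete illustration of the D = 0 case, the sketch below (an illustrative example, not from the book; the plant is an assumed double integrator with Q = R = I, and SciPy is used in place of the book's MATLAB) computes the optimal gain F = −R^{-1}B^TX from the ARE and confirms that A + BF is Hurwitz.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: x1' = x2, x2' = u, y = x  (illustrative plant, D = 0)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                      # output weight
R = np.eye(1)                      # input weight
X = solve_continuous_are(A, B, Q, R)
F = -np.linalg.solve(R, B.T @ X)   # F = -R^{-1} B^T X, cf. (7.77) with D = 0
assert np.all(np.linalg.eigvals(A + B @ F).real < 0)   # stabilizing
```

For this plant the known closed-form answer is F = −[1, √3], which the assertion below confirms.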

7.4.1.3 Inner-Outer Factorization

For P(s) ∈ RH∞, the inner-outer factorization P(s) = N(s)Φ(s) is to find Φ(s) ∈ GH∞ such that P^∼(s)P(s) = Φ^∼(s)Φ(s). Let P(s) = N(s)M^{-1}(s) be a coprime factorization of P(s). Solving the inner-outer factorization problem is equivalent to setting R = 0 and Q = I in (7.59), i.e., $\Sigma = \begin{bmatrix} 0 & 0\\ 0 & I\end{bmatrix}$ and $\hat{\Sigma} = I$. By Lemma 7.4 with R = 0 and Q = I, one can have the state-space solution of the coprime factorization P(s) = N(s)M^{-1}(s) such that

$$N^{\sim}(s)N(s) = I. \qquad (7.100)$$

So, N(s) is the inner part and Φ(s) = M^{-1}(s) ∈ GH∞ is the outer part of P(s) ∈ RH∞. In particular, M(s) is an outer function.
Example 7.5 Given P(s) = (s−2)/(s+5), compute the inner-outer factorization such that P^∼(s)P(s) = Φ^∼(s)Φ(s) using spectral factorizations, and use the result to construct the right coprime factors of P(s).

(1) Compute Φ(s) ∈ GH∞ directly. It is evident that

$$P^{\sim}(s)P(s) = \frac{s+2}{s-5}\cdot\frac{s-2}{s+5} = \frac{-s+2}{-s+5}\cdot\frac{s+2}{s+5} = \Phi^{\sim}(s)\Phi(s),$$

where Φ(s) = (s+2)/(s+5).
(2) Apply the method noted in Case (I), where P(s) = 1 − 7/(s+5) has the realization A = −5, B = 1, C = −7, D = 1, with R = 0 and Q = 1. Hence,

$$R_x = R + D^{T}QD = 1, \quad\text{and}\quad W = R_x^{-1/2} = 1.$$

From

$$\big(A - BR_x^{-1}D^{T}QC\big)^{T}X + X\big(A - BR_x^{-1}D^{T}QC\big) - XBR_x^{-1}B^{T}X + C^{T}\big(Q - QDR_x^{-1}D^{T}Q\big)C = 0,$$

one has X = 4 and

$$F = -R_x^{-1}\big(B^{T}X + D^{T}QC\big) = -\big(1\cdot 4 + 1\cdot 1\cdot(-7)\big) = 3.$$

Then,

$$A + BF = -5 + 1\cdot 3 = -2 < 0$$

is Hurwitz. Then, by (7.57), an outer function such that (M^{-1}(s))^∼M^{-1}(s) = Φ^∼(s)Φ(s) is given by Φ(s) = M^{-1}(s) = (s+2)/(s+5). It is apparent that results (1) and (2) are the same. □
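The factorization in Example 7.5 can be verified numerically. The following sketch (illustrative, not from the book) checks P^∼P = Φ^∼Φ on the imaginary axis, where the identity reads |P(jω)|² = |Φ(jω)|², and confirms that the quotient N = PΦ^{-1} = (s−2)/(s+2) is all-pass (inner).

```python
import numpy as np

P   = lambda s: (s - 2)/(s + 5)      # given stable plant
Phi = lambda s: (s + 2)/(s + 5)      # outer factor from Example 7.5
N   = lambda s: P(s)/Phi(s)          # inner factor N = P * Phi^{-1} = (s-2)/(s+2)

for w in (0.0, 0.7, 2.0, 10.0):
    s = 1j*w
    assert abs(abs(P(s))**2 - abs(Phi(s))**2) < 1e-12   # P~P = Phi~Phi on s = jw
    assert abs(abs(N(s)) - 1.0) < 1e-12                 # N is all-pass (inner)
```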

7.4.1.4 Bounded Real Lemma

Suppose that P(s) ∈ RH∞ and ‖P(s)‖∞ < γ. Consider the spectral factorization

$$\gamma^{2}I - P^{\sim}(s)P(s) = \Phi^{\sim}(s)\Phi(s). \qquad (7.101)$$

This problem is equivalent to setting R = γ²I and Q = −I in (7.59), i.e., $\Sigma = \begin{bmatrix} \gamma^{2}I & 0\\ 0 & -I\end{bmatrix}$ and $\hat{\Sigma} = I$. By Lemma 7.4 with R = γ²I and Q = −I, one can obtain the state-space solution of the coprime factorization P(s) = N(s)M^{-1}(s) such that

$$\gamma^{2}M^{\sim}(s)M(s) - N^{\sim}(s)N(s) = I. \qquad (7.102)$$

Then, Φ(s) is given by (7.68),

(7.103)

where

$$R_x = \gamma^{2}I - D^{T}D, \qquad (7.104)$$

and

$$X = \mathrm{Ric}\begin{bmatrix} A + BR_x^{-1}D^{T}C & -BR_x^{-1}B^{T}\\ C^{T}\big(I + DR_x^{-1}D^{T}\big)C & -\big(A + BR_x^{-1}D^{T}C\big)^{T}\end{bmatrix} \leq 0. \qquad (7.105)$$

The spectral factorization of (7.101) is actually the well-known and widely applied bounded real lemma (BRL) [4]. The above deduction shows that the BRL can be proved via a spectral factorization, which is solved by a unified approach using the weighted all-pass concept and coprime factorization.
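To make the BRL factorization concrete, the sketch below (an illustrative scalar example, P(s) = 1/(s+1) with γ = 2, not taken from the book) solves the scalar version of the ARE, builds M and N per (7.57), and checks both (7.102) and (7.101) on the imaginary axis.

```python
import numpy as np

A, B, C, D = -1.0, 1.0, 1.0, 0.0     # P(s) = 1/(s+1), ||P||_inf = 1
gamma = 2.0
Rx = gamma**2 - D*D                   # (7.104)
# Scalar ARE (7.66) with R = gamma^2, Q = -1:  2*A*X - (B^2/Rx)*X^2 - C^2 = 0
X = next(x for x in np.roots([-B*B/Rx, 2*A, -C*C]) if A - B*B*x/Rx < 0)
F = -(B*X)/Rx                         # (7.65) with D = 0
W = Rx**-0.5
Acl = A + B*F
M   = lambda s: F*B*W/(s - Acl) + W   # M = [A+BF, BW; F, W]
N   = lambda s: C*B*W/(s - Acl)       # N = [A+BF, BW; C+DF, DW], D = 0
P   = lambda s: C*B/(s - A)
Phi = lambda s: 1.0/M(s)
for w in (0.0, 1.0, 10.0):
    s = 1j*w
    assert abs(gamma**2*abs(M(s))**2 - abs(N(s))**2 - 1) < 1e-9    # (7.102)
    assert abs(gamma**2 - abs(P(s))**2 - abs(Phi(s))**2) < 1e-9    # (7.101)
```

Note that the stabilizing solution here is X = −4 + 2√3 ≤ 0, consistent with the Q = −I weighting.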

7.4.2 Spectral Factorization Case II

The second case of spectral factorization problems is to find an outer matrix Φ(s) such that, for R = R^T > 0,

$$R + P^{\sim}(s) + P(s) = \Phi^{\sim}(s)\Phi(s). \qquad (7.106)$$

For P(s) = N(s)M^{-1}(s), $\Sigma = \begin{bmatrix} R & I\\ I & 0\end{bmatrix}$, and $\hat{\Sigma} = I$, one finds that the factorization can be solved if the coprime factorization is such that $P_s(s) = \begin{bmatrix} M\\ N\end{bmatrix}$ is weighted all-pass, with M being outer, with regard to the so-defined Σ and Σ̂, i.e., the following equation holds:

$$\begin{bmatrix} M\\ N\end{bmatrix}^{\sim}\begin{bmatrix} R & I\\ I & 0\end{bmatrix}\begin{bmatrix} M\\ N\end{bmatrix} = I. \qquad (7.107)$$

Then, because

$$\begin{aligned}
R + P^{\sim}(s) + P(s) &= R + \big(M^{-1}(s)\big)^{\sim}N^{\sim}(s) + N(s)M^{-1}(s)\\
&= \big(M^{\sim}(s)\big)^{-1}\big(M^{\sim}(s)RM(s) + N^{\sim}(s)M(s) + M^{\sim}(s)N(s)\big)M^{-1}(s)\\
&= \big(M^{-1}(s)\big)^{\sim}M^{-1}(s),
\end{aligned}$$

Φ(s) = M^{-1}(s) is a required spectral factor provided that M(s) is outer. Hence, similar to Case (I), one can define

$$R_x = R + D^{T} + D > 0, \qquad (7.108)$$

and

$$W = R_x^{-1/2}, \qquad (7.109)$$

$$F = -R_x^{-1}\big(B^{T}X + C\big), \qquad (7.110)$$

where

$$X = \mathrm{Ric}\begin{bmatrix} A - BR_x^{-1}C & -BR_x^{-1}B^{T}\\ C^{T}R_x^{-1}C & -\big(A - BR_x^{-1}C\big)^{T}\end{bmatrix}. \qquad (7.111)$$

Then, the "denominator" which makes P_s(s) weighted all-pass is

(7.112)

and

(7.113)

Because both M(s) and M^{-1}(s) are stable, Φ(s) = M^{-1}(s) is a solution to (7.106). Note that for the case R = 0, P^∼(s) + P(s) = Φ^∼(s)Φ(s) is the spectral factorization of a strictly positive real matrix. Readers can refer to the definitions of positive real functions in Chap. 3.
Also, from $P(s) = \tilde{M}^{-1}(s)\tilde{N}(s)$, $\Sigma = \begin{bmatrix} R & I\\ I & 0\end{bmatrix}$, and $\hat{\Sigma} = I$, one finds

$$\begin{bmatrix} \tilde{M} & \tilde{N}\end{bmatrix}\begin{bmatrix} R & I\\ I & 0\end{bmatrix}\begin{bmatrix} \tilde{M} & \tilde{N}\end{bmatrix}^{\sim} = I. \qquad (7.114)$$

H and W̃ can be found to make the above viable, i.e.,

$$\tilde{M}(s)R\tilde{M}^{\sim}(s) + \tilde{N}(s)\tilde{M}^{\sim}(s) + \tilde{M}(s)\tilde{N}^{\sim}(s) = I. \qquad (7.115)$$

Hence,

$$\tilde{M}^{-1}(s)\big(\tilde{M}(s)R\tilde{M}^{\sim}(s) + \tilde{N}(s)\tilde{M}^{\sim}(s) + \tilde{M}(s)\tilde{N}^{\sim}(s)\big)\big(\tilde{M}^{-1}(s)\big)^{\sim} = \tilde{M}^{-1}(s)\big(\tilde{M}^{-1}(s)\big)^{\sim}$$

$$\Rightarrow\quad R + P(s) + P^{\sim}(s) = \tilde{M}^{-1}(s)\big(\tilde{M}^{-1}(s)\big)^{\sim} = \Phi(s)\Phi^{\sim}(s). \qquad (7.116)$$

 
Suppose that R + P(s) + P^∼(s) > 0 (positive real), ∀ s = jω. With $\Sigma = \begin{bmatrix} R & I\\ I & 0\end{bmatrix}$ and the same manipulation, one gathers

$$\tilde{W} = R_y^{-1/2}, \qquad (7.117)$$

$$H = -\big(YC^{T} + B\big)R_y^{-1}, \qquad (7.118)$$

$$R_y = R + D + D^{T} > 0, \qquad (7.119)$$

and

$$Y = \mathrm{Ric}\begin{bmatrix} \big(A - BR_y^{-1}C\big)^{T} & -C^{T}R_y^{-1}C\\ BR_y^{-1}B^{T} & -\big(A - BR_y^{-1}C\big)\end{bmatrix}. \qquad (7.120)$$

Then, an outer function such that

$$R + P(s) + P^{\sim}(s) = \Phi(s)\Phi^{\sim}(s) \qquad (7.121)$$

is given by

(7.122)

The above is summarized in the next lemma for the convenience of future reference.

Lemma 7.5 For R > 0, there exists a right coprime factorization of P(s) = N(s)M^{-1}(s) given by (7.57), where

$$W = R_x^{-1/2}, \qquad (7.123)$$

$$F = -R_x^{-1}\big(B^{T}X + C\big), \qquad (7.124)$$

$$R_x = R + D^{T} + D > 0, \qquad (7.125)$$

and

$$X = \mathrm{Ric}\begin{bmatrix} A - BR_x^{-1}C & -BR_x^{-1}B^{T}\\ C^{T}R_x^{-1}C & -\big(A - BR_x^{-1}C\big)^{T}\end{bmatrix}. \qquad (7.126)$$

Additionally, an outer function such that R + P^∼(s) + P(s) = Φ^∼(s)Φ(s) is given by

(7.127)

Furthermore, there exists a left coprime factorization of $P(s) = \tilde{M}^{-1}(s)\tilde{N}(s)$ given by (7.58), where

$$\tilde{W} = R_y^{-1/2}, \qquad (7.128)$$

$$H = -\big(YC^{T} + B\big)R_y^{-1}, \qquad (7.129)$$

$$R_y = R + D + D^{T} > 0, \qquad (7.130)$$

and

$$Y = \mathrm{Ric}\begin{bmatrix} \big(A - BR_y^{-1}C\big)^{T} & -C^{T}R_y^{-1}C\\ BR_y^{-1}B^{T} & -\big(A - BR_y^{-1}C\big)\end{bmatrix}. \qquad (7.131)$$

Moreover, an outer function Φ(s) such that R + P(s) + P^∼(s) = Φ(s)Φ^∼(s) is given by

(7.132)



Example 7.6 Given P(s) = (s−4)/(s+8) and R = 2, use spectral factorization to find an outer matrix Φ(s) such that R + P^∼(s) + P(s) = Φ^∼(s)Φ(s).

(1) Compute Φ(s) ∈ GH∞ directly. It is evident that

$$R + P^{\sim}(s) + P(s) = 2 + \frac{s+4}{s-8} + \frac{s-4}{s+8} = \frac{4s^{2}-64}{s^{2}-64} = \frac{2(-s+4)}{-s+8}\cdot\frac{2(s+4)}{s+8} = \Phi^{\sim}(s)\Phi(s),$$

where Φ(s) = 2(s+4)/(s+8).

(2) Apply the method noted in Case (II), where P(s) = 1 − 12/(s+8) has the realization A = −8, B = 1, C = −12, D = 1. Let R = 2 = R^T > 0; then

$$R_x = R + D^{T} + D = (2 + 1 + 1) = 4 > 0,$$

$$W = R_x^{-1/2} = 4^{-1/2} = \frac{1}{2}.$$

From

$$\big(A - BR_x^{-1}C\big)^{T}X + X\big(A - BR_x^{-1}C\big) - XBR_x^{-1}B^{T}X - C^{T}R_x^{-1}C = 0,$$

one finds X ∈ {−36, −4}. When X = −4,

$$F = -R_x^{-1}\big(B^{T}X + C\big) = -\frac{1}{4}\big(1\cdot(-4) + (-12)\big) = 4.$$

Then,

$$A + BF = -8 + 1\cdot 4 = -4 < 0$$

is Hurwitz. An outer function such that 2 + P^∼(s) + P(s) = Φ^∼(s)Φ(s) is given by Φ(s) = M^{-1}(s) = 2(s+4)/(s+8). Consequently, results (1) and (2) are the same. □
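Example 7.6 can also be checked on the imaginary axis, where (7.106) reads R + 2·Re P(jω) = |Φ(jω)|². A short illustrative sketch (not from the book):

```python
import numpy as np

P   = lambda s: (s - 4)/(s + 8)
Phi = lambda s: 2*(s + 4)/(s + 8)
R = 2.0
for w in (0.0, 1.0, 4.0, 20.0):
    s = 1j*w
    lhs = R + 2*np.real(P(s))        # R + P~(s) + P(s) on s = jw
    rhs = abs(Phi(s))**2
    assert abs(lhs - rhs) < 1e-12
```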



7.4.3 Spectral Factorization Case III

 
Let $P(s) = \begin{bmatrix} P_{11}(s) & P_{12}(s)\\ P_{21}(s) & P_{22}(s)\end{bmatrix} \in RH_{\infty}^{(n_1+n_2)\times(k_1+k_2)}$ be partitioned with n₁ ≥ k₁ and n₂ = k₂. The final case, the so-called J-spectral factorization, is to find a matrix function $\Phi(s) \in GH_{\infty}^{(k_1+k_2)\times(k_1+k_2)}$ such that

$$P^{\sim}(s)J_1P(s) = \Phi^{\sim}(s)J_2\Phi(s), \qquad (7.133)$$

where $J_1 = \begin{bmatrix} I_{n_1} & 0\\ 0 & -I_{n_2}\end{bmatrix}$ and $J_2 = \begin{bmatrix} I_{k_1} & 0\\ 0 & -I_{k_2}\end{bmatrix}$.

By defining $\Sigma = \begin{bmatrix} 0 & 0\\ 0 & J_1\end{bmatrix}$, $\hat{\Sigma} = J_2$, one may see that the factorization can be solved if the coprime factorization P = NM^{-1} is such that $\begin{bmatrix} M\\ N\end{bmatrix}$ is a weighted all-pass function and M is outer, i.e., the following equation holds:

$$\begin{bmatrix} M(s)\\ N(s)\end{bmatrix}^{\sim}\begin{bmatrix} 0 & 0\\ 0 & J_1\end{bmatrix}\begin{bmatrix} M(s)\\ N(s)\end{bmatrix} = J_2. \qquad (7.134)$$

From (7.134), one has, for P = NM^{-1},

$$\big(NM^{-1}\big)^{\sim}J_1\big(NM^{-1}\big) = \big(M^{-1}\big)^{\sim}J_2\,M^{-1}. \qquad (7.135)$$

It is clear that if the coprime factorization of P(s) is found such that M(s) is outer, then Φ = M^{-1} would be a solution to this spectral factorization problem. Using the state-space formula in (7.57), $\begin{bmatrix} M(s)\\ N(s)\end{bmatrix}$ is weighted all-pass with the specified Σ and Σ̂ if and only if there exists a nonsingular matrix W satisfying W^TR_xW = J₂, where R_x = D^TJ₁D, and the corresponding ARE is

$$\big(A - BR_x^{-1}D^{T}J_1C\big)^{T}X + X\big(A - BR_x^{-1}D^{T}J_1C\big) - XBR_x^{-1}B^{T}X + C^{T}\big(J_1 - J_1DR_x^{-1}D^{T}J_1\big)C = 0. \qquad (7.136)$$

Hence, with the stabilizing solution

$$X = \mathrm{Ric}\begin{bmatrix} A - BR_x^{-1}D^{T}J_1C & -BR_x^{-1}B^{T}\\ -C^{T}\big(J_1 - J_1DR_x^{-1}D^{T}J_1\big)C & -\big(A - BR_x^{-1}D^{T}J_1C\big)^{T}\end{bmatrix}, \qquad (7.137)$$

an outer function satisfying

$$P^{\sim}(s)J_1P(s) = \Phi^{\sim}(s)J_2\Phi(s) \qquad (7.138)$$

is given by

(7.139)

where

$$F = -R_x^{-1}\big(B^{T}X + D^{T}J_1C\big). \qquad (7.140)$$

Note that a solution W of W^TR_xW = J₂ is given by $W = \begin{bmatrix} W_{11} & 0\\ W_{21} & W_{22}\end{bmatrix}$, where

$$W_{22} = \big(D_{22}^{T}D_{22} - D_{12}^{T}D_{12}\big)^{-1/2}, \qquad (7.141)$$

$$W_{11} = \big[D_{11}^{T}D_{11} - D_{21}^{T}D_{21} + \big(D_{12}^{T}D_{11} - D_{22}^{T}D_{21}\big)^{T}W_{22}^{2}\big(D_{12}^{T}D_{11} - D_{22}^{T}D_{21}\big)\big]^{-1/2}, \quad\text{and} \qquad (7.142)$$

$$W_{21} = W_{22}^{2}\big(D_{12}^{T}D_{11} - D_{22}^{T}D_{21}\big)W_{11}. \qquad (7.143)$$

Clearly, if D₂₁ = 0 and D₂₂^TD₂₂ > D₁₂^TD₁₂, then W is nonsingular.
 
Dually, let $P(s) = \begin{bmatrix} P_{11}(s) & P_{12}(s)\\ P_{21}(s) & P_{22}(s)\end{bmatrix} \in RH_{\infty}^{(n_1+n_2)\times(k_1+k_2)}$ be partitioned with n₁ = k₁ and n₂ ≤ k₂. One needs to find a matrix function $\Phi(s) \in GH_{\infty}^{(n_1+n_2)\times(n_1+n_2)}$ satisfying

$$P(s)J_2P^{\sim}(s) = \Phi(s)J_1\Phi^{\sim}(s). \qquad (7.144)$$

Also, by defining $\Sigma = \begin{bmatrix} 0 & 0\\ 0 & J_2\end{bmatrix}$, $\hat{\Sigma} = J_1$, one may see that the factorization can be solved if the left coprime factorization $P(s) = \tilde{M}^{-1}(s)\tilde{N}(s)$ is such that $\begin{bmatrix} \tilde{M} & \tilde{N}\end{bmatrix}$ is a weighted all-pass function with M̃ being outer, i.e., the following equation holds:

$$\begin{bmatrix} \tilde{M} & \tilde{N}\end{bmatrix}\begin{bmatrix} 0 & 0\\ 0 & J_2\end{bmatrix}\begin{bmatrix} \tilde{M} & \tilde{N}\end{bmatrix}^{\sim} = J_1. \qquad (7.145)$$

From (7.145), one obtains

$$\tilde{M}^{-1}\big(\tilde{N}J_2\tilde{N}^{\sim}\big)\big(\tilde{M}^{-1}\big)^{\sim} = \Phi J_1\Phi^{\sim}. \qquad (7.146)$$

$P = \tilde{M}^{-1}\tilde{N}$ can be found such that $\tilde{N}(s)J_2\tilde{N}^{\sim}(s) = J_1$ (dual J-lossless) if and only if there exists a nonsingular matrix W̃ satisfying W̃R_yW̃^T = J₁, where R_y = DJ₂D^T, and the ARE is

$$\big(A - BJ_2D^{T}R_y^{-1}C\big)Y + Y\big(A - BJ_2D^{T}R_y^{-1}C\big)^{T} - YC^{T}R_y^{-1}CY + B\big(J_2 - J_2D^{T}R_y^{-1}DJ_2\big)B^{T} = 0. \qquad (7.147)$$

Hence,

$$Y = \mathrm{Ric}\begin{bmatrix} \big(A - BJ_2D^{T}R_y^{-1}C\big)^{T} & -C^{T}R_y^{-1}C\\ -B\big(J_2 - J_2D^{T}R_y^{-1}DJ_2\big)B^{T} & -\big(A - BJ_2D^{T}R_y^{-1}C\big)\end{bmatrix} \qquad (7.148)$$

has a stabilizing solution. Then, an outer function satisfying

$$P(s)J_2P^{\sim}(s) = \Phi(s)J_1\Phi^{\sim}(s) \qquad (7.149)$$

is given by

(7.150)

where

$$H = -\big(YC^{T} + BJ_2D^{T}\big)R_y^{-1}. \qquad (7.151)$$

Note that a solution of W̃R_yW̃^T = J₁ is given by $\tilde{W} = \begin{bmatrix} \tilde{W}_{11} & 0\\ \tilde{W}_{21} & \tilde{W}_{22}\end{bmatrix}$, where

$$\tilde{W}_{22} = \big[D_{21}\big(I - D_{11}^{T}D_{11}\big)^{-1}D_{21}^{T}\big]^{-1/2}, \qquad (7.152)$$

$$\tilde{W}_{11} = \big[D_{12}^{T}\big(I - D_{11}D_{11}^{T}\big)^{-1}D_{12}\big]^{-1/2}, \quad\text{and} \qquad (7.153)$$

$$\tilde{W}_{21} = -\big[D_{21}\big(I - D_{11}^{T}D_{11}\big)^{-1}D_{21}^{T}\big]^{-1/2}D_{21}\big(I - D_{11}^{T}D_{11}\big)^{-1}D_{11}^{T}D_{12}. \qquad (7.154)$$

Furthermore, the proposed methods can be directly applied to the discrete-time spectral factorization. Equations (7.134) and (7.145) show that N^∼J₁N = J₂ (ÑJ₂Ñ^∼ = J₁). Hence, the coprime factor N (Ñ) is J-lossless (dual J-lossless). The factorization of P(s) into J-lossless and outer parts has important applications. Therefore, the results are summarized in the following theorem.

Theorem 7.1 Let P(s) ∈ RH∞.

1. There exists an rcf P(s) = Θ(s)Π^{-1}(s) such that Θ(s) is J-lossless and Π(s) is outer if there exists a nonsingular matrix $W = \begin{bmatrix} W_{11} & 0\\ W_{21} & W_{22}\end{bmatrix} \in \mathbb{R}^{(k_1+k_2)\times(k_1+k_2)}$ satisfying $W^{T}D^{T}J_1DW = J_2$ and $A_{Hx} \in \mathrm{dom(Ric)}$, with $F = -\big(D^{T}J_1D\big)^{-1}\big(B^{T}X + D^{T}J_1C\big)$, where X = Ric(A_{Hx}) ≥ 0 and

$$A_{Hx} = \begin{bmatrix} A - B\big(D^{T}J_1D\big)^{-1}D^{T}J_1C & -B\big(D^{T}J_1D\big)^{-1}B^{T}\\ -C^{T}\big(J_1 - J_1D\big(D^{T}J_1D\big)^{-1}D^{T}J_1\big)C & -\big(A - B\big(D^{T}J_1D\big)^{-1}D^{T}J_1C\big)^{T}\end{bmatrix}. \qquad (7.155)$$

2. There exists an lcf $P(s) = \tilde{\Pi}^{-1}(s)\tilde{\Theta}(s)$ such that $\tilde{\Theta}(s)$ is dual J-lossless if there exists a nonsingular matrix $\tilde{W} = \begin{bmatrix} \tilde{W}_{11} & 0\\ \tilde{W}_{21} & \tilde{W}_{22}\end{bmatrix} \in \mathbb{R}^{(n_1+n_2)\times(n_1+n_2)}$ satisfying $\tilde{W}DJ_2D^{T}\tilde{W}^{T} = J_1$ and $A_{Hy} \in \mathrm{dom(Ric)}$, with $H = -\big(YC^{T} + BJ_2D^{T}\big)\big(DJ_2D^{T}\big)^{-1}$, where Y = Ric(A_{Hy}) ≥ 0 and

$$A_{Hy} = \begin{bmatrix} \big(A - BJ_2D^{T}\big(DJ_2D^{T}\big)^{-1}C\big)^{T} & -C^{T}\big(DJ_2D^{T}\big)^{-1}C\\ -B\big(J_2 - J_2D^{T}\big(DJ_2D^{T}\big)^{-1}DJ_2\big)B^{T} & -\big(A - BJ_2D^{T}\big(DJ_2D^{T}\big)^{-1}C\big)\end{bmatrix}. \qquad (7.156)$$


" #
s1
0 .1C1/.1C1/
Example 7.7 Let P .s/ D sC2 2 RH1 . Use the J-spectral
0 sC3sC4
factorization, which is defined earlier in this section, to find an outer matrix function
.1 C 1/  .1 C 1/
ˆ(s) 2 GH 1 such that P (s)J1 P(s) D ˆ (s)J2 ˆ(s).
Here, one can determine the state-space realization of P(s) as
7.4 State-Space Formulae for Spectral Factorizations. . . 207

 
1 0
By Case (III), Rx D D J1 D D
T
leads to
0 1

T

A  BRx 1 D T J1 C X C X A  BRx 1 D T J1 C  XBRx 1 B T X


C C T J1  J1 DRx 1 D T J1 C D 0

and
 
2 0
XD ;
0 0
 
T
1 0
F D Rx1 B X C D J1 C D
T
:
0 1

Thus,

     
2 0 1 0 1 0 1 0
A C BF D C D <0
0 4 0 1 0 1 0 3

is Hurwitz. From
T
1=2
W22 D D22 D22  D12
T
D12 ;

h T
T T
i1=2
W11 D D11
T
D11 D21
T
D21 C D12 D11  D22
T
D21 W222 D12 D11 D22
T
D21 ;

W21 D W222 D12 D11  D22


T
D21 W11 ;

it yields
   
W11 0 1 0
W D D :
W21 W22 0 1

Then,
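The result of Example 7.7 can be verified pointwise on the imaginary axis: with J₁ = J₂ = diag(1, −1), equation (7.133) requires P^∼(jω)J₁P(jω) = Φ^∼(jω)J₂Φ(jω). An illustrative sketch (not from the book):

```python
import numpy as np

J = np.diag([1.0, -1.0])             # J1 = J2 for this 1+1 example
P   = lambda s: np.diag([(s - 1)/(s + 2), (s + 3)/(s + 4)])
Phi = lambda s: np.diag([(s + 1)/(s + 2), (s + 3)/(s + 4)])
for w in (0.0, 1.0, 5.0):
    s = 1j*w
    lhs = P(s).conj().T @ J @ P(s)   # P~(s) J1 P(s) on s = jw
    rhs = Phi(s).conj().T @ J @ Phi(s)
    assert np.allclose(lhs, rhs, atol=1e-12)
# Phi is outer: both Phi and Phi^{-1} have all poles and zeros in Re s < 0
```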



In this chapter, the matrix equation known as the ARE was introduced. In determining stable coprime factors of a given system, solutions to AREs are required. The AREs are also employed in the formulation of a CSD for determining the conditions of J-losslessness and dual J-losslessness. This is the reason why the ARE is introduced in a seemingly "independent" chapter, before introducing controller synthesis. As mentioned in Chap. 1, this book aims to propose a unified controller synthesis approach based on CSDs. In the next chapters, the issues of control system synthesis will be discussed.

Exercises

1. Find a J-lossless spectral coprime factorization of G(s) = diag((s−1)/(s+4), (s−1)/(s+2)).
2. Given G(s) = (s−4)/(s−1), compute the spectral factor Φ(s) using spectral factorization Case (I) with R = 0, Q = 1.
3. Given G(s) = [ 3(s−3)(s−4)/((s+1)(s+2)) ; 4(s−3)(s−4)/((s+1)(s+2)) ], compute the spectral factor Φ(s) using spectral factorization Case (I) with R = 0, Q = 1.
4. Given G(s) = (s−3)/(s+4) = N(s)M^{-1}(s), compute the normalized coprime factors N(s) and M(s) using spectral factorizations.
5. Given G(s) = 2(s−1)(s−3)/((s+1)(s+2)), compute the inner-outer factorization using coprime factorizations.
6. Compute the J-lossless factorization of the following systems:

(a)

(b)

7. Given G(s) = 2/(s−3), use coprime factorization to find an outer matrix Φ(s) such that R + G^∼(s)QG(s) = Φ^∼(s)Φ(s), where R = 1/4 and Q = 10.
8. Given G(s) = (s−1)/((s+2)(s+3)), use spectral factorization to find an outer matrix Φ(s) such that 1 − G^∼(s)G(s) = Φ^∼(s)Φ(s).
9. Show that the LFT of an SCC matrix P is unitary if Θ^∼JΘ = J, where LFT_l(P, Φ) = CSD_r(Θ, Φ).
10. Provided that Φ ∈ BH∞ and Θ is J-lossless, prove that CSD_r(Θ, Φ) ∈ BH∞.

References

1. Doyle JC (1984) Lecture notes in advanced multivariable control. Lecture notes at ONR/Honeywell workshop, Minneapolis
2. Francis BA (1987) A course in H∞ control theory. Springer, Berlin
3. Glover K (1984) All optimal Hankel-norm approximations of linear multivariable systems and their L∞-error bounds. Int J Control 39:1115–1193
4. Green M, Limebeer DJN (1995) Linear robust control. Prentice Hall, Englewood Cliffs
5. Willems JC (1974) On the existence of a non-positive solution to the Riccati equation. IEEE Trans Autom Control 19(5):592–593
6. Doyle JC, Glover K, Khargonekar PP, Francis BA (1989) State-space solutions to standard H2 and H∞ control problems. IEEE Trans Autom Control 34(8):831–847
7. Kailath T (1980) Linear systems. Prentice-Hall, Englewood Cliffs
8. Maciejowski JM (1989) Multivariable feedback design. Addison-Wesley, Berkshire
9. Bhatia R (1997) Matrix analysis. Springer, New York
10. Bittanti S, Laub AJ, Willems JC (1991) The Riccati equation. Springer, Berlin
11. Zhou K, Doyle JC, Glover K (1996) Robust and optimal control. Prentice Hall, Upper Saddle River
Chapter 8
CSD Approach to Stabilization Control
and H2 Optimal Control

For a control system under investigation, an important design objective is to find a controller such that the closed-loop system satisfies a set of desired specifications. Furthermore, it is desirable to characterize all such controllers which achieve the required specifications. To accomplish this task, the required specifications can first be formulated into an adequate performance index, and then one can minimize the index to obtain the sought controllers. This chapter presents a unified approach for solving a fairly general class of control synthesis problems by employing chain scattering-matrix descriptions and coprime factorizations. In particular, robust stabilization problems and optimal H2 control are discussed in this chapter. The H∞ (sub)optimal control problem can also be solved via the same approach, which will be presented in the next chapter.

It is shown that general control synthesis problems, including those for stabilizing controllers or further for H2/H∞ controllers, can first be represented in terms of two associated chain scattering-matrix descriptions and then be solved by means of two coprime factorizations. The proposed method differs from conventional approaches [8, 9, 12] in that it follows a straightforward development without introducing any pseudo signals for augmentation. Another major benefit of the proposed method lies in its unified framework, which directly characterizes all stabilizing controllers and then leads to the optimal H2 solution by finding a pair of special coprime factorizations. The corresponding formulae in the form of state-space realizations are included to show how the solutions are constructed with numerical advantages. Related graphic network representations are also presented to help explain the whole concept, which we believe will benefit engineering readers.

M.-C. Tsai and D.-W. Gu, Robust and Optimal Control: A Two-port Framework 211
Approach, Advances in Industrial Control, DOI 10.1007/978-1-4471-6257-5__8,
© Springer-Verlag London 2014

8.1 Introduction

The issue of controller design has always played a major role in dealing with dynamic systems. Abundant techniques have been developed based on numerous mathematical tools and various requirements. For the first controller, one can refer to the governor for steam engines investigated by J. Watt in 1787, which significantly initiated the study of control systems. So far, the development of the corresponding research can roughly be divided into four periods. In 1932, Nyquist [14] used the theory of complex variables to build the stability criterion for single-input and single-output systems based on the frequency response. By the late 1930s, Bode and Nichols had advanced frequency-domain analysis and led to great innovations, followed by the root locus developed by Evans in 1948 [4], in which the importance of the poles/zeros of a system was characterized. All the above-mentioned methods mainly compose the kernel of classical control. During the second period, in the 1960s, Kalman and others probed into optimal control theory via the state-space description. Under the assumption that the plant model is accurately representative, the so-called linear quadratic regulator (LQR) controller was derived, in which an infinite gain margin and at least a 60° phase margin are inherently obtained, as proved by Safonov and Athans [16].

The third stage, led by Rosenbrock and MacFarlane in the 1970s, turned back to frequency analysis, due to its good robustness properties, with much focus on multivariable systems. Although Rosenbrock [15] and MacFarlane [13] tried to promote an integrated design to overcome the stability problem caused by loop-gain variations, there was no complete framework with respect to minimizing the influence of uncertainties. In summary, model uncertainty and disturbance were not considered formally in controller design within these three periods.

Since the end of the 1980s, robust control theory has become more and more well known because the issue of model uncertainty has gained recognition over the years, which significantly turned the research into the fourth period. In robust control theory, H2 and H∞ control provide powerful tools to deal with model uncertainty, and they have been tremendously developed for linear time-invariant systems since the work of Zames [5, 19, 20]. In 1992, Smith and Doyle [17] optimized a perturbed coprime factor plant and connected robust control issues to coprime factorizations. The contribution is significant in control synthesis and has received considerable attention in later developments [7]. In Smith and Doyle's method, noise is utilized as an exogenous input of a generalized plant, in which the uncertainty block is connected as surrounding feedback. Furthermore, the general model consistency problem was solved in the frequency domain for uncertain discrete-time plants represented by linear fractional transformations (LFTs) in [2]. Chen showed that these general consistency problems can be transformed into a set of convex optimization problems, which can be efficiently evaluated by mathematical techniques. Moreover, Benoit and Bruce [1] simplified the consistency examination, which made this method more easily applicable.

In 1995, Kimura used a single chain scattering description (CSD) to carry out the general H∞ synthesis, in which the so-called four-block problem can be solved by augmenting with some fictitious signals [9, 10]. The method proposed by Kimura greatly simplifies the sophisticated mathematics encountered in H∞ optimization synthesis. However, signal augmentation and additional calculations are still required for constructing a controller generator for the general H∞ control synthesis problem. To cope with this shortcoming, an alternative CSD approach which differs significantly from Kimura's method was proposed, in which the framework involves constructing two coupled (right and left) CSD matrices and just coprime factorizations [18]. The two-coupled-CSD approach does not require any fictitious signals for augmentation, so it is fairly straightforward. This chapter provides a comprehensive treatment of control synthesis via the latter CSD framework, which is based on two coupled coprime factorizations of the SCC plant P to construct a general Bezout identity and then to characterize all the stabilizing solutions. Based on these results, the H2 optimal solution is readily obtained. In addition to the complete state-space formulae given, an outstanding feature of this approach is the intensive use of graphic representations and manipulations, which will hopefully help readers understand and grasp the synthesis procedures without requiring too much mathematics.

8.2 Characterization of All Stabilizing Controllers

Recall the general feedback control framework discussed in Chap. 4, as shown in Fig. 8.1, where the two-port transfer function matrix $P = \begin{bmatrix} P_{11} & P_{12}\\ P_{21} & P_{22}\end{bmatrix}$ denotes the standard control configuration (SCC) and K is the stabilizing controller to be designed. The closed-loop transfer function from w to z is given as

$$\mathrm{LFT}_l\left(\begin{bmatrix} P_{11} & P_{12}\\ P_{21} & P_{22}\end{bmatrix}, K\right) = P_{11} + P_{12}K\big(I - P_{22}K\big)^{-1}P_{21}. \qquad (8.1)$$
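For readers who want to experiment numerically, the lower LFT in (8.1) can be evaluated at a fixed frequency with a few lines of Python (an illustrative helper, not from the book; the partition sizes n_z, n_w are assumptions of this sketch):

```python
import numpy as np

def lft_lower(P, K, n_z, n_w):
    """F_l(P, K) = P11 + P12 K (I - P22 K)^{-1} P21 for a constant
    (frequency-sampled) matrix P, partitioned with n_z rows and n_w columns."""
    P11, P12 = P[:n_z, :n_w], P[:n_z, n_w:]
    P21, P22 = P[n_z:, :n_w], P[n_z:, n_w:]
    I = np.eye(P22.shape[0])
    return P11 + P12 @ K @ np.linalg.solve(I - P22 @ K, P21)

# Scalar sanity check: P = [[1, 2], [3, 4]], K = 0.1
Tzw = lft_lower(np.array([[1.0, 2.0], [3.0, 4.0]]), np.array([[0.1]]), 1, 1)
# 1 + 2*0.1*(1 - 0.4)^{-1}*3, i.e. approximately 2.0
```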

In contrast to the linear fractional transformation (LFT), the CSD developed in Chap. 5, originally used for network circuits, provides a straightforward interconnection in a cascaded manner.

It was shown in Chap. 5 that if P₂₁ (or P₁₂) in Fig. 8.1 is square, then there exists $\mathrm{CSD}_r(G, K) = \mathrm{LFT}_l(P, K)$ (or $\mathrm{CSD}_l(\tilde G, K) = \mathrm{LFT}_l(P, K)$). It can be found

Fig. 8.1 Linear fractional transformation

Fig. 8.2 Two coupled CSDs for an LFT matrix. (a) CSDr-CSDl, (b) CSDl-CSDr

that if both P₂₁ and P₁₂ are not square (i.e., the most general four-block problem), then a single equivalent CSD does not exist for the purpose. Therefore, utilizing pseudo signals is a possible approach for satisfying the square augmentation, as in [9, 11]. However, as mentioned early in Chap. 2, the P matrix in SCC can always be represented by two coupled CSDs, i.e., right + left or left + right, as follows:

$$\mathrm{LFT}\left(\begin{bmatrix} P_{11} & P_{12}\\ P_{21} & P_{22}\end{bmatrix}, K\right) = \mathrm{CSD}_r\left(\begin{bmatrix} P_{12} & P_{11}\\ 0 & I\end{bmatrix}, \mathrm{CSD}_l\left(\begin{bmatrix} I & 0\\ P_{22} & P_{21}\end{bmatrix}, K\right)\right) \qquad (8.2)$$

or

$$\mathrm{LFT}\left(\begin{bmatrix} P_{11} & P_{12}\\ P_{21} & P_{22}\end{bmatrix}, K\right) = \mathrm{CSD}_l\left(\begin{bmatrix} I & -P_{11}\\ 0 & -P_{21}\end{bmatrix}, \mathrm{CSD}_r\left(\begin{bmatrix} P_{12} & 0\\ P_{22} & -I\end{bmatrix}, K\right)\right). \qquad (8.3)$$

This shows the equivalence of the LFT and a pair of CSD representations. In other words, the P matrix in Fig. 8.1 can always be represented either by a right CSD matrix associated with a left CSD matrix or by a left CSD matrix associated with a right CSD matrix, as illustrated in Fig. 8.2. Note that CSDr-CSDl will be used in the following to denote the framework of a right CSD associated with a left one, and CSDl-CSDr refers to its dual. In the following, a new approach to directly characterize stabilization solutions for a given SCC plant is presented. The approach follows the framework of constructing two coupled CSDs and solving two coprime factorizations. Two methods are considered, corresponding to the two CSD representations in Fig. 8.2, respectively.

8.2.1 Method I: CSDr-CSDl Using a Right CSD Coupled with a Left CSD

To solve the stabilization problem of Fig. 8.2a, let



Fig. 8.3 Right coprime factorization over CSDr-CSDl

(8.4)

be a right coprime factorization (rcf) over RH∞. Consequently, as illustrated in Fig. 8.3, the stabilization problem of Fig. 8.1 can be realized by multiplying the denominator M* on the right side of the CSD representations, as $\begin{bmatrix} P_{1*}\\ P_{2*}\end{bmatrix}M_*$. The resultant is $\begin{bmatrix} G_1\\ \tilde G_2\end{bmatrix}$ from (8.4). Given an invertible M* ∈ RH∞, by Property 5.3, one has

$$\mathrm{LFT}_l(P, K) = \mathrm{CSD}_r\big(P_{1*}, \mathrm{CSD}_l(P_{2*}, K)\big) = \mathrm{CSD}_r\big(P_{1*}M_*, \mathrm{CSD}_l(P_{2*}M_*, K)\big) = \mathrm{CSD}_r\big(G_1, \mathrm{CSD}_l\big(\tilde G_2, K\big)\big). \qquad (8.5)$$

Also let $\tilde G_2 = \tilde\Pi^{-1}\tilde\Theta$ be a left coprime factorization (lcf) over RH∞, where Π̃ is a unit in RH∞. By letting $K = \mathrm{CSD}_l(\tilde\Pi, \Phi)$, ∀ Φ ∈ RH∞, the stabilization problem of Fig. 8.1 is represented in Fig. 8.4, where the closed-loop transfer function of w ↦ z is given as

$$z = \mathrm{CSD}_r\left(\begin{bmatrix} G_{11} & G_{12}\\ G_{21} & G_{22}\end{bmatrix}, \mathrm{CSD}_l\left(\begin{bmatrix} \tilde\Theta_{11} & \tilde\Theta_{12}\\ \tilde\Theta_{21} & \tilde\Theta_{22}\end{bmatrix}, \Phi\right)\right)w. \qquad (8.6)$$

Fig. 8.4 $\tilde G_2 = \tilde\Pi^{-1}\tilde\Theta$ and $K = \mathrm{CSD}_l(\tilde\Pi, \Phi)$
Q and K D CSDl …;

Fig. 8.5 Equivalent framework

As can be seen, the overall stability of the feedback system in Fig. 8.4 is not
obvious, and it would be difficult to verify the stability with a general form of $G_1$
and $\tilde{\Theta}$. However, if $G_1$ in the rcf of (8.4) can be found to be in the triangular form
of $G_1 = \begin{bmatrix} G_{11} & G_{12} \\ 0 & I \end{bmatrix}$, then one concludes $w' = w$ as shown in Fig. 8.5. Furthermore,
in the lcf of $\tilde{G}_2 = \tilde{\Pi}^{-1}\tilde{\Theta}$, $\tilde{\Theta}$ can be found to be of the upper triangular form
$\tilde{\Theta} = \begin{bmatrix} I & \tilde{\Theta}_{12} \\ 0 & \tilde{\Theta}_{22} \end{bmatrix}$. (Detailed formulae of such factorizations in the state-space form
will be given in Sect. 8.3.) With such $G_1$ and $\tilde{\Theta}$, the input/output relation between
$w$ and $z$ can be easily determined, by direct manipulations, as

$$z = \mathrm{CSD}_r\!\left(\begin{bmatrix} G_{11} & G_{12} \\ 0 & I \end{bmatrix}, \mathrm{CSD}_l\!\left(\begin{bmatrix} I & \tilde{\Theta}_{12} \\ 0 & \tilde{\Theta}_{22} \end{bmatrix}, \Phi\right)\right) w = \left[G_{12} - G_{11}\left(\tilde{\Theta}_{12} - \Phi\tilde{\Theta}_{22}\right)\right] w. \qquad (8.7)$$

Therefore, because the coprime factors $G_1$, $\tilde{\Theta}$ and the free parameter $\Phi$ are all in
RH∞, the closed-loop transfer function of $w \mapsto z$ is naturally stable.

The above procedure of finding a set of stabilizing controllers is characterized by
solving two coprime factorization problems, as summarized below:

Step 1: Find a right coprime factorization $\begin{bmatrix} P_{1*} \\ P_{2*} \end{bmatrix} = \begin{bmatrix} G_1 \\ \tilde{G}_2 \end{bmatrix} M_*^{-1}$ over RH∞ such
that $G_1$ is of the upper triangular form $\begin{bmatrix} G_{11} & G_{12} \\ 0 & I \end{bmatrix}$.

Step 2: Find a left coprime factorization $\tilde{G}_2 = \tilde{\Pi}^{-1}\tilde{\Theta}$ over RH∞ such that $\tilde{\Pi} \in$
GH∞ and $\tilde{\Theta}$ is of the upper triangular form $\begin{bmatrix} I & \tilde{\Theta}_{12} \\ 0 & \tilde{\Theta}_{22} \end{bmatrix}$.

Step 3: All stabilizing controllers are determined by

$$K = \mathrm{CSD}_l(\tilde{\Pi}, \Phi) = \left(\tilde{\Pi}_{11} - \Phi\tilde{\Pi}_{21}\right)^{-1}\left(\Phi\tilde{\Pi}_{22} - \tilde{\Pi}_{12}\right) \qquad (8.8)$$

where $\Phi \in \mathcal{RH}_\infty$ can also be called the Youla parameter.


Lemma 8.1 The controller described in (8.8) stabilizes the transfer function from
w to z for any Φ belonging to RH∞.
Proof The result can be proved directly from (8.7). □
Furthermore, (8.8) characterizes the set of all stabilizing controllers, as described
in the following theorem.
Theorem 8.1 A controller K stabilizes the SCC in Fig. 8.1 if and only if it is given
by (8.8) for any Φ in RH∞.
Proof The sufficiency is given by Lemma 8.1. The necessary part will be proved in
Sect. 8.3.1, after the derivation of state-space formulae. 

8.2.2 Method II: CSDl-CSDr Using a Left CSD Coupled with a Right CSD

Dually, the two-port SCC plant is described by a left CSD associated with a right
one as shown in Fig. 8.2b. To solve this stabilization problem, let

(8.9)

be an lcf over RH∞. Also let $G_2 = \Theta\Pi^{-1}$ be an rcf over RH∞, where $\Pi \in$ GH∞ is
a unit in RH∞. Dually, given an invertible $\tilde{M}_* \in \mathcal{RH}_\infty$, by Property 5.4, one has

Fig. 8.6 Multiplying $\tilde{M}_*$ and inserting $\Pi^{-1}\Pi = I$ in Fig. 8.2b

Fig. 8.7 $G_2 = \Theta\Pi^{-1}$ and $K = \mathrm{CSD}_r(\Pi, \Phi)$

$$\mathrm{LFT}_l(P,K) = \mathrm{CSD}_l(P_{*1}, \mathrm{CSD}_r(P_{*2}, K)) = \mathrm{CSD}_l\!\left(\tilde{M}_* P_{*1}, \mathrm{CSD}_r\!\left(\tilde{M}_* P_{*2}, K\right)\right) = \mathrm{CSD}_l\!\left(\tilde{G}_1, \mathrm{CSD}_r(G_2, K)\right). \qquad (8.10)$$

As a dual form, this method can be realized by multiplying the denominator $\tilde{M}_*$
on the left side and by inserting the identity $\Pi^{-1}\Pi = I$ between K and $G_2$, as
illustrated in Fig. 8.6. By letting $K = \mathrm{CSD}_r(\Pi, \Phi)$ for $\Phi \in \mathcal{RH}_\infty$, the stabilization
problem of Fig. 8.1 is depicted as in Fig. 8.7, where the closed-loop transfer
function of $w \mapsto z$ is given by

$$z = \mathrm{CSD}_l\!\left(\begin{bmatrix} \tilde{G}_{11} & \tilde{G}_{12} \\ \tilde{G}_{21} & \tilde{G}_{22} \end{bmatrix}, \mathrm{CSD}_r\!\left(\begin{bmatrix} \Theta_{11} & \Theta_{12} \\ \Theta_{21} & \Theta_{22} \end{bmatrix}, \Phi\right)\right) w. \qquad (8.11)$$

Apparently, if one can characterize the particular coprime factors $\tilde{G}_1 = \begin{bmatrix} I & \tilde{G}_{12} \\ 0 & \tilde{G}_{22} \end{bmatrix}$
and $\Theta = \begin{bmatrix} \Theta_{11} & \Theta_{12} \\ 0 & I \end{bmatrix}$, then the input/output relation between $w$ and $z$, from the
interconnection system in Fig. 8.8, can be easily determined as

Fig. 8.8 Equivalent framework

   
$$z = \mathrm{CSD}_l\!\left(\begin{bmatrix} I & \tilde{G}_{12} \\ 0 & \tilde{G}_{22} \end{bmatrix}, \mathrm{CSD}_r\!\left(\begin{bmatrix} \Theta_{11} & \Theta_{12} \\ 0 & I \end{bmatrix}, \Phi\right)\right) w = \left[(\Theta_{11}\Phi + \Theta_{12})\tilde{G}_{22} - \tilde{G}_{12}\right] w. \qquad (8.12)$$

Dually, characterizing a set of stabilizing controllers requires solving two
related coprime factorization problems, as summarized below:

Step 1: Find a left coprime factorization $\begin{bmatrix} P_{*1} & P_{*2} \end{bmatrix} = \tilde{M}_*^{-1}\begin{bmatrix} \tilde{G}_1 & G_2 \end{bmatrix}$ over
RH∞ such that $\tilde{G}_1$ is of the upper triangular form $\begin{bmatrix} I & \tilde{G}_{12} \\ 0 & \tilde{G}_{22} \end{bmatrix}$.

Step 2: Find a right coprime factorization $G_2 = \Theta\Pi^{-1}$ over RH∞ such that $\Theta$ is
of the upper triangular form $\begin{bmatrix} \Theta_{11} & \Theta_{12} \\ 0 & I \end{bmatrix}$.

Step 3: All stabilizing controllers are determined by

$$K = \mathrm{CSD}_r(\Pi, \Phi) = (\Pi_{11}\Phi + \Pi_{12})(\Pi_{21}\Phi + \Pi_{22})^{-1} \qquad (8.13)$$

where $\Phi \in \mathcal{RH}_\infty$ can also be called the Youla parameter.


So far two alternative methods for finding all stabilizing controllers have been
characterized. It should be emphasized that the approach of the right CSD associated
with a left one is equivalent to the dual topology of the left CSD associated with a
right one, in the sense that the stabilizing controller sets generated by (8.8) and
(8.13) are the same. This result will be demonstrated at the end of Sect. 8.3.
In the next section, state-space formulae for the required coprime factorizations
used in the above procedures will be derived, and so will those for the stabilizing
controllers.

8.3 State-Space Formulae of Stabilizing Controllers

8.3.1 Method I: CSDr-CSDl

In the following, the set of stabilizing controllers in the state-space form is
constructed in three steps using a right CSD coupled with a left CSD formation.
In the first step, the SCC plant is rearranged into a column stacked matrix form,
which directly characterizes the transfer function in terms of a coupled CSD
representation, and then a right coprime factorization is found of which the
"numerator" matrix of the top half is upper triangular. In the second step, a left
coprime factorization of the left CSD matrix (which is stable) is found such that the
"denominator" matrix is outer and the "numerator" matrix is also upper triangular.
Finally, all admissible controllers can be generated by the denominator, which is a
square bistable two-port matrix, i.e., in GH∞. The details of the construction in the
state-space form for each step are described below.
Consider the LFT interconnected system of Fig. 8.1, where P(s) is a
(q₁ + q₂) × (m₁ + m₂) proper, real-rational transfer function matrix given by

(8.14)

with (A, B₂) stabilizable and (C₂, A) detectable. One can assume D₂₂ = 0 without
loss of generality concerning the stabilization problem [6]. Recall that the solid
lines in a matrix as in (8.14) show the matrix is a compact expression for a transfer
function matrix, while a dotted line in a matrix indicates the usual matrix partition
for the sake of clarity. In fact, (8.14) has the state-space matrix form

$$\begin{bmatrix}\dot x\\ z\\ y\end{bmatrix} = \begin{bmatrix}A & B_1 & B_2\\ C_1 & D_{11} & D_{12}\\ C_2 & D_{21} & 0\end{bmatrix}\begin{bmatrix}x\\ w\\ u\end{bmatrix} \qquad (8.15)$$

with initial condition x(0) = 0. As presented in Chap. 6, the following introduces the
state-space procedures for the required coprime factorizations to parameterize a set
of proper, real-rational controllers which stabilize the SCC plant P.
An advantage of using the expression of a control system state-space model in
(8.15) is that usual (constant) matrix manipulations can be applied when there are
changes of system variables, input and output variables in particular. For example,
for the state feedback control u = Fx + Wu′, the state-space form of the closed-loop
system can be readily obtained from (8.15), with

$$\begin{bmatrix}x\\ w\\ u\end{bmatrix} = \begin{bmatrix}I & 0 & 0\\ 0 & I & 0\\ F & 0 & W\end{bmatrix}\begin{bmatrix}x\\ w\\ u'\end{bmatrix},$$

as

$$\begin{bmatrix}\dot x\\ z\\ y\end{bmatrix} = \begin{bmatrix}A & B_1 & B_2\\ C_1 & D_{11} & D_{12}\\ C_2 & D_{21} & 0\end{bmatrix}\begin{bmatrix}I & 0 & 0\\ 0 & I & 0\\ F & 0 & W\end{bmatrix}\begin{bmatrix}x\\ w\\ u'\end{bmatrix} = \begin{bmatrix}A+B_2F & B_1 & B_2W\\ C_1+D_{12}F & D_{11} & D_{12}W\\ C_2 & D_{21} & 0\end{bmatrix}\begin{bmatrix}x\\ w\\ u'\end{bmatrix}. \qquad (8.16)$$

This simple and intuitive approach is believed to be welcomed by engineers and will
be extensively used in manipulations in the rest of this book.
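As a minimal sketch of this constant-matrix manipulation (in Python with NumPy; the numerical data are arbitrary and only illustrate the mechanics of (8.16)), the closed-loop realization is obtained by one matrix product and checked block by block:

```python
import numpy as np

# Arbitrary example data (n = 2 states, m1 = m2 = q1 = q2 = 1)
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2)); B1 = rng.standard_normal((2, 1)); B2 = rng.standard_normal((2, 1))
C1 = rng.standard_normal((1, 2)); D11 = rng.standard_normal((1, 1)); D12 = rng.standard_normal((1, 1))
C2 = rng.standard_normal((1, 2)); D21 = rng.standard_normal((1, 1))
F = rng.standard_normal((1, 2)); W = np.array([[2.0]])

# Stacked system matrix of (8.15) and the substitution matrix of (8.16)
S = np.block([[A, B1, B2], [C1, D11, D12], [C2, D21, np.zeros((1, 1))]])
T = np.block([[np.eye(2), np.zeros((2, 1)), np.zeros((2, 1))],
              [np.zeros((1, 2)), np.eye(1), np.zeros((1, 1))],
              [F, np.zeros((1, 1)), W]])

Scl = S @ T  # closed-loop realization under u = F x + W u'

# Compare with the block-by-block result of (8.16)
expected = np.block([[A + B2 @ F, B1, B2 @ W],
                     [C1 + D12 @ F, D11, D12 @ W],
                     [C2, D21, np.zeros((1, 1))]])
assert np.allclose(Scl, expected)
print("closed-loop realization matches (8.16)")
```

The same one-line product applies for any change of input or output variables, which is what makes the stacked-matrix notation convenient.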
Consider the coupled CSD representation depicted in Fig. 8.2a. Then (8.15) could
be rewritten by exchanging the order of rows and columns to give the state-space
realization of the coupled CSD representation as

(8.17)

The following describes the state-space realization of each step in Method I.


Step 1: Find an rcf of a SCC plant P with an upper block triangular numerator G1 .
 
As stated in Chap. 6, the state-space form of the coprime factors $\begin{bmatrix} G_1 \\ \tilde{G}_2 \end{bmatrix}$ and $M_*$
can be constructed by the state feedback of

$$\begin{bmatrix}x\\ u\\ w\end{bmatrix} = \begin{bmatrix}I & 0 & 0\\ F_u & W_{uu} & 0\\ F_w & W_{wu} & W_{ww}\end{bmatrix}\begin{bmatrix}x\\ u'\\ w'\end{bmatrix}. \qquad (8.18)$$

Then it can be found from (8.17) that

(8.19)

 
where $F = \begin{bmatrix} F_u \\ F_w \end{bmatrix}$ is chosen such that $A + B_1F_w + B_2F_u$ is Hurwitz and both $W_{uu}$
and $W_{ww}$ are nonsingular. The order of the vectors on the left and right sides of the
first equation above is arranged according to $\begin{bmatrix} P_{1*} \\ P_{2*} \\ I \end{bmatrix}$, which is the transfer function
matrix from the open-loop system input $\begin{bmatrix} u \\ w \end{bmatrix}$ to the output $\begin{bmatrix} z \\ w \\ u \\ y \end{bmatrix}$ stacked above
the input itself. To find the right coprime factors with the required triangular form,
let $F_w = 0$, $W_{wu} = 0$ and $W_{ww} = I$. Then the coprime factors are readily derived from
the above equation as

(8.20)

where

(8.21)

(8.22)

(8.23)

Note that $A + B_2F_u$ is Hurwitz, by the selection requirement set earlier with the
present choice of $F_w = 0$. As can be seen above, multiplying the denominator $M_*$ of
(8.23) on the right side of the CSD representations, as shown in Fig. 8.3, does not
change the signal flow of the overall system but leads to a stable block triangular
right CSD matrix $G_1$ and a stable left CSD matrix $\tilde{G}_2$.
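The gain $F_u$ used above (with $F_w = 0$) only needs to render $A + B_2F_u$ Hurwitz. A minimal sketch of one standard way to compute such a gain, pole placement via SciPy, on illustrative plant data (not taken from the text):

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative unstable plant: A has an eigenvalue at +1
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])   # eigenvalues +1 and -2
B2 = np.array([[0.0],
               [1.0]])

# place_poles computes K with eig(A - B2 K) at the requested locations,
# so Fu = -K puts eig(A + B2 Fu) at those locations.
res = place_poles(A, B2, [-1.0, -2.0])
Fu = -res.gain_matrix

assert np.all(np.linalg.eigvals(A + B2 @ Fu).real < 0)  # A + B2 Fu is Hurwitz
print(np.sort(np.linalg.eigvals(A + B2 @ Fu).real))
```

Any other stabilizing design (LQR, for example) would serve equally well at this step; the parameterization only requires the Hurwitz property.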
Step 2: Find an lcf of $\tilde{G}_2$ with an upper block triangular numerator.

Moreover, for a stable $\tilde{G}_2$ of (8.22), a particular lcf $\tilde{G}_2 = \tilde{\Pi}^{-1}\tilde{\Theta} \in \mathcal{RH}_\infty$
is sought such that $\tilde{\Pi}$ is outer and $\tilde{\Theta}(s) = \begin{bmatrix} I & \tilde{\Theta}_{12}(s) \\ 0 & \tilde{\Theta}_{22}(s) \end{bmatrix} \in \mathcal{RH}_\infty$. Since $(C_2, A)$ is
detectable, there exists an injection gain matrix $H_y$ such that $A + H_yC_2$ is Hurwitz.
Similar to Step 1, a general lcf of $\tilde{G}_2$ can be derived by the output injection of

$$\begin{bmatrix}\tilde x\\ \tilde u\\ \tilde y\end{bmatrix} = \begin{bmatrix}I & H_u & H_y\\ 0 & \tilde W_{uu} & \tilde W_{uy}\\ 0 & 0 & -\tilde W_{yy}\end{bmatrix}\begin{bmatrix}x\\ u\\ y\end{bmatrix}. \qquad (8.24)$$

To find the required triangular form of the numerator, let $H_u = -B_2$, $\tilde W_{uy} = 0$ and
$\tilde W_{uu} = W_{uu}^{-1}$. Note that the negative term $-\tilde W_{yy}$ is defined such that Method I and
Method II are dual cases, as derived at the end of Sect. 8.3. Then the
coprime factors are readily derived from (8.22) as

(8.25)

where $\tilde{\Theta}(s) \in \mathcal{RH}_\infty$ is given by

(8.26)

Step 3: Find the stabilizing controllers.


Consequently, a set of all stabilizing controllers can be generated by
$K = \mathrm{CSD}_l(\tilde{\Pi}, \Phi)$, $\forall \Phi \in \mathcal{RH}_\infty$, where $\tilde{\Pi}(s)$, from Step 2 above, is given by

(8.27)


It can be verified that the A-matrix of $\tilde{\Pi}^{-1}(s)$ is $A + B_2F_u$ and, therefore, the
denominator of (8.27) belongs to GH∞.

Fig. 8.9 Equivalent framework

As illustrated in Fig. 8.9, which was presented in Chap. 5, the left CSD
characterization of (8.27) can be equivalently transformed into its LFT form by
(5.123), denoted by $\tilde{\Pi}_P$, such that $K = \mathrm{CSD}_l(\tilde{\Pi}, \Phi) = \mathrm{LFT}_l(\tilde{\Pi}_P, \Phi)$ where

(8.28)

Thus, the central stabilizing controller is given by

(8.29)

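The central controller (Φ = 0) has an observer-based structure. The sketch below assumes the classical observer-based form (a state estimator plus estimated-state feedback), which is the textbook structure behind (8.29) up to sign conventions, and verifies numerically that the closed-loop eigenvalues are the union of eig(A + B₂Fu) and eig(A + HyC₂), so the loop is internally stable whenever both gains are stabilizing; all data are illustrative:

```python
import numpy as np

# Illustrative data with stabilizing gains (assumed, not taken from the text)
A = np.array([[1.0, 1.0], [0.0, 2.0]])
B2 = np.array([[0.0], [1.0]])
C2 = np.array([[1.0, 0.0]])
Fu = np.array([[-8.0, -7.0]])     # A + B2 Fu has eigenvalues -1, -3
Hy = np.array([[-7.0], [-14.0]])  # A + Hy C2 has eigenvalues -2 +/- sqrt(2)

assert np.all(np.linalg.eigvals(A + B2 @ Fu).real < 0)
assert np.all(np.linalg.eigvals(A + Hy @ C2).real < 0)

# Observer-based controller: dxh = A xh + B2 u + Hy (C2 xh - y), u = Fu xh.
# Closed loop with the plant dx = A x + B2 u, y = C2 x, state (x, xh):
Acl = np.block([[A, B2 @ Fu],
                [-Hy @ C2, A + B2 @ Fu + Hy @ C2]])

cl = np.sort(np.linalg.eigvals(Acl))
sep = np.sort(np.concatenate([np.linalg.eigvals(A + B2 @ Fu),
                              np.linalg.eigvals(A + Hy @ C2)]))
assert np.allclose(cl, sep)   # separation principle
assert np.all(cl.real < 0)    # internal stability of the loop
print(cl)
```

This is the numerical counterpart of the statement that the A-matrices of the two coprime factorizations, A + B₂Fu and A + HyC₂, fully determine the closed-loop dynamics of the central solution.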
One may now complete the proof of Theorem 8.1.


Proof of Theorem 8.1 (continued) To prove the necessity part, one considers that a
controller K stabilizes P(s) in the SCC of Fig. 8.1 if and only if K stabilizes P₂₂.
As in the book [21], if K is a stabilizing controller, there exists a Q(s) ∈ RH∞ such
that K = LFTl(J, Q), where

(8.30)

Without loss of generality, one may assume D₂₂ = 0. In Step 3 above, it is obvious,
by letting $W_{uu} = I$ and $\tilde W_{yy} = I$, that $\tilde{\Pi}_P(s)$ of (8.28) is identical to J(s), which shows
that the set of controllers generated by (8.8) (i.e., the state-space form of (8.27)) indeed
includes all stabilizing controllers. □

8.3.2 Method II: CSDl-CSDr

A dual scheme of Method I can also be developed, in which a left CSD is
coupled with a right CSD. The procedure is similar to Method I. The details are
summarized as follows.
8.3 State-Space Formulae of Stabilizing Controllers 225

Step 1: Find an lcf of a SCC plant P with an upper block triangular numerator $\tilde{G}_1$.
In the dual approach of stabilization, the state-space formulae can be derived as
follows. A state-space realization of the row stacked matrix for (8.9) is given by

(8.31)

As presented in Chap. 6, by setting

(8.32)

the coprime factorization can be constructed as

(8.33)

 
where $H = \begin{bmatrix} H_z & H_y \end{bmatrix}$ is chosen such that $A + H_zC_1 + H_yC_2$ is Hurwitz and $\tilde W_{zz}$ and
$\tilde W_{yy}$ are nonsingular. Letting $H_z = 0$, $\tilde W_{yz} = 0$, and $\tilde W_{zz} = I$ in the above equation
will yield the required coprime factors as

(8.34)

(8.35)

(8.36)

Fig. 8.10 $K = \mathrm{CSD}_r(\Pi, \Phi) = \mathrm{LFT}_l(\Pi_P, \Phi)$

Step 2: Find an rcf of G2 with an upper block triangular numerator.


Moreover, given $G_2$ in (8.35), a state-space realization of $G_2 = \Theta\Pi^{-1}$ is given by

(8.37)

where

(8.38)

Step 3: Find the stabilizing controller.


Then a set of all stabilizing controllers can be generated by $K = \mathrm{CSD}_r(\Pi, \Phi)$,
$\forall \Phi \in \mathcal{RH}_\infty$, where, from (8.37),

(8.39)

As illustrated in Fig. 8.10, the right CSD characterization can also be transformed
into its LFT form, denoted by $\Pi_P$, such that $K = \mathrm{CSD}_r(\Pi, \Phi) = \mathrm{LFT}_l(\Pi_P, \Phi)$ where

(8.40)

Thus the central solution of the stabilizing controller is given by

(8.41)

Note that the central solution is the same as that obtained by (8.29).

So far, two alternative methods for finding all stabilizing controllers have been
characterized. It should be emphasized that the right CSD associated with a left one
is indeed the dual topology of the left CSD associated with a right one. It can be
verified from the inverse of (8.27) as

(8.42)

Then from (8.39), one concludes that $\Pi\tilde{\Pi} = I$, i.e., $\mathrm{CSD}_r(\Pi, \Phi) = \mathrm{CSD}_l(\tilde{\Pi}, \Phi)$.
This implies that Method I and Method II are virtually dual approaches to each
other for the general case. Note that the notation introduced in Method I for coprime
factorizations has been employed purposely in Method II such that a general Bezout
identity (i.e., $\Pi(s)\tilde{\Pi}(s) = I$) can be constructed from these two methods. The
parameters $F_u$, $H_y$, $W_{uu}$, and $\tilde W_{yy}$ in the controller generator formulae of $\Pi$ (or
$\tilde{\Pi}$) will be characterized by the H₂ control optimization in Sect. 8.6.

8.4 Example of Finding Stabilizing Controllers

The following gives an example to show how to characterize all internally stabilizing
solutions for a typical control problem using the proposed CSD approach. Consider
the feedback stabilization problem in Fig. 8.11, where G is the controlled plant and
K is the controller to be designed. Then the SCC plant of $\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} \mapsto \begin{bmatrix} z_1 \\ z_2 \end{bmatrix}$ from
Fig. 8.11 can be found as

$$P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} = \begin{bmatrix} 0 & 0 & I \\ 0 & G & G \\ I & G & G \end{bmatrix}. \qquad (8.43)$$

The closed-loop transfer function of $w \mapsto z$ with $u = Ky$ is

$$\begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \mathrm{LFT}_l(P, K)\begin{bmatrix} w_1 \\ w_2 \end{bmatrix}, \qquad (8.44)$$

Fig. 8.11 Feedback control system

where

$$\mathrm{LFT}_l(P,K) = \begin{bmatrix} K(I-GK)^{-1} & K(I-GK)^{-1}G \\ GK(I-GK)^{-1} & G\left(I + K(I-GK)^{-1}G\right) \end{bmatrix}. \qquad (8.45)$$
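Equation (8.45) can be checked directly from the definition of the lower LFT, LFTl(P, K) = P₁₁ + P₁₂K(I − P₂₂K)⁻¹P₂₁, with constant matrices standing in for G(s) and K(s) at a fixed frequency. A minimal sketch (values arbitrary):

```python
import numpy as np

def lft_lower(P11, P12, P21, P22, K):
    """Lower LFT: P11 + P12 K (I - P22 K)^{-1} P21."""
    q2 = P22.shape[0]
    return P11 + P12 @ K @ np.linalg.solve(np.eye(q2) - P22 @ K, P21)

rng = np.random.default_rng(1)
m = 2
G = rng.standard_normal((m, m))   # stands in for G(s) at a fixed frequency
K = rng.standard_normal((m, m))   # stands in for K(s)
Z = np.zeros((m, m)); I = np.eye(m)

# SCC plant blocks of (8.43)
P11 = np.block([[Z, Z], [Z, G]])
P12 = np.vstack([I, G])
P21 = np.hstack([I, G])
P22 = G

T = lft_lower(P11, P12, P21, P22, K)

S = np.linalg.inv(I - G @ K)      # (I - GK)^{-1}
expected = np.block([[K @ S,     K @ S @ G],
                     [G @ K @ S, G @ (I + K @ S @ G)]])
assert np.allclose(T, expected)
print("LFT matches (8.45)")
```

The same `lft_lower` helper applies to any SCC plant partition, since the LFT only involves the four blocks and an inverse of a square loop term.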

Assume that the state-space realization of G is a minimal realization. Then the
state-space realization of the augmented SCC plant (8.43) is given by
tion. Then the state-space realization of the augmented SCC plant (8.43) is given by

(8.46)

8.4.1 Method I: CSDr-CSDl Using a Right CSD Associated with a Left CSD

Step 1: By (8.46), a state-space realization of the column stacked matrix $\begin{bmatrix} P_{1*} \\ P_{2*} \end{bmatrix}$ can
be found from (8.17) with D₁₁ = 0 as

(8.47)

The coupled CSD form of Fig. 8.1 becomes Fig. 8.12.


 
Step 2: Right coprime factors $\begin{bmatrix} G_1 \\ \tilde{G}_2 \end{bmatrix}$ and $M_*$, by (8.20) with $F_u = F$ and $W_{uu} = I$,
can be constructed as

Fig. 8.12 Equivalent CSDr-CSDl form

(8.48)

(8.49)

(8.50)

To derive the CSD representations at the transfer function level, let $G = NM^{-1} = \tilde{M}^{-1}\tilde{N}$ denote the doubly coprime factorizations of the controlled plant.
By Lemma 6.2, there exists the Bezout identity of (6.40), depicted as

$$\begin{bmatrix} \tilde X(s) & -\tilde Y(s) \\ -\tilde N(s) & \tilde M(s) \end{bmatrix}\begin{bmatrix} M(s) & Y(s) \\ N(s) & X(s) \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}. \qquad (8.51)$$

From (6.22), (6.23), (6.31), and (6.35) with $W = \tilde W = I$, the state-space form of
(8.51) can be represented by

(8.52)
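As an illustration of the underlying construction, the sketch below uses the common textbook state-space formulas M(s) = I + F(sI − A − BF)⁻¹B and N(s) = C(sI − A − BF)⁻¹B (assumed here with W = I, rather than quoted from (8.52)) and verifies G = NM⁻¹ numerically at a sample frequency:

```python
import numpy as np

A = np.array([[0.0, 1.0], [2.0, -1.0]])   # unstable plant (eigenvalue at +1)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[-4.0, -2.0]])              # A + B F has eigenvalues -1, -2

assert np.all(np.linalg.eigvals(A + B @ F).real < 0)

def tf(Amat, Bmat, Cmat, Dmat, s):
    """Evaluate C (sI - A)^{-1} B + D at a complex frequency s."""
    return Cmat @ np.linalg.solve(s * np.eye(Amat.shape[0]) - Amat, Bmat) + Dmat

s = 1.0j  # sample point on the imaginary axis
AF = A + B @ F
M = tf(AF, B, F, np.eye(1), s)            # M(s) = I + F (sI - A - BF)^{-1} B
N = tf(AF, B, C, np.zeros((1, 1)), s)     # N(s) = C (sI - A - BF)^{-1} B
G = tf(A, B, C, np.zeros((1, 1)), s)      # G(s) = C (sI - A)^{-1} B

assert np.allclose(G, N @ np.linalg.inv(M))   # G = N M^{-1}, a right coprime factorization
print("G(j) =", G)
```

Both factors are stable because their shared A-matrix is A + BF, while the unstable dynamics of G appear only through the zeros of M.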

Fig. 8.13 Multiplying $M_*$ at the right terminals

On the other hand, one can represent (8.48), (8.49), and (8.50) at the transfer function
level (Fig. 8.13) as

$$M_* = \begin{bmatrix} M & 0 & M-I \\ 0 & I & 0 \\ 0 & 0 & I \end{bmatrix}, \qquad (8.53)$$

(8.54)

(8.55)

Step 3: Given (8.49), the required coprime factors $\tilde{\Theta}$ and $\tilde{\Pi}$ can be found from (8.25)
with $H_y = H$, $\tilde W_{yy} = I$, and $W_{uu} = I$ as

(8.56)

(8.57)

Fig. 8.14 Insertion of $\tilde{\Pi}^{-1}\tilde{\Pi}$

Therefore, $\tilde{\Pi}(s)$ above can be written at the transfer function level as

(8.58)

It should be emphasized that the state-space form defined in (8.58) is identical to
(8.52); therefore, one concludes that $\tilde X M - \tilde Y N = I$ from (8.51), and

(8.59)

Finally, let $K = \mathrm{CSD}_l(\tilde{\Pi}, \Phi)$, $\forall \Phi \in \mathcal{RH}_\infty$, as shown in Fig. 8.14. Then the
closed-loop transfer function of $w \mapsto z$ can be found from (8.54) and (8.59) as

$$\mathrm{LFT}_l(P,K) = G_{12} - G_{11}\left(\tilde{\Theta}_{12} - \Phi\tilde{\Theta}_{22}\right) = \begin{bmatrix} -(Y+M\Phi)\tilde M & -(M\Phi+Y)\tilde N \\ -N(\tilde Y+\Phi\tilde M) & N(\Phi\tilde N+\tilde X) \end{bmatrix}. \qquad (8.60)$$

It can be concluded that the closed-loop transfer function of $w \mapsto z$ is stable for any
$\Phi \in \mathcal{RH}_\infty$, and the stabilizing controllers determined by (8.57) are then given by

$$K = \mathrm{CSD}_l(\tilde{\Pi}, \Phi) = \left(\tilde X - \Phi\tilde N\right)^{-1}\left(\tilde Y + \Phi\tilde M\right), \quad \forall \Phi \in \mathcal{RH}_\infty. \qquad (8.61)$$

8.4.2 Method II: CSDl-CSDr Using a Left CSD Associated with a Right CSD

Dually, the left coprime factors can be constructed by (8.33)
with $H_y = H$ and $\tilde W_{yy} = I$. From (8.34), (8.35), and (8.36), one has, respectively,

(8.62)

(8.63)

(8.64)

Dually, it can also be found at the transfer function level (Fig. 8.15) as

$$\tilde M_* = \begin{bmatrix} I & 0 & 0 \\ 0 & I & \tilde M - I \\ 0 & 0 & \tilde M \end{bmatrix}, \qquad (8.65)$$

(8.66)

Fig. 8.15 Multiplying $\tilde M_*$ at the left terminal

(8.67)

Furthermore, the right coprime factors $\Theta$ and $\Pi$, given by (8.38) and (8.39) with
$F_u = F$, $W_{uu} = I$, $\tilde W_{yy} = I$, can be found as

(8.68)

(8.69)

Similarly, …(s) above can be written in transfer function level as

(8.70)

Fig. 8.16 Insertion of $\Pi\Pi^{-1}$

Since $\tilde M X - \tilde N Y = I$, one concludes that (Fig. 8.16)

(8.71)

Then the overall closed-loop transfer function of $w \mapsto z$ is given by

$$z = \mathrm{LFT}_l(P,K)\, w = \mathrm{CSD}_l\!\left(\tilde G_1, \mathrm{CSD}_r(G_2, K)\right) w = \mathrm{CSD}_l\!\left(\tilde G_1, \mathrm{CSD}_r(\Theta, \Phi)\right) w, \qquad (8.72)$$

where $\mathrm{CSD}_r(\Theta, \Phi) = \begin{bmatrix} -(Y+M\Phi) \\ I - N\Phi - X \end{bmatrix}$. Therefore, it yields

$$\mathrm{LFT}_l(P,K) = (\Theta_{11}\Phi + \Theta_{12})\tilde G_{22} - \tilde G_{12} = \begin{bmatrix} -(Y+M\Phi)\tilde M & -(M\Phi+Y)\tilde N \\ -N(\tilde Y+\Phi\tilde M) & N(\Phi\tilde N+\tilde X) \end{bmatrix}. \qquad (8.73)$$

As expected, the result of (8.73) is identical to the one in (8.60). Then the stabilizing
controllers determined by (8.70) are given by

$$K = \mathrm{CSD}_r(\Pi, \Phi) = (Y + M\Phi)(X + N\Phi)^{-1}, \quad \forall \Phi \in \mathcal{RH}_\infty. \qquad (8.74)$$

It can be verified from (8.57) and (8.70) that

(8.75)

whose minimal realization gives the Bezout identity $\Pi\tilde{\Pi} = I$, which is the same
as that defined in (8.51). Therefore, the set of all internally stabilizing controllers is
given by $K = \mathrm{CSD}_l(\tilde{\Pi}, \Phi) = \mathrm{CSD}_r(\Pi, \Phi)$, for any $\Phi \in \mathcal{RH}_\infty$.

8.5 Stabilization of Special SCC Formulations

The procedure to find two successive coprime factorizations is utilized for stabiliza-
tion controller synthesis and, later, H2 problems, in the proposed CSD framework.
The above solution procedure shows that the feedback control problem is reduced to
the solutions of two less complicated problems which contain the determination of
a feedback regulator gain matrix, an observer gain matrix, and two accompanying
nonsingular matrices. It can be found that both Methods I and II proceed via two
coprime factorizations although the operational sequences are different from each
other. In fact, as far as the result is concerned, using Method I is identical to using
Method II for a general P(s) of (8.14). However, for specific control problems [3],
it is sometimes beneficial to choose one of the two methods to make the solution
process more efficient. In this section, the six popular synthesis problems listed
in Table 8.1 will be discussed in the CSD stabilization framework. The formulae
for $\Pi$ (or $\tilde{\Pi}$) are shown and the sets of all stabilizing controllers are characterized.
Appropriate selection of methods and corresponding results will be summarized at
the end of this section.
In the following, the above specific problems are characterized as examples to
show which method, I or II, would be more appropriate and how only one coprime
factorization is needed in these cases.
Table 8.1 Special SCC formulations (each formulation paired with its dual; the
corresponding plant P(s) for each case is given in the respective subsection below)

Disturbance feedforward (DF) | Output estimation (OE)
Full information (FI) | Full control (FC)
State feedback (SF) | Output injection (OI)

8.5.1 Disturbance Feedforward (DF) Case

At first, the DF problem is considered, where the state-space realization is given by

(8.76)

where $A - B_1C_2$ is Hurwitz and $D_{21} = I$. In the so-called DF problem, the framework
of CSDr-CSDl is adopted, in which one just needs to carry out a single coprime
factorization for obtaining the stabilizing controllers. Note that $P_{21}$ is square and its
inverse is stable (i.e., a 2-block problem). Then, by the procedure of Step 1, one has,
from (8.17) with $D_{21} = I$,

(8.77)

   
According to the procedure of Step 1, one has $P_{*,DF} = \begin{bmatrix} P_{1*,DF} \\ P_{2*,DF} \end{bmatrix} = \begin{bmatrix} G_{1,DF} \\ \tilde G_{2,DF} \end{bmatrix} M_{DF}^{-1}$
from (8.20) with $D_{21} = I$, where

(8.78)

and

(8.79)

Furthermore, for the DF problem listed in Table 8.1 with the assumption that
$A - B_1C_2$ is Hurwitz, the state-space formula of $\tilde G_{2,DF} = \tilde{\Pi}_{DF}^{-1}\tilde{\Theta}_{DF}$ is given by
(8.25), with replacements of $D_{21} = I$, $H_y = -B_1$, and $\tilde W_{yy} = I$,

(8.80)

 
where $\tilde{\Theta}_{DF} = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}$ and $\tilde{\Pi}_{DF} = \tilde G_{2,DF}^{-1}$. Note that $\tilde{\Theta}_{DF}$ derived for this DF problem
is actually an identity matrix. Then the stabilizing controllers can be generated by
$K = \mathrm{CSD}_l\!\left(\tilde G_{2,DF}^{-1}, \Phi\right) = \mathrm{CSD}_r\!\left(\tilde G_{2,DF}, \Phi\right)$, $\forall \Phi \in \mathcal{RH}_\infty$. With such $G_1$ and $\tilde{\Theta}$, the
input/output relation between $w$ and $z$ can be easily determined from (8.7) as

$$z = \mathrm{CSD}_r\!\left(\begin{bmatrix} G_{11} & G_{12} \\ 0 & I \end{bmatrix}, \mathrm{CSD}_l\!\left(\begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}, \Phi\right)\right) w = (G_{12} + G_{11}\Phi)\, w. \qquad (8.81)$$

8.5.2 Full Information (FI) Case

With an analogous topology to the DF problem, the SCC plant of the FI problem is
given by

(8.82)

Similarly, the stacked matrix is derived by the procedure of Step 1 from (8.17) as

(8.83)

   
and the right coprime factorization $P_{*,FI} = \begin{bmatrix} P_{1*,FI} \\ P_{2*,FI} \end{bmatrix} = \begin{bmatrix} G_{1,FI} \\ \tilde G_{2,FI} \end{bmatrix} M_{FI}^{-1}$ is given
from (8.20) as

(8.84)

and

(8.85)

Finally, by applying Step 2, the state-space formula of $\tilde G_{2,FI} = \tilde{\Pi}_{FI}^{-1}\tilde{\Theta}_{FI}$ in (8.25)
can be realized by

(8.86)

This gives that

(8.87)

where the central solution is given by a state feedback gain $\mathrm{CSD}_l\!\left(\tilde{\Pi}_{FI}(s), 0\right) = \begin{bmatrix} F_u & 0 \end{bmatrix}$, i.e., $u(t) = Kx(t) = F_u x(t)$, and therefore $z = G_{12}w$.

8.5.3 State Feedback (SF) Case

Consider the SF problem as

(8.88)

The state-space formula of the stacked matrix and the right coprime factorization can
be calculated from (8.17) and (8.20) with $C_2 = I$ and $D_{21} = 0$; the lcf of $\tilde G_{2,SF} = \tilde{\Pi}_{SF}^{-1}\tilde{\Theta}_{SF}$ is realized by

(8.89)

This gives that

(8.90)

With such $G_1$ and $\tilde{\Theta}$, the input/output relation between $w$ and $z$ can be easily
determined from

$$z = \mathrm{CSD}_r\!\left(\begin{bmatrix} G_{11} & G_{12} \\ 0 & I \end{bmatrix}, \mathrm{CSD}_l\!\left(\begin{bmatrix} I & 0 \\ 0 & \tilde{\Theta}_{22} \end{bmatrix}, \Phi\right)\right) w = \left(G_{12} + G_{11}\Phi\tilde{\Theta}_{22}\right) w, \qquad (8.91)$$

where the central solution is given by a state feedback gain $\mathrm{CSD}_l\!\left(\tilde{\Pi}_{SF}(s), 0\right) = F_u$,
i.e., $u(t) = Kx(t) = F_u x(t)$, and therefore $z = G_{12}w$.

8.5.4 Output Estimation (OE) Case

On the other hand, similar to the DF problem, for the OE problem in which $D_{12} = I$,
the framework of CSDl-CSDr of Method II is better utilized, for the same
reason. Consider the OE problem of

(8.92)

where $A - B_2C_1$ is Hurwitz and $D_{12} = I$. Note that $P_{12}$ is square and its inverse is
stable (i.e., a 2-block problem). Then one has, from (8.31) with $D_{12} = I$,

(8.93)

According to the procedure of Step 1 in Method II, one has
$\begin{bmatrix} P_{*1,OE} & P_{*2,OE} \end{bmatrix} = \tilde M_{OE}^{-1}\begin{bmatrix} \tilde G_{1,OE} & G_{2,OE} \end{bmatrix}$
from (8.33) with $D_{12} = I$, where

(8.94)

and

(8.95)

Since $A - B_2C_1$ is Hurwitz, $G_{2,OE} = \Theta_{OE}\Pi_{OE}^{-1}$ in Step 2 is given by (8.37) with
$D_{12} = I$, $F_u = -C_1$, and $W_{uu} = I$ as

(8.96)

 
such that $\Theta_{OE} = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}$ and $\Pi_{OE} = G_{2,OE}^{-1}$. Then all stabilizing controllers can be
generated by $K = \mathrm{CSD}_r\!\left(G_{2,OE}^{-1}, \Phi\right) = \mathrm{CSD}_l\!\left(G_{2,OE}, \Phi\right)$, $\forall \Phi \in \mathcal{RH}_\infty$. With such $\tilde G_1$
and $\Theta$, the input/output relation between $w$ and $z$ can be easily determined as

$$z = \mathrm{CSD}_l\!\left(\begin{bmatrix} I & \tilde G_{12} \\ 0 & \tilde G_{22} \end{bmatrix}, \mathrm{CSD}_r\!\left(\begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}, \Phi\right)\right) w = \left(\Phi\tilde G_{22} - \tilde G_{12}\right) w. \qquad (8.97)$$

8.5.5 Full Control (FC) Case

Similarly, with an analogous topology to the OE problem, the SCC plant of the FC
problem is given by

(8.98)

and the left coprime factorization is given from (8.33) as

(8.99)

and

(8.100)

Then the state-space formula of $G_{2,FC} = \Theta_{FC}\Pi_{FC}^{-1}$ can be realized by

(8.101)

This gives that

(8.102)

where the central solution is given by an output injection gain
$\mathrm{CSD}_r(\Pi_{FC}(s), 0) = \begin{bmatrix} H_y \\ 0 \end{bmatrix}$, and therefore $z = -\tilde G_{12}w$.

8.5.6 Output Injection (OI) Case

Finally, consider the OI problems as

(8.103)

The state-space formula of the stacked matrix and the left coprime factorization can
be calculated from (8.33) with $B_2 = I$ and $D_{12} = 0$; the rcf of $G_{2,OI} = \Theta_{OI}\Pi_{OI}^{-1}$ is
realized by

(8.104)

This gives that

(8.105)

With such $\tilde G_1$ and $\Theta$, the input/output relation between $w$ and $z$ can be easily
determined from (8.12) as

$$z = \mathrm{CSD}_l\!\left(\begin{bmatrix} I & \tilde G_{12} \\ 0 & \tilde G_{22} \end{bmatrix}, \mathrm{CSD}_r\!\left(\begin{bmatrix} \Theta_{11} & 0 \\ 0 & I \end{bmatrix}, \Phi\right)\right) w = \left(\Theta_{11}\Phi\tilde G_{22} - \tilde G_{12}\right) w, \qquad (8.106)$$

and the central solution is given by an output injection gain $\mathrm{CSD}_r(\Pi_{OI}(s), 0) = H_y$,
and therefore $z = -\tilde G_{12}w$.
In summary, with respect to the FI and FC cases, one adopts CSDr-CSDl
and CSDl-CSDr, respectively, and consequently $\tilde{\Theta}(s)$ (or $\Theta(s)$) in the second
coprime factorization is always of a diagonal form. Note that the central solutions
are given by $\mathrm{CSD}_l\!\left(\tilde{\Pi}_{FI}(s), 0\right) = \begin{bmatrix} F_u & 0 \end{bmatrix}$ and $\mathrm{CSD}_r(\Pi_{FC}(s), 0) = \begin{bmatrix} H_y \\ 0 \end{bmatrix}$,
respectively. For the state feedback (SF) and output injection (OI) problems, the
frameworks of CSDr-CSDl and CSDl-CSDr are utilized, respectively, and each
central solution is given by $\mathrm{CSD}_l\!\left(\tilde{\Pi}_{SF}(s), 0\right) = F_u$ and $\mathrm{CSD}_r(\Pi_{OI}(s), 0) = H_y$.
Explicit state-space solutions for these special problems are summarized in Table 8.2.
The structure of Tables 8.1 and 8.2 clearly shows that FC, OE, and OI are duals
of FI, DF, and SF; furthermore, FI and FC are equivalent to DF and OE; SF and OI
are the simplified cases of FI and FC. These relationships are depicted in Fig. 8.17.
To clarify the meaning of “equivalent” described in Fig. 8.17, one examines the
connection between the problems of DF and FI as an example. From (8.78) and
(8.84), one concludes that $G_{1,DF} = G_{1,FI}$, and furthermore, from (8.79) and (8.85), it
can be verified that

(8.107)

Table 8.2 Solutions of special SCC formulations

System | Utilized framework | The second lcf or rcf
DF | CSDr-CSDl | (8.80)
FI | CSDr-CSDl | (8.86)
SF | CSDr-CSDl | (8.89)
OE | CSDl-CSDr | (8.96)
FC | CSDl-CSDr | (8.101)
OI | CSDl-CSDr | (8.104)

Fig. 8.17 Relationships between each special case

Fig. 8.18 Relationship between FI and DF problem

Figure 8.18 illustrates the topologies of FI and DF problems. Finally, the I/O
relationship can be realized by

$$\begin{aligned}
z &= \mathrm{LFT}_l(P_{FI}, K_{FI})\, w\\
&= \mathrm{CSD}_r\!\left(G_{1,FI}, \mathrm{CSD}_l\!\left(\tilde G_{2,FI}, K_{FI}\right)\right) w\\
&= \mathrm{CSD}_r\!\left(G_{1,FI}, \mathrm{CSD}_l\!\left(\tilde G_{2,FI}, \mathrm{CSD}_l\!\left(\tilde{\Pi}_{DF}\begin{bmatrix} I & 0 & 0 \\ 0 & C_2 & I \end{bmatrix}, \Phi_{DF}\right)\right)\right) w\\
&= \mathrm{CSD}_r\!\left(G_{1,FI}, \mathrm{CSD}_l\!\left(\tilde G_{2,DF}, \mathrm{CSD}_l\!\left(\tilde{\Pi}_{DF}, \Phi_{DF}\right)\right)\right) w\\
&= \mathrm{CSD}_r\!\left(G_{1,FI}, \mathrm{CSD}_l(I, \Phi_{DF})\right) w\\
&= \mathrm{CSD}_r(G_{1,DF}, \Phi_{DF})\, w. \qquad (8.108)
\end{aligned}$$

Similarly, problem OE is equivalent to FC. From (8.94) and (8.99), one has
$\tilde G_{1,OE}(s) = \tilde G_{1,FC}(s)$. From (8.95) and (8.100), one can find that

(8.109)

Fig. 8.19 Relationship between FC and OE problem

Figure 8.19 illustrates the topologies of OE and FC problems. Finally, the I/O
relationship can be realized by

$$\begin{aligned}
z &= \mathrm{LFT}_l(P_{FC}, K_{FC})\, w\\
&= \mathrm{CSD}_l\!\left(\tilde G_{1,FC}, \mathrm{CSD}_r(G_{2,FC}, K_{FC})\right) w\\
&= \mathrm{CSD}_l\!\left(\tilde G_{1,FC}, \mathrm{CSD}_r\!\left(G_{2,FC}, \mathrm{CSD}_r\!\left(\begin{bmatrix} B_2 & 0 \\ I & 0 \\ 0 & I \end{bmatrix}\Pi_{OE}, \Phi_{OE}\right)\right)\right) w\\
&= \mathrm{CSD}_l\!\left(\tilde G_{1,FC}, \mathrm{CSD}_r\!\left(G_{2,OE}, \mathrm{CSD}_r(\Pi_{OE}, \Phi_{OE})\right)\right) w\\
&= \mathrm{CSD}_l\!\left(\tilde G_{1,FC}, \mathrm{CSD}_r(I, \Phi_{OE})\right) w\\
&= \mathrm{CSD}_l\!\left(\tilde G_{1,OE}, \Phi_{OE}\right) w. \qquad (8.110)
\end{aligned}$$

8.6 Optimal H2 Controller

For analytic design of control systems, system performance is often measured
by means of a norm of the closed-loop system from the exogenous signals to the
regulated variables. One of the performance indices used in a linear control system
is the H₂ norm, which is the square root of the energy of its (unit) impulse
response. The impulse response of a linear time-invariant system G is denoted by
g(t). The squared H₂ norm of G may be computed as

$$\|G\|_2^2 = \int_{-\infty}^{\infty} \mathrm{trace}\left[g^T(t)\, g(t)\right] dt. \qquad (8.111)$$
1
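For a stable, strictly proper state-space system G = (A, B, C), the integral in (8.111) can be evaluated without simulation via the observability Gramian: ‖G‖₂² = trace(BᵀX_oB), where AᵀX_o + X_oA + CᵀC = 0. A minimal sketch on a first-order example whose H₂ norm is known in closed form:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Example: G(s) = 1/(s+1) with realization (A, B, C); its impulse response is
# g(t) = e^{-t}, so (8.111) gives ||G||_2^2 = int_0^inf e^{-2t} dt = 0.5.
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])

# Gramian-based evaluation: A^T Xo + Xo A + C^T C = 0, then trace(B^T Xo B)
Xo = solve_continuous_lyapunov(A.T, -C.T @ C)
h2_sq = float(np.trace(B.T @ Xo @ B))
assert abs(h2_sq - 0.5) < 1e-10

# Cross-check by trapezoidal integration of g(t)^2 over a long horizon
t = np.linspace(0.0, 30.0, 3001)
g2 = np.exp(-2.0 * t)
integral = float(np.sum((g2[:-1] + g2[1:]) * 0.5) * (t[1] - t[0]))
assert abs(integral - 0.5) < 1e-4
print("||G||_2^2 =", h2_sq)
```

The Gramian route generalizes to multivariable systems unchanged, which is why H₂ costs are computable directly from the closed-loop state-space data.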

Due to the different mathematical tools applied, the H₂ control problem can be
solved at various levels of generality. In this section, we extend the proposed CSD
approach to the optimal H₂ controller synthesis and derive the solution in the
state-space form. The benefit of the state-space solution lies in its computational
efficiency. The four parameters $H_y$, $F_u$, $W_{uu}$, and $\tilde W_{yy}$ in the controller generator of
(8.27) or (8.39), the remaining design freedom from stabilization, are designated
here in order to minimize the H₂ norm of G = LFTl(P, K). Of course, these
parameters have to satisfy that $F_u$ and $H_y$ make $A + B_2F_u$ and $A + H_yC_2$ Hurwitz,
respectively, and $W_{uu}$, $\tilde W_{yy}$ are invertible.

Let the SCC plant in (8.14) with $D_{11} = 0$ satisfy the following assumptions:

(I) $\mathrm{rank}\begin{bmatrix} A - j\omega I & B_2 \\ C_1 & D_{12} \end{bmatrix} = m_2 + n$ for all $\omega$, and $\mathrm{rank}\, D_{12} = m_2$.

(II) $\mathrm{rank}\begin{bmatrix} A - j\omega I & B_1 \\ C_2 & D_{21} \end{bmatrix} = q_2 + n$ for all $\omega$, and $\mathrm{rank}\, D_{21} = q_2$.
Note that for some special cases as depicted in Table 8.1, (I) and (II) would not
always be satisfied. Particular factorizations are therefore required in the second
step, as shown in Table 8.2.
The H2 optimization problem requires finding a stabilizing controller K such that the cost function ‖LFTl(P,K)‖2² is minimized. The solution approaches using the CSD framework are presented below.

8.6.1 Method I: Using a Right CSD Associated with a Left One

Considering the H2 norm of the closed-loop transfer function from w to z in Fig. 8.5, one has, from (8.1) and (8.7),
$$
\|\mathrm{LFT}_l(P,K)\|_2^2=\left\|\mathrm{CSD}_r\!\left(\begin{bmatrix}G_{11}&G_{12}\\0&I\end{bmatrix},\ \mathrm{CSD}_l\!\left(\begin{bmatrix}I&\tilde{\Theta}_{12}\\0&\tilde{\Theta}_{22}\end{bmatrix},\Phi\right)\right)\right\|_2^2=\|G_{12}+G_{11}\Psi\|_2^2, \qquad (8.112)
$$
where
$$
\Psi=\mathrm{CSD}_l\!\left(\begin{bmatrix}I&\tilde{\Theta}_{12}\\0&\tilde{\Theta}_{22}\end{bmatrix},\Phi\right)=\Phi\tilde{\Theta}_{22}-\tilde{\Theta}_{12}. \qquad (8.113)
$$

The H2 solution is summarized in the following lemma. The detailed construction of such an optimal controller is given in the proof of the lemma.

Lemma 8.2 The H2 norm of (8.112) is minimized with Φ = 0 if G11 is inner and Θ̃22 is co-inner. Furthermore, the H2 optimal controller is obtained as

(8.114)

where
$$
F_u=-\left(D_{12}^{T}D_{12}\right)^{-1}\left(B_2^{T}X+D_{12}^{T}C_1\right), \qquad (8.115)
$$
$$
H_y=-\left(YC_2^{T}+B_1D_{21}^{T}\right)\left(D_{21}D_{21}^{T}\right)^{-1}, \qquad (8.116)
$$
in which X and Y satisfy, respectively,
$$
\left(A-B_2\left(D_{12}^{T}D_{12}\right)^{-1}D_{12}^{T}C_1\right)^{T}X+X\left(A-B_2\left(D_{12}^{T}D_{12}\right)^{-1}D_{12}^{T}C_1\right)-XB_2\left(D_{12}^{T}D_{12}\right)^{-1}B_2^{T}X+C_1^{T}\left(I-D_{12}\left(D_{12}^{T}D_{12}\right)^{-1}D_{12}^{T}\right)C_1=0, \qquad (8.117)
$$
and
$$
\left(A-B_1D_{21}^{T}\left(D_{21}D_{21}^{T}\right)^{-1}C_2\right)Y+Y\left(A-B_1D_{21}^{T}\left(D_{21}D_{21}^{T}\right)^{-1}C_2\right)^{T}-YC_2^{T}\left(D_{21}D_{21}^{T}\right)^{-1}C_2Y+B_1\left(I-D_{21}^{T}\left(D_{21}D_{21}^{T}\right)^{-1}D_{21}\right)B_1^{T}=0. \qquad (8.118)
$$
The minimized H2 norm is given by
$$
\min_{\text{stabilizing }K}\|\mathrm{LFT}_l(P,K)\|_2^2=\|G_{12}\|_2^2+\left\|\tilde{\Theta}_{12}\right\|_2^2=\operatorname{trace}\left(B_1^{T}XB_1\right)+\operatorname{trace}\left(\left(D_{12}^{T}D_{12}\right)^{\frac{1}{2}}F_uYF_u^{T}\left(D_{12}^{T}D_{12}\right)^{\frac{1}{2}}\right). \qquad (8.119)
$$

Proof From (8.21), one has


First, the following shows how to make G11 inner. It can be derived that

(8.120)

Applying the state similarity transformation $\begin{bmatrix}I&0\\X&I\end{bmatrix}^{-1}$ on the left and $\begin{bmatrix}I&0\\X&I\end{bmatrix}$ on the right will yield

(8.121)

Let $W_{uu}=\left(D_{12}^{T}D_{12}\right)^{-0.5}$ and $F_u=-\left(D_{12}^{T}D_{12}\right)^{-1}\left(B_2^{T}X+D_{12}^{T}C_1\right)$, where X satisfies
$$
(A+B_2F_u)^{T}X+X(A+B_2F_u)+\left(C_1+D_{12}F_u\right)^{T}\left(C_1+D_{12}F_u\right)=0.
$$
Then the ARE of (8.117) is derived by substituting the Fu so defined into the above equation. This gives $G_{11}^{\sim}G_{11}=I$. Note that X is the observability gramian of G12, and the realization of $G_{11}^{\sim}G_{12}$ is given by, with the similarity transformation used in the state-space model,

(8.122)

Therefore, one concludes that for an inner G11 in (8.112), $G_{11}^{\sim}G_{12}$ is always an anti-stable function in $RH_2^{\perp}$. The H2 norm of (8.112) can then be rewritten as
$$
\min_{\Psi\in RH_\infty}\|G_{12}+G_{11}\Psi\|_2^2=\min_{\Psi\in RH_\infty}\left\|G_{11}^{\sim}G_{12}+\Psi\right\|_2^2=\min_{\Psi\in RH_\infty}\left(\left\|G_{11}^{\sim}G_{12}\right\|_2^2+\|\Psi\|_2^2\right)=\left\|G_{11}^{\sim}G_{12}\right\|_2^2+\min_{\Psi\in RH_\infty}\|\Psi\|_2^2. \qquad (8.123)
$$

Thus, minimizing the 2-norm of the overall transfer function is equivalent to minimizing the 2-norm of the stable transfer function Ψ.
Similarly, for a co-inner $\tilde{\Theta}_{22}$, one has
$$
\min_{\Phi\in RH_\infty}\|\Psi\|_2^2=\min_{\Phi\in RH_\infty}\left\|\tilde{\Theta}_{12}-\Phi\tilde{\Theta}_{22}\right\|_2^2=\min_{\Phi\in RH_\infty}\left\|\tilde{\Theta}_{12}\tilde{\Theta}_{22}^{\sim}-\Phi\right\|_2^2=\min_{\Phi\in RH_\infty}\left(\left\|\tilde{\Theta}_{12}\tilde{\Theta}_{22}^{\sim}\right\|_2^2+\|\Phi\|_2^2\right), \qquad (8.124)
$$
where $\tilde{W}_{yy}=\left(D_{21}D_{21}^{T}\right)^{-0.5}$, $H_y=-\left(YC_2^{T}+B_1D_{21}^{T}\right)\left(D_{21}D_{21}^{T}\right)^{-1}$, and Y satisfies
$$
\left(B_1+H_yD_{21}\right)\left(B_1+H_yD_{21}\right)^{T}+\left(A+H_yC_2\right)Y+Y\left(A+H_yC_2\right)^{T}=0, \qquad (8.125)
$$

and the ARE of (8.118) is obtained by substituting Hy into the above equation. Obviously, the optimal solution is given by Φ = 0, and then
$$
\min_{\text{stabilizing }K}\|\mathrm{LFT}_l(P,K)\|_2^2=\|G_{12}\|_2^2+\left\|\tilde{\Theta}_{12}\right\|_2^2=\operatorname{trace}\left(B_1^{T}XB_1\right)+\operatorname{trace}\left(\left(D_{12}^{T}D_{12}\right)^{\frac{1}{2}}F_uYF_u^{T}\left(D_{12}^{T}D_{12}\right)^{\frac{1}{2}}\right). \qquad (8.126)
$$


The dual approach can also be derived, as follows.

8.6.2 Method II: Using a Left CSD Associated with a Right One

Dually, for a left CSD associated with a right one, one has, from (8.1) and (8.12),
$$
\|\mathrm{LFT}_l(P,K)\|_2^2=\left\|\mathrm{CSD}_l\!\left(\begin{bmatrix}I&\tilde{G}_{12}(s)\\0&\tilde{G}_{22}(s)\end{bmatrix},\ \mathrm{CSD}_r\!\left(\begin{bmatrix}\Theta_{11}&\Theta_{12}\\0&I\end{bmatrix},\Phi\right)\right)\right\|_2^2=\left\|\tilde{G}_{12}+\tilde{\Psi}\tilde{G}_{22}\right\|_2^2, \qquad (8.127)
$$
where
$$
\tilde{\Psi}=\mathrm{CSD}_r\!\left(\begin{bmatrix}\Theta_{11}&\Theta_{12}\\0&I\end{bmatrix},\Phi\right)=\Theta_{11}\Phi+\Theta_{12}. \qquad (8.128)
$$

The optimal H2 controller is given by Lemma 8.3; the proof of this dual part is skipped. It is interesting but not surprising to find that the two resultant controllers are identical. The optimal controller is actually unique.

Lemma 8.3 The H2 norm of (8.127) is minimized, with a co-inner G̃22 and an inner Θ11, by Φ = 0; the minimized H2 norm is given by
$$
\min_{\text{stabilizing }K}\|\mathrm{LFT}_l(P,K)\|_2^2=\left\|\tilde{G}_{12}\right\|_2^2+\|\Theta_{12}\|_2^2=\operatorname{trace}\left(C_1YC_1^{T}\right)+\operatorname{trace}\left(\left(D_{21}D_{21}^{T}\right)^{\frac{1}{2}}H_y^{T}XH_y\left(D_{21}D_{21}^{T}\right)^{\frac{1}{2}}\right). \qquad (8.129)
$$
The H2 optimal controller can be obtained as

(8.130)

where …(s) is obtained from (8.39) and other elements are identical to the ones in
(8.114). 
It should be noted that for the H∞ synthesis, the numerators of the coprime factorizations in (8.6) serve to make G1(s) J-lossless and Θ̃(s) dual J-lossless, while the dual work of (8.11) is to obtain a dual J-lossless matrix G̃1(s) and a J-lossless matrix Θ(s), as will be discussed in Chap. 9.

8.7 Example of the Output Feedback H2 Optimal Control Problem

Consider the standard control problem shown in Fig. 8.20, where the realization of G(s) is minimal with dim(A) = n and the weighting matrices U, V, Q, and R are symmetric with U ≥ 0, Q ≥ 0, V > 0, and R > 0. The H2 control problem in Fig. 8.20, which is essentially the same as the one shown in Fig. 8.11 but inclusive of the weighting functions, is to find a stabilizing feedback controller K which minimizes the 2-norm of the closed-loop transfer function matrix from $\begin{bmatrix}w_1\\w_2\end{bmatrix}$ to $\begin{bmatrix}z_1\\z_2\end{bmatrix}$, i.e.,
$$
\min_K\|\mathrm{LFT}_l(P,K)\|_2,
$$
where

Fig. 8.20 H2 control problem



It can be found in this example that $D_{12}^{T}C_1=0$ and $B_1D_{21}^{T}=0$. Suppose that U ≥ 0 and Q ≥ 0 are chosen such that the associated transfer matrices are of full column rank and of full row rank on the imaginary axis, respectively. By applying a right CSD associated with a left one as in Sect. 8.3, one has, under the assumptions of Sect. 8.5 and from Lemma 8.2,
$$
W_{uu}=\left(D_{12}^{T}D_{12}\right)^{-\frac{1}{2}}=R^{-\frac{1}{2}},\qquad \tilde{W}_{yy}=\left(D_{21}D_{21}^{T}\right)^{-\frac{1}{2}}=V^{-\frac{1}{2}},
$$
$$
F_u=-R^{-1}B^{T}X,\qquad H_y=-YC^{T}V^{-1},
$$

where
 
$$
X=\operatorname{Ric}(H_X)=\operatorname{Ric}\begin{bmatrix}A&-BR^{-1}B^{T}\\-C^{T}QC&-A^{T}\end{bmatrix}, \qquad (8.131)
$$
$$
Y=\operatorname{Ric}(H_Y)=\operatorname{Ric}\begin{bmatrix}A^{T}&-C^{T}V^{-1}C\\-BUB^{T}&-A\end{bmatrix}, \qquad (8.132)
$$

such that G11 is inner and Θ̃22 is co-inner. Note that all eigenvalues of $A+BF_u=A-BR^{-1}B^{T}X$ and $A+H_yC=A-YC^{T}V^{-1}C$ are in the open left-half plane. Thus one has

(8.133)

and

(8.134)

Furthermore, it can be verified that

(8.135)

(8.136)

Then one has, from (8.27),

(8.137)

and from (8.114), the optimal controller, by letting Φ = 0, is given by

(8.138)

This shows that the optimal controller is of observer-based type, where Fu and Hy are solved such that G11 is inner and Θ̃22 is co-inner.

8.7.1 A Numerical Example

Consider the motion of an antenna discussed in the book [12], which can be described by the state differential equation, with properly defined state variables,
$$
\dot{x}(t)=\begin{bmatrix}0&1\\0&-\alpha\end{bmatrix}x(t)+\begin{bmatrix}0\\\kappa\end{bmatrix}u(t)+\begin{bmatrix}0\\\gamma\end{bmatrix}\tau_d(t)=Ax(t)+Bu(t)+\Gamma\tau_d(t),
$$
where τd denotes the disturbing torque. Furthermore, one assumes that the observed variable is given by
$$
\eta(t)=\begin{bmatrix}1&0\end{bmatrix}x(t)+v_m(t)=Cx(t)+v_m(t),
$$
in which vm(t) denotes white noise with constant scalar intensity Vm. The simplified block diagram of the control system is depicted in Fig. 8.21. Then an optimal H2 observer-based control synthesis problem is posed in Fig. 8.22. In this example, the purpose of the control scheme is to minimize the criterion
$$
\int_0^{\infty}\left(Ru^2(t)+Qy^2(t)\right)dt=\int_0^{\infty}\left(z_1^2(t)+z_2^2(t)\right)dt.
$$

Fig. 8.21 Block diagram of system

Fig. 8.22 Block diagram of the observer-based controller

With the specified yardstick, one can redraw Fig. 8.22 as Fig. 8.23, in which U, Q, R, and V are the weighting functions. The corresponding LFT and CSD representations are illustrated in Fig. 8.1, where

One now considers the following numerical values [12]: κ = 0.787 rad/(V s²), α = 4.6 s⁻¹, γ = 0.1 kg⁻¹ m⁻², Vd = 10 N² m² s, and Vm = 10⁻⁷ rad² s. Furthermore, one has U = 0.4018, Q = 1, R = 0.00002, and V = 10⁻⁷; then, by computing the CSD form derived before, the optimal controller and observer gains can be obtained by Lemma 8.2 as
$$
\tilde{W}_{yy}=\left(10^{-7}\right)^{-\frac{1}{2}},\qquad W_{uu}=R^{-\frac{1}{2}}=(0.00002)^{-\frac{1}{2}},
$$
$$
X=\operatorname{Ric}\begin{bmatrix}A&-BR^{-1}B^{T}\\-C^{T}QC&-A^{T}\end{bmatrix}=\begin{bmatrix}0.1098&0.0059\\0.0059&0.0005\end{bmatrix},
$$

Fig. 8.23 Block diagram with the weighting function
$$
Y=\operatorname{Ric}\begin{bmatrix}A^{T}&-C^{T}V^{-1}C\\-BUB^{T}&-A\end{bmatrix}=\begin{bmatrix}4.0357\times10^{-6}&8.1436\times10^{-5}\\8.1436\times10^{-5}&3.6611\times10^{-3}\end{bmatrix};
$$
then
$$
F=-R^{-1}B^{T}X=-\begin{bmatrix}223.6068&18.6992\end{bmatrix},
$$
$$
H=-YC^{T}V^{-1}=-\begin{bmatrix}40.3573\\814.3574\end{bmatrix}.
$$

The optimal controller is given by

The optimal 2-norm of the closed-loop system can be obtained by Lemma 8.2 in (8.119) as
$$
\min_{\text{stabilizing }K}\|\mathrm{LFT}_l(P,K)\|_2^2=\operatorname{trace}\left(B_1^{T}XB_1\right)+\operatorname{trace}\left(R^{\frac{1}{2}}F_uYF_u^{T}R^{\frac{1}{2}}\right)=9.0779\times10^{-5},
$$

or, from the dual part, one can also verify the same result by (8.129):
$$
\min_{\text{stabilizing }K}\|\mathrm{LFT}_l(P,K)\|_2^2=\operatorname{trace}\left(C_1YC_1^{T}\right)+\operatorname{trace}\left(\left(D_{21}D_{21}^{T}\right)^{\frac{1}{2}}H_y^{T}XH_y\left(D_{21}D_{21}^{T}\right)^{\frac{1}{2}}\right)=9.0779\times10^{-5}.
$$
These two calculation results reveal again that the two topologies are in fact the same in essence. Note that all numerical solutions are identical to the ones in the reference [12].
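The antenna computations above can be reproduced with a few lines of scipy. This verification sketch assumes, per the weighting structure of Fig. 8.20, that the process noise enters as ΓVdΓᵀ (equivalently B1 = Γ√Vd) and that D12ᵀD12 = R:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

kappa, alpha, gamma = 0.787, 4.6, 0.1       # plant data from [12]
Vd, Vm, Q, R = 10.0, 1e-7, 1.0, 2e-5

A = np.array([[0.0, 1.0], [0.0, -alpha]])
B = np.array([[0.0], [kappa]])
C = np.array([[1.0, 0.0]])
Gam = np.array([[0.0], [gamma]])

X = solve_continuous_are(A, B, Q * C.T @ C, np.array([[R]]))            # (8.131)
Y = solve_continuous_are(A.T, C.T, Vd * Gam @ Gam.T, np.array([[Vm]]))  # (8.132)
F = -B.T @ X / R                                                        # -R^{-1}B'X
H = -Y @ C.T / Vm                                                       # -YC'V^{-1}

assert np.allclose(F, [[-223.6068, -18.6992]], atol=0.5)
assert np.allclose(H, [[-40.3573], [-814.3574]], atol=1.0)

# Minimized H2 norm per (8.119), with B1 = Gam*sqrt(Vd) assumed:
B1 = Gam * np.sqrt(Vd)
J = float(np.trace(B1.T @ X @ B1)) + R * (F @ Y @ F.T).item()
assert abs(J - 9.0779e-5) < 5e-7
```

The computed gains and optimal cost agree with the tabulated values to rounding accuracy.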

8.8 Example of LQR Controller

In Sect. 7.4, the LQR problem was first discussed via the coprime factorization. Consider this problem again as depicted in Fig. 8.24. This problem involves a stabilizing state feedback $u=F_ux+W_{uu}u'$ such that
$$
\min_{u=Fx}\left(\left\|Q^{1/2}y\right\|_2^2+\left\|R^{1/2}u\right\|_2^2\right), \qquad (8.139)
$$
where $Q=Q^{T}\ge 0$ and $R=R^{T}>0$, subject to
$$
\dot{x}(t)=Ax(t)+Bu(t),\quad x(0)=x_o,\qquad y(t)=Cx(t). \qquad (8.140)
$$

Now consider the problem in Fig. 8.24 in the form of the H2 optimal control problem; that is, to find a stabilizing feedback gain which minimizes the 2-norm of the closed-loop transfer function from w to $\begin{bmatrix}y_w\\u_w\end{bmatrix}$, i.e.,
$$
\min_K\|\mathrm{LFT}_l(P,K)\|_2,
$$

Fig. 8.24 Feedback system with weighting functions



where

It can be found here that $D_{11}=0$, $D_{12}^{T}C_1=0$, $D_{12}^{T}D_{12}=R>0$, and $B_1D_{21}^{T}=0$. Suppose that is a minimal realization; then, for R > 0, is of full column rank on the imaginary axis. Suppose that Q is given such that is of full column rank on the imaginary axis.

For a right CSD associated with a left CSD, one has, from (8.21) and (8.22),

and

(8.141)

Since P12(s) is of full column rank on the imaginary axis, one has by Lemma 8.2 that $W_{uu}=\left(D_{12}^{T}D_{12}\right)^{-\frac{1}{2}}=R^{-\frac{1}{2}}$ and $F_u=-R^{-1}B^{T}X$, where
$$
X=\operatorname{Ric}\begin{bmatrix}A&-BR^{-1}B^{T}\\-C^{T}QC&-A^{T}\end{bmatrix}, \qquad (8.142)
$$

and then is inner.

Since D21 = 0 and C2 = I in this example, this is in fact an SF problem as listed in Table 8.1. Then the state-space realization of the lcf $\tilde{G}_2=\tilde{\Pi}^{-1}\tilde{\Theta}$ can be found as

(8.143)

where

(8.144)

and

(8.145)

The optimal H2 controller is then determined by $K_2=\mathrm{CSD}_l\left(\tilde{\Pi},0\right)=F_u$ with
$$
\min_{\text{stabilizing }K}\|\mathrm{LFT}_l(P,K)\|_2=\|G_{12}\|_2=\sqrt{\operatorname{trace}(X)} \qquad (8.146)
$$
for B1 = I.
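The trace identity of (8.146) can be checked numerically: the closed-loop observability gramian equals the ARE solution X, so the optimal cost is trace(X). The sketch below uses assumed double-integrator data with C = I, B1 = I, Q = I, and R = 1 (illustrative, not from the book):

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Assumed LQR data (double integrator), C = I, Q = I, R = 1, B1 = I
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

X  = solve_continuous_are(A, B, Q, R)   # (8.142) with C = I
Fu = -np.linalg.solve(R, B.T @ X)       # Fu = -R^{-1}B'X
Acl = A + B @ Fu

# Closed loop from w (B1 = I) to [Q^{1/2}x; R^{1/2}u]:
Ccl = np.vstack([np.eye(2), Fu])
Xo = solve_continuous_lyapunov(Acl.T, -Ccl.T @ Ccl)

assert np.allclose(Xo, X, atol=1e-8)               # gramian = ARE solution
assert abs(np.trace(X) - 2.0 * np.sqrt(3.0)) < 1e-8  # known value for this plant
```

The first assertion is exactly the rewriting of the ARE as a Lyapunov equation in the closed-loop matrix A + BFu, which is what makes (8.146) hold.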

8.9 More Numerical Examples

To demonstrate the validity and potential effectiveness of the theoretical results


as well as state-space formulae presented above, one considers a model reference
control problem shown in Fig. 8.25, which was originally discussed in the book [22].

Fig. 8.25 Model reference control

Fig. 8.26 LFT framework for the model reference control

The target of this problem is to minimize the weighted control effort uw and the model tracking error e through minimizing the H2 norm of the transfer function matrix from r to $\begin{bmatrix}e\\u_w\end{bmatrix}$ over all stabilizing controllers, i.e.,
$$
\min\left(\int_0^{\infty}\|u_w\|^2\,dt+\int_0^{\infty}\|e\|^2\,dt\right).
$$
It is assumed that is a minimal realization, , and .

Figure 8.26 describes explicitly the input/output signals of the SCC with regard to this particular design problem, of which a state-space block diagram is depicted in Fig. 8.27. By taking out the integrators and the controller, the interconnection matrix P(s) in the SCC is actually the "open-loop" system from all "input" signals $\begin{bmatrix}x_T^{T}&x_P^{T}&x_w^{T}&w&u\end{bmatrix}^{T}$

Fig. 8.27 State-space representation

to the "output" $\begin{bmatrix}\dot{x}_T^{T}&\dot{x}_P^{T}&\dot{x}_w^{T}&e&u_w&y\end{bmatrix}^{T}$. The state-space form of P can then be directly obtained from Fig. 8.27 as

As an illustrative example, the following textbook-like data are assumed for this design. Let

Then the state-space representation of P is determined as

It can be verified that (A,B2) is stabilizable and (C2,A) detectable, with D11 = 0 and D22 = 0, and that the system satisfies the following assumptions:

1. $\operatorname{rank}\begin{bmatrix}A-j\omega I&B_2\\C_1&D_{12}\end{bmatrix}=n+1$ and $\operatorname{rank}D_{12}=1$.
2. $\operatorname{rank}\begin{bmatrix}A-j\omega I&B_1\\C_2&D_{21}\end{bmatrix}=n+1$ and $\operatorname{rank}D_{21}=1$.

To determine the H2 optimal solution, the four parameters Hy, Fu, Wuu, and W̃yy in (8.27) with (8.39) can be selected, according to Lemma 8.2, as
$$
W_{uu}=\left(D_{12}^{T}D_{12}\right)^{-\frac{1}{2}}=10,\qquad \tilde{W}_{yy}=\left(D_{21}D_{21}^{T}\right)^{-\frac{1}{2}}=1,
$$
$$
F_u=\begin{bmatrix}38.6975&4.5182&199.1704&123.494&12.1515&1.7038\end{bmatrix},
$$
$$
H_y=\begin{bmatrix}0&1&0&0&0&0\end{bmatrix}^{T}.
$$

Therefore, the optimal controller was found to yield a closed-loop H2 performance with the SCC plant P. All stabilizing controllers can be characterized by (8.27) as $K=\mathrm{CSD}_l\left(\tilde{\Pi},\Phi\right)$, $\forall\Phi\in RH_\infty$, or by (8.39) as $K=\mathrm{CSD}_r(\Pi,\Phi)$, $\forall\Phi\in RH_\infty$, where

Note that it can be verified that $\tilde{\Pi}\Pi=I$, which in turn shows $\mathrm{CSD}_r(\Pi,\Phi)=\mathrm{CSD}_l\left(\tilde{\Pi},\Phi\right)$.
Consequently, the optimal H2 controller K(s) is given from (8.114) or (8.130) as

This gives
$$
K_{opt}(s)=\frac{4.5s^5+97.4s^4+652.2s^3+1417.1s^2+1244.8s+387}{s^6+25.4s^5+323.8s^4+2054.4s^3+5082.4s^2+4292.5s+260.5}.
$$

The optimal 2-norm of the closed-loop system can be obtained from (8.119) as
$$
\min_{\text{stabilizing }K}\|\mathrm{LFT}_l(P,K)\|_2^2=\operatorname{trace}\left(B_1^{T}XB_1\right)+\operatorname{trace}\left(\left(D_{12}^{T}D_{12}\right)^{\frac{1}{2}}F_uYF_u^{T}\left(D_{12}^{T}D_{12}\right)^{\frac{1}{2}}\right)=0.013,
$$
or, from the dual part, one can also verify the same result by (8.129):
$$
\min_{\text{stabilizing }K}\|\mathrm{LFT}_l(P,K)\|_2^2=\operatorname{trace}\left(C_1YC_1^{T}\right)+\operatorname{trace}\left(\left(D_{21}D_{21}^{T}\right)^{\frac{1}{2}}H_y^{T}XH_y\left(D_{21}D_{21}^{T}\right)^{\frac{1}{2}}\right)=0.013.
$$

The two calculation results above reconfirm that these two approaches are in fact
the same in essence. All the calculations and results can be verified by using the
function h2syn in MATLAB® Robust Control Toolbox.

8.10 Summary

This chapter has proposed a unified approach to describe and synthesize the stabilizing controllers and the H2 optimal controller by finding two coupled CSD matrices. Note that the selection of weighting functions is not a major issue addressed in this book. The obtained results reveal an interesting feature in that the original output feedback problem can be simplified to the solutions of two less complicated subproblems. It is found that the proposed approach admits separate computations of the estimator and regulator gains. In fact, this result is similar to the "separation principle" in linear control systems theory. The feedback control gain and the observer gain are found, respectively, in the subproblems so as to satisfy a specific cost function. Notice that, on the basis of the proposed CSD method, specific control problems can be easily solved in an explicit form. The explicit formulae obtained from the coupled CSD method are beneficial for analyzing the closed-loop characteristics in various control problems.

References

1. Boulet B, Francis BA (1998) Consistency of open-loop experimental frequency-response data with coprime factor plant models. IEEE Trans Autom Control 43:1680–1691
2. Chen J (1997) Frequency-domain tests for validation of linear fractional uncertain models. IEEE Trans Autom Control 42:748–760
3. Doyle JC, Glover K, Khargonekar PP, Francis BA (1989) State-space solutions to standard H2 and H∞ control problems. IEEE Trans Autom Control 34:831–847
4. Evans WR (1948) Graphical analysis of control systems. Trans AIEE 67:547–551
5. Francis BA, Zames G (1987) On H∞-optimal sensitivity theory for SISO feedback systems. IEEE Trans Autom Control 29:9–16
6. Glover K, Doyle JC (1988) State-space formulae for all stabilizing controllers that satisfy an H∞-norm bound and relations to risk sensitivity. Syst Control Lett 11:167–172
7. Green M, Limebeer DJN (1995) Linear robust control. Prentice Hall, Englewood Cliffs
8. Kimura H (1987) Directional interpolation approach to H∞ optimization and robust stabilization. IEEE Trans Autom Control 32:1085–1093
9. Kimura H (1995) Chain-scattering representation, J-lossless factorization and H∞ control. J Math Syst Estim Control 5:203–255
10. Kimura H (1997) Chain-scattering approach to H∞ control. Birkhäuser, Boston
11. Kimura H, Okunishi F (1995) Chain-scattering approach to control system design. In: Isidori A (ed) Trends in control: a European perspective. Springer, Berlin
12. Kwakernaak H, Sivan R (1972) Linear optimal control systems. Wiley, New York
13. MacFarlane AGJ, Postlethwaite I (1977) Characteristic frequency functions and characteristic gain functions. Int J Control 26:265–278
14. Nyquist H (1932) Regeneration theory. Bell Syst Tech J 11:126–147
15. Rosenbrock HH (1974) Computer aided control system design. Academic, New York
16. Safonov MG, Athans M (1977) Gain and phase margin for multiloop LQG regulators. IEEE Trans Autom Control 22:173–178
17. Smith RS, Doyle JC (1992) Model validation: a connection between robust control and identification. IEEE Trans Autom Control 37:942–952
18. Tsai MC, Tsai CS, Sun YY (1993) On discrete-time H∞ control: a J-lossless coprime factorization approach. IEEE Trans Autom Control 38:1143–1147
19. Zames G (1981) Feedback and optimal sensitivity: model reference transformations, multiplicative seminorms, and approximate inverses. IEEE Trans Autom Control 26:301–320
20. Zames G, Francis BA (1983) Feedback, minimax sensitivity, and optimal robustness. IEEE Trans Autom Control 28:585–601
21. Zhou K, Doyle JC, Glover K (1995) Robust and optimal control. Prentice Hall, Upper Saddle River
22. Zhou K, Doyle JC (1998) Essentials of robust control. Prentice Hall, Upper Saddle River
Chapter 9
A CSD Approach to H-Infinity Controller Synthesis

H∞ optimal control, which minimizes the H∞ norm of a closed-loop system, has been developed over the last 30 years and has been applied in various domains. The original H∞ optimal control problem involves an equivalent model matching problem, which can be transformed into a four-block distance problem. By applying spectral factorizations, the four-block distance problem can be reduced to a Nehari problem, and Hankel norm approximation can be considered [2–4, 6]. The operator theory approach is mathematically involved, and numerical solution procedures are difficult to develop for problems in general form. Notable progress was made in finding suboptimal solutions of a general control synthesis problem by solving two algebraic Riccati equations (AREs) [2, 3, 10, 13]. However, even with such solution procedures, questions of "why?" and "how?" often arise from students and engineers who want to understand and use them. An alternative development based on the framework of J-lossless coprime factorizations was proposed by Green, in which the solutions can be characterized in terms of transfer function matrices [5]. A similar framework based on a single chain scattering description (CSD) was initially proposed by Kimura [10]. As described in Chap. 8, the general four-block problem can be solved by augmenting with some fictitious signals. Furthermore, since the transformation from LFT to CSD does not guarantee stability of the resulting CSD matrix, the J-lossless factorization with an outer matrix cannot be found directly by the coprime factorization-based method. In this book, the proposed H∞ CSD solution framework involves constructing two coupled (right and left) CSD matrices by solving two J-lossless coprime factorizations and is fairly straightforward. The method is generally valid and does not need to introduce any fictitious signals for matrix augmentation. Based on Green's approach of J-lossless coprime factorizations, the proposed CSD framework is significantly different from Kimura's approach.
Many applications using the CSD and J-lossless factorization approaches can be found in various control problems, such as the influence of weighting function adjustment [1], the state-delayed problem [8], the nonlinear systems control problem [9], and the simultaneous stabilization problem [11]. This chapter aims at developing a CSD framework to solve the H∞ suboptimal control synthesis problem. As in Chap. 8, the standard control configuration (SCC) description is first formulated into a coupled chain scattering-matrix description, where graphic representations are utilized to interpret the matrix description. Specific right and left CSDs are to be constructed, in the state-space form and at the transfer function level, to characterize the H∞ solutions. This approach provides a comprehensive understanding for control engineers. Illustration examples are given to show the solution procedures, where the general H∞ solutions are derived from finding two coupled CSDs (one right and one left) as well as two successive coprime factorizations with J-lossless numerators.

M.-C. Tsai and D.-W. Gu, Robust and Optimal Control: A Two-port Framework Approach, Advances in Industrial Control, DOI 10.1007/978-1-4471-6257-5__9, © Springer-Verlag London 2014

9.1 H∞ Control Problem

Consider the general feedback control framework discussed in Chap. 8. For a given SCC plant $P=\begin{bmatrix}P_{11}&P_{12}\\P_{21}&P_{22}\end{bmatrix}$ and a prespecified γ > 0, the general H∞ suboptimal control problem is to find a stabilizing controller, denoted by K∞, such that the closed-loop transfer function from w to z satisfies
$$
\|\mathrm{LFT}_l(P,K_\infty)\|_\infty<\gamma \qquad (9.1)
$$
or, equivalently,
$$
\left\|\mathrm{LFT}_l\left(P_\gamma,K_\infty\right)\right\|_\infty<1, \qquad (9.2)
$$
where
$$
P_\gamma=\begin{bmatrix}\frac{1}{\gamma}P_{11}&\frac{1}{\gamma}P_{12}\\P_{21}&P_{22}\end{bmatrix}\quad\text{or}\quad P_\gamma=\begin{bmatrix}\frac{1}{\gamma}P_{11}&P_{12}\\\frac{1}{\gamma}P_{21}&P_{22}\end{bmatrix}. \qquad (9.3)
$$
Similar to the deduction from Figs. 8.1 to 8.4, one has, as depicted in Fig. 9.1,
$$
z=\mathrm{LFT}_l\left(P_\gamma,K_\infty\right)w=\mathrm{CSD}_r\!\left(\begin{bmatrix}G_{11}&G_{12}\\G_{21}&G_{22}\end{bmatrix},\ \mathrm{CSD}_l\!\left(\begin{bmatrix}\tilde{\Theta}_{11}&\tilde{\Theta}_{12}\\\tilde{\Theta}_{21}&\tilde{\Theta}_{22}\end{bmatrix},\Phi\right)\right)w. \qquad (9.4)
$$

As mentioned earlier in Chap. 8, it is not so straightforward to determine the overall stability of the feedback system here from Fig. 9.1. However, for the particular case when G1 is J-lossless and Θ̃ is dual J-lossless, one can ensure the stability of the interconnection system for any Φ ∈ BH∞ by the small gain theory.
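The mechanism can be seen already for constant scalar data. In this assumed toy example (not from the book), a J-lossless matrix acts as a hyperbolic rotation, and its chain-scattering transform — the same form as CSDr(Π,Φ) in (9.8) — maps the open unit ball into itself, which is exactly what the small gain argument needs:

```python
import numpy as np

# A constant J-unitary matrix (hyperbolic rotation): G'JG = J, J = diag(1, -1)
t = 0.8
G = np.array([[np.cosh(t), np.sinh(t)],
              [np.sinh(t), np.cosh(t)]])
J = np.diag([1.0, -1.0])
assert np.allclose(G.T @ J @ G, J)

def csd_r(G, phi):
    """Scalar chain-scattering transform (G11*phi + G12)/(G21*phi + G22)."""
    return (G[0, 0] * phi + G[0, 1]) / (G[1, 0] * phi + G[1, 1])

# |phi| < 1  ==>  |CSDr(G, phi)| < 1: J-unitarity preserves contractiveness
for phi in np.linspace(-0.99, 0.99, 21):
    assert abs(csd_r(G, phi)) < 1.0
```

The same algebra, with the J-lossless inequality holding over the right-half plane, is what makes the frequency-domain version of this contractiveness argument go through.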

Fig. 9.1 Equivalent framework

Dually, for a system formulated in terms of a left CSD matrix $\tilde{G}_1=\begin{bmatrix}\tilde{G}_{11}&\tilde{G}_{12}\\\tilde{G}_{21}&\tilde{G}_{22}\end{bmatrix}\in RH_\infty$ associated with a right CSD matrix $\Theta=\begin{bmatrix}\Theta_{11}&\Theta_{12}\\\Theta_{21}&\Theta_{22}\end{bmatrix}\in RH_\infty$ as illustrated in Fig. 8.7, the closed-loop transfer function from w to z is given by
$$
z=\mathrm{LFT}_l\left(P_\gamma,K_\infty\right)w=\mathrm{CSD}_l\!\left(\begin{bmatrix}\tilde{G}_{11}&\tilde{G}_{12}\\\tilde{G}_{21}&\tilde{G}_{22}\end{bmatrix},\ \mathrm{CSD}_r\!\left(\begin{bmatrix}\Theta_{11}&\Theta_{12}\\\Theta_{21}&\Theta_{22}\end{bmatrix},\Phi\right)\right)w. \qquad (9.5)
$$
Apparently, one can also ensure the stability of the interconnection system for G̃1 dual J-lossless, Θ J-lossless, and Φ ∈ BH∞ by the small gain theory. The rest of this chapter is devoted to showing how to construct the required right and left CSD matrices and then the H∞ solutions of (9.2) from a given SCC plant Pγ. The approach is similar to that of the H2 optimal control case shown in the previous chapter. It will be shown that the H∞ control problem of (9.2) is reduced to two solutions which are linked to J-lossless coprime factorizations. State-space formulae of the solution procedure are provided in the next section, which contains the determination of a feedback gain F, an observer gain H, and two accompanying nonsingular matrices (W and W̃).

9.1.1 Method I: CSDr–CSDl, Right CSD Coupled with Left CSD

In summary, a set of the H∞ controllers satisfying (9.2) can be constructed via three solution steps, as illustrated in Fig. 9.2. Obviously, this CSD method relies on solving two J-lossless coprime factorizations. The following summarizes the key components of the whole framework.

Fig. 9.2 Flowchart of Method I

Fig. 9.3 $K_\infty=\mathrm{CSD}_l\left(\tilde{\Pi},\Phi\right)$

   
Step 1: Find a right coprime factorization of $\begin{bmatrix}P_{\gamma 1}\\P_{\gamma 2}\end{bmatrix}=\begin{bmatrix}G_1\\\tilde{G}_2\end{bmatrix}M_1^{-1}$ over RH∞ such that G1 is J-lossless, where

Step 2: Find a left coprime factorization of $\tilde{G}_2=\tilde{\Pi}^{-1}\tilde{\Theta}$ over RH∞ such that $\tilde{\Pi}\in GH_\infty$ and Θ̃ is dual J-lossless.

Step 3: The H∞ controller satisfying (9.2) is generated by (Fig. 9.3)
$$
K_\infty=\mathrm{CSD}_l\left(\tilde{\Pi},\Phi\right),\qquad\forall\Phi\in BH_\infty. \qquad (9.6)
$$

Finally, the input/output relationship could be realized by


z D LFT1 P ; K1 w
D CSDr .P
1 ; CSDl .P2 ; K1
 

// w
D CSDr G1 ; CSDl GQ 2 ; K 1 w

Q …
D CSDr .G1 ; CSDl . CSDl ‚; Q 1 ; K1 // w

D CSDr .G Q Q 1 ; CSDl …;
Q ˆ // w
1 ; CSDl . CSDl

‚; …
D CSDr G1 ; CSDl ‚; Q ˆ w: (9.7)

Theorem 9.1 For a given SCC plant Pγ, there exists an internally stabilizing controller K∞ such that ‖LFTl(Pγ,K∞)‖∞ < 1 if there exists an rcf of $\begin{bmatrix}P_{\gamma 1}\\P_{\gamma 2}\end{bmatrix}=\begin{bmatrix}G_1\\\tilde{G}_2\end{bmatrix}M_1^{-1}$ such that G1 is J-lossless and then an lcf of $\tilde{G}_2=\tilde{\Pi}^{-1}\tilde{\Theta}$ such that Θ̃ is dual J-lossless and Π̃ outer. Then, all proper real rational stabilizing controllers K∞ satisfying ‖LFTl(Pγ,K∞)‖∞ < 1 are given by $K_\infty=\mathrm{CSD}_l\left(\tilde{\Pi},\Phi\right)$, ∀Φ ∈ BH∞.

9.1.2 Method II: CSDl–CSDr, Left CSD Coupled with Right CSD

A dual scheme of Method I, in which a left CSD is coupled with a right CSD, can also be developed. The procedure of Method II is similar to that of Method I. The details are summarized as follows.

Recall from Chap. 5 that a row-stacked transfer matrix from the SCC plant Pγ is defined by Then the LFT formulation of the H∞ control problem can be expressed by a coupled CSD representation. A particular lcf of the stacked transfer matrix, denoted by , is to be found such that the numerator G̃1 is dual J-lossless, and a particular rcf of the right part numerator, denoted by G2 = ΘΠ⁻¹, is found such that its numerator Θ is J-lossless. Then the H∞ controllers can be generated by the denominator Π. Figure 9.4 shows a graphical flowchart for the proposed solution process. The key concept of each step is described below.

Step 1: Find a left coprime factorization of over RH∞ such that the left part numerator G̃1 is dual J-lossless (Fig. 9.5).

Fig. 9.4 Flowchart of Method II

Fig. 9.5 Framework of Method II

Fig. 9.6 $K_\infty=\mathrm{CSD}_r(\Pi,\Phi)$

Step 2: Find a right coprime factorization of G2 = ΘΠ⁻¹ over RH∞ such that Π ∈ GH∞ and Θ is J-lossless.

Step 3: The H∞ controller set is generated by the denominator Π as (Fig. 9.6)
$$
K_\infty=\mathrm{CSD}_r(\Pi,\Phi)=\left(\Pi_{11}\Phi+\Pi_{12}\right)\left(\Pi_{21}\Phi+\Pi_{22}\right)^{-1},\qquad\forall\Phi\in BH_\infty. \qquad (9.8)
$$

Table 9.1 Three coupled CSD formulations (CSDr–CSDl case)

- Stabilization problem. Objective: find a K such that LFTl(P,K) is stable. Requirements on the coupled CSD matrices: $G_1=\begin{bmatrix}G_{11}&G_{12}\\0&I\end{bmatrix}\in RH_\infty$ and $\tilde{\Theta}=\begin{bmatrix}I&\tilde{\Theta}_{12}\\0&\tilde{\Theta}_{22}\end{bmatrix}\in RH_\infty$.
- H2 control problem. Objective: minimize ‖LFTl(P,K)‖2 over stabilizing K. Requirements: (i) $G_1=\begin{bmatrix}G_{11}&G_{12}\\0&I\end{bmatrix}\in RH_\infty$ and $\tilde{\Theta}=\begin{bmatrix}I&\tilde{\Theta}_{12}\\0&\tilde{\Theta}_{22}\end{bmatrix}\in RH_\infty$; (ii) G11 inner and Θ̃22 co-inner.
- H∞ control problem. Objective: ‖LFTl(Pγ,K)‖∞ < 1 over stabilizing K. Requirements: (i) G1 J-lossless; (ii) Θ̃ dual J-lossless.

Finally, the input/output relationship can be realized by
$$
\begin{aligned}
z&=\mathrm{LFT}_l\left(P_\gamma,K_\infty\right)w\\
&=\mathrm{CSD}_l\left(P_{\gamma 1},\ \mathrm{CSD}_r\left(P_{\gamma 2},K_\infty\right)\right)w\\
&=\mathrm{CSD}_l\left(\tilde{G}_1,\ \mathrm{CSD}_r\left(G_2,K_\infty\right)\right)w\\
&=\mathrm{CSD}_l\left(\tilde{G}_1,\ \mathrm{CSD}_r\left(\mathrm{CSD}_r\left(\Theta,\Pi^{-1}\right),K_\infty\right)\right)w\\
&=\mathrm{CSD}_l\left(\tilde{G}_1,\ \mathrm{CSD}_r\left(\mathrm{CSD}_r\left(\Theta,\Pi^{-1}\right),\ \mathrm{CSD}_r\left(\Pi,\Phi\right)\right)\right)w\\
&=\mathrm{CSD}_l\left(\tilde{G}_1,\ \mathrm{CSD}_r\left(\Theta,\Phi\right)\right)w. \qquad (9.9)
\end{aligned}
$$
Theorem 9.2 For a given SCC plant Pγ, there exists an internally stabilizing controller K∞ such that ‖LFTl(Pγ,K∞)‖∞ < 1 if there exists an lcf of such that G̃1 is dual J-lossless and then an rcf of G2 = ΘΠ⁻¹ such that Θ is J-lossless and Π outer. Then, all proper real rational stabilizing controllers K∞ satisfying ‖LFTl(Pγ,K∞)‖∞ < 1 (or $\left\|\mathrm{CSD}_l\left(\tilde{G}_1,\ \mathrm{CSD}_r(\Theta,\Phi)\right)\right\|_\infty<1$) are given by K∞ = CSDr(Π,Φ), ∀Φ ∈ BH∞.

In the following sections, some specific H∞ control problems will be discussed to demonstrate the important issues of H∞ synthesis, after a section on state-space formulae. It will be seen that both Methods I and II aim to solve two J-lossless coprime factorizations, although the operational sequences are different from each other. In fact, using Method I is identical to using Method II for a general four-block problem. However, for specific control problems (e.g., the two-block problems), it is necessary to choose a proper method to make the solution process more efficient. In the two-block problems with P21 invertible, Method I is more appealing since the numerator of the second J-lossless coprime factorization of Method I can be an identity matrix. On the other hand, Method II is more suitable for two-block problems with P12 invertible, for a similar reason.

Overall, solutions to the stabilizing, H2, and H∞ control problems are established by means of a unified procedure within the CSD framework with respect to finding corresponding coprime factorizations, as summarized in Table 9.1.

In the CSD framework for controller synthesis, the procedure of finding two successive coprime factorizations is unified for solving the stabilization, H2, and H∞ (sub)optimal problems. The difference is that, as mentioned above, the numerators of the coprime factorizations in the problem of finding all stabilizing solutions have to be in triangular forms, while in the H2 or H∞ problems the numerators (or part of them) are related to inner/J-lossless matrices.

9.2 State-Space Formulae of H1 Controllers

Similar to the H2 control problem defined in (8.14), let

(9.10)

be a (q1 C q2 )  (m1 C m2 ) transfer function matrix with dim(A) D n, and the


following assumptions are satisfied.
(a) (A,B2) is stabilizable 
and (C2 ,A) is detectable.
A  j!I B2
(b) rank D m2 C n and rank D12 D m2 .
C1 D12
 
A  j!I B1
(c) rank D q2 C n and rank D21 D q2 .
C2 D21
Notice that Assumption (a) ensures that there are right and left coprime factorizations of $P$, and Assumptions (b) and (c) are necessary, respectively, to find the two successive J-lossless coprime factorizations in solving the $H_\infty$ suboptimal control problem.
Many steps in the $H_\infty$ suboptimal controller synthesis are similar to those in Chap. 8, and symbols/variables are deliberately kept the same. To avoid excessively repetitive description, the derivation of the state-space formulae below will be simplified. Readers are advised to consult the relevant parts of previous chapters if a doubt arises.
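As a quick numerical aid, the rank conditions in Assumptions (b) and (c) can be checked on a frequency grid. The following sketch (Python with NumPy) tests Assumption (b); the plant data here is hypothetical and only illustrates the computation, and gridding of course checks only the sampled frequencies:

```python
import numpy as np

# Hypothetical state-space data for illustration only.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B2 = np.array([[0.0], [1.0]])
C1 = np.array([[1.0, 0.0], [0.0, 0.0]])
D12 = np.array([[0.0], [1.0]])

n, m2 = A.shape[0], B2.shape[1]

def rank_condition_b(w):
    """Rank of [A - jwI, B2; C1, D12] at frequency w (Assumption (b))."""
    top = np.hstack([A - 1j * w * np.eye(n), B2])
    bottom = np.hstack([C1, D12])
    return np.linalg.matrix_rank(np.vstack([top, bottom]))

# Assumption (b): rank D12 = m2 and full column rank n + m2 on the jw-axis
assert np.linalg.matrix_rank(D12) == m2
assert all(rank_condition_b(w) == n + m2 for w in [0.0, 0.5, 1.0, 10.0, 100.0])
```

Assumption (c) is checked the same way with $B_1$, $C_2$, $D_{21}$ in place of $B_2$, $C_1$, $D_{12}$.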

9.2.1 Method I: CSDr-CSDl

Step 1: Find the particular rcf such that G1 is J-lossless.

For a proper rational matrix of (9.10), the state-space realization of the stacked matrix is given, by (8.17), as

(9.11)

To construct the H1 controllers following the procedure of a right CSD associated


with a left CSD, by (8.19), the particular right coprime factorization is given

(9.12)

(9.13)

(9.14)

   
where $F_I = \begin{bmatrix} F_{u1} \\ F_w \end{bmatrix}$ and $W_I = \begin{bmatrix} W_{uu} & 0 \\ W_{wu} & W_{ww} \end{bmatrix}$ are found such that $G_1 \in RH_\infty$ is J-lossless, i.e., $G_1^\sim J G_1 = J$ and $G_1(s)^* J G_1(s) \le J$ for all $\mathrm{Re}(s) \ge 0$. It can be verified that $G_1$ defined by (9.12) is J-lossless if

$$W_I = \begin{bmatrix} \left[D_{12}^T\left(I - D_{11}D_{11}^T\right)^{-1}D_{12}\right]^{-\frac{1}{2}} & 0 \\ \left(I - D_{11}^T D_{11}\right)^{-1}D_{11}^T D_{12}\left[D_{12}^T\left(I - D_{11}D_{11}^T\right)^{-1}D_{12}\right]^{-\frac{1}{2}} & \left(I - D_{11}^T D_{11}\right)^{-\frac{1}{2}} \end{bmatrix}, \quad (9.15)$$

where I  DT11 D11 > 0 and


FI D RI1 B T X C D1T C1
X D Ric .HX /  0 (9.16)
" #
A  BRI1 D1T C1 BRI1 B T
HX D
T 2 dom .Ric/ ;
 C1T I  D1 RI1 D1T C1  A  BRI1 D1 C1
T

(9.17)
276 9 A CSD Approach to H-Infinity Controller Synthesis

 T T

D12 D12 D12 D11

RI D ; (9.18)
T
D11 D12  I  D11
T
D11
   
D1 D D12 D11 ; B D B2 B1 : (9.19)
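The operator Ric(·) used in (9.16) returns the stabilizing solution associated with a Hamiltonian matrix. A minimal numerical sketch (Python with NumPy/SciPy; this is an illustration, not the book's code) computes it from the stable invariant subspace obtained by an ordered Schur decomposition:

```python
import numpy as np
from scipy.linalg import schur

def ric(H):
    """X = Ric(H): stabilizing solution read off the stable invariant
    subspace [T1; T2] of the 2n x 2n Hamiltonian H, i.e., X = T2 T1^{-1}."""
    n = H.shape[0] // 2
    # Ordered real Schur form: stable eigenvalues (Re < 0) come first.
    _, Z, sdim = schur(H, output="real", sort=lambda re, im: re < 0.0)
    assert sdim == n, "H must have exactly n stable eigenvalues (H in dom(Ric))"
    T1, T2 = Z[:n, :n], Z[n:, :n]
    return np.linalg.solve(T1.T, T2.T).T  # X = T2 T1^{-1}

# LQR-type sanity check: A = 0, B = 1, Q = 1, R = 1 gives X = 1.
H = np.array([[0.0, -1.0],
              [-1.0, 0.0]])
X = ric(H)
```

For the $H_\infty$ problem, the Hamiltonian would be formed from $A$, $B$, $C_1$, $D_1$, and $R_I$ as in (9.17); membership in dom(Ric) additionally requires no imaginary-axis eigenvalues and an invertible $T_1$.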

Step 2: Find the particular lcf such that $\tilde\Theta$ is dual J-lossless.
Since $\tilde G_2 \in RH_\infty$ is a stable transfer matrix defined by (9.13), there exists an lcf $\tilde G_2 = \tilde\Pi^{-1}\tilde\Theta$ such that $\tilde\Pi \in GH_\infty$ and $\tilde\Theta \in RH_\infty$ is dual J-lossless, by Assumption (a). Similar to the derivation of state-space formulae in Step 1, the coprime factors can be constructed by

(9.20)

while $\tilde\Theta(s)$ is given by

(9.21)

 
For $\tilde\Theta$ to be dual J-lossless, the nonsingular matrix $\tilde W_I = \begin{bmatrix} \tilde W_{uu} & 0 \\ \tilde W_{yu} & \tilde W_{yy} \end{bmatrix} \in \mathbb{R}^{(m_2+q_2)\times(m_2+q_2)}$ should satisfy

$$\tilde W_I D_{\tilde G_2} J_{m_2,m_1} D_{\tilde G_2}^T \tilde W_I^T = J_{m_2,q_2}. \quad (9.22)$$

Let
" T #
AGQ 2  BGQ 2 Jm2 ;m1 DGQ RQ I1 CGQ 2 CGQ RQ I1 CGQ 2
T T

HZ D 2  2  ;
BGQ 2 Jm2 ;m1  Jm2 ;m1 DGQ RQ I1 DGQ 2 Jm2 ;m1 BGTQ  AGQ 2  BGQ 2 Jm2 ;m1 DGQ RQ I1 CGQ 2
T T

2 2 2
(9.23)

where

RQ I D DGQ 2 Jm2 ;m1 DGTQ : (9.24)


2

Assume that HZ 2 dom(Ric) with Z D Ric(HZ )  0, then define


9.2 State-Space Formulae of H1 Controllers 277

  
$$H_I = \begin{bmatrix} H_u & H_{y1} \end{bmatrix} = -\left(Z C_{\tilde G_2}^T - B_{\tilde G_2} J_{m_2,m_1} D_{\tilde G_2}^T\right) \tilde R_I^{-1} \quad (9.25)$$

and

$$\begin{bmatrix} \tilde W_{uu} & 0 \\ \tilde W_{yu} & \tilde W_{yy} \end{bmatrix} = \begin{bmatrix} \left[D_{12}^T\left(I - D_{11}D_{11}^T\right)^{-1}D_{12}\right]^{-\frac{1}{2}} & 0 \\ -\left[D_{21}\left(I - D_{11}^T D_{11}\right)^{-1}D_{21}^T\right]^{-\frac{1}{2}} D_{21}\left(I - D_{11}^T D_{11}\right)^{-1} D_{11}^T D_{12} & \left[D_{21}\left(I - D_{11}^T D_{11}\right)^{-1}D_{21}^T\right]^{-\frac{1}{2}} \end{bmatrix}. \quad (9.26)$$

It can be verified that the $H_I$ and $\tilde W_I$ so defined lead to a dual J-lossless $\tilde\Theta$ and $\tilde\Pi \in GH_\infty$. As can be seen above, the first coprime factorization with J-lossless $G_1$ determines the free parameters $F_I$ and $W_I$, and the second coprime factorization with dual J-lossless $\tilde\Theta$ resolves the other two free parameters $H_I$ and $\tilde W_I$. This solution process shows an interesting feature: the general output feedback $H_\infty$ control problem is reduced to two less complicated problems, namely the determination of a feedback control gain matrix $F_I$, an observer gain matrix $H_I$, and two accompanying nonsingular matrices ($W_I$ and $\tilde W_I$).
Step 3: Find the $H_\infty$ (suboptimal) controllers.
For the $H_\infty$ control problem, if there exists a particular rcf of $P$ such that $G_1$ is J-lossless and there also exists an lcf of $\tilde G_2$ such that $\tilde\Theta$ is dual J-lossless, then one has, by the small gain theorem,

$$\left\| \operatorname{LFT}_l(P, K_\infty) \right\|_\infty = \left\| \operatorname{CSD}_r\!\left(G_1, \operatorname{CSD}_l(\tilde\Theta, \Phi)\right) \right\|_\infty < 1, \quad (9.27)$$

where $K_\infty = \operatorname{CSD}_l(\tilde\Pi, \Phi)$, for any $\Phi \in BH_\infty$.


This has shown that the CSD coprime factorization approach enables the designer
to formulate the general feedback control problem into a pair of simple two-port
network connections.
To characterize the $H_\infty$ controllers in terms of state-space realizations, the left CSD characterization can be equivalently transformed into its LFT form, denoted by $\tilde\Pi_P$, such that

$$K_\infty = \operatorname{CSD}_l(\tilde\Pi, \Phi) = \operatorname{LFT}_l(\tilde\Pi_P, \Phi), \quad \forall \Phi \in BH_\infty, \quad (9.28)$$

where, from (9.20),

(9.29)

and, by (5.123),

(9.30)

Thus, the central controller is given by

(9.31)

9.2.2 Method II: CSDl-CSDr

Step 1: Find a particular lcf such that $\tilde G_1$ is dual J-lossless.
For the row-stacked matrix of (9.10) induced by (8.9), the left coprime factors can be constructed from (8.33) as

(9.32)

This gives

(9.33)

(9.34)

 
  WQ 0
where $H_{II} = \begin{bmatrix} H_z & H_{y2} \end{bmatrix}$ and $\tilde W_{II} = \begin{bmatrix} \tilde W_{zz} & 0 \\ \tilde W_{yz} & \tilde W_{yy} \end{bmatrix}$ are two free parameters to be found such that $\tilde G_1 = \begin{bmatrix} \tilde G_{11} & \tilde G_{12} \\ \tilde G_{21} & \tilde G_{22} \end{bmatrix}$ is dual J-lossless. Then a dual J-lossless $\tilde G_1$ can be constructed if $I - D_{11} D_{11}^T > 0$ and $H_Y \in \operatorname{dom}(\operatorname{Ric})$, where


"
1 #
T 2
I  D11 D11 0
WQ II D h
1 T i 12
h
1 T i 12
T 1
D21 I  D11
T
D11 D21 T
D21 D11 I  D11 D11 D21 I  D11
T
D11 D21
(9.35)

HII D  Y C T C B1 DT1 RQ II
1
(9.36)

Y D Ric .HY /  0 (9.37)


2
T 3
A  B D
T
Q 1 C
R C T Q 1
R C
6 1  1 II II 7
HY D 4
T T 5 (9.38)
 B1 I  D 1 RII D 1 B1  A  B1 D 1 RQ II C
T Q 1  1
T



Q  I  D11 D11
T T
D11 D21
RII D T T (9.39)
D21 D11 D21 D21
 
D11  
D 1 D ; C D C 1 C2 : (9.40)
D21

Step 2: Find a particular rcf such that $\Theta$ is J-lossless.
Similarly, under the same assumptions, an rcf of $G_2 = \Theta\Pi^{-1}$ can be constructed by

(9.41)

Then the required coprime factor ‚(s) is given by

(9.42)
Furthermore, there exists a particular rcf such that $\Theta = \begin{bmatrix} \Theta_{11} & \Theta_{12} \\ \Theta_{21} & \Theta_{22} \end{bmatrix} \in RH_\infty$ is J-lossless if:
1. There exists a nonsingular matrix $W_{II} = \begin{bmatrix} W_{uu} & 0 \\ W_{yu} & W_{yy} \end{bmatrix} \in \mathbb{R}^{(m_2+q_2)\times(m_2+q_2)}$ such that

$$W_{II}^T D_{G_2}^T J_{q_1,q_2} D_{G_2} W_{II} = J_{m_2,q_2}. \quad (9.43)$$

2. $H_V \in \operatorname{dom}(\operatorname{Ric})$ with $V = \operatorname{Ric}(H_V) \ge 0$, where


$$H_V = \begin{bmatrix} A_{G_2} - B_{G_2} R_{II}^{-1} D_{G_2}^T J_{q_1,q_2} C_{G_2} & -B_{G_2} R_{II}^{-1} B_{G_2}^T \\ -C_{G_2}^T\left(J_{q_1,q_2} - J_{q_1,q_2} D_{G_2} R_{II}^{-1} D_{G_2}^T J_{q_1,q_2}\right) C_{G_2} & -\left(A_{G_2} - B_{G_2} R_{II}^{-1} D_{G_2}^T J_{q_1,q_2} C_{G_2}\right)^T \end{bmatrix}, \quad (9.44)$$

$$R_{II} = D_{G_2}^T J_{q_1,q_2} D_{G_2}. \quad (9.45)$$

Then one has

$$F_{II} = \begin{bmatrix} F_{u2} \\ F_y \end{bmatrix} = -R_{II}^{-1}\left(B_{G_2}^T V + D_{G_2}^T J_{q_1,q_2} C_{G_2}\right)$$

and

$$W_{II} = \begin{bmatrix} \left[D_{12}^T\left(I - D_{11} D_{11}^T\right)^{-1} D_{12}\right]^{-\frac{1}{2}} & 0 \\ \left[D_{21}\left(I - D_{11}^T D_{11}\right)^{-1} D_{21}^T\right]^{-\frac{1}{2}} D_{21} D_{11}^T \left(I - D_{11} D_{11}^T\right)^{-1} D_{12} & \left[D_{21}\left(I - D_{11}^T D_{11}\right)^{-1} D_{21}^T\right]^{-\frac{1}{2}} \end{bmatrix}. \quad (9.46)$$

Step 3: Find the $H_\infty$ (suboptimal) controllers.
Similarly, to characterize the $H_\infty$ controllers in terms of state-space realizations, the right CSD characterization can be equivalently transformed into its LFT form such that

$$K_\infty = \operatorname{CSD}_r(\Pi, \Phi) = \operatorname{LFT}_l(\Pi_P, \Phi), \quad \forall \Phi \in BH_\infty, \quad (9.47)$$

where, from (9.41),

(9.48)

and

(9.49)

Thus, the central controller is given by

(9.50)

9.3 H∞ Solution of Special SCC Formulations

As described in Sects. 9.1 and 9.2, two successive coprime factorizations are sought for $H_\infty$ controller synthesis in the proposed CSD framework. Instead of characterizing upper triangular matrices as in the stabilizing controller or $H_2$ optimal synthesis, the factorization in the $H_\infty$ synthesis proceeds to find a J-lossless (or dual J-lossless) matrix factor. Similar to Sect. 8.5, for specific control problems [2], it is beneficial to choose the better-suited method to make the solution process more efficient. In this section, the six popular synthesis problems listed in Table 8.1 are discussed in the CSD framework. The formulae for $\Pi$ (or $\tilde\Pi$) are presented, and the $H_\infty$ controllers are characterized.

9.3.1 Disturbance Feedforward (DF) Problem

For the disturbance feedforward (DF) problem described in Table 8.1, the framework of CSDr-CSDl is utilized, and the rcf of $\begin{bmatrix} P_{1\cdot,DF} \\ P_{2\cdot,DF} \end{bmatrix} = \begin{bmatrix} G_{1,DF} \\ \tilde G_{2,DF} \end{bmatrix} M_{DF}^{-1}$ can be obtained from (9.11) as

(9.51)

where

(9.52)

(9.53)

   
For the $H_\infty$ controllers, $F = \begin{bmatrix} F_{u1} \\ F_w \end{bmatrix}$ and $W = \begin{bmatrix} W_{uu} & 0 \\ W_{wu} & W_{ww} \end{bmatrix}$ should be found such that $G_{1,DF}(s)$ is J-lossless. Furthermore, the particular lcf of $\tilde G_{2,DF} = \tilde\Pi_{DF}^{-1}\tilde\Theta_{DF}$ can be found as

(9.54)

where

(9.55)

Since $A - B_1 C_2$ is Hurwitz according to the assumption, the $\tilde\Theta_{DF}$ derived from this problem is an identity matrix, so that $\tilde\Pi_{DF} = \tilde G_{2,DF}^{-1}$. The transfer function from $w$ to $z$ is given by

$$z = \operatorname{LFT}_l(P_{DF}, K_{DF})\, w = \operatorname{CSD}_r(G_{1,DF}, \Phi_{DF})\, w. \quad (9.56)$$

If $F$ and $W$ are found such that $G_{1,DF}(s)$ is a J-lossless function, then the $H_\infty$ solutions are given by $K_{DF} = \operatorname{CSD}_l(\tilde\Pi_{DF}, \Phi_{DF})$ for any $\Phi_{DF} \in BH_\infty$.

9.3.2 Full Information (FI) Problem


 
Similarly, for the FI problem depicted in Table 8.1, the rcf of $P_{\cdot,FI} = \begin{bmatrix} P_{1\cdot,FI} \\ P_{2\cdot,FI} \end{bmatrix} = \begin{bmatrix} G_{1,FI} \\ \tilde G_{2,FI} \end{bmatrix} M_{FI}^{-1}$ is given by (9.11) as

(9.57)

where

(9.58)

(9.59)

It can be seen that $P_{1\cdot,DF} = P_{1\cdot,FI}$, so one concludes $G_{1,DF} = G_{1,FI}$. The particular lcf of $\tilde G_{2,FI} = \tilde\Pi_{FI}^{-1}\tilde\Theta_{FI}$ can be constructed as

(9.60)

where

(9.61)

(9.62)

Note that the $\tilde\Theta_{FI}$ derived from this FI problem is a constant matrix. For $\Phi_{FI} = 0$, one has $\operatorname{CSD}_l(\tilde\Theta_{FI}, 0) = \tilde\Theta_{11}^{-1}\tilde\Theta_{12} = 0$ since $\tilde\Theta_{12} = 0$. Then the closed-loop transfer function resulting from the central solution is given by

$$z = \operatorname{LFT}_l(P_{FI}, K_{FI})\, w = \operatorname{CSD}_r\!\left(G_{1,FI}, \operatorname{CSD}_l(\tilde G_{2,FI}, K_{FI})\right) w = \operatorname{CSD}_r\!\left(G_{1,FI}, \operatorname{CSD}_l(\tilde\Theta_{FI}, 0)\right) w = \operatorname{CSD}_r(G_{1,FI}, 0)\, w, \quad (9.63)$$

where $K_{FI} = \operatorname{CSD}_l(\tilde\Pi_{FI}, 0)$. This $H_\infty$ solution appears as a state feedback given by

$$u = K_{FI}\, y_{FI} = \operatorname{CSD}_l(\tilde\Pi_{FI}, 0)\, y_{FI} = \tilde\Pi_{11}^{-1}\tilde\Pi_{12}\, y_{FI} = \begin{bmatrix} F_{u1} & 0 \end{bmatrix}\begin{bmatrix} x \\ w \end{bmatrix} = F_{u1} x. \quad (9.64)$$
Furthermore, it can also be verified that $\tilde G_{2,DF} = \begin{bmatrix} I & 0 & 0 & 0 \\ 0 & I & 0 & 0 \\ 0 & 0 & C_2 & I \end{bmatrix} \tilde G_{2,FI}$ and $G_{1,DF} = G_{1,FI}$, as depicted in Chap. 8. Therefore one then has

$$\begin{aligned} z &= \operatorname{LFT}_l(P_{FI}, K_{FI})\, w \\ &= \operatorname{CSD}_r\!\left(G_{1,FI}, \operatorname{CSD}_l(\tilde G_{2,FI}, K_{FI})\right) w \\ &= \operatorname{CSD}_r\!\left(G_{1,FI}, \operatorname{CSD}_l\!\left(\tilde G_{2,FI}, \operatorname{CSD}_l\!\left(\tilde\Pi_{DF}\begin{bmatrix} I & 0 & 0 & 0 \\ 0 & I & 0 & 0 \\ 0 & 0 & C_2 & I \end{bmatrix}, \Phi_{DF}\right)\right)\right) w \\ &= \operatorname{CSD}_r\!\left(G_{1,FI}, \operatorname{CSD}_l\!\left(\tilde G_{2,DF}, \operatorname{CSD}_l(\tilde\Pi_{DF}, \Phi_{DF})\right)\right) w \\ &= \operatorname{CSD}_r\!\left(G_{1,FI}, \operatorname{CSD}_l(I, \Phi_{DF})\right) w \\ &= \operatorname{CSD}_r(G_{1,DF}, \Phi_{DF})\, w. \end{aligned} \quad (9.65)$$

9.3.3 State Feedback (SF) Problem

Thirdly, one considers the state feedback (SF) problem. The state-space formula of the stacked matrix and its right coprime factorization can be calculated from (9.12) and (9.13) with $C_2 = I$ and $D_{21} = 0$; the lcf of $\tilde G_{2,SF} = \tilde\Pi_{SF}^{-1}\tilde\Theta_{SF}$ is realized by

(9.66)

where

(9.67)

and

(9.68)

The central solution is given by a state feedback gain $\operatorname{CSD}_l(\tilde\Pi_{SF}(s), 0) = F_{u1}$, i.e., $u(t) = Kx(t) = F_{u1}x(t)$. Note that, defining $\|\tilde\Theta_{22}\|_\infty = \alpha$, the closed loop $\operatorname{CSD}_l(\tilde\Theta, \Phi)$ belongs to $BH_\infty$ if and only if $\alpha\|\Phi\|_\infty < 1$, since $\|\operatorname{CSD}_l(\tilde\Theta, \Phi)\|_\infty = \|\Phi\tilde\Theta_{22}\|_\infty$.

9.3.4 Output Estimation (OE) Problem

As depicted in Chap. 8, the output estimation (OE) problem is dual to the distur-
bance feedforward (DF) problem. In this case, the framework of CSDl-CSDr is
utilized, and the left coprime factors can be constructed as

(9.69)

Then realizations of $\tilde G_{1,OE}$ and $G_{2,OE}$ can be found as

(9.70)

(9.71)

Furthermore, the right coprime factors of $G_{2,OE} = \Theta_{OE}\Pi_{OE}^{-1}$ can be found as

(9.72)

where

(9.73)

(9.74)
Note that $\Theta_{OE}$ is in fact an identity matrix, so that $\Pi_{OE} = G_{2,OE}^{-1}$. The closed-loop transfer function from $w$ to $z$ is given by

$$\begin{aligned} z &= \operatorname{LFT}_l(P_{OE}, K_{OE})\, w \\ &= \operatorname{CSD}_l\!\left(\tilde G_{1,OE}, \operatorname{CSD}_r\!\left(G_{2,OE}, \operatorname{CSD}_r(\Pi_{OE}, \Phi_{OE})\right)\right) w \\ &= \operatorname{CSD}_l\!\left(\tilde G_{1,OE}, \operatorname{CSD}_r\!\left(\Theta_{OE}\Pi_{OE}^{-1}, \operatorname{CSD}_r(\Pi_{OE}, \Phi_{OE})\right)\right) w \\ &= \operatorname{CSD}_l(\tilde G_{1,OE}, \Phi_{OE})\, w. \end{aligned} \quad (9.75)$$

The central solution ($\Phi_{OE} = 0$) gives

$$\operatorname{CSD}_r(\Theta_{OE}, 0) = \Theta_{12}\Theta_{22}^{-1} = 0 \quad (\text{since } \Theta_{12} = 0), \quad (9.76)$$

and therefore $z = \operatorname{CSD}_l(\tilde G_{1,OE}, 0)\, w$. This concludes that the $H_\infty$ solution of this OE problem is given by $\operatorname{CSD}_r(\Pi_{OE}, \Phi_{OE})$ for any $\Phi_{OE} \in BH_\infty$, if both $H = [H_z\ H_{y2}]$ and $\tilde W = \begin{bmatrix} \tilde W_{zz} & 0 \\ \tilde W_{yz} & \tilde W_{yy} \end{bmatrix}$ are found such that $\tilde G_{1,OE}(s)$ is dual J-lossless.

9.3.5 Full Control (FC) Problem

For the full control (FC) problem depicted in Chap. 8, let

(9.77)

Then an lcf is given by

(9.78)

where

(9.79)

(9.80)

It can be seen from (9.79) and (9.70) that $\tilde G_{1,OE}(s) = \tilde G_{1,FC}(s)$.
Furthermore, the right coprime factors of $G_{2,FC} = \Theta_{FC}\Pi_{FC}^{-1}$ can be found as

(9.81)

where

(9.82)

(9.83)

Since $\Pi_{FC}(s)$ is a constant matrix, for $\Phi_{FC} = 0$ one has

$$u_{FC} = \operatorname{CSD}_r(\Pi_{FC}, 0)\, y = \Pi_{12}\Pi_{22}^{-1} y = \begin{bmatrix} H_{y2} \\ 0 \end{bmatrix} y, \quad (9.84)$$

and then $\operatorname{CSD}_r(\Theta_{FC}, 0) = \Theta_{12}\Theta_{22}^{-1} = 0$ (since $\Theta_{12} = 0$). Therefore, the closed-loop transfer function is given by

$$z = \operatorname{LFT}_l(P_{FC}, K_{FC})\, w = \operatorname{CSD}_l\!\left(\tilde G_{1,FC}, \operatorname{CSD}_r(\Theta_{FC}, 0)\right) w = \operatorname{CSD}_l(\tilde G_{1,FC}, 0)\, w. \quad (9.85)$$

This concludes that the central $H_\infty$ solution of this FC problem is in fact a static observer gain of (9.84), if $H = [H_z\ H_y]$ and $\tilde W = \begin{bmatrix} \tilde W_{zz} & 0 \\ \tilde W_{yz} & \tilde W_{yy} \end{bmatrix}$ are found such that $\tilde G_{1,FC}(s)$ is dual J-lossless.
It can also be verified that

$$G_{2,OE} = G_{2,FC}\begin{bmatrix} I & 0 & 0 \\ 0 & B_2 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{bmatrix} \quad \text{and} \quad \tilde G_{1,OE} = \tilde G_{1,FC}.$$

From

$$K_{FC} = \operatorname{CSD}_r(\Pi_{FC}, \Phi_{FC}) = \operatorname{CSD}_r\!\left(\begin{bmatrix} I & 0 & 0 \\ 0 & B_2 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{bmatrix}\Pi_{OE}, \Phi_{OE}\right), \quad (9.86)$$

one then has

$$\begin{aligned} z &= \operatorname{LFT}_l(P_{FC}, K_{FC})\, w \\ &= \operatorname{CSD}_l\!\left(\tilde G_{1,FC}, \operatorname{CSD}_r(G_{2,FC}, K_{FC})\right) w \\ &= \operatorname{CSD}_l\!\left(\tilde G_{1,FC}, \operatorname{CSD}_r\!\left(G_{2,FC}, \operatorname{CSD}_r\!\left(\begin{bmatrix} I & 0 & 0 \\ 0 & B_2 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{bmatrix}\Pi_{OE}, \Phi_{OE}\right)\right)\right) w \\ &= \operatorname{CSD}_l\!\left(\tilde G_{1,FC}, \operatorname{CSD}_r\!\left(G_{2,OE}, \operatorname{CSD}_r(\Pi_{OE}, \Phi_{OE})\right)\right) w \\ &= \operatorname{CSD}_l\!\left(\tilde G_{1,FC}, \operatorname{CSD}_r(I, \Phi_{OE})\right) w \\ &= \operatorname{CSD}_l(\tilde G_{1,OE}, \Phi_{OE})\, w, \end{aligned} \quad (9.87)$$

where $u_{FC} = \begin{bmatrix} B_2 \\ I \end{bmatrix} u_{OE}$. This shows that the closed-loop transfer function of the OE problem is in fact equivalent to that of the FC problem.

9.3.6 Output Injection (OI) Problem

Finally, one considers the output injection (OI) problem. The state-space formulae of the stacked matrix and the left coprime factorization can be calculated from (9.33) and (9.34) with $B_2 = I$ and $D_{12} = 0$; the rcf of $G_{2,OI} = \Theta_{OI}\Pi_{OI}^{-1}$ is realized by

(9.88)

where

(9.89)

The central solution is given by an output injection gain $\operatorname{CSD}_r(\Pi_{OI}, 0) = H_{y2}$, and therefore $z = \tilde G_{12} w$. Similarly, for $\|\Theta_{11}\|_\infty = \beta$, the closed loop $\operatorname{CSD}_r(\Theta, \Phi)$ belongs to $BH_\infty$ if and only if $\beta\|\Phi\|_\infty < 1$, since $\|\operatorname{CSD}_r(\Theta, \Phi)\|_\infty = \|\Theta_{11}\Phi\|_\infty$.

9.4 H∞ Controller Synthesis with Coprime Factor Perturbations

In this section, one uses the H1 control theory to solve the robust stabilization
problem of a perturbed plant. The perturbation considered here includes the
discrepancy between the dynamics of real plant and the mathematical model of it,
i.e., the nominal model, such as unmodeled dynamics (high-frequency dynamics)
and neglected nonlinearities. Such perturbations are usually called “lumped” uncer-
tainties or “unstructured” uncertainties in the literature. There are various ways to
describe perturbations, including additive perturbation (the absolute error between the actual dynamics and the nominal model), input and output multiplicative perturbations (relative errors), and their inverse forms [7]. Theoretically speaking, most of these perturbation expressions are "interchangeable," though a successful design would depend, to a certain extent, on an appropriate choice of the perturbation (uncertainty)
model. This section introduces a design technique which incorporates the so-called
loop shaping design procedure (LSDP) [12] to obtain performance/robust trade-offs
and a particular H1 optimization problem to guarantee the closed-loop stability and
a level of robust stability based on the coprime factorization perturbation models.
Fig. 9.7 Left coprime factorization perturbed feedback system
Fig. 9.8 H∞ control problem of lcf plant description
G

9.4.1 Robust Stabilization Problem of the Left Coprime Factorization Case

This subsection shows the robust stabilization of a plant formulated by a left coprime factorization with perturbations on each factor. Let $G = \tilde M^{-1}\tilde N$ be an lcf. Figure 9.7 gives an $H_\infty$ stabilization problem with an lcf-perturbed plant model, where $\Delta_{\tilde M}$ and $\Delta_{\tilde N} \in RH_\infty$ are stable transfer functions representing the uncertainties on the nominal plant.
The objective of this problem is to stabilize not only the nominal plant but also a family of perturbed plants defined as

$$G_\Delta = \left\{ \left(\tilde M + \Delta_{\tilde M}\right)^{-1}\left(\tilde N + \Delta_{\tilde N}\right) : \left\|\begin{bmatrix} \Delta_{\tilde N} & \Delta_{\tilde M} \end{bmatrix}\right\|_\infty < \varepsilon \right\}, \quad \varepsilon > 0. \quad (9.90)$$

The robust stability of the perturbed feedback system is guaranteed by

$$\|\operatorname{LFT}_l(P, K)\|_\infty = \left\| \begin{bmatrix} K(I - GK)^{-1}\tilde M^{-1} \\ (I - GK)^{-1}\tilde M^{-1} \end{bmatrix} \right\|_\infty \le \frac{1}{\varepsilon} = \gamma. \quad (9.91)$$

The control problem is equivalent to finding a stabilizing controller $K$ such that the $H_\infty$ norm from $w$ to $\begin{bmatrix} u \\ y \end{bmatrix}$ (or to $\begin{bmatrix} e_1 \\ e_2 \end{bmatrix} = \frac{1}{\gamma}\begin{bmatrix} u \\ y \end{bmatrix}$) is less than a specified value, as shown in Fig. 9.8.

Fig. 9.9 State-space realization of the lcf plant description

Herein, without loss of generality, one considers the plant with $D = 0$ for simplicity. A state-space realization of an lcf of this SCC plant is represented in Fig. 9.9.
Note that $\tilde M$ is square and its inverse is stable by the choice of $H$. The state-space realization of the SCC plant from $w$ to $z = \begin{bmatrix} e_1 \\ e_2 \end{bmatrix}$ can be obtained from Fig. 9.9 by
e2
be obtained from Fig. 9.9 by

(9.92)

To apply the proposed approach, the first step is to find the stacked matrix defined in Step 1 of Method I above and to perform the two successive J-lossless coprime factorizations, where from (9.3) and (8.17) one has

(9.93)
A right coprime factorization is found by (9.11) with $F_{u1} = F_u$:

(9.94)

Now, from (9.94),

(9.95)

where

(9.96)

(9.97)

It can be found from (9.94) that

(9.98)

The second step is to find a J-lossless $G_1$. With the given $H$ and $\tilde W$, the suboptimal $H_\infty$ control is to obtain $F_u$, $F_w$, $W_{ww}$, and $W_{uu}$ such that $G_1$ is J-lossless, i.e., to establish the following equations:

$$D_{G_1}^T J_{z,w} D_{G_1} = J_{u,w}, \quad (9.99)$$

$$A_{G_1}^T X + X A_{G_1} + C_{G_1}^T J_{z,w} C_{G_1} = 0, \quad (9.100)$$

$$B_{G_1}^T X + D_{G_1}^T J_{z,w} C_{G_1} = 0. \quad (9.101)$$
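When the signature matrix is the identity, conditions of the form (9.99), (9.100), and (9.101) reduce to the familiar inner (lossless) characterization, which can be verified directly on a realization. A small sketch (Python with NumPy; the scalar all-pass example is an illustration, not taken from the text):

```python
import numpy as np

# Scalar all-pass G(s) = (s - 1)/(s + 1), realization (A, B, C, D).
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[-2.0]]); D = np.array([[1.0]])
J = np.eye(1)            # signature matrix (identity in the inner case)
X = np.array([[2.0]])    # solves A^T X + X A + C^T J C = 0

assert np.allclose(D.T @ J @ D, J)                       # analogue of (9.99)
assert np.allclose(A.T @ X + X @ A + C.T @ J @ C, 0.0)   # analogue of (9.100)
assert np.allclose(B.T @ X + D.T @ J @ C, 0.0)           # analogue of (9.101)
```

For a genuine J-lossless check, $J$ would instead carry the mixed $\pm$ signature of the chain-scattering setting.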

By solving (9.98), the constant matrix is given by (9.15) as

$$\begin{bmatrix} W_{uu} & 0 \\ 0 & W_{ww} \end{bmatrix} = \begin{bmatrix} I_u & 0 \\ 0 & \left(I_w - \frac{1}{\gamma^2}\tilde W^T \tilde W\right)^{-\frac{1}{2}} \end{bmatrix}. \quad (9.102)$$

Solving (9.100) and (9.101) gives the following results, as described in (9.16):

$$\begin{bmatrix} F_u \\ F_w \end{bmatrix} = \begin{bmatrix} -\frac{1}{\gamma^2} B^T X \\ \left(I_w - \frac{1}{\gamma^2}\tilde W^T \tilde W\right)^{-1}\left(\tilde W^T H^T X - \frac{1}{\gamma^2} C\right) \end{bmatrix} \quad (9.103)$$

and, from (9.17),

$$X = \operatorname{Ric}(H_X) \ge 0, \quad (9.104)$$

$$H_X = \begin{bmatrix} A - \frac{1}{\gamma^2} H\left(I - \frac{1}{\gamma^2}\tilde W^T\tilde W\right)^{-1} C & H\left(I - \frac{1}{\gamma^2}\tilde W^T\tilde W\right)^{-1} H^T - BB^T \\ -\frac{1}{\gamma^2} C^T\left(I - \frac{1}{\gamma^2}\tilde W^T\tilde W\right)^{-1} C & -\left(A - \frac{1}{\gamma^2} H\left(I - \frac{1}{\gamma^2}\tilde W^T\tilde W\right)^{-1} C\right)^T \end{bmatrix}. \quad (9.105)$$

Therefore, $G_1$ can be made J-lossless by the above.

The third step is to construct a dual J-lossless factorization of $\tilde G_2$. Notice that $\tilde G_2 \in RH_\infty$, and its inverse can be calculated via (9.13) as

(9.106)

which is stable since $H$ is an observer gain matrix, i.e., the eigenvalues of $(A + HC)$ all lie in the open left half plane. Hence, let $\tilde\Theta(s) = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}$ (dual J-lossless) and $\tilde\Pi = \tilde G_2^{-1}$, so that $\tilde G_2 = \tilde\Pi^{-1}\tilde\Theta$ is a J-lossless coprime factorization. Then the suboptimal solutions can be found by (9.28):

$$K = \operatorname{CSD}_l(\tilde\Pi, \Phi) = \operatorname{CSD}_r(\tilde G_2, \Phi), \quad \forall \Phi \in BH_\infty, \quad (9.107)$$

where

(9.108)

Equivalently, $\tilde\Pi$ can, by (5.123), be converted to an LFT matrix, $\tilde\Pi_P$, such that $\operatorname{CSD}_l(\tilde\Pi, \Phi) = \operatorname{LFT}_l(\tilde\Pi_P, \Phi)$, where

(9.109)

One has the central solution (central controller) by inspection

(9.110)

Note that (9.109) is essentially an observer-based structure. In fact, the central solution of the robust stabilization design problem exhibits an observer-based structure in nature. Also notice that the central controller is independent of $F_w$. For the case of $\Phi = 0$ (the central solution), the poles of the overall closed-loop system are determined by the A-matrix of $\operatorname{LFT}_l(P, K_0)$, which can be shown by a similarity transformation:

$$\begin{bmatrix} I & 0 \\ -I & I \end{bmatrix}\begin{bmatrix} A & BF_u \\ -HC & A + HC + BF_u \end{bmatrix}\begin{bmatrix} I & 0 \\ I & I \end{bmatrix} = \begin{bmatrix} A + BF_u & BF_u \\ 0 & A + HC \end{bmatrix}. \quad (9.111)$$
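The separation asserted by (9.111) is easy to verify numerically: the closed-loop A-matrix has exactly the eigenvalues of $A + BF_u$ together with those of $A + HC$. A small sketch (Python with NumPy; the matrices are random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
Fu = rng.standard_normal((1, n))
H = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# Closed-loop A-matrix of LFT_l(P, K0), as in (9.111)
Acl = np.block([[A, B @ Fu],
                [-H @ C, A + H @ C + B @ Fu]])

eig_cl = np.sort_complex(np.linalg.eigvals(Acl))
eig_sep = np.sort_complex(np.concatenate([np.linalg.eigvals(A + B @ Fu),
                                          np.linalg.eigvals(A + H @ C)]))
assert np.allclose(eig_cl, eig_sep)
```

This mirrors the separation principle: the state-feedback and observer dynamics can be designed independently.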
Fig. 9.10 Right coprime factorization perturbed feedback system

It shows that the eigenvalues of $A + HC$ and $A + BF_u$ are the closed-loop poles (similar to the separation principle) resulting from the central solution. Here $(A + HC)$ can be preassigned in the design problem, and $(A + BF_u)$ is solved by (9.102), (9.103), and (9.104). Apparently, the choice of $(A + HC)$ is a design freedom, for which there exist several ways to determine $H$ and $\tilde W$.
McFarlane and Glover [12] proposed a normalized coprime factorization design such that $G = \tilde M^{-1}\tilde N$ is a normalized coprime factorization (ncf), which gives a guideline to find $H$ and $\tilde W$ to make $\begin{bmatrix} \tilde N & \tilde M \end{bmatrix}$ co-inner. By the definition of ncf, $H = -YC^T$ and $\tilde W = I$ yield $\tilde M \tilde M^\sim + \tilde N \tilde N^\sim = I$, where $Y = \operatorname{Ric}\left(\begin{bmatrix} A^T & -C^T C \\ -BB^T & -A \end{bmatrix}\right)$. In this case, the central controller is expressed as

(9.112)

where

$$X = \operatorname{Ric}\left(\begin{bmatrix} A + \frac{1}{\gamma^2-1} Y C^T C & \frac{\gamma^2}{\gamma^2-1} Y C^T C Y - BB^T \\ -\frac{1}{\gamma^2-1} C^T C & -\left(A + \frac{1}{\gamma^2-1} Y C^T C\right)^T \end{bmatrix}\right). \quad (9.113)$$

9.4.2 Robust Stabilization Problem of the Right Coprime Factor Case

Similarly, a dual problem is formulated by introducing the rcf plant description, which attempts to ensure robust stability with respect to the bounded uncertainty $\left\|\begin{bmatrix} \Delta_N \\ \Delta_M \end{bmatrix}\right\|_\infty < \varepsilon$, as shown in Fig. 9.10. Then a stabilizing $K$ can be found such that

$$\|\operatorname{LFT}_l(P, K)\|_\infty = \left\| \begin{bmatrix} M^{-1}(I - GK)^{-1} K & M^{-1}(I - GK)^{-1} \end{bmatrix} \right\|_\infty \le \frac{1}{\varepsilon} \quad (9.114)$$
Fig. 9.11 State-space realization of the rcf plant description

In Fig. 9.11, the control problem is equivalent to finding a stabilizing controller $K$ such that the $H_\infty$-norm from $w$ to $z$ is less than a specified value, where $\begin{bmatrix} d_o \\ d_i \end{bmatrix} = w$. Note that $M$ is square and its inverse is stable. Figure 9.11 illustrates a state-space realization of this SCC plant in rcf.
The state-space realization of the SCC matrix for Fig. 9.11 can easily be found
as

(9.115)

Different from the problem of lcf plant description, the solution process for the rcf
problem will be facilitated by Method II. The first step of Method II is to give a
coupled CSD representation as below

(9.116)
The lcf of the stacked matrix can be obtained from (9.32) with $H_{y2} = H_y$ as

(9.117)

with that, from (9.33) and (9.34)

(9.118)

(9.119)

After multiplying by $\tilde M^{-1}$ at the left-hand-side terminal, the closed-loop transfer function is obtained by $\operatorname{CSD}_l(\tilde G_1, \operatorname{CSD}_r(G_2, K))$. The second step is to make $\tilde G_1(s)$ dual J-lossless. By definition, $\tilde G_1(s)$ is dual J-lossless if there exists a matrix $Y = Y^T \ge 0$ that satisfies

$$D_{\tilde G_1} J_{z,w} D_{\tilde G_1}^T = J_{z,y}, \quad (9.120)$$

$$A_{\tilde G_1} Y + Y A_{\tilde G_1}^T - B_{\tilde G_1} J_{z,w} B_{\tilde G_1}^T = 0, \quad (9.121)$$

$$C_{\tilde G_1} Y - D_{\tilde G_1} J_{z,w} B_{\tilde G_1}^T = 0. \quad (9.122)$$

Here let W D I for simplicity. It is easy to work out via (9.35) that
$$\begin{bmatrix} \tilde W_{zz} & 0 \\ 0 & \tilde W_{yy} \end{bmatrix} = \begin{bmatrix} \left(1 - \frac{1}{\gamma^2}\right)^{-\frac{1}{2}} I & 0 \\ 0 & I \end{bmatrix} \quad (9.123)$$

and, with (9.36),

$$\begin{bmatrix} H_z & H_y \end{bmatrix} = \begin{bmatrix} \frac{1}{\gamma^2}\left(B - YF^T\right)\left(1 - \frac{1}{\gamma^2}\right)^{-1} & -YC^T \end{bmatrix}, \quad (9.124)$$

where $Y = \operatorname{Ric}(H_Y) \ge 0$ and the Hamiltonian is defined by (9.38) as

$$H_Y = \begin{bmatrix} A^T - \frac{1}{\gamma^2-1} F^T B^T & \frac{\gamma^2}{\gamma^2-1} F^T F - C^T C \\ -\frac{1}{\gamma^2-1} BB^T & -\left(A - \frac{1}{\gamma^2-1} BF\right) \end{bmatrix}. \quad (9.125)$$

Furthermore, with $\tilde W = I$, the corresponding Hamiltonian matrix can be reduced to

$$H_X = \begin{bmatrix} A - \frac{1}{\gamma^2-1} HC & \frac{\gamma^2}{\gamma^2-1} HH^T - BB^T \\ -\frac{1}{\gamma^2-1} C^T C & -\left(A - \frac{1}{\gamma^2-1} HC\right)^T \end{bmatrix}. \quad (9.126)$$

It is interesting, though logical, to notice that the solution of the rcf problem is a dual case of the lcf problem. Furthermore, since both $G_2$ and its inverse are stable (i.e., $G_2 \in GH_\infty$), one has $\Theta(s) = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}$ (J-lossless) and $\Pi = G_2^{-1}$ such that $G_2 = \Theta\Pi^{-1}$, where

(9.127)

Then the suboptimal solutions can be found as

$$K = \operatorname{CSD}_r(\Pi, \Phi), \quad \forall \Phi \in BH_\infty. \quad (9.128)$$

As expected, the CSD matrix $\Pi$ can be converted to an equivalent LFT matrix $\Pi_P$ such that $\operatorname{CSD}_r(\Pi, \Phi) = \operatorname{LFT}_l(\Pi_P, \Phi)$, where

(9.129)
Table 9.2 Comparison between the rcf and lcf plant description problems

                       | The lcf plant description        | The rcf plant description
  SCC plant            | (9.92)                           | (9.115)
  2-block condition    | $P_{21} = \tilde M^{-1}(s)$      | $P_{12} = M^{-1}(s)$
  Solution method      | A right CSD coupled with a left  | A left CSD coupled with a right
                       | CSD (Method I)                   | CSD (Method II)
  Central controller   | (9.110)                          | (9.130)
  form                 |                                  |
Similar to (9.110), the central controller can therefore be obtained with $\Phi = 0$ as

(9.130)

For the central solution, the poles of the overall closed-loop system are determined by the A-matrix of $\operatorname{LFT}_l(P, K_0)$, which can be shown by a similarity transformation:

$$\begin{bmatrix} I & 0 \\ -I & I \end{bmatrix}\begin{bmatrix} A & BF \\ -H_y C & A + BF + H_y C \end{bmatrix}\begin{bmatrix} I & 0 \\ I & I \end{bmatrix} = \begin{bmatrix} A + BF & BF \\ 0 & A + H_y C \end{bmatrix}. \quad (9.131)$$

This illustrates that the eigenvalues of $A + H_y C$ and $A + BF$ are the closed-loop poles of the central solution, where $(A + BF)$ is preassigned in the design problem and $(A + H_y C)$ is solved from (9.120) and (9.121). Different from the lcf plant description problem, $(A + BF)$ is a design freedom indicated in terms of $F$ and $W$. The partial pole assignment property of the coprime factor plant description problem is summarized as the following theorem.
The solutions shown in (9.109) and (9.129) have the same observer-based form, and the corresponding AREs present a similar structure. Due to the duality of the problems, the solution process for each problem is also in dual form. It is noted that the (2,1)-block of the lcf problem (i.e., $P_{21}$) is invertible, and hence Method I is adopted to solve the J-lossless factorization in the first step and to deal with the inversion at the second step ($\tilde\Pi = \tilde G_2^{-1}$). Dually, since the (1,2)-block of the rcf problem (i.e., $P_{12}$) is invertible, Method II is employed. Therefore, the second steps of both problems require no additional calculations to construct a (dual) J-lossless coprime factorization. Solving the lcf (or rcf) problem with the other method is still achievable; however, excessive calculation is then needed in the first step, which could lead to inaccurate solutions in real-world design cases.
Table 9.2 gives a comparison between the rcf and lcf plant description problems.

Exercises

1. Consider a unity feedback system with a proportional gain controller $K$, where $K = 3$ and the plant under control is $G(s) = \frac{1}{s-2}$. Compute a normalized coprime factorization of $G(s)$. Considering perturbations $\Delta_N$ and $\Delta_M$ of the normalized coprime factors of $G(s)$, compute the stability radius $\varepsilon$ with regard to the perturbations on the coprime factors.

2. For a given SCC plant and $\gamma = 5$, compute a suboptimal $H_\infty$ controller using the CSD approach.
3. For a given SCC plant, select an appropriate method (formulation) shown in Table 8.1 for the $H_\infty$ suboptimal control problem.
4. Consider the model matching (or reference) control problem shown in the following figure. Formulate an $H_\infty$ control problem that minimizes the control energy $u$ and the output error $e$.

(Figure: the controller $K$ drives the plant $P$; the reference $r$ also feeds a model $M$, and the matching error $e$ is the difference between the plant output and the model output.)
-
M

References

1. Bombois X, Anderson BDO (2002) On the influence of weight modification in Hinf control
design. IEEE conference on decision and control, Nevada, USA
2. Doyle JC, Glover K, Khargonekar PP, Francis BA (1989) State-space solutions to standard H2
and H1 control problems. IEEE Trans Autom Control 34:831–847
3. Glover K, Doyle JC (1988) State-space formulae for all stabilizing controllers that satisfy an
H1 -norm bounded and relations to risk sensitivity. Syst Control Lett 11:167–172
4. Glover K, Limebeer DJN, Doyle JC, Kasenally EM, Safonov MG (1991) A characterization of
all the solutions to the four block general distance problem. SIAM J Control Optim 29:283–324
5. Green M (1992) H1 controller synthesis by J-lossless coprime factorization. SIAM J Control
Optim 30:522–547

6. Green M, Glover K, Limebeer DJN, Doyle JC (1990) A J-spectral factorization approach to


H1 control. SIAM J Control Optim 28:1350–1371
7. Gu DW, Petkow PH, Konstantinov MM (2005) Robust control design with Matlab. Springer,
London
8. Hong JL (2004) An output feedback control for discrete-time state-delay systems. J Circuits
Syst Signal Process 23:255–272
9. Hong JL, Teng CC (2001) Control for nonlinear affine systems: a chain-scattering matrix
description approach. Int J Robust Nonlinear Control 11:315–333
10. Kimura H (1995) Chain-scattering representation, J-lossless factorization and H1 control. J
Math Syst Estim Control 5:203–255
11. Lee PH, Soh YC (2005) Synthesis of stable H1 controller via the chain scattering framework.
Syst Control Lett 46:1968–1972
12. McFarlane D, Glover K (1990) Robust controller design using normalized coprime factor plant
descriptions. Springer, London
13. Tsai MC, Tsai CS (1993) A chain scattering matrix description approach to H1 control. IEEE
Trans Autom Control 38:1416–1421
Chapter 10
Design Examples

In this chapter, several design examples are illustrated to demonstrate the validity of
the CSD two-port framework. Two different design methodologies with respect to
speed control of DC servomotors are presented. These examples will show how
industrial controllers, such as pseudo derivative feedback (PDF) controllers and
pseudo derivative feedback with feedforward (PDFF) controllers, can be formulated
into the standard control design framework and then solved by the state-space
solution procedures presented in previous chapters. By defining the transfer function
from the load torque disturbance to the controlled output, the dynamic stiffness
of a servo control system is characterized, and a scalar index value is defined by
the inverse of the maximum magnitude of the transfer function with respect to
frequency, i.e., the worst case in the frequency response. Thus, for performance measurement of a robust design, maximizing the dynamic stiffness measure implies minimizing the $H_\infty$-norm in controller design. This chapter will also show how the dynamic stiffness of a servo system can be achieved by $H_\infty$ design.

10.1 Mathematical Models of DC Servomotor

Mathematical models for describing the control system under investigation typically
contain some inaccuracies when compared with the real plants. This is mostly
caused by simplifications of the model, exclusion of some dynamics that are either
too complicated or unknown, and/or is due to uncertain dynamics. These inaccura-
cies induce a significant problem in control system design. A possible, and proven
useful, approach to dealing with this problem is based on modeling the real system
dynamics as a set of linear time-invariant models built around a nominal one, i.e., the
model is considered as uncertain but within known boundaries. The benefit of such
a representation of the model is the possibility of designing a robust controller that
stabilizes a closed-loop system with uncertainties under consideration. The ideal
goal would be to design a controller capable of stabilizing even the "worst-case scenario" representing the most degraded model. This section investigates, as

M.-C. Tsai and D.-W. Gu, Robust and Optimal Control: A Two-port Framework 303
Approach, Advances in Industrial Control, DOI 10.1007/978-1-4471-6257-5__10,
© Springer-Verlag London 2014

Fig. 10.1 Block diagram of standard DC servomotor (electrical block $\frac{1}{Ls+R}$, torque constant $K_t$, mechanical block $\frac{1}{Js+B}$, back-EMF feedback $K_e$; signals $v(t)$, $i(t)$, $T(t)$, $T_L(t)$, $\omega(t)$)
Ke

a real-world design example, the robust design of servo control systems, which are
widely used in industries.
Consider a DC permanent magnet (PM) servomotor. Let the rotor be characterized by the motor winding inductance $L$ (unit: H) and the armature resistance $R$ (unit: $\Omega$). Then, the equation associated with such an electrical circuit is given by

$$v(t) = L\frac{di}{dt} + Ri + e, \quad (10.1)$$
where the back EMF, $e$, of the motor has been taken into account. The torque generated at the motor shaft is proportional to the armature current, where the ratio is defined as the torque constant, $K_t$ (unit: N·m/A), as

$$T = K_t i. \quad (10.2)$$

Moreover, the proportional ratio between the angular velocity of the motor and the back EMF is defined as the electromotive force constant, $K_e$ (unit: V·s/rad), as

$$e = K_e \omega. \quad (10.3)$$

One can now deal with the mechanical representation of the motor. The motor exerts a torque while supplied by voltage. This torque acts on the mechanical structure, which is characterized by the rotor inertia $J$ (kg·m²) and the viscous friction coefficient $B$ (N·m·s/rad), as

$$T - T_L = J\frac{d\omega}{dt} + B\omega, \quad (10.4)$$
where TL denotes the load torque.
Based on the electrical and mechanical equations above, the system block
diagram of a DC servomotor can be depicted as shown in Fig. 10.1 below.
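Combining (10.1), (10.2), (10.3), and (10.4) with $T_L = 0$ gives the voltage-to-speed transfer function $\omega(s)/V(s) = K_t / (LJs^2 + (LB + JR)s + (RB + K_eK_t))$. A quick numerical sketch (Python; the parameter values are the ones that appear in the Simulink diagram of Fig. 10.6 later in this chapter):

```python
import numpy as np

# Motor parameters as in Fig. 10.6 (SI units)
L, R = 0.0038, 7.155       # winding inductance [H], armature resistance [ohm]
J, B = 5.77e-5, 0.00055    # rotor inertia [kg m^2], viscous friction [N m s/rad]
Kt = Ke = 0.21             # torque and back-EMF constants

# omega(s)/V(s) = Kt / (L J s^2 + (L B + J R) s + (R B + Ke Kt))
den = np.array([L * J, L * B + J * R, R * B + Ke * Kt])
num = np.array([Kt])

# Steady-state speed per volt with no load torque
dc_gain = num[-1] / den[-1]
```

With these numbers the DC gain is about 4.37 rad/s per volt, dominated by the $K_eK_t$ term in the denominator.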

10.2 Two-Port Chain Description Approach to Estimation of Mechanical Loading

In this section, the technique of two-port CSD formulations is employed for


detecting the impedance of the mechanical loading in a DC motor. Let the block
diagram of a DC motor be formulated in terms of a two-port LFT framework.

Fig. 10.2 Functions of actuator and sensor in DC motor: the driver circuit actuates the loading (voltage/current → torque) and detects the rotational velocity through the motor and coupling

Fig. 10.3 Block diagram of DC motor with mechanical loading Zm

Fig. 10.4 LFT form of DC motor: the two-port M with inputs V, TL and outputs I, ω, terminated by Zm

Then the chain scattering description (CSD) discussed earlier in this book can be adopted to characterize the relationship between the electrical impedance and the mechanical impedance for further analysis. This also implies that the motor can be employed not only to actuate the mechanical loading but also to monitor the operating condition, in which the mechanical loading can be found from the measured electrical impedance as depicted in Fig. 10.2.
Consider the block diagram of a DC motor shown in Fig. 10.3, where Zm denotes the mechanical loading. Let V (voltage reference) and TL (load torque) be the input variables and I (motor current) and ω (motor angular velocity) be the outputs. A (2 × 2) LFT representation of Fig. 10.3 is depicted in Fig. 10.4, where
  M(s) = [ (Js + B)/Δ(s)      Ke/Δ(s)
           Kt/Δ(s)        −(R + Ls)/Δ(s) ],    (10.5)

where Δ(s) = LJs² + (LB + JR)s + (RB + KeKt).

Fig. 10.5 Two-port chain description matrix of a DC motor: G maps (TL, ω) at the mechanical port, terminated by Zm, to (I, V) at the electrical port

Fig. 10.6 Simulating current response by Simulink

As addressed in Chap. 5, the chain description of Fig. 10.5 with the “input” variables TL, ω and the “outputs” I, V can be derived from (10.5), by (5.1), as

  [ I ]   [ 1/Kt          (Js + B)/Kt ] [ TL ]   [ G11  G12 ] [ TL ]
  [ V ] = [ (R + Ls)/Kt   Δ(s)/Kt     ] [ ω  ] = [ G21  G22 ] [ ω  ],    (10.6)

where Δ(s) = LJs² + (LB + JR)s + (RB + KeKt).

Thus, for any mechanical impedance Zm = TL/ω, the equivalent electrical impedance of the motor, denoted by Ze, is given by

Ze = V/I = (G21 TL + G22 ω)/(G11 TL + G12 ω) = (G21 Zm + G22)/(G11 Zm + G12).    (10.7)

Conversely, if the electrical impedance Ze is measured via V and I at the input port of Fig. 10.6, then the impedance of the mechanical loading at the output port can be found as

Zm = (G22 − G12 Ze)/(G11 Ze − G21).    (10.8)

In the following, the parameter values of a DC motor are listed in Table 10.1,
which will be used for computer simulations. Let the mechanical loading shown
in Fig. 10.3 be a spring-damper.
10.2 Two-Port Chain Description Approach to Estimation of Mechanical Loading 307

Table 10.1 Parameters of the DC motor

  J  = 5.77 × 10⁻⁵ kg·m²      R  = 7.155 Ω
  B  = 0.00055 N·m·s/rad      L  = 0.0038 H
  Ke = 0.21 V·s/rad           Kt = 0.21 N·m/A

Fig. 10.7 60 Hz sinusoidal input voltage (solid) and output current (dash)

Consider the simple case of Zm = B (i.e., a damping load) with B = 0.00055 N·m·s/rad. For a 60 Hz sinusoidal input voltage with a magnitude of 1 injected into the DC motor, the current response can be investigated by using Simulink with the setup of Fig. 10.6 and the parameters of Table 10.1.
At 60 Hz (s = j2π·60), (10.6) can be rewritten as

  [ I ]   [ 4.7619              0.0026 + 0.1036i ] [ TL ]   [ G11  G12 ] [ TL ]
  [ V ] = [ 34.0714 + 6.8217i   0.0804 + 0.7449i ] [ ω  ] = [ G21  G22 ] [ ω  ].    (10.9)

By comparing the amplitude and the phase between the input voltage and the output current in Fig. 10.7, the equivalent electrical impedance Ze can be found as

Ze = 7.2573 − 0.5896i.    (10.10)

Then the mechanical impedance Zm can be computed, according to (10.8), as

Zm = 0.00055.    (10.11)
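The chain of computations in (10.9)–(10.11) can be reproduced numerically. The short Python sketch below evaluates G(j2π·60) from the motor parameters, forms Ze by (10.7) for Zm = B, and then recovers Zm by (10.8); any small discrepancies from the printed values are rounding only.

```python
import math

# Motor parameters (Table 10.1)
J, B, L, R = 5.77e-5, 0.00055, 0.0038, 7.155
Ke = Kt = 0.21

def csd_matrix(s):
    """Entries of the chain description G in (10.6) at complex frequency s."""
    g11 = 1/Kt
    g12 = (J*s + B)/Kt
    g21 = (R + L*s)/Kt
    g22 = ((R + L*s)*(J*s + B) + Ke*Kt)/Kt
    return g11, g12, g21, g22

s = 1j*2*math.pi*60
g11, g12, g21, g22 = csd_matrix(s)

Zm = B                                     # damping load, as in the text
Ze = (g21*Zm + g22)/(g11*Zm + g12)         # (10.7): about 7.2573 - 0.5896i
Zm_back = (g22 - g12*Ze)/(g11*Ze - g21)    # (10.8): recovers 0.00055
print(Ze, Zm_back.real)
```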

Fig. 10.8 Simulation of current and angular velocity with TL = 0

Obviously, the computed mechanical impedance is identical to B = 0.00055 N·m·s/rad. This implies that once the electrical impedance Ze has been measured in practice, the mechanical loading can be computed by (10.8). For the special case that there is no loading (i.e., Zm = 0), the electrical impedance becomes Ze = 7.2062 − 0.5935i.
Note that the CSD matrix G in (10.6) is called the transduction matrix [7], which describes the relationship between the inputs and outputs of an electromechanical transducer. In general, the transduction matrix can be obtained via mathematical analysis in theory or from experimental data in practice. The theoretical analysis approach is given in (10.6). In the following, the procedure for obtaining the transduction matrix via a practical approach will be introduced.
Based on the definition of the transduction matrix G, one has

  G11 = (I/TL)|ω=0     G12 = (I/ω)|TL=0
  G21 = (V/TL)|ω=0     G22 = (V/ω)|TL=0.    (10.12)

It should be noticed that it is difficult to maintain a DC motor at zero angular velocity (ω = 0) in an experimental setup. Therefore, an alternative method is to find the LFT matrix M of Fig. 10.4, given by

  M11 = (I/V)|TL=0     M12 = (I/TL)|V=0
  M21 = (ω/V)|TL=0     M22 = (ω/TL)|V=0.    (10.13)

Here, V = 0 and TL = 0 can easily be implemented in the experimental setup. Then, the transduction matrix G can be calculated by transforming the LFT matrix M into a CSD matrix G. The method is demonstrated with Simulink below.
Under the condition TL = 0, for a 60 Hz sinusoidal input voltage with a magnitude of 1 injected into the DC motor, M11 and M21 can be simulated as shown in Fig. 10.8. By comparing the amplitude and the phase between the input voltage

Fig. 10.9 Input voltage (solid) and response current (dash)

and the response current (and the angular velocity) as depicted in Fig. 10.9 (and Fig. 10.10), M11 (and M21) can be calculated as

M11 = 0.1378 + 0.0114i   (and M21 = 0.1431 − 1.327i).    (10.14)

On the other hand, M12 and M22 can be found by Fig. 10.11 under the condition of V = 0, where another motor is adopted to generate the input torque. From the results shown in Figs. 10.12 and 10.13, M12 and M22 can be computed as

M12 = 0.1431 − 1.327i
M22 = −13.93 + 44.2378i.    (10.15)

The transduction matrix G can be obtained from the LFT matrix M, by (5.28), as

  [ G11  G12 ]   [ 4.7619              0.0026 + 0.1036i ]
  [ G21  G22 ] = [ 34.0714 + 6.8217i   0.0804 + 0.7449i ],    (10.16)

which is identical to (10.9) derived earlier.
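The conversion from the measured LFT entries (10.14)–(10.15) to the transduction matrix (10.16) amounts to eliminating V between the two rows of M, which requires M21 ≠ 0. The sketch below is a direct implementation of that elimination (the closed form corresponding to (5.28)) and checks it against (10.16); the function name is illustrative only.

```python
# Given [I; w] = M [V; TL], solve the second row for V and substitute into the
# first row to obtain [I; V] = G [TL; w] (valid when M21 is nonzero):
#   G11 = M12 - M11*M22/M21,  G12 = M11/M21,
#   G21 = -M22/M21,           G22 = 1/M21.
def lft_to_csd(m11, m12, m21, m22):
    return (m12 - m11*m22/m21, m11/m21, -m22/m21, 1/m21)

# Measured values from (10.14) and (10.15) at 60 Hz
M11 = 0.1378 + 0.0114j
M12 = 0.1431 - 1.327j
M21 = 0.1431 - 1.327j
M22 = -13.93 + 44.2378j

G11, G12, G21, G22 = lft_to_csd(M11, M12, M21, M22)
print(G11, G12, G21, G22)   # close to the entries of (10.16)
```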


This section has shown that the equivalent electrical impedance Ze is a function
of mechanical loading Zm while the load is connected. In other words, if the dynamic
properties of the motor and transmission are known in advance, then the operating
condition of the mechanical loading can be monitored by the measurement of input

Fig. 10.10 Input voltage (solid) and response angular velocity (dash)

Fig. 10.11 Simulation of current and angular velocity with V = 0

electrical impedance of the DC motor. An application of this method for monitoring drill breakages in micro-drilling can be found in [3], where a drill breakage, identified from the variation of the mechanical loading, is detected through the measurement of the equivalent electrical impedance.

Fig. 10.12 Input torque (solid) and response current (dash)

Fig. 10.13 Input torque (solid) and response angular velocity (dash)

10.3 Coprime Factorization Approach to System Identification

The innermost layer of a servo drive system is the current control loop. For the convenience of controller design in the velocity loop, the bandwidth of the current control loop must be made much higher than that of the velocity loop, e.g., ten times. When high-gain, closed-loop current control is implemented as a minor-loop control, the transfer function from the current command to the current output in the current control loop can be simplified as 1 [10]. Then, the motor model of Fig. 10.1 for speed control design can be simplified to the first-order model Gm = Kt/(Js + B), i.e., a simple motor torque constant, which converts current to torque, a single rotational inertia J, and a damping factor B. Obviously, the assumptions made here imply that the control of current in this servo loop equivalently generates the desired torque, in that the magnitude of the driving current is approximately proportional to the torque. Notice that the current controller of industrial servo drives, in practice, cannot be tuned by the user. It is thus necessary to identify the controlled system for speed control design. The approach outlined in this section employs a plant representation in terms of a coprime factorization, which can estimate the plant dynamics under feedback control. Identification methods for dealing with closed-loop experimental data have been developed; see [5] for an overview.
Consider the feedback configuration depicted in Fig. 10.14, where G and K are the controlled plant to be identified and a stabilizing controller, respectively. Assume that the input signal r is available from measurements and a controller K is given such that the feedback system is stable. Then, the transfer functions from r to y and u, respectively, can be found as

y = Hyr r = (GK/(1 + GK)) r    (10.17)

Fig. 10.14 Closed-loop system identification

Fig. 10.15 Experimental setup of servomotor

and

u = Hur r = (K/(1 + GK)) r.    (10.18)

Thus, by measuring the black-box transfer functions Ĥyr and Ĥur, an estimated G can be obtained as Ĝ = Ĥyr Ĥur⁻¹. This shows that, in fact, the identification method based on the closed-loop data is derived from the concept of a coprime factorization of the plant model, provided that the controller K has no unstable zeros. In practice, only measurements of the frequency responses of the signals u and y are required to obtain an estimated G within a certain bandwidth by a dynamic spectrum analyzer. Notice that the measured frequency responses of the coprime factors of a possibly unstable plant are obtained from closed-loop experimental data.
For example, consider the experimental setup with a servomotor and a current control power amplifier as shown in Fig. 10.15, where the transfer function from the current input to the velocity, Gm(s), will be identified. Let K(s) = 2 + 4/s be chosen as a stabilizing controller and the excitation signal r be a swept sine applied to the closed-loop control system. The dynamic signal analyzer, which measures the frequency responses of the motor velocity y = ω(t) and the motor current u = i(t) simultaneously, can calculate the Bode plot of Gm from the experimental data. Figure 10.16 shows the measured Bode diagram; subsequently, curve fitting gives the identified model

Ĝm(s) = Kt/(Js + B) = 0.2/(0.000058s + 0.00056),    (10.19)

where J = 0.000058 kg·m², B = 0.00056 N·m·s/rad, and Kt = 0.2 N·m/A. This model is close to the basic specifications of the servomotor given in Table 10.1 and will be used for control design in the following sections.
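The identity Ĝ = Ĥyr Ĥur⁻¹ holds frequency by frequency, so it can be checked numerically: the sketch below synthesizes the two closed-loop frequency responses (10.17) and (10.18) from the identified model and the controller assumed above, K(s) = 2 + 4/s, and recovers the plant exactly by division.

```python
import math

def G_true(s):
    """Identified motor model (10.19)."""
    return 0.2/(0.000058*s + 0.00056)

def K(s):
    """Stabilizing PI controller assumed in the experiment."""
    return 2 + 4/s

# At each test frequency, form the closed-loop responses (10.17)-(10.18)
# and recover the plant as G_hat = H_yr / H_ur.
for f in (1, 10, 100, 1000):
    s = 1j*2*math.pi*f
    Lp = G_true(s)*K(s)
    H_yr = Lp/(1 + Lp)        # r -> y
    H_ur = K(s)/(1 + Lp)      # r -> u
    G_hat = H_yr/H_ur
    assert abs(G_hat - G_true(s)) < 1e-9*abs(G_true(s))
print("plant recovered at all test frequencies")
```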

Fig. 10.16 Experimental data and identified model of DC motor (Bode diagram)

10.4 H∞ Robust Controller Design for Speed Control

10.4.1 PDF Controller

10.4.1.1 Classical Controller Design

Consider a control law for the velocity loop, namely, a pseudo derivative feedback (PDF) controller [2], which will be designed to generate the torque command. Figure 10.17 shows a PDF controller for the speed control of the DC motor, where Kp denotes the proportional gain and Ki the integral gain constant. The closed-loop transfer function from ω* to ω is given by

T(s) = ω(s)/ω*(s) = (Ki Kt/J) / (s² + ((B + Kp Kt)/J)s + (Ki Kt/J)).    (10.20)

Let

ζ = (B + Kp Kt)/(2√(Ki Kt J)),    (10.21)

ωn = √(Ki Kt/J).    (10.22)
10.4 H1 Robust Controller Design for Speed Control 315

Fig. 10.17 Speed control with classical PDF controller

Then, T(s) of (10.20) can be written in the standard second-order form as

T(s) = ωn² / (s² + 2ζωn s + ωn²),    (10.23)

where ζ is the damping ratio and ωn the natural frequency. It is known [6] that the bandwidth of the standard second-order system can be found as

BW = ωn √(1 − 2ζ² + √(2 − 4ζ² + 4ζ⁴)).    (10.24)

Thus, by specifying the damping ratio ζ and bandwidth BW based on design specifications and experience, the controller parameters (Kp, Ki) of the PDF controller can be determined directly, as shown in (10.25) below. The control performance resulting from this classical PDF design approach is considered here as a reference to be compared with the robust controller obtained from an H∞ control design.
Consider the illustrated DC servomotor with the nominal parameters given in Table 10.1. For the damping ratio ζ* = 0.9 and the bandwidth BW* = 100 Hz, the PDF controller parameters are determined by

Ki = Jωn²/Kt = 194.74,    Kp = (2ζJωn − B)/Kt = 0.41.    (10.25)
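Working (10.24) backwards from the specification ζ* = 0.9, BW* = 100 Hz gives ωn, and (10.25) then gives the gains. The sketch below reproduces the numbers; the small differences from the printed values are due to rounding.

```python
import math

J, B, Kt = 5.77e-5, 0.00055, 0.21   # nominal parameters (Table 10.1)
zeta = 0.9
BW = 2*math.pi*100                  # 100 Hz bandwidth in rad/s

# Invert (10.24) for the natural frequency, then apply (10.25).
factor = math.sqrt(1 - 2*zeta**2 + math.sqrt(2 - 4*zeta**2 + 4*zeta**4))
wn = BW/factor                      # about 842.2 rad/s
Ki = J*wn**2/Kt                     # about 194.9 (printed: 194.74)
Kp = (2*zeta*J*wn - B)/Kt           # about 0.414 (printed: 0.41)
print(round(wn, 1), round(Ki, 2), round(Kp, 3))
```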
However, due to the external load and/or parameter inaccuracy, the moment of inertia of a servo drive system often varies in practice during operation. A variation of the parameter J away from its nominal value Jo = 5.77 × 10⁻⁵ kg·m² will lead to a significant alteration of the output response of the speed control. To illustrate the effect of model uncertainty, the three cases J = 0.1Jo, J = Jo, and J = 10Jo are investigated by computer simulations. Let the step speed command be 100 rad/s. Figure 10.18 shows the step responses based on the classical PDF design obtained by (10.25) above. Note that the integral term of the PDF controller assures a zero steady-state error for the step input.
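To see the role of the integrator, the PDF loop of Fig. 10.17 can be simulated directly; the forward-Euler sketch below (with the gains of (10.25) and zero load torque) drives the speed to the 100 rad/s command with zero steady-state error.

```python
# Forward-Euler simulation of the classical PDF speed loop (Fig. 10.17):
#   u = integral of Ki*(w_ref - w) dt  -  Kp*w,   torque = Kt*u
J, B, Kt = 5.77e-5, 0.00055, 0.21
Ki, Kp = 194.74, 0.41
w_ref, dt = 100.0, 1e-6

w, xi = 0.0, 0.0                    # motor speed and integrator state
for _ in range(int(0.1/dt)):        # simulate 0.1 s
    xi += Ki*(w_ref - w)*dt         # integral action
    u = xi - Kp*w                   # PDF control law (current command)
    w += (Kt*u - B*w)/J*dt          # motor dynamics (10.4), TL = 0
print(round(w, 2))                  # settles at the command value
```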
In Fig. 10.17, according to the closed-loop transfer function (10.20) from ω* to ω, the characteristic equation of the speed control system is Js² + (B + Kp Kt)s + Kt Ki = 0. To characterize the variations of the closed-loop poles with respect to J, let 1 + kL(s) = 0, where k = 1/J and

L(s) = ((B + Kp Kt)s + Kt Ki)/s².    (10.26)
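The effect of the inertia variation on the closed-loop poles can be checked by solving the characteristic equation Js² + (B + Kp Kt)s + Kt Ki = 0 for each J. The sketch below uses the (unrounded) gains from the ζ* = 0.9, ωn = 842.18 rad/s design and recovers the poles −757.96 ± 367.1i quoted for the nominal inertia.

```python
import cmath

B, Kt, Jo = 0.00055, 0.21, 5.77e-5
wn, zeta = 842.18, 0.9
Ki = Jo*wn**2/Kt
Kp = (2*zeta*Jo*wn - B)/Kt

def poles(J):
    """Roots of J*s^2 + (B + Kp*Kt)*s + Kt*Ki = 0."""
    b, c = B + Kp*Kt, Kt*Ki
    disc = cmath.sqrt(b*b - 4*J*c)
    return (-b + disc)/(2*J), (-b - disc)/(2*J)

p_nom = poles(Jo)          # about -757.96 +/- 367.1i
p_small = poles(0.1*Jo)    # two real poles: no overshoot
p_large = poles(10*Jo)     # lightly damped complex pair: large overshoot
print(p_nom)
```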

Fig. 10.18 Time responses subject to model uncertainty J

Fig. 10.19 Root locus plot according to variation of 1/J

Figure 10.19 presents the root locus, with a zero located at −Kt Ki/(B + Kp Kt) and two open-loop poles at the origin. Then, for J = Jo = 5.77 × 10⁻⁵ kg·m², the closed-loop control system has the complex conjugate poles −757.96 ± 367.1i (i.e., ζ* = 0.9 and ωn = 842.18 rad/s). As can be expected from the root locus of Fig. 10.19, Fig. 10.18 shows that the speed response resulting from the conventional design with J = 10Jo becomes slightly slower than that of the nominal case J = Jo, and its controlled output oscillates significantly with an overshoot of around 40 % (i.e., 40 rad/s). However, the speed response resulting from J = 0.1Jo has no overshoot,

Fig. 10.20 Weighted PDF controller design

and it is almost the same as that of J = Jo, which can be expected from Fig. 10.19. This implies that if the controlled plant has certain parameter variations, how to ensure robustness of the control performance becomes an important design issue. In the following, H∞ control design is adopted to find the PDF (or PDFF) controller in the velocity loop to enhance the dynamic stiffness and to reduce the effect of system uncertainty.

10.4.1.2 Robust Controller Design with Constant Weighting Functions

Many control design approaches for improving the dynamic stiffness have been proposed. Undoubtedly, H∞ control is one of the most appropriate techniques for dealing with robust stability with respect to parameter variations, which appear commonly in industrial drives [1]. In practice, high dynamic stiffness often results from large control efforts. As a matter of fact, robust H∞ design often leads to high-order dynamic controllers. Hence, a trade-off between controller order and system performance should be considered in the formulation of design problems. In the following, H∞ control design is adopted to find the PDF (or PDFF) controller in the velocity loop to enhance the dynamic stiffness.
Let Gm(s) = Kt/(Js + B) be the transfer function of the DC servomotor from the torque to the velocity, with a state-space realization denoted by (Am, Bm, Cm).
Consider the H∞ PDF design scheme of Fig. 10.20, where the weighting functions we1, we2, and wu should be chosen properly to satisfy the desired specifications. With the trade-off between system performance and computational complexity in mind, let the weighting functions all be positive constants under practical considerations.
Let P denote the SCC plant in the control framework shown in Fig. 10.21, where the closed-loop transfer function from ω* to z = [ze1, ze2, zu]ᵀ is denoted by LFTl(P, K∞). Then, the PDF control design of Fig. 10.20 is formulated into the

Fig. 10.21 Standard framework of PDF control problem: Pγ(s) with exogenous input ω*, outputs [ze1, ze2, zu]ᵀ, measurements [y1, y2]ᵀ, and controller [Kp Ki] producing u

standard H∞ control problem, and the stabilizing controller K∞ = [Kp, Ki] is found such that, for a prespecified γ > 0,

‖LFTl(P, K∞)‖∞ < γ   or equivalently   ‖LFTl(Pγ, K∞)‖∞ < 1,    (10.27)

where

  Pγ = [ γ⁻¹P11   γ⁻¹P12
         P21       P22   ]    (10.28)
and

(10.29)

The SCC plant P above is a (3 + 2) × (1 + 1) transfer function matrix with dim(A) = 2. Since C2 = I and D21 = 0, this is a special state feedback (SF) problem as mentioned in previous chapters. It can also be verified that
(a) (A, B2) is stabilizable and (C2, A) is detectable.
(b) rank [ A − jωI  B2 ; C1  D12 ] = 1 + 2 and rank D12 = 1.
Based on the algorithm presented in Chap. 9, it can be expected that the static controller parameters (i.e., Kp and Ki) can be solved for directly by the proposed SF H∞ solution process.
Naturally, selecting different weighting functions will lead to different controller parameters. To illustrate the relationship between the weighting functions and the resulting controllers, only one weighting function is adjusted at a time in the following cases.

Case 1 Fixed we1 = 1, wu = 1, and γ = 1.

  Controller    we2 = 0.2    we2 = 0.4    we2 = 0.8
  Kp            0.20         0.43         1.33
  Ki            1            0.91         0.78

In this case, the controller parameter Kp is affected significantly by weighting


function we2 .
Case 2 Fixed we2 = 0.4, wu = 1, and γ = 1.

  Controller    we1 = 0.1    we1 = 1    we1 = 10
  Kp            0.43         0.43       0.44
  Ki            0.093        0.93       9.26

Obviously, the controller parameter Ki is proportional to the weighting function


we1 , while Kp remains almost the same for different we1 .
Case 3 Fixed we2 = 0.4, we1 = 1, and γ = 1.

  Controller    wu = 0.1    wu = 1    wu = 10
  Kp            4.36        0.43      0.04
  Ki            9.28        0.93      0.09

In this case, the ratio between Kp and Ki (i.e., Ki/Kp) remains almost the same, but both of them are inversely proportional to wu. Recall that, in the root locus of Fig. 10.19 with varying 1/J, the poles and zero of the loop transfer function L(s) are p1 = 0, p2 = 0, and z = −Kt Ki/(B + Kp Kt) ≈ −Ki/Kp, respectively. Therefore, Ki/Kp, determined by the selections of we1 and we2 as discussed above, will naturally affect the tendency of the closed-loop poles. Thus, the weighting function wu can be chosen properly to achieve the desired closed-loop poles, which characterize the system response.
Consequently, for we1 = 19.51, we2 = 0.015, and wu = 0.05, solving the SF design problem of Fig. 10.21 with γ = 1, the controller parameters are found as

Ki = 272.9 and Kp = 0.55.    (10.30)
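Although reproducing the SF H∞ synthesis itself is beyond a short sketch, the resulting gains (10.30) are easy to check against the inertia variation: with the PDF structure, the characteristic polynomial Js² + (B + Kp Kt)s + Kt Ki has all positive coefficients, so the loop stays stable for every J > 0. A minimal numerical confirmation:

```python
import cmath

B, Kt, Jo = 0.00055, 0.21, 5.77e-5
Kp, Ki = 0.55, 272.9        # robust PDF gains from (10.30)

def poles(J):
    b, c = B + Kp*Kt, Kt*Ki
    d = cmath.sqrt(b*b - 4*J*c)
    return (-b + d)/(2*J), (-b - d)/(2*J)

# All closed-loop poles remain in the open left half-plane for 0.1*Jo..10*Jo.
worst = max(p.real for J in (0.1*Jo, Jo, 10*Jo) for p in poles(J))
print(worst < 0)   # True
```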

Analogous to the classical PDF design, the step speed responses resulting from the H∞ SF design are shown in Fig. 10.22. Note that the weighting functions are chosen purposely such that the step response for the nominal case J = Jo is close to that of the classical PDF design, as shown in Fig. 10.23. Compared with the classical PDF design, Fig. 10.24 shows the step responses with J = 10Jo, in which the maximum overshoot and settling time have been improved in the illustrated design example by using the H∞ design approach when the actual rotational inertia is ten times its nominal value. Of course, in addition to the better performance

Fig. 10.22 Step responses with robust PDF design

Fig. 10.23 Comparison of step responses with nominal J = Jo

Fig. 10.24 Comparison of step responses with J = 10Jo



Fig. 10.25 H∞ design of PDFF control scheme

achieved, the other prominent feature of the H∞ approach is its systematic solution procedure, which guarantees closed-loop stability and applies to multivariable systems. No comparison is presented here for the case of J = 0.1Jo, because the simulations are very similar to those with the nominal inertia.

10.4.2 PDFF Controller

The pseudo derivative feedback with feedforward (PDFF) control scheme is an extended version of the PDF controller, augmenting a command feedforward path that provides extra freedom in the tuning procedure. As a controller scheme widely used in industrial practice, the classical PDFF control scheme has three constant gains (α, Kp, and Ki) for performance tuning, in which the feedforward term α makes the system more responsive to the command. The convenience and simplicity of tuning parameters in the PDFF scheme makes it appealing in many industrial drives. Nevertheless, the classical PDFF scheme may not easily yield satisfactory stability and performance simultaneously when the system has high-order dynamics and disturbances. To obtain higher dynamic stiffness and better stability robustness, this section investigates an extension of the classical PDFF control scheme by allowing the control parameters, Kp and Ki, to be dynamic controllers, denoted by K(s) = [Kp(s) Ki(s)]. Such a scheme is called advanced PDFF in the following design examples and is recast as a particular two-degree-of-freedom configuration.
Analogous to the PDF design of Fig. 10.20 above, consider the PDFF design problem of Fig. 10.25, which involves an extra input n. This, in fact, becomes a weighted mixed-sensitivity design but with a particular controller scheme. Let the weighting constants we1, we2, and wu all be positive. Then, the closed-loop transfer function from [ω*, n]ᵀ to [ze1, ze2, zu]ᵀ is denoted by LFTl(P, K∞), where the SCC plant P is a (3 + 2) × (2 + 1) transfer matrix whose state-space realization can be readily found from Fig. 10.25 as

(10.31)

Since D21 is nonsingular with the feedforward path gain α > 0, this formulation is no longer a special SF design problem but rather a general case. It can be verified that
(a) (A, B2) is stabilizable and (C2, A) is detectable.
(b) rank [ A − jωI  B2 ; C1  D12 ] = 2 + 1 and rank D12 = 1.
(c) rank [ A − jωI  B1 ; C2  D21 ] = 2 + 2 and rank D21 = 2.
Note that the extra input n has been introduced in the PDFF design problem of Fig. 10.25 such that the above assumptions are all satisfied. Thus, the explicit solution of this PDFF control scheme problem can be obtained by utilizing the CSD method presented in Chap. 9.
For we1 = 32.75, we2 = 0.05, wu = 1, and α = 0.04, solving the H∞ suboptimal control problem ‖LFTl(Pγ, K∞)‖∞ < 1 with γ = 1.75 will yield

(10.32)

The system step responses resulting from the PDFF scheme controller in the H∞ design are shown in Fig. 10.26, where the closed-loop poles with J = Jo are given by −9.5392, −25, −53.8201, −151,833.9 and the closed-loop zeros are −9.5392, −25. As can be seen, for the case of J = 10Jo, the step response becomes slower than that of the previous PDF design, although there is a feedforward gain α = 0.04 in this PDFF controller. This is due to the H∞ design formulation, which exhibits pole-zero cancellations at p1 = −1/α = −25 and p2 = −B/Jo = −9.539; these would affect the control performance significantly when uncertainties occur. This inherent pole-zero cancellation property in the closed-loop system resulting from the weighted mixed-sensitivity design was investigated in [12]. In the following, the concept of the H∞ loop-shaping design addressed in [4, 8, 11] is employed to overcome this problem.

Fig. 10.26 Step responses resulting from robust PDFF design

10.4.3 Coprime Factorization Approach to Advanced PDFF Controller

The design example here employs coprime factorization descriptions to formulate the advanced PDFF design as an H∞ weighted mixed-sensitivity problem [13]. The advanced PDFF controller is found by a partial pole placement technique [9] and the loop-shaping design of the normalized coprime factorization [4]. The proposed design method provides a useful property in that performance attributes, such as bandwidth and stability, can be designed simultaneously.
Recall that Gm(s) = Kt/(Js + B) denotes the transfer function of the servomotor from the current (torque) command to velocity. To retain the structure of the feedforward gain and the integrator in the PDFF controller scheme, consider the augmented one-input-two-output plant given by

(10.33)

The problem formulation here employs the concept of the H∞ loop-shaping design [8], where the original plant Gm is shaped by (Cf/s)Gm to obtain an augmented plant Gs. To yield satisfactory stability and performance simultaneously, the coprime factorization (CF) description of the controlled plant discussed in Sect. 9.4 is employed to formulate the H∞ weighted mixed-sensitivity problem.

Fig. 10.27 Advanced PDFF scheme with CF description

Consider the control design problem shown in Fig. 10.27, where Gm = M̃⁻¹Ñ denotes a left coprime factorization. Notice that there are two outputs of the augmented plant Gs, i.e., ω, the output of Gm, and ωi, the output of (Cf/s)Gm, as shown in Fig. 10.27. From (6.30), one has

(10.34)

where Hm is an observer gain matrix such that Am + HmCm is stable and w̃ > 0. Furthermore, a coprime factorization of Gs = M̃s⁻¹Ñs can be found as

(10.35)

(10.36)

where

Hs = [ 0  Hm ; −kf⁻¹  1 ]   and   W̃s = [ kf⁻¹  0 ; 0  w̃ ].    (10.37)

Figure 10.28 shows the weighted mixed-sensitivity design problem of the coprime factorization approach. Note that, similarly to ωd, the input command ω* can be considered one of the exogenous inputs in Fig. 10.28. This implies that both command tracking and disturbance rejection can be taken into account simultaneously in this design method.

Fig. 10.28 PDFF design formulation of CF loop-shaping approach

Fig. 10.29 State-space representation of CF design approach

To explore the loop-shaping approach with the internal weights, let the output weighting functions be wu = 1.35, we1 = 1, we2 = 1 to satisfy the design specification. By redrawing Fig. 10.28, the proposed design problem can be reconstructed as depicted in Fig. 10.29, which gives a clearer representation. An important feature of this formulation is that both the feedforward gain kf and the integrator of the PDFF controller structure with internal gain Cf have been taken into account as part of the augmented plant. The existence of the feedforward term kf makes the system more responsive to commands and provides extra freedom in the tuning procedure. Note that the overall controller to be implemented should comprise an integrator with a gain of Cf, a feedforward gain kf, and the computed dynamic H∞ controller, as

u = Ki(s) (kf ω* + (Cf/s)(ω* − ω)) + Kp(s) ω.    (10.38)

Fig. 10.30 Equivalent design problem: the coprime factors Ñs, M̃s⁻¹ driven by u and [ω*, ωd]ᵀ produce [ωi, ω]ᵀ, which K∞ feeds back as u

By routine algebraic manipulations, the design problem can be transformed into Fig. 10.30. Then, the advanced PDFF control design problem becomes the standard H∞ control problem of Fig. 10.30, where K∞ = [Kp Ki] and the closed-loop transfer function from [ω*, ωd]ᵀ to [ωi, ω, u]ᵀ is denoted by LFTl(P, K∞). The H∞ control problem is therefore to find a stabilizing controller K∞ = [Kp Ki] such that, for a prespecified γ > 0,

‖LFTl(Pγ(s), K∞(s))‖∞ = γ⁻¹ ‖ [ S ; K∞S ] M̃s⁻¹ ‖∞ < 1,    (10.39)

where S = (I − Gs K∞)⁻¹ is the sensitivity function. A state-space realization of the SCC plant P can be readily found as

(10.40)

Now kf = 0.5 and Cf = 500 are selected for the illustrated design example. Then, the normalized coprime factors of Gm can be found with w̃ = 1 and Hm = −YCmᵀ = −3,632.7024, where the ARE

Y Amᵀ + Am Y − Y Cmᵀ Cm Y + Bm Bmᵀ = 0    (10.41)

gives Y = 3,632.7024. Then, for the prespecified γ = 1.75, solving the H∞ suboptimal control problem ‖LFTl(Pγ, K∞)‖∞ < 1 with Pγ given by (10.40) will yield the center controller as
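For this first-order plant the filter ARE (10.41) is scalar and can be solved in closed form. The sketch below assumes the realization Am = −B/J, Bm = Kt/J, Cm = 1 built from the Table 10.1 values; it reproduces Y ≈ 3,632.7 and Am + HmCm ≈ −3,642.2 to within the rounding of the motor parameters.

```python
import math

J, B, Kt = 5.77e-5, 0.00055, 0.21
Am, Bm, Cm = -B/J, Kt/J, 1.0     # a state-space realization of Gm = Kt/(J*s + B)

# Scalar ARE (10.41): 2*Am*Y - (Cm*Y)**2 + Bm**2 = 0, positive root:
Y = Am + math.sqrt(Am**2 + Bm**2)
Hm = -Y*Cm                        # observer gain of the normalized coprime factors

print(round(Y, 1))                # about 3630 (printed: 3632.7024)
print(round(Am + Hm*Cm, 1))       # about -3639.5 (printed: -3642.2)
```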

Fig. 10.31 Step responses with CF approach PDFF design

(10.42)

It can also be found from Fig. 10.29 that the state-space realization of the closed-loop system from [TL, ω*]ᵀ to ω, denoted Tcl(s), is given by

(10.43)

where the closed-loop poles are −1,356,300, −842.2846, −3,642.2, and −1,000 and the closed-loop zeros are −3,642.2 and −1,000, which come from Am + HmCm = −3,642.2 and −kf⁻¹Cf = −1,000. It was addressed in [9] that all of the eigenvalues of As + HsCs = [ Am + HmCm  0 ; 0  −kf⁻¹ ] are closed-loop poles in the design problem of Fig. 10.28. This implies that the advanced PDFF design can provide an alternative for partial pole placement by assigning (As + HsCs). Obviously, this CF design approach does not exhibit the pole-zero cancellation occurring at −B/Jo = −9.539, which would affect the control performance. The system step responses resulting from the H∞ design are shown in Fig. 10.31.

Fig. 10.32 Bode plots resulting from CF design approach (magnitude and phase of Gm, (Cf/s)Gm, and KpGm + (CfKi/s)Gm)

In the loop-shaping design, the structure and gains of the internal weighting function Csf can be selected to obtain the desired magnitude of the frequency response of the loop transfer function CsfGm, and then the controller K∞ = [Kp Ki] is found with the guaranteed stability properties [8]. Figure 10.32 shows the Bode plots of the original plant Gm (the solid line), the shaped plant CsfGm (the dashed line), and the loop transfer function (Kp + (Cf/s)Ki)Gm (the dotted line), respectively, where |(Kp(s) + (Cf/s)Ki(s))Gm(s)| ≈ |(Cf/s)Gm(s)| for s = jω at frequencies up to 1,000 rad/s. As can be seen from the dashed line in Fig. 10.32, the original plant Gm is shaped into CsfGm so that the open-loop magnitude approaches infinity at very low frequencies, while the stabilizing controller K∞ = [Kp Ki] provides phase compensation, as seen in the phase plots, to improve the transient response of the control system.
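The low-frequency approximation quoted above can be verified on a frequency grid; since Gm cancels in the ratio, only the controller factors matter. The Kp and Ki values here are hypothetical choices (Ki near unity with a small Kp) for which the approximation holds:

```python
import numpy as np

# Ratio |(Kp + Cf*Ki/s) Gm| / |(Cf/s) Gm| on s = jw; Gm divides out, so the
# check reduces to the two controller factors.  Kp and Ki are illustrative.
Cf = 500.0
Kp, Ki = 0.2, 1.0

w = np.logspace(-1, 3, 500)               # frequencies up to 1,000 rad/s
ratio = np.abs(Kp + Cf * Ki / (1j * w)) / np.abs(Cf / (1j * w))
print(ratio.min(), ratio.max())           # stays within about 1 dB of unity
```

The worst deviation occurs at the top of the band, where the Kp term is no longer negligible against CfKi/ω; above that band the proportional term takes over and shapes the crossover.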
It should be noted that the weighting functions in the above design examples are chosen purposely such that the step responses of the closed-loop transfer function from ω* to ω resulting from the above three H∞ designs, i.e., the SF PDF, robust PDFF, and CF PDFF designs, are all close to that of the classical PDF design at the nominal case J = Jo, as shown in Fig. 10.33. The peak values of the control efforts in both the robust PDFF and CF PDFF designs are larger than those of the other two due to the added feedforward loop aiming at fast response. In the case of 0.1Jo shown in Fig. 10.34, the robust PDFF design shows an even faster response, and the control efforts in the two PDFF structures are naturally higher. Note that the peak value of the control effort due to the step input can be reduced in practice by a soft-starting technique such as an S-curve. Compared with that of the classical PDF design, Fig. 10.35 shows that the step response resulting with J = 10Jo has been improved in

Fig. 10.33 Comparisons of step responses and control efforts with J = Jo from four controllers

Fig. 10.34 Comparisons of step responses and control efforts with J = 0.1Jo from four controllers

the maximum overshoot and settling time, especially by the CF PDFF design. For the step response of the robust PDFF design in Fig. 10.35, a slower transient response with overshoot can be found. This is because the H∞ design formulation involves pole-zero cancellations, which affect the control performance significantly when uncertainties occur.
Disturbance rejection ability, also known as dynamic stiffness, is another important index with which to evaluate the above servo controller designs. Enhancing the

Fig. 10.35 Comparisons of step responses and control efforts with J = 10Jo from four controllers

Fig. 10.36 Dynamic stiffness plots resulting from different controller designs

dynamic stiffness is equivalent to providing restoring torque to attenuate the output deviation such that the effect of a process disturbance is reduced. Dynamic stiffness is characterized by the inverse of the magnitude of the closed-loop transfer function from the load torque disturbance TL to the output speed ω. Thus, high dynamic stiffness is essential for the control system to obtain good disturbance rejection ability.
The inverse plots of the magnitude of the closed-loop transfer function from TL to ω resulting from the above four designs are shown in Fig. 10.36, where the CF PDFF design shows the best dynamic stiffness among the four design examples. To achieve higher dynamic stiffness and better stability robustness, the CF PDFF design

extends the feedback-loop control parameters to dynamic controllers. In practice, however, high dynamic stiffness often results in large control efforts; hence, this trade-off should be considered carefully. Due to the pole-zero cancellations, the dynamic stiffness of the robust PDFF design is lower than that of the other three designs.
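As a numerical sketch of the dynamic stiffness measure, consider a hypothetical inertia plant 1/(Js + B) under PI-type speed control Kp + Ki/s (the actual four designs are not reproduced here). The closed-loop map from TL to ω is then s/(Js² + (B + Kp)s + Ki), and its inverse magnitude grows like Ki/ω at low frequency: the integral action supplies the restoring torque.

```python
import numpy as np

# Dynamic stiffness = 1/|T(jw)|, with T(s) the closed-loop map from load
# torque TL to speed w.  The plant 1/(Js + B) and the gains are hypothetical.
J, B, Kp, Ki = 1.0, 0.1, 10.0, 100.0

def dynamic_stiffness(w):
    s = 1j * w
    T = s / (J * s**2 + (B + Kp) * s + Ki)   # TL -> w for this simple loop
    return 1.0 / abs(T)

for w in (0.01, 1.0, 10.0):
    # stiffness falls with frequency; at low w it approaches Ki/w
    print(w, dynamic_stiffness(w))
```

Raising Ki in this sketch lifts the low-frequency stiffness directly, which mirrors the trade-off noted above: higher stiffness demands larger control effort.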

10.5 Summary

This chapter presents an application-oriented study that combines the PDFF control scheme frequently used in industry with H∞ control theory. Four different design methodologies, i.e., classical PDF, SF PDF, robust PDFF, and CF PDFF, for the speed control of DC servomotors are presented. These examples show how industrial controllers can be formulated into the standard control design framework and then solved by the state-space solution procedures presented in previous chapters. To achieve a high dynamic stiffness design, the classical PDFF control design is formulated into the standard H∞ control problem, i.e., a special state feedback (SF) problem. A weighted mixed-sensitivity design problem of the coprime factorization (CF) approach is also provided. Both command tracking and disturbance rejection can be taken into account simultaneously in this design method.
The concept of H∞ loop-shaping design is also introduced to formulate the advanced PDFF control scheme into a weighted mixed-sensitivity design problem based on an augmented pseudo plant and the coprime factor description. This chapter investigates the peculiar pole-zero properties in the explicit form of the advanced PDFF design and characterizes the benefit and design procedure of the special H∞ pole placement property. The proposed design method provides the useful property that the bandwidth and the dynamic stiffness can be designed stage by stage, and this property has been successfully applied to the design of a servomotor system.
To sum up, the CF PDFF design provides some freedom to prevent or enforce the pole-zero cancellation between the controller and plant. Although pole placement is not a primary objective in the mixed-sensitivity design, such a choice of weights for improving dynamic stiffness is appealing. These design freedoms can be chosen to achieve higher dynamic stiffness with suitable tracking performance while maintaining a certain robust stability.

References

1. Alter DM, Tsao TC (1996) Control of linear motors for machine tool feed drives: design and implementation of H∞ optimal feedback control. ASME J Dyn Syst Meas Control 118:649–656
2. Ellis G (2012) Control system design guide. Elsevier Science, Oxford
3. Fu L, Ling SF, Tseng CH (2007) On-line breakage monitoring of small drills with input impedance of driving motor. Mech Syst Signal Process 21(1):457–465
4. Glover K, McFarlane D (1989) Robust stabilization of normalized coprime factor plant description with H∞-bounded uncertainty. IEEE Trans AC 34(8):821–830
5. Hof P, Schrama R, Callafon R, Bosgra O (1995) Identification of normalized coprime plant factors from closed loop experimental data. Eur J Control 1(1):62–74
6. Kuo BC (1986) Automatic control systems. Prentice Hall, Englewood Cliffs
7. Ling SF, Xie Y (2001) Detecting mechanical impedance of structure using the sensing capability of a piezoceramic inertial actuator. Sens Actuators 93:243–249
8. McFarlane D, Glover K (1992) A loop shaping design procedure using H∞ synthesis. IEEE Trans AC 37(6):759–769
9. Shen BH, Tsai MC (2006) Robust dynamic stiffness design of linear servomotor drives. IFAC J Control Eng Prac 14(11):1325–1336
10. Tal J (1994) Step-by-step design of motion control systems. Galil Motion Control, Inc., Rocklin
11. Tsai MC, Chang JY (1995) LQG/LTR loop shaping design with an application to position control. J Chin Inst Eng 18(2):281–292
12. Tsai MC, Geddes EJM, Postlethwaite I (1992) Pole-zero cancellations and closed-loop properties of an H∞ mixed sensitivity design problem. Automatica 28(3):519–530
13. Whidborne J, Postlethwaite I, Gu DW (1993) Robust controller design using H∞ loop-shaping and the method of inequalities. In: Proceedings of conference on decision and control, San Antonio, TX, USA, pp 2163–2168
Index

A
ABCD parameter, 40, 44, 60
Additive uncertainty, 90
Admittance parameter, 40, 41
Advanced PDFF, 321
  design, 323
Algebraic Riccati equations (ARE), 3, 4, 171, 181
All pass, 56, 92
ARE. See Algebraic Riccati equations (ARE)
Armature resistance, 304
Asymptotical stability, 24

B
Back EMF, 304
Basis, 11
Bezout identity, 156
Bilinear transformations, 58
Bounded-input-bounded-output (BIBO) stability, 24
Bounded real, 58, 59
Bounded real lemma (BRL), 198

C
Canonical decomposition form, 26
Cascaded CSD subsystems, 103
Chain matrices, 45
Chain scattering decomposition, 3
Chain scattering description/chain scattering matrix description (CSD), 3, 31, 99, 213
Chain scattering parameter, 40, 51, 52
Characteristic equation, 12
Characteristic polynomial, 12
Classical PDF, 319
Co-all-pass, 92
Co-inner, 92, 186
Completely controllable, 24
Completely observable, 25
Conjugate system, 23
Controllability, 24
Controllability gramian, 21, 185
Coprime factorization, 3, 4, 145, 147
Coupled (right and left) CSD, 267
CSDl(G̃, K), 31
CSDr(G, K), 31

D
DC permanent magnetic (PM) servomotor, 304
Detectable, 26
Disturbance feedforward (DF), 281
  case, 237–238
dom(Ric), 172
Drill breakages, 310
Dual J-lossless, 137, 206
Dual J-unitary, 137
Dynamic stiffness, 303

E
Eigenvalue, 12
Eigenvector, 12
Electromechanical transducer, 308
Electromotive force constant, 304
Equivalent electrical impedance, 306

F
Finite-dimensional linear time-invariant (LTI) dynamical system, 22
Four-block distance problem, 267

M.-C. Tsai and D.-W. Gu, Robust and Optimal Control: A Two-port Framework Approach, Advances in Industrial Control, DOI 10.1007/978-1-4471-6257-5, © Springer-Verlag London 2014

Four-block problem, 214
Frobenius norm, 15
Full control (FC)
  case, 242–243
  problem, 287
Full information (FI)
  case, 238–239
  problem, 283–284

G
Gramian test, 24, 26

H
H2⊥, 17
Hamiltonian matrix, 171
Hankel norm approximation, 20, 21, 267
Hardy space, 18
H∞ control problem, 268–274
Hermitian, 12
H2-function space, 17
H∞ loop shaping design, 322
H∞-norm, 18
Homographic transformation, 99
H∞ optimal control theory, 3
H parameter, 40
H∞ PDF design, 317, 319
H∞ SF design, 319
H2 problems, 235
Hybrid parameter, 43

I
Image (or range), 11
Impedance parameter, 40, 41
Infinitive norm, 1
Inner, 92, 186
Inner-outer factorization, 196
Integrator, 69
Interconnection system, 31
Internal stability, 24
Inverse, 23
Inverse matrix, 9

J
J-lossless, 137, 206
  coprime factorizations, 267, 269
  function, 186
J-spectral factorization, 203, 206
J-unitary, 137
  T matrix, 57

K
Kernel (or null) space, 11
Kimura, 213
Kronecker delta, 10

L
L2, 16
Laplace transform, 22
Left coprime factors, 148
Left coprime over RH∞, 149
Left CSD transformation, 102
LFT, 31
L∞-function space, 17
L2-function space, 17
Linear fractional transformation (LFT), 4, 65, 231
Linearly dependent, 10
Linearly independent, 10
Linear quadratic Gaussian (LQG), 2
Linear quadratic regulation (LQR), 195
  problem, 257
Linear space, 10
Linear time-invariant (LTI), 1
Load torque, 304
Loop shaping design procedure (LSDP), 290
Lossless, 56
  network, 56
  transmission line, 48
LQR. See Linear quadratic regulation (LQR)
Lyapunov equation, 183

M
Mason's gain formulae, 38, 65, 80, 81, 83, 88, 89
Matlab, 3
Matrix, 8
Matrix inversion, 12
Matrix norm, 14
Matrix ∞-norm, 15
Matrix 1-norm, 15
Matrix 2-norm, 15
Mechanical impedance, 306
Minimal realization, 27
Model reference control problem, 259
Moore-Penrose inverse, 13
Motion of an antenna, 254
Motor winding inductance, 304
Multi-input-multi-output (MIMO), 2
Mutually orthogonal, 11
J-unitary, 137 Multi-input-multi-output (MIMO), 2
T matrix, 57 Mutually orthogonal, 11

N
Negative definite, 9
Negative semi-definite, 9
Nehari problem, 267
∞-norm, 20
1-norm, 19, 20
2-norm, 1, 19
Normal, 9
Normalized coprime factorizations (ncf), 191, 296
Norm of a system, 20
∞-norm of a system, 20
2-norm of a system, 20
Nyquist plot, 20

O
Observability, 25
Observability gramian, 21, 183, 250
One-port and two-port networks, 4
One-port network, 37
Optimal control problem, 267
Optimal H2 controller synthesis, 248
Orthogonal, 9
Orthogonal complement, 11
Orthonormal, 10
Outer, 187, 206
Output estimation (OE)
  case, 240–241
  problem, 285
Output injection (OI)
  case, 243–247
  problem, 289
Output response, 22

P
Parameter H, 43, 46
Pole-zero cancellation property, 322
Positive definite, 9, 25, 26
Positive real, 58, 61
Positive semi-definite, 9
Pseudo derivative feedback (PDF), 303, 314
Pseudo derivative feedback with feed-forward (PDFF), 303, 321
Pseudo-inverse, 13

Q
Quadratic equation, 175

R
Rank, 9
Rank test, 24, 26
Real rational function, 58
RH∞, 17
Ric, 172
Right and left CSD, 3
Right coprime factors, 148
Right coprime over RH∞, 149
Right CSD transformation, 100
Right or left CSD-matrix, 99, 115
Robust control theory, 212
Robust design of servo control systems, 304
Robustness, 2
Rotational inertia, 312
Rotor inertia, 304

S
Scattering parameter, 40, 48
Schur complement, 13
Schur decomposition, 172
Separation principle, 264
S-function, 68
Similarity transformation, 127
Single chain-scattering description (CSD), 267
Single-input-single-output (SISO), 2
Singular value decomposition (SVD), 16
Singular values, 15
Slicot, 3
Solutions of special SCC formulations, 245
Space L∞, 16
Space Lp, 16
S parameter, 40, 48, 49, 52, 54, 55, 57
Special SCC formulations, 236
Special state feedback (SF), 318
Spectral factorization, 3, 4, 185, 267
Spectral radius, 12
Speed control of DC servomotors, 303
Stability, 24
Stabilizable, 25
Stabilizing controller, 1, 3, 5, 217, 224
Standard control (or compensation) configuration (SCC), 4
Standard control configuration (SCC), 3
Star product, 90, 134
State feedback (SF)
  case, 239–240
  problem, 285
State response, 22
State similarity transformation, 23
State-space formulae of stabilizing controllers, 220–227
State-space realization, 22, 27–29
Sub-optimal, 3
Subspace, 10

T
The worst case scenario, 303
Torque constant, 304
T parameter, 40, 52–54, 58, 60, 118, 120
Transduction matrix, 309
Transmission parameter, 40, 45
Transmission parameters, 44, 54
Transpose of a matrix, 8
Transpose of a vector, 8
Two-port network, 37, 38, 40, 43–46, 49, 51, 52, 56, 57

U
Unitary, 9
Unitary S matrix, 57

V
Vector, 7
Vector/matrix manipulations, 9
Vector norm, 14
Vector ∞-norm, 14
Vector 1-norm, 14
Vector 2-norm, 14
Vector p-norm, 14
Viscous friction, 304

W
Weighted all-pass, 185
Weighted all-pass function, 185
Weighted co-all-pass, 185
Weighted mixed sensitivity design, 322
Well-defined, 67
Well-posed, 67

Y
Youla parameterization, 89
Y parameter, 40, 42

Z
Z parameter, 40, 42, 47
