Linear Controller Design: Limits of Performance by Stephen Boyd and Craig Barratt


CHAPTER 13 ELEMENTS OF CONVEX ANALYSIS

$\phi_{\mathrm{mag}\,\omega_0}$ at $H_0$ is a subgradient of $\phi$ at $H_0$. But since $H_0(j\omega_0) \neq 0$, this functional is differentiable at $H_0$, with derivative

$$\phi_{\mathrm{sg}}(H) = \frac{1}{\phi(H_0)}\,\Re\left(\overline{H_0(j\omega_0)}\,H(j\omega_0)\right).$$

This linear functional is a subgradient of $\phi$ at $H_0$. The reader can directly verify that the subgradient inequality (13.3) holds.
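As a numerical sanity check (not from the text), the subgradient inequality can be verified directly. Since $\phi$ and $\phi_{\mathrm{sg}}$ only see the value of the transfer function at $\omega_0$, this sketch models each transfer function by the single complex number $H(j\omega_0)$; the helper names `phi` and `phi_sg` and the specific value of `H0` are assumptions made for illustration.

```python
import random

# Each transfer function is modeled by its value at j*w0 (a complex number),
# since phi(H) = |H(j w0)| depends only on that value.
def phi(Hval):
    return abs(Hval)

def phi_sg(H0val, Hval):
    # Subgradient functional: Re( conj(H0(j w0)) * H(j w0) ) / phi(H0)
    return (H0val.conjugate() * Hval).real / abs(H0val)

random.seed(0)
H0 = complex(1.0, -2.0)   # hypothetical nonzero value of H0(j w0)
for _ in range(1000):
    H = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    # Subgradient inequality (13.3): phi(H) >= phi(H0) + phi_sg(H - H0)
    assert phi(H) >= phi(H0) + phi_sg(H0, H - H0) - 1e-12
```

The inequality holds for every trial because $\Re(\overline{H_0}H) \leq |H_0||H|$, which is exactly the Cauchy-Schwarz argument behind the subgradient property.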

13.4.5 $H_\infty$ Norm of a Transfer Matrix

Now suppose that $H$ is an $m \times p$ transfer matrix, and $\phi$ is the $H_\infty$ norm:

$$\phi(H) = \|H\|_\infty.$$

We will express $\phi$ directly as the maximum of a set of linear functionals, as follows. For each $\omega \in \mathbf{R}$, $u \in \mathbf{C}^m$, and $v \in \mathbf{C}^p$, we define the linear functional

$$\phi_{uv\omega}(H) = \Re\left(u^* H(j\omega)\, v\right).$$

Then we have

$$\phi(H) = \sup\left\{ \phi_{uv\omega}(H) \;\middle|\; \omega \in \mathbf{R},\ \|u\| = \|v\| = 1 \right\}$$

using the fact that for any matrix $A \in \mathbf{C}^{m \times p}$,

$$\sigma_{\max}(A) = \sup\left\{ \Re(u^* A v) \;\middle|\; \|u\| = \|v\| = 1 \right\}.$$
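This variational characterization of $\sigma_{\max}$ is easy to check numerically. The sketch below (an illustration, not from the text) verifies that the top singular vectors attain the supremum and that random unit vectors never exceed it; the matrix `A` and its dimensions are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 3, 2
A = rng.standard_normal((m, p)) + 1j * rng.standard_normal((m, p))

U, S, Vh = np.linalg.svd(A)
sigma_max = S[0]

# The first left/right singular vectors attain the supremum:
# Re(u0^* A v0) = sigma_max(A).
u0, v0 = U[:, 0], Vh[0, :].conj()
attained = np.real(u0.conj() @ A @ v0)
assert abs(attained - sigma_max) < 1e-10

# Any other pair of unit vectors gives a value no larger than sigma_max(A).
for _ in range(1000):
    u = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    v = rng.standard_normal(p) + 1j * rng.standard_normal(p)
    u /= np.linalg.norm(u)
    v /= np.linalg.norm(v)
    assert np.real(u.conj() @ A @ v) <= sigma_max + 1e-10
```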

Now we can determine a subgradient of $\phi$ at the transfer matrix $H_0$. We pick any frequency $\omega_0 \in \mathbf{R}$ at which the $H_\infty$ norm of $H_0$ is achieved, i.e.,

$$\sigma_{\max}(H_0(j\omega_0)) = \|H_0\|_\infty.$$

(Again, we ignore the case where there is no such $\omega_0$, commenting that for rational $H_0$, there always is such a frequency, if we allow $\omega_0 = \infty$.) We now compute a singular value decomposition of $H_0(j\omega_0)$:

$$H_0(j\omega_0) = U \Sigma V^*.$$

Let $u_0$ be the first column of $U$, and let $v_0$ be the first column of $V$. A subgradient of $\phi$ at $H_0$ is given by the linear functional

$$\phi_{\mathrm{sg}} = \phi_{u_0 v_0 \omega_0}.$$
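A quick numerical check (not from the text) confirms this recipe. Since $\phi_{\mathrm{sg}}$ only sees the frequency response at $\omega_0$, the sketch works directly with matrices: `A0` stands in for $H_0(j\omega_0)$ and `A` for $H(j\omega_0)$, both hypothetical. Because $\sigma_{\max}(H(j\omega_0)) \leq \|H\|_\infty$, the matrix-level inequality checked below implies $\phi(H) \geq \phi(H_0) + \phi_{\mathrm{sg}}(H - H_0)$.

```python
import numpy as np

rng = np.random.default_rng(1)
m, p = 3, 2

# A0 plays the role of H0(j w0); this sketch works with the frequency
# response at w0 directly, since phi_sg depends on nothing else.
A0 = rng.standard_normal((m, p)) + 1j * rng.standard_normal((m, p))
U, S, Vh = np.linalg.svd(A0)
u0, v0 = U[:, 0], Vh[0, :].conj()   # first columns of U and V

phi0 = S[0]   # sigma_max(H0(j w0)) = ||H0||_inf, by the choice of w0

for _ in range(500):
    A = rng.standard_normal((m, p)) + 1j * rng.standard_normal((m, p))
    sg = np.real(u0.conj() @ (A - A0) @ v0)   # phi_sg(H - H0)
    # sigma_max(H(j w0)) >= phi(H0) + phi_sg(H - H0); the H-infinity norm
    # of H can only be larger, so the subgradient inequality follows.
    assert np.linalg.svd(A, compute_uv=False)[0] >= phi0 + sg - 1e-10
```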


13.4 COMPUTING SUBGRADIENTS


13.4.6 Peak Gain

We consider the peak gain functional

$$\phi(H) = \|H\|_{\mathrm{pk\,gn}} = \int_0^\infty |h(t)|\, dt.$$

In this case our functional is an integral of a family of convex functionals. We will guess a subgradient of $\phi$ at the transfer function $H_0$, reasoning by analogy with the sum rule above, and then verify that our guess is indeed a subgradient. The technique of the next section shows an alternate method by which we could derive a subgradient of the peak gain functional.

Let $h_0$ denote the impulse response of $H_0$. For each $t \geq 0$ we define the functional that gives the absolute value of the impulse response of the argument at time $t$:

$$\phi_{\mathrm{abs}\,h\,t}(H) = |h(t)|.$$

These functionals are convex, and we can express $\phi$ as

$$\phi(H) = \int_0^\infty \phi_{\mathrm{abs}\,h\,t}(H)\, dt.$$

If we think of this integral as a generalized sum, then from our sum rule we might suspect that the linear functional

$$\phi_{\mathrm{sg}}(H) = \int_0^\infty \phi_{\mathrm{sg}\,t}(H)\, dt$$

is a subgradient for $\phi$, where for each $t$, $\phi_{\mathrm{sg}\,t}$ is a subgradient of $\phi_{\mathrm{abs}\,h\,t}$ at $H_0$.

Now, these functionals are differentiable for those $t$ such that $h_0(t) \neq 0$, and $0$ is a subgradient of $\phi_{\mathrm{abs}\,h\,t}$ at $H_0$ for those $t$ such that $h_0(t) = 0$. Hence a specific choice for our guess is

$$\phi_{\mathrm{sg}}(H) = \int_0^\infty \mathrm{sgn}(h_0(t))\, h(t)\, dt.$$

We will verify that this is a subgradient of $\phi$ at $H_0$. For each $t$ and any $h$ we have $|h(t)| \geq \mathrm{sgn}(h_0(t))\, h(t)$, hence

$$\phi(H) = \int_0^\infty |h(t)|\, dt \geq \int_0^\infty \mathrm{sgn}(h_0(t))\, h(t)\, dt.$$

This can be rewritten as

$$\phi(H) \geq \int_0^\infty \left( |h_0(t)| + \mathrm{sgn}(h_0(t))\left(h(t) - h_0(t)\right) \right) dt = \phi(H_0) + \phi_{\mathrm{sg}}(H - H_0).$$

This verifies that $\phi_{\mathrm{sg}}$ is a subgradient of $\phi$ at $H_0$.
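The same verification can be carried out numerically on a discretized impulse response (a Riemann-sum approximation, not from the text). The grid parameters and the specific impulse response `h0` below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dt = 400, 0.01   # discretize t in [0, 4) on a uniform grid (sketch assumption)

def pk_gain(h):
    # phi(H) = integral of |h(t)| dt, approximated by a Riemann sum
    return np.sum(np.abs(h)) * dt

t = np.arange(n) * dt
h0 = np.exp(-t) * np.cos(5 * t)   # hypothetical impulse response of H0
sgn = np.sign(h0)

def phi_sg(h):
    # phi_sg(H) = integral of sgn(h0(t)) h(t) dt
    return np.sum(sgn * h) * dt

for _ in range(500):
    h = rng.standard_normal(n)
    # Subgradient inequality: phi(H) >= phi(H0) + phi_sg(H - H0)
    assert pk_gain(h) >= pk_gain(h0) + phi_sg(h - h0) - 1e-12
```

The inequality never fails because, sample by sample, $|h(t)| \geq \mathrm{sgn}(h_0(t))h(t)$ and $\mathrm{sgn}(h_0(t))h_0(t) = |h_0(t)|$, exactly the pointwise argument used in the proof above.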
