Eligibility Trace Method for Quadratic Optimal Control of Markov Jump Linear Systems
DOI: 10.12677/pm.2024.145216
Author: Yanan Zhu, College of Science, University of Shanghai for Science and Technology, Shanghai

Keywords: Optimal Control, Markov Jump Linear Systems, Eligibility Traces
Abstract: This paper studies the application of eligibility trace methods to the quadratic optimal control problem for Markov jump linear systems (MJLS-LQR). Common approaches obtain the optimal control by solving coupled algebraic Riccati equations and do not optimize the policy parameters directly. Building on model-free reinforcement learning, this paper introduces eligibility traces to optimize the policy parameters directly, and develops the eligibility trace method for the MJLS-LQR problem in two settings: known parameters and unknown parameters. When the parameters are unknown, the trace cannot be expressed exactly in terms of the system parameters, so it is approximated using zeroth-order optimization, which also extends the approach to non-convex cost functions. Under a finite horizon and Gaussian noise, global convergence guarantees are given for both settings. Numerical simulations show that the eligibility trace method converges faster than gradient descent.
Citation: Zhu, Y. (2024) Eligibility Trace Method for Quadratic Optimal Control of Markov Jump Linear Systems. Pure Mathematics, 14(5), 629-643. https://doi.org/10.12677/pm.2024.145216

1. Introduction

An important problem in control theory is to guarantee that a control system still meets prescribed performance requirements when the system dynamics change. Such changes may be caused by external environmental disturbances, or by internal faults of the system or failures of the connections between subsystems. When the influence of the disturbances on the system is small, this kind of uncertainty can be described by introducing a random noise term into the state equation. In more complicated situations, however, this approach cannot adequately capture the effect of the disturbances: the effectiveness of feedback control degrades and the computational cost grows substantially. Stochastic jump systems [1] [2] describe such problems better.

Markov jump linear systems (MJLS) [2] are an important class of stochastic systems with wide applications in communications, control, finance, and other fields. An MJLS has several modes and, under ideal conditions, the jumps between modes are modeled by a Markov chain. In complex scenarios one assumes that the random transitions among a finite set of models follow a fixed probability distribution, i.e., the probability of jumping from one model (mode) to another is known and constant.

In the study of optimal control of MJLS, the common approach obtains the optimal control by solving a set of coupled Riccati equations. However, this approach performs poorly when the system parameters are only partially known or unknown. Tzortzis et al. [3] studied the optimal control of MJLS with uncertain mode transition probabilities by defining an ambiguity set for the transition probability matrix. Reference [4] used matrix-inequality methods to handle the case in which the system mode cannot be observed reliably, and references [6] [7] [8] studied optimal control of MJLS based on Riccati equations. As research has progressed, the stability theory for discrete-time and continuous-time Markov jump linear systems with partly unknown transition probabilities has matured considerably [5] [10]. Reinforcement learning (RL) methods [9] and data-driven methods have made significant breakthroughs on problems with uncertain dynamical systems, and sampling-based methods have become an effective tool for the optimal control of such stochastic systems [11] [12]. RL is an interactive learning paradigm: the system accumulates experience by interacting with the environment, is guided by the maximization of a numerical reward signal, keeps learning from experience, and eventually obtains the optimal policy (control). When the parameters of the system dynamics are unknown, model-free RL methods learn the optimal control directly from experience data without estimating the system parameters. In the unknown-parameter case, filtering methods are also commonly used to estimate the system state and parameters, which then replace the true parameters when solving the problem; the best-known example is the Kalman filter [13]. Kim and Smagin [14], Marcos [15], and Martins [16] applied Kalman filtering to Markov jump linear systems with good results. Although the theoretical understanding is still incomplete, model-free methods perform remarkably well on MJLS optimal control problems [18].

This paper studies policy-optimization learning for MJLS by combining reinforcement learning with control theory, and proposes policy-optimization methods for both the known-parameter and unknown-parameter cases. In practice many dynamical systems are stochastic, such as communication networks, power systems, and flight control systems [17]. We prove the convergence of the eligibility trace method, and the numerical analysis confirms that it converges quickly on MJLS optimal control problems, providing an effective approach to the control of complex systems with unknown parameters.

The numerical experiments verify that the eligibility trace method converges faster for state spaces of different dimensions, and study how different decay parameters and different mode-dependent system parameters affect the final convergence. The results show that, for decay parameters in a suitable range, the control obtained by the eligibility trace method approaches the true optimal control, with a convergence speed superior to the conventional method.

2. Eligibility Trace Method

Based on the actor-critic framework in RL and on the gradient descent algorithm, this paper proposes an eligibility trace method for policy parameter optimization [9], in which the gradient term is replaced by an eligibility trace during the policy update. The time-varying policy parameters are $K=(K_0(\theta_0),K_1(\theta_1),\ldots,K_{T-1}(\theta_{T-1}))$, applied as the linear state feedback $u_t=-K_t(\theta_t)x_t$. Consider the following finite-horizon stochastic MJLS-LQR problem:

$$\min_K V(K)=\mathbb{E}\Big[\sum_{t=0}^{T-1}\big(x_t^\top Q_t(\theta_t)x_t+u_t^\top R_t(\theta_t)u_t\big)+x_T^\top Q_T(\theta_T)x_T\Big]\quad\text{s.t.}\quad x_{t+1}=A_t(\theta_t)x_t+B_t(\theta_t)u_t+C_t(\theta_t)\omega_t \tag{1}$$

Here $x_t\in\mathbb{R}^d$ and $u_t\in\mathbb{R}^k$ denote the state and the control, $t\in[T]$, and the initial state $x_0$ is drawn from a distribution $\mathcal{D}$. $Q_t(\theta_t)$ and $R_t(\theta_t)$ are positive definite matrices, $A_t(\theta_t)$, $B_t(\theta_t)$, $C_t(\theta_t)$ are system matrices of appropriate dimensions, and the mode $\theta_t\in\Theta$ at a given time determines $(A_t(\theta_t),B_t(\theta_t),C_t(\theta_t))$. Assume the initial state covariance matrix $\Sigma_0=\mathbb{E}[x_0x_0^\top]$ is positive definite, and the i.i.d. noise sequence $\{\omega_t\}_{t=0}^{T-1}$ satisfies

$$\mathbb{E}(\omega_t)=0,\qquad \mathbb{E}\big(\omega_t\omega_t^\top\mathbf{1}_{\{\theta_t=i\}}\big)=W,\qquad \mathbb{E}\big(x_t\omega_t^\top\mathbf{1}_{\{\theta_t=i\}}\big)=0,\qquad t\in[T] \tag{2}$$

Assume the modes of the Markov chain have time-invariant transition probabilities, with transition matrix $\Pi=[\pi_{ij}]_{N\times N}$:

$$\pi_{ij}=P(\theta_{t+1}=j\,|\,\theta_t=i),\qquad i,j=1,2,\ldots,N \tag{3}$$
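To fix ideas, the dynamics (1) with mode transitions (3) can be simulated as follows. This is a minimal sketch: the linear feedback $u_t=-K_t(\theta_t)x_t$, the uniform initial mode, the Cholesky factors of $\Sigma_0$ and $W$, and all identifiers are illustrative assumptions rather than part of the paper.

```python
import numpy as np

def simulate_mjls(A, B, C, K, Pi, Sigma0_chol, W_chol, T, rng):
    """Roll out one episode of the MJLS in (1) under u_t = -K[t][mode] @ x_t,
    with modes switching according to the transition matrix Pi as in (3).
    A, B, C are lists of mode-dependent matrices; K[t][i] is the gain at time
    t for mode i (a sketch, assuming time-invariant mode-dependent matrices)."""
    d = A[0].shape[0]
    x = Sigma0_chol @ rng.standard_normal(d)          # x_0 ~ D with covariance Sigma_0
    mode = rng.integers(len(A))                       # initial mode (uniform, an assumption)
    states, modes = [x], [mode]
    for t in range(T):
        u = -K[t][mode] @ x                           # linear state feedback
        w = W_chol @ rng.standard_normal(W_chol.shape[0])   # omega_t with covariance W
        x = A[mode] @ x + B[mode] @ u + C[mode] @ w   # dynamics in (1)
        mode = rng.choice(len(A), p=Pi[mode])         # mode jump, eq. (3)
        states.append(x)
        modes.append(mode)
    return states, modes
```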

The objective is to determine the optimal policy parameters so that the cumulative cost is minimized.

Define $P_t^K(\theta_t)$ as the solution of (4):

$$P_t^K(\theta_t)=Q_t(\theta_t)+K_t^\top(\theta_t)R_t(\theta_t)K_t(\theta_t)+\big(A_t(\theta_t)-B_t(\theta_t)K_t(\theta_t)\big)^\top\,\mathbb{E}_{\theta_t}\big(P_{t+1}^K(\theta_{t+1})\big)\,\big(A_t(\theta_t)-B_t(\theta_t)K_t(\theta_t)\big) \tag{4}$$

When no ambiguity arises, we write $P_t$ for $P_t^K(\theta_t)$ and $A_t$ for $A_t(\theta_t)$ (and similarly for the other matrices).
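The backward recursion (4) can be computed mode by mode once the transition matrix is known. A minimal sketch follows, assuming time-invariant mode-dependent matrices, the closed loop $A-BK$ under $u_t=-K_t(\theta_t)x_t$, and illustrative function names:

```python
import numpy as np

def riccati_like_P(A, B, Q, R, K, Pi, T):
    """Backward recursion (4) for P_t^K(i), i = 0..N-1, under a fixed policy K.
    E_theta(P_{t+1}) is the conditional expectation sum_j Pi[i][j] * P[t+1][j]."""
    N = len(A)
    P = [[None] * N for _ in range(T + 1)]
    for i in range(N):
        P[T][i] = Q[i]                                # terminal condition P_T = Q_T
    for t in range(T - 1, -1, -1):
        for i in range(N):
            EP = sum(Pi[i][j] * P[t + 1][j] for j in range(N))   # E_{theta_t}(P_{t+1})
            Acl = A[i] - B[i] @ K[t][i]               # closed loop with u = -K x
            P[t][i] = Q[i] + K[t][i].T @ R[i] @ K[t][i] + Acl.T @ EP @ Acl   # eq. (4)
    return P
```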

Proposition 2.1: Under the policy parameters $K$, the cumulative cost can be written as

$$V(K,x_t)=\mathbb{E}\Big[x_t^\top P_t(\theta_t)x_t+\sum_{s=t}^{T-1}\omega_s^\top C_s^\top(\theta_s)\,\mathbb{E}_{\theta_s}\big(P_{s+1}(\theta_{s+1})\big)C_s(\theta_s)\,\omega_s\Big]=\mathbb{E}\Big[x_t^\top P_t(\theta_t)x_t+\sum_{s=t}^{T-1}\mathrm{Tr}\big[C_s(\theta_s)WC_s^\top(\theta_s)\,\mathbb{E}_{\theta_s}\big(P_{s+1}(\theta_{s+1})\big)\big]\Big] \tag{5}$$

Define $E_t=\big(R_t+B_t^\top\mathbb{E}_{\theta_t}(P_{t+1}(\theta_{t+1}))B_t\big)K_t-B_t^\top\mathbb{E}_{\theta_t}(P_{t+1}(\theta_{t+1}))A_t$ and the state covariance matrix

$$\Sigma_t=\mathbb{E}\big[x_tx_t^\top\big] \tag{6}$$

Then the gradient of the cumulative cost with respect to $K_t$ is

$$\nabla_tV(K,x_0)=2E_t\Sigma_t \tag{7}$$

Proof: The cumulative cost from time $t$ to the end of the episode is

$$V(K,x_t)=\mathbb{E}\Big[\sum_{s=t}^{T-2}\big(x_s^\top Q_sx_s+u_s^\top R_su_s\big)+x_{T-1}^\top Q_{T-1}x_{T-1}+u_{T-1}^\top R_{T-1}u_{T-1}+x_T^\top Q_Tx_T\Big] \tag{8}$$

where, using the terminal condition $V(K,x_T)=x_T^\top P_T(\theta_T)x_T$ with $P_T=Q_T$,

$$\mathbb{E}\big[x_T^\top Q_Tx_T\big]=\mathbb{E}\big[\big((A_{T-1}-B_{T-1}K_{T-1})x_{T-1}\big)^\top\mathbb{E}_{\theta_{T-1}}(P_T)\,(A_{T-1}-B_{T-1}K_{T-1})x_{T-1}\big]+\mathbb{E}\big[(C_{T-1}w_{T-1})^\top\mathbb{E}_{\theta_{T-1}}(P_T)\,C_{T-1}w_{T-1}\big]$$

Therefore,

$$V(K,x_t)=\mathbb{E}\Big[\sum_{s=t}^{T-2}\big(x_s^\top Q_sx_s+u_s^\top R_su_s\big)+x_{T-1}^\top\mathbb{E}_{\theta_{T-2}}(P_{T-1})x_{T-1}+(C_{T-1}w_{T-1})^\top\mathbb{E}_{\theta_{T-1}}(P_T)C_{T-1}w_{T-1}\Big]=\mathbb{E}\Big[x_t^\top Q_tx_t+u_t^\top R_tu_t+x_{t+1}^\top\mathbb{E}_{\theta_t}(P_{t+1})x_{t+1}+\sum_{s=t+1}^{T-1}(C_sw_s)^\top\mathbb{E}_{\theta_s}(P_{s+1})C_sw_s\Big]=\mathbb{E}\Big[x_t^\top P_t(\theta_t)x_t+\sum_{s=t}^{T-1}\mathrm{Tr}\big[C_sWC_s^\top\,\mathbb{E}_{\theta_s}(P_{s+1})\big]\Big]$$

which is (5). The partial derivative of the cumulative cost with respect to $K_t$ is

$$\nabla_tV(K,x_0)=\frac{\partial V(K,x_0)}{\partial K_t}=\frac{\partial}{\partial K_t}\,\mathbb{E}\Big[x_t^\top\big(Q_t+K_t^\top R_tK_t+(A_t-B_tK_t)^\top\mathbb{E}_{\theta_t}(P_{t+1})(A_t-B_tK_t)\big)x_t+\mathcal{K}(t)\Big]=\mathbb{E}\big[2R_tK_tx_tx_t^\top-2B_t^\top\mathbb{E}_{\theta_t}(P_{t+1})(A_t-B_tK_t)x_tx_t^\top\big]=2E_t\Sigma_t$$

where

$$\mathcal{K}(t)=\sum_{s=0}^{t-1}\big(x_s^\top Q_sx_s+u_s^\top R_su_s\big)+\sum_{s=t+1}^{T-1}(C_sw_s)^\top\mathbb{E}_{\theta_s}(P_{s+1})C_sw_s$$

collects the terms that do not depend on $K_t$. This completes the proof.
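Given the mode-dependent $P_t$ from recursion (4), formula (5) evaluates the cost without simulating trajectories. A minimal sketch, reusing the `riccati_like_P` sketch above and assuming initial mode probabilities `p0` (not specified in the paper) and independence of $x_0$ and $\theta_0$:

```python
import numpy as np

def cost_via_P(P, C, W, Sigma0, Pi, p0, T):
    """Evaluate V(K, x_0) via eq. (5): E[x_0^T P_0(theta_0) x_0] plus the
    accumulated noise terms Tr[C_s W C_s^T E_{theta_s}(P_{s+1})].
    P is the output of riccati_like_P; p0[i] = P(theta_0 = i)."""
    N = len(P[0])
    p = list(p0)                                      # mode marginals P(theta_s = i)
    # E[x_0^T P_0(theta_0) x_0], assuming x_0 is independent of the initial mode
    total = sum(p[i] * np.trace(P[0][i] @ Sigma0) for i in range(N))
    for s in range(T):
        for i in range(N):
            EP = sum(Pi[i][j] * P[s + 1][j] for j in range(N))
            total += p[i] * np.trace(C[i] @ W @ C[i].T @ EP)   # noise term at time s
        p = [sum(Pi[i][j] * p[i] for i in range(N)) for j in range(N)]
    return total
```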

2.1. Eligibility Trace Method with Known Parameters

This section discusses the eligibility trace method over a finite horizon when the system modes $\theta_t$, $t\in[T]$, and the system parameters $\Xi$ are known [9]. Building on Monte Carlo and temporal-difference methods, the eligibility trace method defines a short-term memory vector $\delta$ of the same dimension as the policy parameters, which measures the contribution of the different components of $K$. As the iterations proceed, the trace associated with a component that has taken part in an update decays gradually until that component participates in an update again.

Consider the following eligibility trace method for optimizing the policy parameters:

$$K^{n+1}=K^n-\alpha\,\delta^n,\qquad \delta^0=\nabla C(K^0),\qquad \delta^n=\lambda\,\delta^{n-1}+\nabla C(K^n),\;n>0,\qquad t\in[T] \tag{9}$$

where $n\in[N]$ is the iteration index, $\alpha$ is the step size, $\lambda$ is the decay factor, $K^n=(K_0^n,K_1^n,\ldots,K_{T-1}^n)$ is the control sequence at the $n$-th iteration, and $\delta^n=(\delta_0^n,\delta_1^n,\ldots,\delta_{T-1}^n)$ is the corresponding eligibility trace sequence, with

$$\delta_t^n=\begin{cases}2E_t^n\Sigma_t^n, & n=0\\[1mm] \lambda\,\delta_t^{n-1}+2E_t^n\Sigma_t^n, & n>0\end{cases}\qquad t\in[T] \tag{10}$$

The value of the decay parameter $\lambda$ determines how much weight the historical information carries in the next update. When $\lambda=0$ no historical information is used; when $\lambda=1$ historical information is weighted equally with the current information. Gradient descent only considers the current gradient step, whereas the eligibility trace method relates the current cost to the history of policy gradients and can therefore reduce the number of poor update directions during parameter optimization.
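The known-parameter iteration (9)-(10) can be sketched as follows. The code reuses `riccati_like_P` from the earlier sketch for the backward pass; the per-mode covariance recursion, the initial mode distribution `p0`, and all identifiers are illustrative assumptions rather than part of the paper.

```python
import numpy as np

def exact_policy_gradient(A, B, C, Q, R, W, Sigma0, Pi, p0, K, T):
    """Exact gradients grad[t][i] = 2 * E_t(i) * Sigma_t(i) for the MJLS-LQR
    cost, where Sigma_t(i) = E[x_t x_t^T 1{theta_t = i}] (a sketch of one way
    to realize the covariance in (6) with mode-dependent gains)."""
    N = len(A)
    P = riccati_like_P(A, B, Q, R, K, Pi, T)          # backward pass, eq. (4)
    S = [p0[i] * Sigma0 for i in range(N)]            # per-mode covariances at t = 0
    p = list(p0)                                      # mode marginals P(theta_t = i)
    grad = []
    for t in range(T):
        g_t = []
        for i in range(N):
            EP = sum(Pi[i][j] * P[t + 1][j] for j in range(N))
            E_ti = (R[i] + B[i].T @ EP @ B[i]) @ K[t][i] - B[i].T @ EP @ A[i]
            g_t.append(2.0 * E_ti @ S[i])             # eq. (7), mode by mode
        grad.append(g_t)
        # propagate per-mode covariances and mode marginals one step forward
        S = [sum(Pi[i][j] * ((A[i] - B[i] @ K[t][i]) @ S[i] @ (A[i] - B[i] @ K[t][i]).T
                             + p[i] * C[i] @ W @ C[i].T) for i in range(N))
             for j in range(N)]
        p = [sum(Pi[i][j] * p[i] for i in range(N)) for j in range(N)]
    return grad

def eligibility_trace_descent(grad_fn, K, alpha, lam, num_iters):
    """Update rule (9)-(10): delta^n = lam * delta^{n-1} + grad(K^n),
    then K^{n+1} = K^n - alpha * delta^n."""
    delta = None
    for n in range(num_iters):
        g = grad_fn(K)
        delta = g if delta is None else [[lam * delta[t][i] + g[t][i]
                                          for i in range(len(g[t]))] for t in range(len(g))]
        K = [[K[t][i] - alpha * delta[t][i] for i in range(len(K[t]))] for t in range(len(K))]
    return K
```

Setting `lam = 0` recovers plain policy gradient descent, which is the baseline used in the numerical comparison of Section 3.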

Lemma 2.2: Assume that the cost produced by any two feasible controls $K$ and $K'$ is bounded, let $\{x_t\}_{t=0}^{T-1},\{u_t\}_{t=0}^{T-1}$ and $\{x'_t\}_{t=0}^{T-1},\{u'_t\}_{t=0}^{T-1}$ be the sequences generated by $K$ and $K'$ respectively, and let $x_0=x'_0=x$. Then the cost difference can be written as

$$V(K',x)-V(K,x)=\mathbb{E}\Big[\sum_{t=0}^{T-1}2\,\mathrm{Tr}\big(x'_t(x'_t)^\top(K'_t-K_t)^\top E_t\big)\Big]+\mathbb{E}\Big[\sum_{t=0}^{T-1}\mathrm{Tr}\big(x'_t(x'_t)^\top(K'_t-K_t)^\top\big(R_t+B_t^\top\mathbb{E}_{\theta_t}(P_{t+1})B_t\big)(K'_t-K_t)\big)\Big] \tag{11}$$

The proof is given in the Appendix.

Lemma 2.3: Let $\rho=\max\{\max_i\|A_i-B_iK_i\|,\ \max_i\|A_i-B_iK'_i\|\}$ and $\Delta_t=\|K'_t-K_t\|$, where $K$ and $K'$ are arbitrary policies. The state covariance matrices satisfy

$$\|\Sigma_{K'}-\Sigma_K\|\le\frac{\rho^{2T}-1}{\rho^2-1}\Big[(2\rho+1)B_{\max}\sum_{t=0}^{T-1}\Delta_t+B_{\max}^2\sum_{t=0}^{T-1}\Delta_t^2\Big]\Big(\frac{V(K,x_0)}{\sigma_{\min}(Q)}+T\|W_{\max}\|\Big) \tag{12}$$

where $B_{\max}=\max_t\|B_t\|$ and $W_{\max}$ denotes the matrix $C_tWC_t^\top$ of largest norm, $W_{\max}=\arg\max_{C_tWC_t^\top}\|C_tWC_t^\top\|$.

The proof is given in the Appendix.

The analysis above lays the foundation for the convergence guarantee. Before proving convergence of the algorithm, Lemma 2.4 quantifies the effect of a single iteration of the control sequence on the cost.

Lemma 2.4: Let $K^*$ be the optimal control sequence and let $K'$ be obtained from $K$ by one iteration. If the step size satisfies $\alpha\le\alpha_1$, where

$$\alpha_1=\frac{\rho^2-1}{2(\rho^{2T}-1)(2\rho+1)B_{\max}}\cdot\frac{\sigma_{\min}(Q)\,\sigma_{\min}(\Sigma_K)}{C(K)+T\|W\|\,\sigma_{\min}(Q)}\cdot\frac{1}{\max_t\|\delta_t\|},$$

then

$$V(K',x_0)-V(K^*,x_0)\le\Big(1-\frac{8\alpha\,\sigma_{\min}(R)\,\sigma_{\min}^2(\Sigma)}{\|\Sigma_{K^*}\|}\Big)\big(V(K,x_0)-V(K^*,x_0)\big) \tag{13}$$

The proof is given in the Appendix.

Based on the analysis above, we now give the global convergence guarantee of the eligibility trace algorithm for the MJLS-LQR problem with known parameters.

Theorem 2.5: Assume $C(K^0)$ is bounded and the step size $\alpha$ satisfies the constraint of Lemma 2.4. For any $\varepsilon>0$, if the number of iterations $N$ satisfies

$$N\ge\frac{\|\Sigma_{K^*}\|}{8\alpha\,\sigma_{\min}(\Sigma)\,\sigma_{\min}(R)}\log\frac{V(K^0,x_0)-V(K^*,x_0)}{\varepsilon},$$

then the cost converges to the optimal value, i.e.,

$$V(K^N,x_0)-V(K^*,x_0)\le\varepsilon \tag{14}$$

Proof: Let $K^1=K^0-\alpha\delta^0$. By Lemma 2.4,

$$V(K^1,x_0)-V(K^*,x_0)\le\Big(1-\frac{8\alpha\,\sigma_{\min}(R)\,\sigma_{\min}^2(\Sigma)}{\|\Sigma_{K^*}\|}\Big)\big[V(K^0,x_0)-V(K^*,x_0)\big]$$

Suppose that after $n+1$ iterations $V(K^{n+1},x_0)\le V(K^0,x_0)$, where $K_t^{n+1}=K_t^n-\alpha\,\delta_t^n$. By the Cauchy-Schwarz inequality,

$$\sum_{t=0}^{T-1}\|\delta_t^n\|=\sum_{t=0}^{T-1}\Big\|\sum_{i=0}^{n}\lambda^{n-i}\nabla_tV(K^i,x_0)\Big\|\le\sum_{t=0}^{T-1}\sqrt{n\sum_{i=0}^{n}\|\nabla_tC(K^i)\|^2}\le\sum_{t=0}^{T-1}\sqrt{4n\sum_{i=0}^{n}\mathrm{Tr}\big(\Sigma_t^i(E_t^i)^\top E_t^i\Sigma_t^i\big)}\le\sqrt{T}\sum_{t=0}^{T-1}\sqrt{4n\sum_{i=0}^{n}\|\Sigma_t^i\|^2\,\mathrm{Tr}\big((E_t^i)^\top E_t^i\big)}\le\frac{2V(K,x_0)}{\sigma_{\min}(Q)}\sqrt{\frac{nT\max_t\|R_t+B_t^\top\mathbb{E}_{\theta_t}(P_{t+1})B_t\|}{\sigma_{\min}(\Sigma)}\big(V(K,x_0)-V(K^*,x_0)\big)}$$

$$V(K,x_0)-V(K^*,x_0)\ge V(K,x_0)-V(K',x_0)=\mathbb{E}\Big[\sum_{t=0}^{T-1}\mathrm{Tr}\big(E_t^\top\big(R_t+B_t^\top\mathbb{E}_{\theta_t}(P_{t+1})B_t\big)^{-1}E_t\big)\Big]\ge\frac{\sigma_{\min}(\Sigma)}{\max_t\|R_t+B_t^\top\mathbb{E}_{\theta_t}(P_{t+1})B_t\|}\sum_{t=0}^{T-1}\mathrm{Tr}\big(E_t^\top E_t\big)$$

Combining this with the analysis of Lemma 2.3, the conclusion of Lemma 2.4 still holds at step $n+1$, that is,

$$V(K^{n+1},x_0)-V(K^*,x_0)\le\Big(1-\frac{8\alpha\,\sigma_{\min}(R)\,\sigma_{\min}^2(\Sigma)}{\|\Sigma_{K^*}\|}\Big)\big[V(K^n,x_0)-V(K^*,x_0)\big] \tag{15}$$

Accumulating this bound over the $n+1$ iterations gives

$$V(K^{n+1},x_0)-V(K^*,x_0)\le\Big(1-\frac{8\alpha\,\sigma_{\min}(R)\,\sigma_{\min}^2(\Sigma)}{\|\Sigma_{K^*}\|}\Big)^{n+1}\big[V(K^0,x_0)-V(K^*,x_0)\big]$$

Hence for any $\varepsilon>0$, when

$$N\ge\frac{\|\Sigma_{K^*}\|}{8\alpha\,\sigma_{\min}(\Sigma)\,\sigma_{\min}(R)}\log\frac{V(K^0,x_0)-V(K^*,x_0)}{\varepsilon},$$

we have $V(K^N,x_0)-V(K^*,x_0)\le\varepsilon$. This completes the proof.

2.2. Eligibility Trace Method with Unknown System Parameters

This section discusses the eligibility trace method when the system modes $\theta_t$ and the system parameters $\Xi$ are unknown. The differences between the system parameters of different modes are required to stay within certain bounds. Since the modes are unobserved, the eligibility trace is approximated by zeroth-order optimization; zeroth-order methods [19] [20] [21] place no convexity requirement on the objective and estimate the gradient directly from function values. In the optimal quadratic control of MJLS with unknown parameters, random perturbations are added to the control at every step and the resulting episodes are sampled to estimate the cost. The objective can be written as

$$V(K,x_0)=\mathbb{E}_\zeta\big[V(K,x_0;\zeta)\big] \tag{16}$$

Here the noisy cost values are used to construct an approximately unbiased estimate of the gradient. Let $\mathcal{U}_r=\{U\in\mathbb{R}^{k\times d}:\|U\|_F=r\}$ and let $P_U$ denote the uniform distribution on $\mathcal{U}_r$. For any radius $r>0$ and $U\sim P_U$ drawn independently of $\zeta$, the gradient estimate of $C(K)$ [22] is

$$\hat\nabla V(K,x_0)=\frac{kd}{r^2}\,V(K+U,x_0)\,U \tag{17}$$

As $r$ becomes smaller the approximation becomes more accurate, but an overly small $r$ leads to a large variance.

Definition 2.5: For a given $r>0$ and random matrices $U$ drawn uniformly from $\mathcal{U}_r=\{U\in\mathbb{R}^{k\times d}:\|U\|_F=r\}$, let $I$ be the number of sampled episodes and $\lambda$ the decay factor. The empirical approximation of the eligibility trace is

$$\hat\delta_t^n=\begin{cases}\dfrac{1}{I}\displaystyle\sum_{i=0}^{I-1}\dfrac{D}{r^2}\,\hat c_t^i\,U_t^i, & n=0\\[3mm] \lambda\,\hat\delta_t^{n-1}+\dfrac{1}{I}\displaystyle\sum_{i=0}^{I-1}\dfrac{D}{r^2}\,\hat c_t^i\,U_t^i, & n>0\end{cases} \tag{18}$$

where $D=kd$ and

$$\hat c_t^i=\sum_{s=0}^{T-1}\big((x_s^i)^\top Q_sx_s^i+(u_s^i)^\top R_su_s^i\big)+(x_T^i)^\top Q_Tx_T^i$$

is the total cost of the $i$-th sampled episode under the perturbed gains.
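A sketch of how the empirical trace (18) could be formed from sampled episode costs is given below. Since neither the parameters nor the modes are observed, the gains are taken to be mode independent here, and `cost_oracle` stands for a black-box routine that rolls out one episode under the perturbed gains and returns its total cost; both are assumptions made for illustration.

```python
import numpy as np

def sample_sphere(shape, r, rng):
    """Uniform sample from {U : ||U||_F = r}."""
    U = rng.standard_normal(shape)
    return r * U / np.linalg.norm(U)

def zeroth_order_trace(cost_oracle, K, r, num_episodes, lam, delta_prev, rng):
    """Empirical eligibility trace (18): each gain K_t is perturbed by U_t^i
    on the radius-r sphere, the episode cost c_hat_t^i is observed, and the
    smoothed-gradient estimate (D / r^2) * c_hat * U is averaged over the
    I = num_episodes episodes, then folded into the trace with decay lam."""
    T = len(K)
    D = K[0].size                                     # D = k * d for each gain
    g_hat = [np.zeros_like(K[t]) for t in range(T)]
    for _ in range(num_episodes):
        U = [sample_sphere(K[t].shape, r, rng) for t in range(T)]
        c_hat = cost_oracle([K[t] + U[t] for t in range(T)])   # one sampled episode cost
        for t in range(T):
            g_hat[t] += (D / r**2) * c_hat * U[t] / num_episodes
    if delta_prev is None:                            # n = 0 in (18)
        return g_hat
    return [lam * delta_prev[t] + g_hat[t] for t in range(T)]  # n > 0 in (18)
```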

Lemma 2.6: Assume that the components of any two controls $K$ and $K'$ satisfy

$$\|K'_t-K_t\|\le\min\Big\{\|K_t\|,\ \frac{(\rho^2-1)\,\sigma_{\min}(Q)\,\sigma_{\min}(\Sigma)}{2T(\rho^{2T}-1)(2\rho+1)\big(V(K,x_0)+\sigma_{\min}(Q)\,T\|W_{\max}\|\big)B_{\max}}\Big\} \tag{19}$$

Then there exist constants $h_C$ and $h_g$, depending polynomially on $\frac{\rho^{2T}-1}{\rho^2-1}$, $2\rho+1$, $B_{\max}$, $\|W_{\max}\|$, $\sigma_{\min}(Q)$, $\sigma_{\min}(\Sigma)$, $\|\Sigma\|$ and $V(K^0,x_0)$, such that

$$\big|V(K',x_0)-V(K,x_0)\big|\le h_C\sum_{t=0}^{T-1}\|K'_t-K_t\|,\qquad \big\|\nabla_tV(K',x_0)-\nabla_tV(K,x_0)\big\|\le h_g\sum_{t=0}^{T-1}\|K'_t-K_t\|$$

Theorem 2.7: Assume $C(K^0)$ is bounded and the step size $\alpha$ satisfies the constraint of Lemma 2.4. For any $\varepsilon>0$, if the number of iterations $N$ satisfies

$$N\ge\frac{\|\Sigma_{K^*}\|}{8\alpha\,\sigma_{\min}(\Sigma)\,\sigma_{\min}(R)}\log\frac{V(K^0,x_0)-V(K^*,x_0)}{\varepsilon},$$

then the cost converges to the optimal value, i.e.,

$$V(K^N,x_0)-V(K^*,x_0)\le\varepsilon \tag{20}$$

The proof is similar to that of Theorem 2.5.

Table 1 presents the eligibility trace algorithm for the MJLS-LQR problem.

Table 1. Eligibility trace algorithm

3. Numerical Simulation

When the dimension of the state space is $d=2$, the system parameters are

$$A_1=\begin{pmatrix}0.8521 & 1.11\\ 1.035 & 0.7436\end{pmatrix},\quad B_1=\begin{pmatrix}0.831\\ 1.002\end{pmatrix},\quad Q_1=\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix},\quad R_1=1$$

$$A_2=\begin{pmatrix}0.6984 & 1.13\\ 1.025 & 0.6521\end{pmatrix},\quad B_2=\begin{pmatrix}0.705\\ 0.849\end{pmatrix},\quad Q_2=\begin{pmatrix}1.105 & 0\\ 0 & 0.92\end{pmatrix},\quad R_2=1$$

and the mode transition probability matrix is

$$\Pi=\begin{pmatrix}0.9 & 0.1\\ 0.7 & 0.3\end{pmatrix}$$

We compare the convergence of the eligibility trace method with that of gradient descent. With decay factor $\lambda=0.1$, an exponentially decaying step size, horizon $T=200$, and iteration counts $N=50$ and $N=100$, the convergence of the cost function is shown in Figure 1 and Figure 2.
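For reference, the setup above could be entered as follows; the commented comparison run relies on the earlier sketches, and the noise matrices $C_i$, $W$, the initial covariance $\Sigma_0$, the initial mode distribution, and the step-size schedule are assumptions not specified in the text.

```python
import numpy as np

# System data from Section 3 (d = 2, two modes); entries and signs follow the text as printed.
A = [np.array([[0.8521, 1.11], [1.035, 0.7436]]),
     np.array([[0.6984, 1.13], [1.025, 0.6521]])]
B = [np.array([[0.831], [1.002]]),
     np.array([[0.705], [0.849]])]
Q = [np.eye(2), np.diag([1.105, 0.92])]
R = [np.array([[1.0]]), np.array([[1.0]])]
Pi = np.array([[0.9, 0.1],
               [0.7, 0.3]])
lam, alpha0 = 0.1, 1e-3        # decay factor from the text; alpha0 is an assumed initial step size

# With the earlier sketches, one comparison run could look like (C, W, Sigma0, p0,
# T and num_iters are assumptions):
#   K = [[np.zeros((1, 2)) for _ in range(2)] for _ in range(T)]
#   grad_fn = lambda K: exact_policy_gradient(A, B, C, Q, R, W, Sigma0, Pi, p0, K, T)
#   K_et = eligibility_trace_descent(grad_fn, K, alpha0, lam, num_iters)   # eligibility trace
#   K_gd = eligibility_trace_descent(grad_fn, K, alpha0, 0.0, num_iters)   # lam = 0: gradient descent
```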

The results in Figure 1 and Figure 2 show that the eligibility trace algorithm converges faster than gradient descent. The value of the decay factor has a significant influence on the final result; Figure 3 shows the effect of different decay factors on the performance of the algorithm for $T=100$ and $N=70$.

Figure 4 shows one realization of the mode sequence. Under the system parameters of this section, the eligibility trace algorithm outperforms the policy gradient algorithm when $\lambda<0.3$; once $\lambda>0.3$ the iteration fails to converge, and although larger $\lambda$ gives a faster initial descent, the result still does not converge. Since $\lambda$ is the weight given to past gradient information, this indicates that in this numerical example the past gradients provide only limited information for solving the problem.

Figure 1. Convergence of C(K) when d = 2, T = 40

Figure 2. Convergence of C(K) when d = 2, T = 70

Figure 3. Cost function error for N = 100

Figure 4. System mode sequence

4. Conclusion

This paper studied the application of model-free reinforcement learning to the finite-horizon MJLS-LQR problem. Unlike methods that obtain the optimal control by solving coupled algebraic Riccati equations, we optimize the control gains directly, introducing an eligibility trace on top of gradient descent, and provide convergence guarantees for both the known-parameter and unknown-parameter cases. Under the condition that the initial cost is bounded, the algorithm can be extended to the infinite horizon. Numerical simulations verify the convergence of the algorithm and show the influence of different parameter settings. A further direction is model-based reinforcement learning, which may achieve better convergence with fewer samples.

Acknowledgements

The author thanks Professor Zhang for the guidance and suggestions provided during the writing of this paper.

Appendix

Proof of Lemma 2.2:

Let $\Upsilon'_s=\omega_s^\top C_s^\top\,\mathbb{E}_{\theta_s}(P'_{s+1})\,C_s\omega_s$, where $P'_s=P_s^{K'}$. Then

$$V(K',x)-V(K,x)=\mathbb{E}\Big[x^\top P'_0x+\sum_{s=0}^{T-1}\Upsilon'_s\Big]-V(K,x)=\mathbb{E}\Big[x^\top P'_0x+\sum_{s=0}^{T-1}\big(\Upsilon'_s+V(K,x'_s)-V(K,x'_s)\big)\Big]-V(K,x)$$

Since $V(K,x'_0)=V(K,x)$, telescoping the inserted terms along the trajectory generated by $K'$ gives

$$V(K',x)-V(K,x)=\mathbb{E}\Big[\sum_{s=0}^{T-1}\big(J(K,x'_s,u'_s)-V(K,x'_s)\big)\Big],\qquad J(K,x'_s,u'_s)=x_s'^\top Q_sx'_s+u_s'^\top R_su'_s+\mathbb{E}\big[V(K,x'_{s+1})\big]$$

For each term,

$$J(K,x'_s,u'_s)-V(K,x'_s)=x_s'^\top\Big(Q_s+K_s'^\top R_sK'_s+(A_s-B_sK'_s)^\top\mathbb{E}_{\theta_s}(P_{s+1})(A_s-B_sK'_s)\Big)x'_s-x_s'^\top P_sx'_s$$

Writing $K'_s=(K'_s-K_s)+K_s$, expanding, and using $x_s'^\top\big(Q_s+K_s^\top R_sK_s+(A_s-B_sK_s)^\top\mathbb{E}_{\theta_s}(P_{s+1})(A_s-B_sK_s)\big)x'_s=x_s'^\top P_sx'_s$, we obtain

$$J(K,x'_s,u'_s)-V(K,x'_s)=2\,\mathrm{Tr}\big(x'_s(x'_s)^\top(K'_s-K_s)^\top E_s\big)+\mathrm{Tr}\big(x'_s(x'_s)^\top(K'_s-K_s)^\top\big(R_s+B_s^\top\mathbb{E}_{\theta_s}(P_{s+1})B_s\big)(K'_s-K_s)\big)$$

Summing over $s$ and taking expectations yields (11).

Proof of Lemma 2.3:

Define the linear operators $\mathcal{F}_{K_t}(X)=(A_t-B_tK_t)X(A_t-B_tK_t)^\top$ and $\mathcal{G}_t=\mathcal{F}_{K_t}\circ\mathcal{F}_{K_{t-1}}\circ\cdots\circ\mathcal{F}_{K_0}$.

The state covariance matrix evolves as

$$\Sigma_{t+1}=\mathbb{E}\big(x_{t+1}x_{t+1}^\top\big)=\mathbb{E}\Big(\big((A_t-B_tK_t)x_t+C_t\omega_t\big)\big((A_t-B_tK_t)x_t+C_t\omega_t\big)^\top\Big)=(A_t-B_tK_t)\Sigma_t(A_t-B_tK_t)^\top+C_tWC_t^\top=\mathcal{F}_{K_t}(\Sigma_t)+C_tWC_t^\top$$

Unrolling the recursion,

$$\Sigma_{t+1}=\mathcal{F}_{K_t}\big(\mathcal{F}_{K_{t-1}}(\Sigma_{t-1})+C_{t-1}WC_{t-1}^\top\big)+C_tWC_t^\top=\mathcal{F}_{K_t}\circ\mathcal{F}_{K_{t-1}}(\Sigma_{t-1})+\mathcal{F}_{K_t}(C_{t-1}WC_{t-1}^\top)+C_tWC_t^\top=\cdots=\mathcal{G}_t(\Sigma_0)+\sum_{s=0}^{t-1}\mathcal{F}_{K_t}\circ\cdots\circ\mathcal{F}_{K_{t-s}}(C_{t-s-1}WC_{t-s-1}^\top)+C_tWC_t^\top$$

For two policies $K$ and $K'$ write $\mathcal{F}_t=\mathcal{F}_{K_t}$, $\mathcal{F}'_t=\mathcal{F}_{K'_t}$ (and similarly $\mathcal{G}_t$, $\mathcal{G}'_t$), and let $\Delta_t=K'_t-K_t$. Then

$$\sum_{t=0}^{T-1}\big\|(\mathcal{F}'_t-\mathcal{F}_t)(X)\big\|=\sum_{t=0}^{T-1}\big\|(A_t-B_tK'_t)X(A_t-B_tK'_t)^\top-(A_t-B_tK_t)X(A_t-B_tK_t)^\top\big\|=\sum_{t=0}^{T-1}\big\|-(A_t-B_tK_t)X(B_t\Delta_t)^\top-(B_t\Delta_t)X(A_t-B_tK_t)^\top+(B_t\Delta_t)X(B_t\Delta_t)^\top\big\|\le\sum_{t=0}^{T-1}\|X\|\big(2\|A_t-B_tK_t\|\|B_t\|\|\Delta_t\|+\|B_t\|^2\|\Delta_t\|^2\big)\le\Big(2\rho B_{\max}\sum_{t=0}^{T-1}\|\Delta_t\|+B_{\max}^2\sum_{t=0}^{T-1}\|\Delta_t\|^2\Big)\|X\|$$

For the composed operators,

$$\sum_{t=0}^{T-1}\big\|(\mathcal{G}'_t-\mathcal{G}_t)(X)\big\|=\sum_{t=0}^{T-1}\big\|\big(\mathcal{F}'_t\circ\mathcal{G}'_{t-1}-\mathcal{F}'_t\circ\mathcal{G}_{t-1}+\mathcal{F}'_t\circ\mathcal{G}_{t-1}-\mathcal{F}_t\circ\mathcal{G}_{t-1}\big)(X)\big\|\le\sum_{t=0}^{T-1}\Big(\|\mathcal{F}'_t\|\,\big\|(\mathcal{G}'_{t-1}-\mathcal{G}_{t-1})(X)\big\|+\|\mathcal{G}_{t-1}\|\,\|\mathcal{F}'_t-\mathcal{F}_t\|\,\|X\|\Big)\le\sum_{t=0}^{T-1}\Big(\rho^2\big\|(\mathcal{G}'_{t-1}-\mathcal{G}_{t-1})(X)\big\|+\rho^{2t}\|\mathcal{F}'_t-\mathcal{F}_t\|\,\|X\|\Big)\le\frac{\rho^{2T}-1}{\rho^2-1}\Big(\sum_{t=0}^{T-1}\|\mathcal{F}'_t-\mathcal{F}_t\|\Big)\|X\|$$

Similarly,

$$\sum_{s=0}^{t-1}\big\|\big(\mathcal{F}'_t\circ\cdots\circ\mathcal{F}'_{t-s}-\mathcal{F}_t\circ\cdots\circ\mathcal{F}_{t-s}\big)(C_{t-s-1}WC_{t-s-1}^\top)\big\|\le\frac{\rho^{2T}-1}{\rho^2-1}\Big(\sum_{s=0}^{t-1}\|\mathcal{F}'_s-\mathcal{F}_s\|\Big)\|W_{\max}\|$$

Combining the above,

$$\|\Sigma_{K'}-\Sigma_K\|\le\sum_{t=0}^{T-1}\Big[\big\|(\mathcal{G}'_t-\mathcal{G}_t)(\Sigma_0)\big\|+\sum_{s=0}^{t-1}\big\|\big(\mathcal{F}'_t\circ\cdots\circ\mathcal{F}'_{t-s}-\mathcal{F}_t\circ\cdots\circ\mathcal{F}_{t-s}\big)(C_{t-s-1}WC_{t-s-1}^\top)\big\|\Big]\le\frac{\rho^{2T}-1}{\rho^2-1}\Big(\sum_{t=0}^{T-1}\|\mathcal{F}'_t-\mathcal{F}_t\|\Big)\big(\|\Sigma_0\|+T\|W_{\max}\|\big)\le\frac{\rho^{2T}-1}{\rho^2-1}\Big[2\rho B_{\max}\sum_{t=0}^{T-1}\|\Delta_t\|+B_{\max}^2\sum_{t=0}^{T-1}\|\Delta_t\|^2\Big]\Big(\frac{V(K,x_0)}{\sigma_{\min}(Q)}+T\|W_{\max}\|\Big)$$

Proof of Lemma 2.4:

$$V(K',x_0)-V(K^*,x_0)=V(K',x_0)-V(K,x_0)+V(K,x_0)-V(K^*,x_0)$$

By Lemma 2.2 with $K'_t=K_t-\alpha\delta_t$,

$$V(K',x_0)-V(K,x_0)=\sum_{t=0}^{T-1}\Big[-2\alpha\,\mathrm{Tr}\big(\Sigma'_t\delta_t^\top E_t\big)+\alpha^2\,\mathrm{Tr}\big(\Sigma'_t\delta_t^\top(R_t+B_t^\top\mathbb{E}_{\theta_t}(P_{t+1})B_t)\delta_t\big)\Big]=\sum_{t=0}^{T-1}\Big[-2\alpha\,\mathrm{Tr}\big((\Sigma'_t-\Sigma_t+\Sigma_t)\delta_t^\top E_t\big)+\alpha^2\,\mathrm{Tr}\big(\Sigma'_t\delta_t^\top(R_t+B_t^\top\mathbb{E}_{\theta_t}(P_{t+1})B_t)\delta_t\big)\Big]\le\sum_{t=0}^{T-1}\Big[-2\alpha\,\mathrm{Tr}\big(\delta_t^\top\delta_t\big)+2\alpha\frac{\|\Sigma'_t-\Sigma_t\|}{\sigma_{\min}(\Sigma)}\mathrm{Tr}\big(\delta_t^\top\delta_t\big)\Big]+\sum_{t=0}^{T-1}\alpha^2\big\|R_t+B_t^\top\mathbb{E}_{\theta_t}(P_{t+1})B_t\big\|\,\|\Sigma'_t\|\,\mathrm{Tr}\big(\delta_t^\top\delta_t\big)\le-\alpha_{\mathrm{para}}\sum_{t=0}^{T-1}\mathrm{Tr}\big(\delta_t^\top\delta_t\big)$$

where

$$\alpha_{\mathrm{para}}:=2\alpha\Big(1-\sum_{t=0}^{T-1}\frac{\|\Sigma'_t-\Sigma_t\|}{\sigma_{\min}(\Sigma_K)}\Big)-\alpha^2\,\|\Sigma_{K'}\|\max_t\big\|R_t+B_t^\top\mathbb{E}_{\theta_t}(P_{t+1})B_t\big\|$$

By Lemma 2.3,

$$\sum_{t=0}^{T-1}\frac{\|\Sigma'_t-\Sigma_t\|}{\sigma_{\min}(\Sigma)}\le\frac{\rho^{2T}-1}{\rho^2-1}\Big[\sum_{t=0}^{T-1}\|\mathcal{F}_{K'_t}-\mathcal{F}_{K_t}\|\Big]\frac{\|\Sigma_0\|+T\|W\|}{\sigma_{\min}(\Sigma)}\le\frac{\rho^{2T}-1}{\rho^2-1}\Big[2\rho B_{\max}\sum_{t=0}^{T-1}\|\Delta_t\|+B_{\max}^2\sum_{t=0}^{T-1}\|\Delta_t\|^2\Big]\frac{\|\Sigma_0\|+T\|W\|}{\sigma_{\min}(\Sigma)}$$

Since the step size satisfies $\alpha\le\alpha_1$,

$$B_{\max}\|K'_t-K_t\|=\alpha B_{\max}\|\delta_t\|\le\frac{\sigma_{\min}(Q)\,\sigma_{\min}(\Sigma)}{2V(K,x_0)}\le\frac12,$$

so that

$$2\rho B_{\max}\sum_{t=0}^{T-1}\|\Delta_t\|+B_{\max}^2\sum_{t=0}^{T-1}\|\Delta_t\|^2\le(2\rho+1)B_{\max}\sum_{t=0}^{T-1}\alpha\|\delta_t\|$$

and therefore

$$\sum_{t=0}^{T-1}\frac{\|\Sigma'_t-\Sigma_t\|}{\sigma_{\min}(\Sigma)}\le\frac{\rho^{2T}-1}{\rho^2-1}\Big[(2\rho+1)B_{\max}\sum_{t=0}^{T-1}\alpha\|\delta_t\|\Big]\frac{V(K,x_0)+T\sigma_{\min}(Q)\|W\|}{\sigma_{\min}(Q)\,\sigma_{\min}(\Sigma)}\le\frac12.$$

Moreover,

$$\|\Sigma_{K'}\|\le\|\Sigma_{K'}-\Sigma_K\|+\|\Sigma_K\|\le\frac12\sigma_{\min}(\Sigma)+\frac{V(K,x_0)}{\sigma_{\min}(Q)}\le\frac12\|\Sigma_K\|+\frac{V(K,x_0)}{\sigma_{\min}(Q)}\le\frac{2V(K,x_0)}{\sigma_{\min}(Q)},$$

and hence

$$\alpha^2\,\|\Sigma_{K'}\|\sum_{t=0}^{T-1}\big\|R_t+B_t^\top\mathbb{E}_{\theta_t}(P_{t+1})B_t\big\|\le\alpha^2\,\frac{2V(K,x_0)}{\sigma_{\min}(Q)}\sum_{t=0}^{T-1}\big\|R_t+B_t^\top\mathbb{E}_{\theta_t}(P_{t+1})B_t\big\|\le 2\alpha\cdot\alpha\,\frac{V(K,x_0)}{\sigma_{\min}(Q)}\,T\max_t\big\|R_t+B_t^\top\mathbb{E}_{\theta_t}(P_{t+1})B_t\big\|\le\frac12.$$

On the other hand, applying Lemma 2.2 with $K'=K^*$ and completing the square,

$$V(K,x_0)-V(K^*,x_0)=\sum_{t=0}^{T-1}\mathbb{E}\Big[2\,\mathrm{Tr}\big(x_t^*(x_t^*)^\top(K_t-K_t^*)^\top E_t\big)-\mathrm{Tr}\big(x_t^*(x_t^*)^\top(K_t-K_t^*)^\top(R_t+B_t^\top\mathbb{E}_{\theta_t}(P_{t+1})B_t)(K_t-K_t^*)\big)\Big]=\sum_{t=0}^{T-1}\mathbb{E}\Big[-\mathrm{Tr}\Big(x_t^*(x_t^*)^\top\big(K_t-K_t^*-(R_t+B_t^\top\mathbb{E}_{\theta_t}(P_{t+1})B_t)^{-1}E_t\big)^\top(R_t+B_t^\top\mathbb{E}_{\theta_t}(P_{t+1})B_t)\big(K_t-K_t^*-(R_t+B_t^\top\mathbb{E}_{\theta_t}(P_{t+1})B_t)^{-1}E_t\big)\Big)+\mathrm{Tr}\big(x_t^*(x_t^*)^\top E_t^\top(R_t+B_t^\top\mathbb{E}_{\theta_t}(P_{t+1})B_t)^{-1}E_t\big)\Big]$$

$$\le\sum_{t=0}^{T-1}\mathbb{E}\Big[\mathrm{Tr}\big(x_t^*(x_t^*)^\top E_t^\top(R_t+B_t^\top\mathbb{E}_{\theta_t}(P_{t+1})B_t)^{-1}E_t\big)\Big]\le\frac{\|\Sigma_{K^*}\|}{\sigma_{\min}(R)}\sum_{t=0}^{T-1}\mathrm{Tr}\big(E_t^\top E_t\big)\le\frac{\|\Sigma_{K^*}\|}{4\,\sigma_{\min}(R)}\sum_{t=0}^{T-1}\mathrm{Tr}\big(\Sigma_t^{-1}\delta_t^\top\delta_t\Sigma_t^{-1}\big)\le\frac{\|\Sigma_{K^*}\|}{4\,\sigma_{\min}(R)\,\sigma_{\min}^2(\Sigma)}\sum_{t=0}^{T-1}\mathrm{Tr}\big(\delta_t^\top\delta_t\big)$$

Combining the two bounds,

$$V(K',x_0)-V(K^*,x_0)\le-2\alpha\,\frac{4\,\sigma_{\min}(R)\,\sigma_{\min}^2(\Sigma)}{\|\Sigma_{K^*}\|}\big[V(K,x_0)-V(K^*,x_0)\big]+\big[V(K,x_0)-V(K^*,x_0)\big]=\Big(1-\frac{8\alpha\,\sigma_{\min}(R)\,\sigma_{\min}^2(\Sigma)}{\|\Sigma_{K^*}\|}\Big)\big[V(K,x_0)-V(K^*,x_0)\big]$$

This completes the proof.

References

[1] Zhang, Q., Li, L., Yan, X. and Spurgeon, S.K. (2017) Sliding Mode Control for Singular Stochastic Markovian Jump Systems with Uncertainties. Automatica, 79, 27-34.
https://doi.org/10.1016/j.automatica.2017.01.002
[2] Costa, O.L., Fragoso, M.D. and Marques, R.P. (2004) Discrete-Time Markov Jump Linear Systems. IEEE Transactions on Automatic Control, 51, 916-917.
https://doi.org/10.1109/TAC.2006.874981
[3] Tzortzis, I., Charalambous, C.D. and Hadjicostis, C.N. (2019) Robust LQG for Markov Jump Linear Systems. 2019 IEEE 58th Conference on Decision and Control (CDC), Nice, 11-13 December 2019, 6760-6765.
https://doi.org/10.1109/CDC40024.2019.9028886
[4] Todorov, M.G. and Fragoso, M.D. (2014) New Methods for Mode-Independent Robust Control of Markov Jump Linear Systems. 53rd IEEE Conference on Decision and Control, Los Angeles, 15-17 December 2014, 4222-4227.
https://doi.org/10.1109/CDC.2014.7040047
[5] Wang, Y., Ahn, C.K., Yan, H. and Xie, S. (2020) Fuzzy Control and Filtering for Nonlinear Singularly Perturbed Markov Jump Systems. IEEE Transactions on Cybernetics, 51, 297-308.
https://doi.org/10.1109/TCYB.2020.3004226
[6] Guo, Y. and Li, J. (2021) Network-Based Quantized H∞ Control for T-S Fuzzy Singularly Perturbed Systems with Persistent Dwell-Time Switching Mechanism and Packet Dropouts. Nonlinear Analysis: Hybrid Systems, 42, Article ID: 101060.
https://doi.org/10.1016/j.nahs.2021.101060
[7] Tzortzis, I., Charalambous, C.D. and Hadjicostis, C.N. (2019) Robust LQG for Markov Jump Linear Systems. 2019 IEEE 58th Conference on Decision and Control (CDC), Nice, 11-13 December 2019, 6760-6765.
https://doi.org/10.1109/CDC40024.2019.9028886
[8] Lopes, R.O., Mendes, E.M., Tôrres, L.A., Vargas, A.N. and Palhares, R.M. (2020) Finite-Horizon Suboptimal Control of Markov Jump Linear Parameter-Varying Systems. International Journal of Control, 94, 2659-2668.
https://doi.org/10.1080/00207179.2020.1728387
[9] Sutton, R.S. and Barto, A.G. (2018) Reinforcement Learning: An Introduction. MIT Press, Cambridge.
[10] Souza, M., Fioravanti, A.R. and Araujo, V.S. (2021) Impulsive Markov Jump Linear Systems: Stability Analysis and H2 Control. Nonlinear Analysis: Hybrid Systems, 42, Article ID: 101089.
https://doi.org/10.1016/j.nahs.2021.101089
[11] Chen, Y., Wen, J., Luan, X. and Liu, F. (2020) Robust Control for Markov Jump Linear Systems with Unknown Transition Probabilities—An Online Temporal Differences Approach. Transactions of the Institute of Measurement and Control, 42, 3043-3051.
https://doi.org/10.1177/0142331220940208
[12] Park, I.S., Kwon, N.K. and Park, P. (2019) Dynamic Output-Feedback Control for Singular Markovian Jump Systems with Partly Unknown Transition Rates. Nonlinear Dynamics, 95, 3149-3160.
https://doi.org/10.1007/s11071-018-04746-0
[13] Zhao, J. and Mili, L. (2019) A Decentralized H-Infinity Unscented Kalman Filter for Dynamic State Estimation Against Uncertainties. IEEE Transactions on Smart Grid, 10, 4870-4880.
https://doi.org/10.1109/TSG.2018.2870327
[14] Kim, K.S. and Smagin, V.I. (2020) Robust Filtering for Discrete Systems with Unknown Inputs and Jump Parameters. Automatic Control and Computer Sciences, 54, 1-9.
https://doi.org/10.3103/S014641162001006X
[15] Marcos, L.B. and Terra, M.H. (2020) Markovian Filtering for Driveshaft Torsion Estimation in Heavy Vehicles. Control Engineering Practice, 102, Article ID: 104552.
https://doi.org/10.1016/j.conengprac.2020.104552
[16] Queiroz de Jesus, G. and Martins Calazans Silva, B. (2022) Robust Estimation for Discrete-Time Markovian Jump Linear Systems in a Data Fusion Scenario. Intermaths, 3, 17-36.
https://doi.org/10.22481/intermaths.v3i1.10715
[17] Gray, W.S., González, O.R. and Doğan, M. (2000) Stability Analysis of Digital Linear Flight Controllers Subject to Electromagnetic Disturbances. IEEE Transactions on Aerospace and Electronic Systems, 36, 1204-1218.
https://doi.org/10.1109/7.892669
[18] Bertsekas, D.P. (1995) Dynamic Programming and Optimal Control. 3rd Edition, Massachusetts Institute of Technology, Cambridge.
[19] Bertsekas, D.P. (2011) Approximate Policy Iteration: A Survey and Some New Methods. Journal of Control Theory and Applications, 9, 310-335.
https://doi.org/10.1007/s11768-011-1005-3
[20] Fazel, M., Ge, R., Kakade, S.M. and Mesbahi, M. (2018) Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator. International Conference on Machine Learning, Stockholm, 10-15 July 2018, 1467-1476.
[21] Hambly, B.M., Xu, R., and Yang, H. (2020) Policy Gradient Methods for the Noisy Linear Quadratic Regulator over a Finite Horizon. DecisionSciRN: Other Decision-Making in Economics (Topic).
[22] Malik, D., Pananjady, A., Bhatia, K., Khamaru, K., Bartlett, P.L. and Wainwright, M.J. (2018) Derivative-Free Methods for Policy Optimization: Guarantees for Linear Quadratic Systems. Journal of Machine Learning Research, 21, 1-51.