Moment-matching PMOR method



Revision as of 15:49, 29 November 2011

Parametric model order reduction (PMOR) methods are designed for model order reduction of parametrized systems, whose parameters play an important role in practical applications such as integrated circuit (IC) design, MEMS design, and chemical engineering. The parameters may describe geometrical measurements, material properties, damping, or component flow rates. The reduced models are constructed such that all the parameters are preserved with acceptable accuracy. Usually, simulating the reduced model takes much less time than directly simulating the original large system. However, the time needed to construct the reduced model grows with the dimension of the original system; if the original system is very large, obtaining the reduced model can become extremely slow. The recycling algorithm considered here tries to accelerate this process and reduce the time of deriving the reduced model to a reasonable range.

The method introduced here is from [1][2] and applies to linear parametrized systems of the following form in the frequency domain:

<math>
\begin{array}{rl}
(E_0+s_1E_1+s_2E_2+\cdots+s_pE_p)x&=Bu(s_p),\\
y&=L^{\mathrm{T}}x,
\end{array}
</math> (1)

where <math>s_1, s_2, \ldots, s_p</math> are the parameters of the system. They can be any scalar functions of some source parameters, like <math>s_1=e^{t}</math>, where <math>t</math> is time, or combinations of several physical parameters, like <math>s_1=\rho v</math>, where <math>\rho</math> and <math>v</math> are two physical parameters.

<math>x(t)\in\mathbb{R}^{n}</math> is the state vector, and <math>u\in\mathbb{R}^{d_I}</math> and <math>y\in\mathbb{R}^{d_O}</math> are, respectively, the inputs and outputs of the system. To obtain a reduced model of (1), a projection matrix <math>V</math> which is independent of all the parameters has to be computed. The reduced model is then

<math>
\begin{array}{rl}
V^{\mathrm{T}}(E_0+s_1E_1+s_2E_2+\cdots+s_pE_p)Vx&=V^{\mathrm{T}}Bu(s_p),\\
y&=L^{\mathrm{T}}Vx.
\end{array}
</math>
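As a small illustration (not the authors' code), the projection step can be sketched in Python with NumPy. All matrices and dimensions below are made up for the example, and a random orthonormal <math>V</math> stands in for the moment-matching basis discussed next.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 8, 3  # full and reduced dimensions (illustrative only)

# Toy system matrices for p = 1: (E0 + s1*E1) x = B u,  y = L^T x
E0 = np.eye(n) + 0.1 * rng.standard_normal((n, n))
E1 = 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
L = rng.standard_normal((n, 1))

# Any orthonormal V works for the mechanics of the projection
V, _ = np.linalg.qr(rng.standard_normal((n, q)))

# Galerkin projection: the reduced matrices no longer depend on s1
E0r, E1r = V.T @ E0 @ V, V.T @ E1 @ V
Br, Lr = V.T @ B, V.T @ L

s1 = 0.5
xr = np.linalg.solve(E0r + s1 * E1r, Br)  # reduced state for unit input
yr = (Lr.T @ xr).item()                   # reduced scalar output
```

Note that the projected matrices can be formed once and then evaluated cheaply for any parameter value, which is the point of keeping <math>V</math> parameter-independent.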

The matrix <math>V</math> is derived by orthogonalizing a number of moment matrices of the system in (1) [1][2].

By defining <math>B_M=\tilde{E}^{-1}B</math>, <math>M_i=-\tilde{E}^{-1}E_i</math>, <math>i=1,2,\ldots,p</math>, and

<math>
\tilde{E}=E_0+s_1^0E_1+s_2^0E_2+\cdots+s_p^0E_p,
</math>

we can expand <math>x</math> in (1) at <math>s_1, s_2, \ldots, s_p</math> around a set of expansion points <math>p_0=[s_1^0,s_2^0,\ldots,s_p^0]</math> as follows:

<math>
\begin{array}{rl}
x&=[I-(\sigma_1M_1+\cdots+\sigma_pM_p)]^{-1}B_Mu(s_p)\\
&=\sum\limits_{i=0}^{\infty}(\sigma_1M_1+\cdots+\sigma_pM_p)^iB_Mu(s_p).
\end{array}
</math>
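For orientation (this special case is not spelled out above), setting <math>p=1</math> and writing <math>\sigma_1=s_1-s_1^0</math> collapses the expansion to the familiar single-parameter moment expansion around <math>s_1^0</math>:

<math>
x=\sum\limits_{i=0}^{\infty}\sigma_1^i M_1^i B_Mu(s_1),
</math>

so the moment matrices are simply <math>M_1^iB_M</math>, and the multi-parameter series below is the generalization of this standard moment-matching setting.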

Here <math>\sigma_i=s_i-s_i^0</math>, <math>i=1,2,\ldots,p</math>. We call the coefficients in the above series expansion the moment matrices of the parametrized system, i.e. <math>B_M,\ M_1B_M,\ \ldots,\ M_pB_M,\ M_1^2B_M,\ (M_1M_2+M_2M_1)B_M,\ \ldots,\ (M_1M_p+M_pM_1)B_M,\ M_p^2B_M,\ M_1^3B_M,\ \ldots</math>. The corresponding moments are these moment matrices multiplied by <math>L^{\mathrm{T}}</math> from the left. The matrix <math>V</math> can be generated by first explicitly computing some of the moment matrices and then orthogonalizing them, as is suggested in [2]. The resulting <math>V</math> is intended to span the subspace

<math>
\begin{array}{rl}
\mathop{\mathrm{range}}\{V\}=&\mathop{\mathrm{span}}\{B_M,\ M_1B_M,\ldots, M_pB_M,\ M_1^2B_M,\\
&(M_1M_2+M_2M_1)B_M,\ldots,(M_1M_p+M_pM_1)B_M,\\
&M_p^2B_M,\ M_1^3B_M,\ldots, M_1^rB_M,\ldots, M_p^rB_M\}.
\end{array}
</math> (2)
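A rough sketch of this explicit approach (not the algorithm of [2] verbatim): compute the moment matrices up to some order and orthogonalize the collected columns. All sizes are illustrative, and the products <math>M_{i_1}\cdots M_{i_k}B_M</math> are kept unsymmetrized, which spans the same subspace as the symmetrized combinations in (2).

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n, p, r = 10, 2, 2  # illustrative: state dim, number of parameters, max order

E = [np.eye(n)] + [0.05 * rng.standard_normal((n, n)) for _ in range(p)]
B = rng.standard_normal((n, 1))
s0 = [0.3, 0.7]  # expansion point [s1^0, s2^0]

Etilde = E[0] + sum(s0[i] * E[i + 1] for i in range(p))
BM = np.linalg.solve(Etilde, B)                             # B_M = Etilde^{-1} B
M = [-np.linalg.solve(Etilde, E[i + 1]) for i in range(p)]  # M_i = -Etilde^{-1} E_i

# Explicitly form all moment matrices up to order r ...
cols = [BM]
for order in range(1, r + 1):
    for idx in product(range(p), repeat=order):
        v = BM
        for i in idx:
            v = M[i] @ v
        cols.append(v)

# ... then orthogonalize them (QR here; numerically fragile for larger r,
# which is exactly the instability discussed next)
V, _ = np.linalg.qr(np.hstack(cols))
```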

However, <math>V</math> does not really span the whole subspace, because the vectors computed later become linearly dependent due to numerical instability. Therefore, with this matrix <math>V</math> one cannot get an accurate reduced model that matches all the moments included in the subspace.

Instead of directly computing the moment matrices in (2), a numerically robust method is proposed in [1] (the detailed algorithm is described in [3]), which combines the recursions in (4) with the modified Gram-Schmidt process to implicitly compute the moment matrices. The computed <math>V</math> is then an orthonormal basis of the subspace

<math>
\mathop{\mathrm{range}}\{V\}=\mathop{\mathrm{span}}\{R_0, R_1,\ldots, R_r\}.
</math> (3)

It can be proved that the subspace in (2) is included in the subspace in (3). Due to the numerical stability of the repeated modified Gram-Schmidt process employed in [1][3], the reduced model derived from <math>V</math> in (3) is computed in a numerically stable and accurate way. The blocks <math>R_j</math> are defined by the recursions

<math>
\begin{array}{l}
R_0=B_M,\ R_1=[M_1R_0,\ldots, M_pR_0],\\
R_2=[M_1R_1,\ldots, M_pR_1],\\
\vdots\\
R_r=[M_1R_{r-1},\ldots, M_pR_{r-1}],\\
\vdots
\end{array}
</math> (4)
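A simplified sketch of the implicit idea (the actual algorithm in [3] includes deflation and ordering details omitted here): candidate vectors are generated by applying the <math>M_i</math> to the previously orthogonalized block, and each one is orthogonalized against the current basis by a repeated modified Gram-Schmidt step. The matrices below are random stand-ins for <math>M_i=-\tilde{E}^{-1}E_i</math>.

```python
import numpy as np

def mgs_append(V, w, tol=1e-10):
    """Orthogonalize w against the columns of V by modified Gram-Schmidt,
    with one re-orthogonalization pass; append it if a new direction remains."""
    for _ in range(2):  # "repeated" MGS pass for numerical stability
        for j in range(V.shape[1]):
            w = w - (V[:, j] @ w) * V[:, j]
    nrm = np.linalg.norm(w)
    return np.hstack([V, (w / nrm)[:, None]]) if nrm > tol else V

rng = np.random.default_rng(2)
n, p, r = 10, 2, 3
M = [0.1 * rng.standard_normal((n, n)) for _ in range(p)]  # stand-ins
BM = rng.standard_normal(n)

V = mgs_append(np.zeros((n, 0)), BM)  # orthonormalize R_0 = B_M
block = V                             # orthonormal basis of the current block
for _ in range(r):                    # R_{j+1} = [M_1 R_j, ..., M_p R_j]
    new_cols = []
    for Mi in M:
        for k in range(block.shape[1]):
            before = V.shape[1]
            V = mgs_append(V, Mi @ block[:, k])
            if V.shape[1] > before:   # keep only non-deflated directions
                new_cols.append(V[:, -1])
    if not new_cols:
        break
    block = np.column_stack(new_cols)
```

Because each candidate is orthogonalized as soon as it is generated, linearly dependent directions are detected and dropped instead of silently corrupting the basis, which is what makes the implicit variant robust.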

Furthermore, one can see that each moment matrix is actually several vectors multiplied by <math>\tilde{E}^{-1}</math>, and if the dimension of <math>\tilde{E}</math> is very large, it is necessary to solve linear systems like

<math>
\tilde{E}x=w_i, \ i=1,2,\ldots, l,
</math> (5)

to obtain <math>\tilde{E}^{-1}w_i</math>, where <math>\tilde{E}</math> is generally nonsymmetric and <math>w_i</math> is a vector. Moreover, if quite a few of the moment matrices need to be computed (which is typical when system (1) contains more than two parameters), the number <math>l</math> of linear systems in (5) will be very large.
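The recycling solver this article is concerned with is not reproduced here; as a much simpler baseline (an assumption of this sketch, not the method of the cited papers), all systems in (5) share the same matrix <math>\tilde{E}</math>, so whenever several right-hand sides are available at once, one factorization can serve all of them:

```python
import numpy as np

rng = np.random.default_rng(3)
n, l = 50, 20                      # illustrative sizes
Etilde = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # generally nonsymmetric
W = rng.standard_normal((n, l))    # right-hand sides w_1, ..., w_l as columns

# np.linalg.solve factors Etilde once (O(n^3)) and back-substitutes
# per column (O(n^2) each), instead of refactoring for every w_i.
X = np.linalg.solve(Etilde, W)

residual = np.linalg.norm(Etilde @ X - W)
```

When the right-hand sides only become available sequentially, as in the recursions in (4), the same principle applies with a stored factorization (or, for very large sparse <math>\tilde{E}</math>, with iterative solvers whose acceleration is the subject of this article).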