
In numerical linear algebra, the QR algorithm or QR iteration is an eigenvalue algorithm: that is, a procedure to calculate the eigenvalues and eigenvectors of a matrix. The QR algorithm was developed in the late 1950s by John G. F. Francis and by Vera N. Kublanovskaya, working independently.[1][2][3] The basic idea is to perform a QR decomposition, writing the matrix as a product of an orthogonal matrix and an upper triangular matrix, multiply the factors in the reverse order, and iterate.

The practical QR algorithm

Formally, let A be a real matrix of which we want to compute the eigenvalues, and let A0 := A. At the k-th step (starting with k = 0), we compute the QR decomposition Ak = QkRk, where Qk is an orthogonal matrix (i.e., Qkᵀ = Qk⁻¹) and Rk is an upper triangular matrix. We then form Ak+1 = RkQk. Note that

    Ak+1 = RkQk = QkᵀQkRkQk = QkᵀAkQk,

so all the Ak are similar and hence they have the same eigenvalues. The algorithm is numerically stable because it proceeds by orthogonal similarity transforms.
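
A minimal sketch of this iteration in Python with NumPy (the function name, tolerance, and iteration cap are illustrative choices, not part of the algorithm; for a real matrix with complex eigenvalues the iterates approach block-triangular form instead, and the simple stopping test below would not trigger):

    import numpy as np

    def qr_algorithm_basic(A, max_iter=500, tol=1e-10):
        """Unshifted QR iteration: factor A_k = Q_k R_k, then form A_{k+1} = R_k Q_k."""
        Ak = np.array(A, dtype=float)
        for _ in range(max_iter):
            Q, R = np.linalg.qr(Ak)     # A_k = Q_k R_k
            Ak = R @ Q                  # A_{k+1} = Q_k^T A_k Q_k
            if np.allclose(np.tril(Ak, -1), 0.0, atol=tol):
                break                   # (near-)triangular: done
        return Ak

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    print(np.diag(qr_algorithm_basic(A)))   # approx. [5., 2.], the eigenvalues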

Under certain conditions,[4] the matrices Ak converge to a triangular matrix, the Schur form of A. The eigenvalues of a triangular matrix are listed on the diagonal, and the eigenvalue problem is solved. In testing for convergence it is impractical to require exact zeros,[citation needed] but the Gershgorin circle theorem provides a bound on the error.
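
As an illustration, the disc radii are cheap to compute; a minimal sketch (function name is mine):

    import numpy as np

    def gershgorin_radii(A):
        """Radius of the Gershgorin disc centred at each diagonal entry A[i, i]."""
        A = np.asarray(A)
        return np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))

    # Every eigenvalue lies in at least one disc |z - A[i, i]| <= radii[i], so once
    # the radii of an iterate A_k are small, its diagonal approximates the spectrum.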

If the matrices converge, then the eigenvalues along the diagonal will appear according to their geometric multiplicity. To guarantee convergence, A must be a symmetric matrix, and for every nonzero eigenvalue λ there must not be a corresponding eigenvalue −λ.[5] Because a single QR iteration costs O(n³) arithmetic operations and the convergence is only linear, the standard QR algorithm is extremely expensive to compute, especially considering it is not guaranteed to converge.[6]

Using Hessenberg form

In the above crude form the iterations are relatively expensive. This can be mitigated by first bringing the matrix A to upper Hessenberg form (which costs O(n³) arithmetic operations using a technique based on Householder reduction), with a finite sequence of orthogonal similarity transforms, somewhat like a two-sided QR decomposition.[7][8] (For QR decomposition, the Householder reflectors are multiplied only on the left, but for the Hessenberg case they are multiplied on both left and right.) Determining the QR decomposition of an upper Hessenberg matrix costs O(n²) arithmetic operations. Moreover, because the Hessenberg form is already nearly upper-triangular (it has just one nonzero entry below the diagonal in each column), using it as a starting point reduces the number of steps required for convergence of the QR algorithm.
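
A sketch of the two-sided Householder reduction (it forms each reflector as a full matrix for clarity, which is wasteful; production codes apply reflectors to only the affected rows and columns, and SciPy exposes a tuned version as scipy.linalg.hessenberg):

    import numpy as np

    def to_hessenberg(A):
        """Reduce A to upper Hessenberg form by orthogonal similarity transforms."""
        H = np.array(A, dtype=float)
        n = H.shape[0]
        for k in range(n - 2):
            x = H[k + 1:, k]
            v = x.copy()
            v[0] += np.copysign(np.linalg.norm(x), x[0])
            if np.dot(v, v) == 0.0:          # column already reduced
                continue
            P = np.eye(n - k - 1) - 2.0 * np.outer(v, v) / np.dot(v, v)
            H[k + 1:, k:] = P @ H[k + 1:, k:]   # reflector applied on the left
            H[:, k + 1:] = H[:, k + 1:] @ P     # and on the right (P is symmetric)
        return H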

If the original matrix is symmetric, then the upper Hessenberg matrix is also symmetric and thus tridiagonal, and so are all the Ak. In this case reaching Hessenberg form costs O(n³) arithmetic operations using a technique based on Householder reduction.[7][8] Determining the QR decomposition of a symmetric tridiagonal matrix costs O(n) operations.[9]
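
Assuming SciPy is available, one can observe this directly: reducing a random symmetric matrix to Hessenberg form yields a tridiagonal result (up to roundoff):

    import numpy as np
    from scipy.linalg import hessenberg

    rng = np.random.default_rng(0)
    S = rng.standard_normal((5, 5))
    S = S + S.T                      # make the matrix symmetric
    T = hessenberg(S)                # Hessenberg + symmetric => tridiagonal
    print(np.round(T, 3))            # nonzeros only on the three central diagonals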

Iteration phase

If a Hessenberg matrix has element ak+1,k = 0 for some k, i.e., if one of the elements just below the diagonal is in fact zero, then it decomposes into blocks whose eigenproblems may be solved separately; an eigenvalue is either an eigenvalue of the submatrix of the first k rows and columns, or an eigenvalue of the submatrix of the remaining rows and columns. The purpose of the QR iteration step is to shrink one of these elements so that effectively a small block along the diagonal is split off from the bulk of the matrix. In the case of a real eigenvalue that is usually the 1 × 1 block in the lower right corner (in which case element an,n holds that eigenvalue), whereas in the case of a pair of conjugate complex eigenvalues it is the 2 × 2 block in the lower right corner.
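
In floating point a subdiagonal entry is treated as zero when it is negligible relative to its diagonal neighbours; a sketch of a common LAPACK-style test (function name and threshold convention are illustrative):

    import numpy as np

    def deflation_points(H, eps=np.finfo(float).eps):
        """Indices k where H[k+1, k] may be set to zero, splitting H into blocks."""
        n = H.shape[0]
        return [k for k in range(n - 1)
                if abs(H[k + 1, k]) <= eps * (abs(H[k, k]) + abs(H[k + 1, k + 1]))]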

The rate of convergence depends on the separation between eigenvalues, so a practical algorithm will use shifts, either explicit or implicit, to increase separation and accelerate convergence. A typical symmetric QR algorithm isolates each eigenvalue (then reduces the size of the matrix) with only one or two iterations, making it efficient as well as robust.[clarification needed]

A single iteration with explicit shift

The steps of a QR iteration with explicit shift on a real Hessenberg matrix are:

  1. Pick a shift σ and subtract it from all diagonal elements, producing the matrix A − σI. A basic strategy is to use σ = an,n, but there are more refined strategies that would further accelerate convergence. The idea is that σ should be close to an eigenvalue, since making this shift will accelerate convergence to that eigenvalue.
  2. Perform a sequence of Givens rotations G1, G2, ..., Gn−1 on A − σI, where Gk acts on rows k and k+1, and is chosen to zero out position (k+1, k) of Gk−1 ⋯ G1(A − σI). This produces the upper triangular matrix R = Gn−1 ⋯ G1(A − σI). The orthogonal factor would be Q = G1ᵀG2ᵀ ⋯ Gn−1ᵀ, but it is neither necessary nor efficient to produce that explicitly.
  3. Now multiply R by the Givens matrices G1ᵀ, G2ᵀ, ..., Gn−1ᵀ on the right, where Gkᵀ instead acts on columns k and k+1. This produces the matrix RQ, which is again in Hessenberg form.
  4. Finally undo the shift by adding σ to all diagonal entries. The result is A′ = RQ + σI. Since σI commutes with every matrix, we have that A′ = Qᵀ(A − σI)Q + σI = QᵀAQ. A compact code sketch of these four steps follows.
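
At this level of description, one whole step is a single line of linear algebra; a sketch using a full QR factorization for clarity (a practical implementation would instead use the Givens-based O(n²) update described below):

    import numpy as np

    def qr_step_with_shift(A):
        """One explicit-shift QR step: A' = R Q + sigma*I, where A - sigma*I = Q R."""
        n = A.shape[0]
        sigma = A[-1, -1]                        # the basic sigma = a_nn strategy
        Q, R = np.linalg.qr(A - sigma * np.eye(n))
        return R @ Q + sigma * np.eye(n)         # similar to A: equals Q^T A Q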

The purpose of the shift is to change which Givens rotations are chosen.

In more detail, the structure of one of these Givens matrices is

    Gk =  [ I           ]
          [    ck  sk   ]
          [   −sk  ck   ]
          [           I ]

where the I in the upper left corner is the (k−1) × (k−1) identity matrix, and the two scalars ck and sk are determined by what rotation angle is appropriate for zeroing out position (k+1, k). It is not necessary to exhibit that angle explicitly; the factors ck and sk can be determined directly from the two elements in the matrix Gk should act on. Nor is it necessary to produce the whole matrix; multiplication (from the left) by Gk only affects rows k and k+1, so it is easier to just update those two rows in place. Likewise, for the Step 3 multiplication by Gkᵀ from the right, which only affects columns k and k+1, it is sufficient to remember the index k and the scalars ck and sk.
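
A sketch of these in-place updates (helper names are mine; c and s are obtained directly from the two entries to be combined, with no trigonometric functions):

    import numpy as np

    def givens(a, b):
        """Rotation coefficients (c, s) with c*a + s*b = r and -s*a + c*b = 0."""
        r = np.hypot(a, b)
        return (1.0, 0.0) if r == 0.0 else (a / r, b / r)

    def rotate_rows(A, k, c, s):
        """Left-multiply by G_k: combine rows k and k+1 in place."""
        G = np.array([[c, s], [-s, c]])
        A[[k, k + 1], :] = G @ A[[k, k + 1], :]

    def rotate_cols(A, k, c, s):
        """Right-multiply by G_k^T: combine columns k and k+1 in place."""
        A[:, [k, k + 1]] = A[:, [k, k + 1]] @ np.array([[c, -s], [s, c]])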

If using the simple σ = an,n strategy, then at the beginning of Step 2 we have a matrix (illustrated here for n = 5)

    A − σI =  [ × × × × × ]
              [ × × × × × ]
              [ 0 × × × × ]
              [ 0 0 × × × ]
              [ 0 0 0 ε 0 ]

where the × denotes "could be whatever" and ε = an,n−1 is the subdiagonal entry beside the corner entry that the shift has made zero. The first Givens rotation G1 zeroes out the (2, 1) position of this, producing

    [ × × × × × ]
    [ 0 × × × × ]
    [ 0 × × × × ]
    [ 0 0 × × × ]
    [ 0 0 0 ε 0 ]

Each new rotation zeroes out another subdiagonal element, thus increasing the number of known zeroes until we are at

    [ × × × × × ]
    [ 0 × × × × ]
    [ 0 0 × × × ]
    [ 0 0 0 q × ]
    [ 0 0 0 ε 0 ]

The final rotation Gn−1 has cn−1 and sn−1 chosen so that −sn−1 q + cn−1 ε = 0, i.e., so that the pair (q, ε) is rotated to (r, 0) with r = √(q² + ε²). If |ε| ≪ |q|, as is typically the case when we approach convergence, then cn−1 ≈ 1 and sn−1 ≈ ε/q. Making this rotation produces

    R =  [ × × × × × ]
         [ 0 × × × × ]
         [ 0 0 × × × ]
         [ 0 0 0 r × ]
         [ 0 0 0 0 ε′ ]

where the corner entry ε′ is again of size ε, which is our upper triangular matrix. But now we reach Step 3, and need to start rotating data between columns. The first rotation G1ᵀ acts on columns 1 and 2, producing

    [ × × × × × ]
    [ × × × × × ]
    [ 0 0 × × × ]
    [ 0 0 0 r × ]
    [ 0 0 0 0 ε′ ]

The expected pattern is that each rotation moves some nonzero value from the diagonal out to the subdiagonal, returning the matrix to Hessenberg form. This ends at

    RQ =  [ × × × × × ]
          [ × × × × × ]
          [ 0 × × × × ]
          [ 0 0 × × × ]
          [ 0 0 0 ε″ × ]

where ε″ ≈ ε²/q. Algebraically the form is unchanged, but numerically the element in position (n, n−1) has gotten a lot closer to zero: there used to be a factor ε gap between it and the diagonal element above it, but now the gap is more like a factor ε², and another iteration would make it a factor ε⁴; we have quadratic convergence. Practically that means O(1) iterations per eigenvalue suffice for convergence, and thus overall we can complete in O(n) QR steps, each of which does a mere O(n²) arithmetic operations (or as little as O(n) operations, in the case that A is symmetric).
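
The quadratic shrinking of the last subdiagonal entry is easy to observe numerically; a small demonstration using the one-line shifted step from above (the example matrix is arbitrary):

    import numpy as np

    A = np.array([[3.0, 1.0, 0.5],
                  [1.0, 2.0, 0.3],
                  [0.0, 0.1, 1.0]])   # Hessenberg, with a small epsilon = 0.1 at (3, 2)
    for i in range(4):
        sigma = A[-1, -1]
        Q, R = np.linalg.qr(A - sigma * np.eye(3))
        A = R @ Q + sigma * np.eye(3)
        print(f"step {i + 1}: |subdiagonal| = {abs(A[2, 1]):.2e}")
    # Each printed value is roughly the square of the previous one,
    # until the entry reaches roundoff level and can be deflated.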

Visualization

Figure 1: How the output of a single iteration of the QR or LR algorithm varies alongside its input

The basic QR algorithm can be visualized in the case where A is a positive-definite symmetric matrix. In that case, A can be depicted as an ellipse in 2 dimensions or an ellipsoid in higher dimensions. The relationship between the input to the algorithm and a single iteration can then be depicted as in Figure 1. Note that the LR algorithm is depicted alongside the QR algorithm.

A single iteration causes the ellipse to tilt or "fall" towards the x-axis. In the event where the large semi-axis of the ellipse is parallel to the x-axis, one iteration of QR does nothing. Another situation where the algorithm "does nothing" is when the large semi-axis is parallel to the y-axis instead of the x-axis. In that event, the ellipse can be thought of as balancing precariously without being able to fall in either direction. In both situations, the matrix is diagonal. A situation where an iteration of the algorithm "does nothing" is called a fixed point. The strategy employed by the algorithm is iteration towards a fixed point. Observe that one fixed point is stable while the other is unstable. If the ellipse were tilted away from the unstable fixed point by a very small amount, one iteration of QR would cause the ellipse to tilt away from the fixed point rather than towards it. Eventually, though, the algorithm would converge to a different fixed point, although it would take a long time.

Finding eigenvalues versus finding eigenvectors

Figure 2: How the output of a single iteration of QR or LR are affected when two eigenvalues approach each other

It's worth pointing out that finding even a single eigenvector of a symmetric matrix is not computable (in exact real arithmetic according to the definitions in computable analysis).[10] This difficulty exists whenever the multiplicities of a matrix's eigenvalues are not knowable. On the other hand, the same problem does not exist for finding eigenvalues. The eigenvalues of a matrix are always computable.

We will now discuss how these difficulties manifest in the basic QR algorithm. This is illustrated in Figure 2. Recall that the ellipses represent positive-definite symmetric matrices. As the two eigenvalues of the input matrix approach each other, the input ellipse changes into a circle. A circle corresponds to a multiple of the identity matrix. A near-circle corresponds to a near-multiple of the identity matrix whose eigenvalues are nearly equal to the diagonal entries of the matrix. Therefore, the problem of approximately finding the eigenvalues is shown to be easy in that case. But notice what happens to the semi-axes of the ellipses. An iteration of QR (or LR) tilts the semi-axes less and less as the input ellipse gets closer to being a circle. The eigenvectors can only be known when the semi-axes are parallel to the x-axis and y-axis. The number of iterations needed to achieve near-parallelism increases without bound as the input ellipse becomes more circular.

While it may be impossible to compute the eigendecomposition of an arbitrary symmetric matrix, it is always possible to perturb the matrix by an arbitrarily small amount and compute the eigendecomposition of the resulting matrix. In the case when the matrix is depicted as a near-circle, the matrix can be replaced with one whose depiction is a perfect circle. In that case, the matrix is a multiple of the identity matrix, and its eigendecomposition is immediate. Be aware though that the resulting eigenbasis can be quite far from the original eigenbasis.

Speeding up: Shifting and deflation

The slowdown when the ellipse gets more circular has a converse: it turns out that when the ellipse gets more stretched, and less circular, the rotation of the ellipse becomes faster. Such a stretch can be induced when the matrix A which the ellipse represents is replaced with A − λI, where λ is approximately the smallest eigenvalue of A. In this case, the ratio of the two semi-axes of the ellipse approaches ∞. In higher dimensions, shifting like this makes the length of the smallest semi-axis of an ellipsoid small relative to the other semi-axes, which speeds up convergence to the smallest eigenvalue, but does not speed up convergence to the other eigenvalues. This becomes useless once the smallest eigenvalue is fully determined, so the matrix must then be deflated, which simply means removing its last row and column.

The issue with the unstable fixed point also needs to be addressed. The shifting heuristic is often designed to deal with this problem as well: Practical shifts are often discontinuous and randomised. Wilkinson's shift—which is well-suited for symmetric matrices like the ones we're visualising—is in particular discontinuous.
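
A sketch of the overall shift-and-deflate loop for a symmetric tridiagonal matrix, using Wilkinson's shift (the shift formula follows Golub & Van Loan; for clarity the sketch forms full matrices and only deflates at the trailing corner):

    import numpy as np

    def wilkinson_shift(a, b, c):
        """Eigenvalue of [[a, b], [b, c]] closest to c."""
        d = (a - c) / 2.0
        denom = abs(d) + np.hypot(d, b)
        if denom == 0.0:
            return c
        sign_d = 1.0 if d >= 0 else -1.0
        return c - sign_d * b * b / denom

    def symmetric_qr_eigenvalues(T, tol=1e-12):
        """Eigenvalues of a symmetric tridiagonal matrix by shifted QR + deflation."""
        T = np.array(T, dtype=float)
        eigs = []
        while T.shape[0] > 1:
            n = T.shape[0]
            mu = wilkinson_shift(T[n - 2, n - 2], T[n - 1, n - 2], T[n - 1, n - 1])
            Q, R = np.linalg.qr(T - mu * np.eye(n))
            T = R @ Q + mu * np.eye(n)
            if abs(T[n - 1, n - 2]) < tol * (abs(T[n - 2, n - 2]) + abs(T[n - 1, n - 1])):
                eigs.append(T[n - 1, n - 1])   # eigenvalue determined: deflate
                T = T[:n - 1, :n - 1]          # remove the last row and column
        eigs.append(T[0, 0])
        return np.sort(np.array(eigs))

    T = np.diag([2.0, 3.0, 4.0]) + np.diag([1.0, 1.0], 1) + np.diag([1.0, 1.0], -1)
    print(symmetric_qr_eigenvalues(T))         # agrees with np.linalg.eigvalsh(T)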

The implicit QR algorithm

In modern computational practice, the QR algorithm is performed in an implicit version, which makes the use of multiple shifts easier to introduce.[4] The matrix is first brought to upper Hessenberg form as in the explicit version; then, at each step, the first column of Ak is transformed via a small-size Householder similarity transformation to the first column of p(Ak) (equivalently, p(Ak)e1), where p, of degree r, is the polynomial that defines the shifting strategy (often p(x) = (x − λ)(x − λ̄), where λ and λ̄ are the two eigenvalues of the trailing 2 × 2 principal submatrix of Ak; this choice is the so-called implicit double-shift). Then successive Householder transformations of size r + 1 are performed in order to return the working matrix Ak to upper Hessenberg form. This operation is known as bulge chasing, due to the peculiar shape of the non-zero entries of the matrix along the steps of the algorithm. As in the first version, deflation is performed as soon as one of the sub-diagonal entries of Ak is sufficiently small.
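
A sketch of one implicit double-shift step, following the structure of the Francis QR step in Golub & Van Loan (explicit small reflectors are formed for clarity; the function assumes a real upper Hessenberg input with n ≥ 3 stored as a NumPy float array, which it modifies in place; helper names are mine):

    import numpy as np

    def house(x):
        """Householder reflector P (orthogonal, symmetric) with P @ x = -/+ ||x|| e1."""
        v = np.array(x, dtype=float)
        v[0] += np.copysign(np.linalg.norm(v), v[0])
        if np.dot(v, v) == 0.0:
            return np.eye(len(x))
        return np.eye(len(x)) - 2.0 * np.outer(v, v) / np.dot(v, v)

    def francis_step(H):
        """One implicit double-shift (Francis) step on an upper Hessenberg H, n >= 3."""
        n = H.shape[0]
        s = H[n - 2, n - 2] + H[n - 1, n - 1]                  # trace of trailing 2x2
        t = (H[n - 2, n - 2] * H[n - 1, n - 1]
             - H[n - 2, n - 1] * H[n - 1, n - 2])              # det of trailing 2x2
        # First column of p(H) = H^2 - s H + t I has only three nonzero entries:
        x = H[0, 0] * H[0, 0] + H[0, 1] * H[1, 0] - s * H[0, 0] + t
        y = H[1, 0] * (H[0, 0] + H[1, 1] - s)
        z = H[1, 0] * H[2, 1]
        for k in range(n - 2):
            P = house(np.array([x, y, z]))                     # 3x3 reflector
            q = max(0, k - 1)
            H[k:k + 3, q:] = P @ H[k:k + 3, q:]                # chase the bulge: left
            r = min(k + 4, n)
            H[:r, k:k + 3] = H[:r, k:k + 3] @ P                # ... and right
            x, y = H[k + 1, k], H[k + 2, k]
            if k < n - 3:
                z = H[k + 3, k]
        P = house(np.array([x, y]))                            # final 2x2 reflector
        H[n - 2:, n - 3:] = P @ H[n - 2:, n - 3:]
        H[:, n - 2:] = H[:, n - 2:] @ P
        return H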

Renaming proposal

Since in the modern implicit version of the procedure no QR decompositions are explicitly performed, some authors, for instance Watkins,[11] suggested changing its name to Francis algorithm. Golub and Van Loan use the term Francis QR step.

Interpretation and convergence

The QR algorithm can be seen as a more sophisticated variation of the basic "power" eigenvalue algorithm. Recall that the power algorithm repeatedly multiplies A times a single vector, normalizing after each iteration. The vector converges to an eigenvector of the largest eigenvalue. Instead, the QR algorithm works with a complete basis of vectors, using QR decomposition to renormalize (and orthogonalize). For a symmetric matrix A, upon convergence, AQ = QΛ, where Λ is the diagonal matrix of eigenvalues to which A converged, and where Q is a composite of all the orthogonal similarity transforms required to get there. Thus the columns of Q are the eigenvectors.
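
This connection can be made concrete: "simultaneous iteration", which multiplies an entire orthonormal basis by A and re-orthogonalizes with a QR factorization at every step, converges (for a symmetric A with eigenvalues of distinct absolute value) to the same eigenvector basis. A sketch:

    import numpy as np

    def simultaneous_iteration(A, iters=200):
        """Power iteration on a full basis; Q converges to eigenvectors for symmetric A."""
        n = A.shape[0]
        Q = np.eye(n)
        for _ in range(iters):
            Q, _ = np.linalg.qr(A @ Q)   # multiply the basis by A, re-orthonormalize
        return Q

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    Q = simultaneous_iteration(A)
    print(np.round(Q.T @ A @ Q, 6))      # approx. diagonal, eigenvalues largest first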

History

The QR algorithm was preceded by the LR algorithm, which uses the LU decomposition instead of the QR decomposition. The QR algorithm is more stable, so the LR algorithm is rarely used nowadays. However, it represents an important step in the development of the QR algorithm.

The LR algorithm was developed in the early 1950s by Heinz Rutishauser, who worked at that time as a research assistant of Eduard Stiefel at ETH Zurich. Stiefel suggested that Rutishauser use the sequence of moments y0ᵀAᵏx0, k = 0, 1, ... (where x0 and y0 are arbitrary vectors) to find the eigenvalues of A. Rutishauser took an algorithm of Alexander Aitken for this task and developed it into the quotient–difference algorithm or qd algorithm. After arranging the computation in a suitable shape, he discovered that the qd algorithm is in fact the iteration Ak = LkUk (LU decomposition), Ak+1 = UkLk, applied on a tridiagonal matrix, from which the LR algorithm follows.[12]
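
For comparison with the QR iteration sketched earlier, the LR step in the same style (a minimal sketch; it uses unpivoted LU, which can break down on a zero pivot, and this fragility is one reason the QR variant displaced it):

    import numpy as np

    def lu_nopivot(A):
        """Doolittle LU without pivoting: A = L U (fails on a zero pivot)."""
        n = A.shape[0]
        L, U = np.eye(n), A.astype(float).copy()
        for j in range(n - 1):
            for i in range(j + 1, n):
                L[i, j] = U[i, j] / U[j, j]
                U[i, j:] -= L[i, j] * U[j, j:]
        return L, U

    def lr_step(A):
        """One LR step: A_{k+1} = U_k L_k where A_k = L_k U_k (similar to A_k)."""
        L, U = lu_nopivot(A)
        return U @ L

    A = np.array([[4.0, 1.0], [2.0, 3.0]])
    for _ in range(20):
        A = lr_step(A)
    print(np.diag(A))    # approx. [5., 2.], as with the QR iteration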

Other variants

One variant of the QR algorithm, the Golub-Kahan-Reinsch algorithm, starts by reducing a general matrix to a bidiagonal one.[13] This variant of the QR algorithm for the computation of singular values was first described by Golub & Kahan (1965). The LAPACK subroutine DBDSQR implements this iterative method, with some modifications to cover the case where the singular values are very small (Demmel & Kahan 1990). Together with a first step using Householder reflections and, if appropriate, QR decomposition, this forms the DGESVD routine for the computation of the singular value decomposition. The QR algorithm can also be implemented in infinite dimensions with corresponding convergence results.[14][15]

References

  1. ^ J.G.F. Francis, "The QR Transformation, I", The Computer Journal, 4(3), pages 265–271 (1961, received October 1959). doi:10.1093/comjnl/4.3.265
  2. ^ Francis, J. G. F. (1962). "The QR Transformation, II". The Computer Journal. 4 (4): 332–345. doi:10.1093/comjnl/4.4.332.
  3. ^ Vera N. Kublanovskaya, "On some algorithms for the solution of the complete eigenvalue problem," USSR Computational Mathematics and Mathematical Physics, vol. 1, no. 3, pages 637–657 (1963, received Feb 1961). Also published in: Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, vol.1, no. 4, pages 555–570 (1961). doi:10.1016/0041-5553(63)90168-X
  4. ^ a b Golub, G. H.; Van Loan, C. F. (1996). Matrix Computations (3rd ed.). Baltimore: Johns Hopkins University Press. ISBN 0-8018-5414-8.
  5. ^ Holmes, Mark H. (2023). Introduction to scientific computing and data analysis. Texts in computational science and engineering (Second ed.). Cham: Springer. ISBN 978-3-031-22429-4.
  6. ^ Golub, Gene H.; Van Loan, Charles F. (2013). Matrix computations. Johns Hopkins studies in the mathematical sciences (Fourth ed.). Baltimore: The Johns Hopkins University Press. ISBN 978-1-4214-0794-4.
  7. ^ a b Demmel, James W. (1997). Applied Numerical Linear Algebra. SIAM.
  8. ^ a b Trefethen, Lloyd N.; Bau, David (1997). Numerical Linear Algebra. SIAM.
  9. ^ Ortega, James M.; Kaiser, Henry F. (1963). "The LLT and QR methods for symmetric tridiagonal matrices". The Computer Journal. 6 (1): 99–101. doi:10.1093/comjnl/6.1.99.
  10. ^ "linear algebra - Why is uncomputability of the spectral decomposition not a problem?". MathOverflow. Retrieved 2025-08-06.
  11. ^ Watkins, David S. (2007). The Matrix Eigenvalue Problem: GR and Krylov Subspace Methods. Philadelphia, PA: SIAM. ISBN 978-0-89871-641-2.
  12. ^ Parlett, Beresford N.; Gutknecht, Martin H. (2011), "From qd to LR, or, how were the qd and LR algorithms discovered?" (PDF), IMA Journal of Numerical Analysis, 31 (3): 741–754, doi:10.1093/imanum/drq003, hdl:20.500.11850/159536, ISSN 0272-4979
  13. ^ Bochkanov, Sergey Anatolyevich. "ALGLIB User Guide - General Matrix operations - Singular value decomposition". ALGLIB Project. Accessed 2025-08-06.
  14. ^ Deift, Percy; Li, Luenchau C.; Tomei, Carlos (1985). "Toda flows with infinitely many variables". Journal of Functional Analysis. 64 (3): 358–402. doi:10.1016/0022-1236(85)90065-5.
  15. ^ Colbrook, Matthew J.; Hansen, Anders C. (2019). "On the infinite-dimensional QR algorithm". Numerische Mathematik. 143 (1): 17–83. arXiv:2011.08172. doi:10.1007/s00211-019-01047-5.
