Quadratic Form and Functional Optimization
9th June, 2011, Junpei Tsuji
Optimization of a multivariate quadratic function

$$J(x_1, x_2) = 1.2 + (0.2,\ 0.3)\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \frac{1}{2}(x_1,\ x_2)\begin{pmatrix} 3 & 1 \\ 1 & 4 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$$
$$= 1.2 + 0.2 x_1 + 0.3 x_2 + \frac{3}{2} x_1^2 + x_1 x_2 + 2 x_2^2$$

[Figure: 3D surface plot of $J$, with the minimum annotated at $(x_1, x_2, J) = (0.045, 0.064, 1.1881)$.]
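As a quick check, here is a minimal NumPy sketch (an addition, not part of the original deck) that finds the stationary point by solving $\nabla J = \mathbf{b} + \mathbf{A}\mathbf{x} = \mathbf{0}$:

```python
import numpy as np

# J(x) = c + b.T @ x + 0.5 * x.T @ A @ x, with the values from the slide
A = np.array([[3.0, 1.0], [1.0, 4.0]])
b = np.array([0.2, 0.3])
c = 1.2

# Stationary point: gradient b + A x = 0  =>  x* = -A^{-1} b
x_star = np.linalg.solve(A, -b)
J_star = c + b @ x_star + 0.5 * x_star @ A @ x_star
print(x_star, J_star)  # magnitudes near (0.045, 0.064), J near 1.19,
                       # close to the values quoted on the slide
```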
Quadratic approximation

By Taylor's expansion,

$$f(\mathbf{x}) \approx \bar{f} + \bar{\mathbf{J}} \cdot (\mathbf{x} - \bar{\mathbf{x}}) + \frac{1}{2}(\mathbf{x} - \bar{\mathbf{x}})^T \bar{\mathbf{H}} (\mathbf{x} - \bar{\mathbf{x}})$$

(constant term + linear form + quadratic form)

where
• $\mathbf{x} := (x_1, x_2, \cdots, x_p)^T$
• $\bar{f} := f(\bar{\mathbf{x}})$
• $\bar{\mathbf{J}} := \left. \left( \dfrac{\partial f}{\partial x_1}, \dfrac{\partial f}{\partial x_2}, \cdots, \dfrac{\partial f}{\partial x_p} \right) \right|_{\mathbf{x}=\bar{\mathbf{x}}}$ : Jacobian (gradient)
• $\bar{\mathbf{H}} := \left. \begin{pmatrix} \dfrac{\partial^2 f}{\partial x_1 \partial x_1} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \partial x_p} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial^2 f}{\partial x_p \partial x_1} & \cdots & \dfrac{\partial^2 f}{\partial x_p \partial x_p} \end{pmatrix} \right|_{\mathbf{x}=\bar{\mathbf{x}}}$ : Hessian (constant)
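Where an analytic gradient or Hessian is unavailable, both can be approximated numerically. A short sketch (illustrative; the test function and step sizes h are assumptions, not from the deck):

```python
import numpy as np

def gradient(f, x, h=1e-5):
    """Central-difference approximation of the gradient J at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def hessian(f, x, h=1e-4):
    """Central-difference approximation of the Hessian H at x."""
    p = len(x)
    H = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            ei = np.zeros(p); ei[i] = h
            ej = np.zeros(p); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

# Example: the quadratic J from the first slide has constant Hessian [[3,1],[1,4]]
f = lambda x: 1.2 + 0.2*x[0] + 0.3*x[1] + 1.5*x[0]**2 + x[0]*x[1] + 2*x[1]**2
print(gradient(f, np.array([0.0, 0.0])))  # ~ [0.2, 0.3]
print(hessian(f, np.array([0.0, 0.0])))   # ~ [[3, 1], [1, 4]]
```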
Completing the square

$$f(\mathbf{x}) = \bar{f} + \bar{\mathbf{J}} \cdot (\mathbf{x} - \bar{\mathbf{x}}) + \frac{1}{2}(\mathbf{x} - \bar{\mathbf{x}})^T \bar{\mathbf{H}} (\mathbf{x} - \bar{\mathbf{x}})$$

• Let $\bar{\mathbf{x}} = \mathbf{x}^*$ where $\mathbf{J}(\mathbf{x}^*)^T = \mathbf{0}$; then

$$f(\mathbf{x}) = f^* + \frac{1}{2}(\mathbf{x} - \mathbf{x}^*)^T \mathbf{H}^* (\mathbf{x} - \mathbf{x}^*)$$

(constant term + quadratic form)
Completing the square

$$f(\mathbf{x}) = c + \mathbf{b}^T \mathbf{x} + \frac{1}{2}\mathbf{x}^T \mathbf{A} \mathbf{x}$$

Write it in the completed form:

$$f(\mathbf{x}) = d + \frac{1}{2}(\mathbf{x} - \mathbf{x}_0)^T \mathbf{A} (\mathbf{x} - \mathbf{x}_0) = d + \frac{1}{2}\mathbf{x}_0^T \mathbf{A} \mathbf{x}_0 - \frac{1}{2}\mathbf{x}_0^T (\mathbf{A} + \mathbf{A}^T)\mathbf{x} + \frac{1}{2}\mathbf{x}^T \mathbf{A} \mathbf{x}$$

Matching coefficients:

• $\mathbf{b}^T = -\dfrac{1}{2}\mathbf{x}_0^T (\mathbf{A} + \mathbf{A}^T)$, so $\mathbf{x}_0^T = -2\mathbf{b}^T (\mathbf{A} + \mathbf{A}^T)^{-1}$, i.e. $\mathbf{x}_0 = -2(\mathbf{A} + \mathbf{A}^T)^{-1}\mathbf{b}$
• $c = d + \dfrac{1}{2}\mathbf{x}_0^T \mathbf{A} \mathbf{x}_0$, so $d = c - \dfrac{1}{2}\mathbf{x}_0^T \mathbf{A} \mathbf{x}_0 = c - 2\mathbf{b}^T (\mathbf{A} + \mathbf{A}^T)^{-1} \mathbf{A} (\mathbf{A} + \mathbf{A}^T)^{-1} \mathbf{b}$

Therefore,

$$f(\mathbf{x}) = c - 2\mathbf{b}^T (\mathbf{A} + \mathbf{A}^T)^{-1} \mathbf{A} (\mathbf{A} + \mathbf{A}^T)^{-1} \mathbf{b} + \frac{1}{2}\left(\mathbf{x} + 2(\mathbf{A} + \mathbf{A}^T)^{-1}\mathbf{b}\right)^T \mathbf{A} \left(\mathbf{x} + 2(\mathbf{A} + \mathbf{A}^T)^{-1}\mathbf{b}\right)$$

• If $\mathbf{A}$ is a symmetric matrix,

$$f(\mathbf{x}) = c - \frac{1}{2}\mathbf{b}^T \mathbf{A}^{-1} \mathbf{b} + \frac{1}{2}\left(\mathbf{x} + \mathbf{A}^{-1}\mathbf{b}\right)^T \mathbf{A} \left(\mathbf{x} + \mathbf{A}^{-1}\mathbf{b}\right)$$
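A numerical sanity check (illustrative values, not from the deck) that the completed form reproduces f(x) even for a non-symmetric A:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))          # deliberately non-symmetric
b = rng.normal(size=3)
c = 1.0

f = lambda x: c + b @ x + 0.5 * x @ A @ x

S = A + A.T
x0 = -2.0 * np.linalg.solve(S, b)    # x0 = -2 (A + A^T)^{-1} b
d = c - 0.5 * x0 @ A @ x0            # d = c - (1/2) x0^T A x0

x = rng.normal(size=3)
lhs = f(x)
rhs = d + 0.5 * (x - x0) @ A @ (x - x0)
print(np.isclose(lhs, rhs))          # True: the two forms agree
```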
Quadratic form

$$f(\mathbf{x}') = \mathbf{x}'^T \mathbf{S} \mathbf{x}'$$

where $\mathbf{S}$ is a symmetric matrix.
Symmetric matrix

• A symmetric matrix $\mathbf{S}$ is defined as a matrix that satisfies the following formula:

$$\mathbf{S}^T = \mathbf{S}$$

• A symmetric matrix $\mathbf{S}$ has real eigenvalues $\lambda_i$ and eigenvectors $\mathbf{u}_i$ that form an orthonormal basis:

$$\mathbf{S}\mathbf{u}_i = \lambda_i \mathbf{u}_i$$

where

$$\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p, \qquad \langle \mathbf{u}_i, \mathbf{u}_j \rangle = \delta_{ij}$$

($\delta_{ij}$ is Kronecker's delta)
Diagonalization of a symmetric matrix

• We define an orthogonal matrix $\mathbf{U}$ as follows:
$$\mathbf{U} = (\mathbf{u}_1, \mathbf{u}_2, \cdots, \mathbf{u}_p)$$
• Then $\mathbf{U}$ satisfies the following formulas:
$$\mathbf{U}^T \mathbf{U} = \mathbf{I} \quad \therefore \mathbf{U}^{-1} = \mathbf{U}^T$$
• where $\mathbf{I}$ is an identity matrix.

$$\mathbf{S}\mathbf{U} = \mathbf{S}(\mathbf{u}_1, \mathbf{u}_2, \cdots, \mathbf{u}_p) = (\mathbf{S}\mathbf{u}_1, \mathbf{S}\mathbf{u}_2, \cdots, \mathbf{S}\mathbf{u}_p) = (\lambda_1 \mathbf{u}_1, \lambda_2 \mathbf{u}_2, \cdots, \lambda_p \mathbf{u}_p) = \mathbf{U}\,\mathrm{diag}(\lambda_1, \lambda_2, \cdots, \lambda_p)$$
$$\therefore \mathbf{S} = \mathbf{U}\,\mathrm{diag}(\lambda_1, \lambda_2, \cdots, \lambda_p)\,\mathbf{U}^T$$
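This diagonalization is exactly what an eigendecomposition routine returns. A brief NumPy illustration (an addition; note that numpy.linalg.eigh orders eigenvalues ascending, the reverse of the slide's convention):

```python
import numpy as np

S = np.array([[3.0, 1.0], [1.0, 4.0]])   # symmetric example matrix
lam, U = np.linalg.eigh(S)               # columns of U are orthonormal eigenvectors

print(np.allclose(U.T @ U, np.eye(2)))          # U^T U = I
print(np.allclose(U @ np.diag(lam) @ U.T, S))   # S = U diag(lam) U^T
```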
Transformation to principal axis

$$f(\mathbf{x}') = \mathbf{x}'^T \mathbf{S} \mathbf{x}'$$

• Substitute $\mathbf{x}' = \mathbf{U}\mathbf{z}$, where $\mathbf{z} = (z_1, z_2, \cdots, z_p)^T$. Then, using $\mathbf{S} = \mathbf{U}\,\mathrm{diag}(\lambda_1, \cdots, \lambda_p)\,\mathbf{U}^T$,

$$f(\mathbf{U}\mathbf{z}) = (\mathbf{U}\mathbf{z})^T \mathbf{S} (\mathbf{U}\mathbf{z}) = \mathbf{z}^T \mathbf{U}^T \mathbf{S} \mathbf{U} \mathbf{z} = \mathbf{z}^T \mathrm{diag}(\lambda_1, \lambda_2, \cdots, \lambda_p)\,\mathbf{z}$$
$$\therefore f(\mathbf{z}) = \sum_{i=1}^{p} \lambda_i z_i^2$$
Contour surface

• If we set $f(\mathbf{z})$ equal to a constant $c$,

$$f(\mathbf{z}) = \sum_{i=1}^{p} \lambda_i z_i^2 = c$$

• When $p = 2$, as the sketch below illustrates,
  – the locus of $\mathbf{z}$ is an ellipse if $\lambda_1 \lambda_2 > 0$;
  – the locus of $\mathbf{z}$ is a hyperbola if $\lambda_1 \lambda_2 < 0$.
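A tiny sketch (an addition, not from the deck) that applies this classification to a symmetric 2x2 matrix via its eigenvalues:

```python
import numpy as np

def contour_type(S):
    """Classify the level set x'^T S x' = c for a symmetric 2x2 S."""
    l1, l2 = np.linalg.eigvalsh(S)
    if l1 * l2 > 0:
        return "ellipse"
    if l1 * l2 < 0:
        return "hyperbola"
    return "degenerate (a zero eigenvalue)"

print(contour_type(np.array([[3.0, 1.0], [1.0, 4.0]])))   # ellipse
print(contour_type(np.array([[1.0, 0.0], [0.0, -1.0]])))  # hyperbola
```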
Contour surface

[Figure: contour plot in the $(z_1, z_2)$ plane of $f(\mathbf{z}) = \sum_{i=1}^{2} \lambda_i z_i^2 = \mathrm{const.}$ with $\lambda_1 \lambda_2 > 0$: concentric ellipses around a maximal or minimal point. Example: $f(x_1, x_2) = -x_1^2 - 2x_2^2 + 20.0$]
Transformation to principal axis

[Figure: contours of $f(\mathbf{x}') = \mathrm{const.}$ in the $(x'_1, x'_2)$ plane; the transformation to principal axis $\mathbf{x}' = \mathbf{U}\mathbf{z}$, i.e. $\mathbf{z} = \mathbf{U}^T \mathbf{x}'$, rotates the tilted ellipse onto the coordinate axes.]
Parallel translation

[Figure: contours of $f(\mathbf{x}) = \mathrm{const.}$ in the $(x_1, x_2)$ plane, centered at $\bar{\mathbf{x}}$; the parallel translation $\mathbf{x}' = \mathbf{x} - \bar{\mathbf{x}}$ moves the center to the origin of the $(x'_1, x'_2)$ axes.]
Contour surface of a quadratic function

$$f(\mathbf{x}) = f^* + \frac{1}{2}(\mathbf{x} - \mathbf{x}^*)^T \mathbf{H}^* (\mathbf{x} - \mathbf{x}^*)$$

[Figure: contours of $f(\mathbf{x}) = \mathrm{const.}$ in the $(x_1, x_2)$ plane, centered at $\bar{\mathbf{x}}$.]
Contour surface

[Figure: contour plot in the $(z_1, z_2)$ plane of $f(\mathbf{z}) = \sum_{i=1}^{2} \lambda_i z_i^2 = \mathrm{const.}$ with $\lambda_1 \lambda_2 < 0$: hyperbolas around a saddle point. Example: $f(x_1, x_2) = x_1^2 - x_2^2$]
Stationary points

$$f(x_1, x_2) = x_1^3 + x_2^3 + 3 x_1 x_2 + 2$$

[Figure: surface plot of this function, showing a maximal point and a saddle point.]
Stationary points

$$f(x_1, x_2) = \exp\left( -\frac{1}{3} x_1^3 + x_1 - x_2^2 \right)$$

[Figure: surface plot of this function, showing a saddle point and a maximal point.]
Quadratic form and functional optimization
Newton-Raphson method

• Newton's method is an approximate solver of $f'(\mathbf{x}) = \mathbf{0}$, where $f(\mathbf{x})$ is an $N$-th-order polynomial, by using a quadratic approximation.

Quadratic approximation of $f(\mathbf{x})$ at $\mathbf{x}$:
$$f(\mathbf{x} + \Delta\mathbf{x}) \approx f(\mathbf{x}) + \mathbf{J}(\mathbf{x}) \cdot \Delta\mathbf{x} + \frac{1}{2}\Delta\mathbf{x}^T \mathbf{H}(\mathbf{x})\,\Delta\mathbf{x}$$
$$\frac{\partial f(\mathbf{x} + \Delta\mathbf{x})}{\partial(\Delta\mathbf{x})} = \mathbf{J}(\mathbf{x})^T + \mathbf{H}(\mathbf{x})\,\Delta\mathbf{x}$$

[Figure: a curve $f(\mathbf{x})$ and its quadratic approximation at $\mathbf{x}$; setting the derivative above to zero gives the step to $\mathbf{x} + \Delta\mathbf{x}$, which approaches the solution $\mathbf{x}^*$ where $f'(\mathbf{x}^*) = \mathbf{0}$.]
Algorithm of Newton's method

Procedure Newton($\mathbf{J}(\mathbf{x})$, $\mathbf{H}(\mathbf{x})$)
 1. Initialize $\mathbf{x}$.
 2. Calculate $\mathbf{J}(\mathbf{x})$ and $\mathbf{H}(\mathbf{x})$.
 3. Solve the following simultaneous equations to obtain $\Delta\mathbf{x}$:
    $\mathbf{J}(\mathbf{x})^T + \mathbf{H}(\mathbf{x})\,\Delta\mathbf{x} = \mathbf{0}$
 4. Update $\mathbf{x}$ as follows: $\mathbf{x} \leftarrow \mathbf{x} + \Delta\mathbf{x}$
 5. If $\|\Delta\mathbf{x}\| < \delta$ then return $\mathbf{x}$, else go back to 2.

A minimal implementation is sketched below.
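This Python rendering of the procedure is an addition to the deck; the callables J and H and the tolerance delta are supplied by the caller:

```python
import numpy as np

def newton(J, H, x0, delta=1e-8, max_iter=100):
    """Newton's method: repeatedly solve H(x) dx = -J(x)^T and update x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(H(x), -J(x))   # step 3: J(x)^T + H(x) dx = 0
        x = x + dx                          # step 4
        if np.linalg.norm(dx) < delta:      # step 5
            return x
    return x

# Example: for the quadratic J from the first slide, one step is exact
J = lambda x: np.array([0.2, 0.3]) + np.array([[3.0, 1.0], [1.0, 4.0]]) @ x
H = lambda x: np.array([[3.0, 1.0], [1.0, 4.0]])
print(newton(J, H, [0.0, 0.0]))  # stationary point of the quadratic
```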
Linear regression

$$y = f(\mathbf{x}) = \beta_0 + \sum_{j=1}^{p} \beta_j x_j$$

[Figure: $N$ samples $(\mathbf{x}_i, y_i)$ scattered around the regression surface in $p$-dimensional input space.]

We would like to find $\boldsymbol{\beta}^*$ that minimizes the residual sum of squares (RSS).
Linear regression

$$\min_{\boldsymbol{\beta}} \mathrm{RSS}(\boldsymbol{\beta})$$

• where
$$\mathrm{RSS}(\boldsymbol{\beta}) = \sum_{i=1}^{N} \big( y_i - f(\mathbf{x}_i) \big)^2 = \sum_{i=1}^{N} \left( y_i - \Big( \beta_0 + \sum_{j=1}^{p} \beta_j x_{ij} \Big) \right)^2$$

• Given $\mathbf{X}$, $\mathbf{y}$, $\boldsymbol{\beta}$ as follows (the intercept enters as the last entry of $\boldsymbol{\beta}$, matching the column of ones in $\mathbf{X}$):

$$\mathbf{X} = \begin{pmatrix} x_{11} & \cdots & x_{1p} & 1 \\ \vdots & \ddots & \vdots & \vdots \\ x_{N1} & \cdots & x_{Np} & 1 \end{pmatrix}, \quad \mathbf{y} = \begin{pmatrix} y_1 \\ \vdots \\ y_N \end{pmatrix}, \quad \boldsymbol{\beta} = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_p \\ \beta_0 \end{pmatrix}$$

$$\therefore \mathrm{RSS}(\boldsymbol{\beta}) = \| \mathbf{y} - \mathbf{X}\boldsymbol{\beta} \|^2$$
Linear regression

$$\mathrm{RSS}(\boldsymbol{\beta}) = J(\boldsymbol{\beta}) = \| \mathbf{y} - \mathbf{X}\boldsymbol{\beta} \|^2 = (\mathbf{y} - \mathbf{X}\boldsymbol{\beta})^T (\mathbf{y} - \mathbf{X}\boldsymbol{\beta}) = \mathbf{y}^T\mathbf{y} - \boldsymbol{\beta}^T\mathbf{X}^T\mathbf{y} - \mathbf{y}^T\mathbf{X}\boldsymbol{\beta} + \boldsymbol{\beta}^T\mathbf{X}^T\mathbf{X}\boldsymbol{\beta}$$

Using the standard matrix-derivative identities
• $\dfrac{\partial}{\partial\boldsymbol{\beta}} \mathbf{a}^T\boldsymbol{\beta} = \mathbf{a}$
• $\dfrac{\partial}{\partial\boldsymbol{\beta}} \boldsymbol{\beta}^T\mathbf{a} = \mathbf{a}$
• $\dfrac{\partial}{\partial\boldsymbol{\beta}} \boldsymbol{\beta}^T\mathbf{A}\boldsymbol{\beta} = (\mathbf{A} + \mathbf{A}^T)\boldsymbol{\beta}$

we obtain (checked numerically in the sketch below)
$$J'(\boldsymbol{\beta}) = \frac{\partial J}{\partial\boldsymbol{\beta}} = -2\mathbf{X}^T\mathbf{y} + 2\mathbf{X}^T\mathbf{X}\boldsymbol{\beta}$$
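A quick numerical check of this gradient formula against central finite differences (synthetic data, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 4))
y = rng.normal(size=20)
beta = rng.normal(size=4)

J = lambda b: np.sum((y - X @ b) ** 2)
grad_closed = -2 * X.T @ y + 2 * X.T @ X @ beta

# central finite differences along each coordinate
h = 1e-6
grad_fd = np.array([(J(beta + h * e) - J(beta - h * e)) / (2 * h)
                    for e in np.eye(4)])
print(np.allclose(grad_closed, grad_fd, atol=1e-4))  # True
```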
Linear regression

Given $\boldsymbol{\beta}^*$ that satisfies $J'(\boldsymbol{\beta}^*) = \mathbf{0}$,
$$\mathbf{X}^T\mathbf{y} = \mathbf{X}^T\mathbf{X}\boldsymbol{\beta}^*, \qquad \mathbf{y}^T\mathbf{X} = \boldsymbol{\beta}^{*T}\mathbf{X}^T\mathbf{X}$$
$$\therefore \boldsymbol{\beta}^* = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$$

Substituting these into $J(\boldsymbol{\beta})$,
$$\therefore J(\boldsymbol{\beta}) = \mathbf{y}^T\mathbf{y} - \boldsymbol{\beta}^T\mathbf{X}^T\mathbf{X}\boldsymbol{\beta}^* - \boldsymbol{\beta}^{*T}\mathbf{X}^T\mathbf{X}\boldsymbol{\beta} + \boldsymbol{\beta}^T\mathbf{X}^T\mathbf{X}\boldsymbol{\beta}$$
$$\therefore J(\boldsymbol{\beta}) = \mathbf{y}^T\mathbf{y} - \boldsymbol{\beta}^{*T}\mathbf{X}^T\mathbf{X}\boldsymbol{\beta}^* + \boldsymbol{\beta}^{*T}\mathbf{X}^T\mathbf{X}\boldsymbol{\beta}^* - \boldsymbol{\beta}^T\mathbf{X}^T\mathbf{X}\boldsymbol{\beta}^* - \boldsymbol{\beta}^{*T}\mathbf{X}^T\mathbf{X}\boldsymbol{\beta} + \boldsymbol{\beta}^T\mathbf{X}^T\mathbf{X}\boldsymbol{\beta}$$
$$\therefore J(\boldsymbol{\beta}) = \mathbf{y}^T\mathbf{y} - \boldsymbol{\beta}^{*T}\mathbf{X}^T\mathbf{X}\boldsymbol{\beta}^* + (\boldsymbol{\beta} - \boldsymbol{\beta}^*)^T\mathbf{X}^T\mathbf{X}(\boldsymbol{\beta} - \boldsymbol{\beta}^*)$$
(completing the square)
Linear regression

$$J(\boldsymbol{\beta}) = \mathbf{y}^T\mathbf{y} - \boldsymbol{\beta}^{*T}\mathbf{X}^T\mathbf{X}\boldsymbol{\beta}^* + (\boldsymbol{\beta} - \boldsymbol{\beta}^*)^T\mathbf{X}^T\mathbf{X}(\boldsymbol{\beta} - \boldsymbol{\beta}^*)$$
$$= \| \mathbf{y} - \mathbf{X}\boldsymbol{\beta}^* \|^2 + (\boldsymbol{\beta} - \boldsymbol{\beta}^*)^T\mathbf{X}^T\mathbf{X}(\boldsymbol{\beta} - \boldsymbol{\beta}^*)$$
$$= J(\boldsymbol{\beta}^*) + \frac{1}{2}(\boldsymbol{\beta} - \boldsymbol{\beta}^*)^T\mathbf{H}(\boldsymbol{\beta} - \boldsymbol{\beta}^*)$$

The first term is the residual sum of squares (RSS) of the linear-regression fit; the second is a quadratic form.

[Figure: elliptical contours of $J(\boldsymbol{\beta}) = \mathrm{const.}$ in the $(\beta_1, \beta_2)$ plane, centered at $\boldsymbol{\beta}^* = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$, with $\mathbf{H} = 2\mathbf{X}^T\mathbf{X}$.]
Hessian

• $\mathbf{H} := \left( \dfrac{\partial^2 J}{\partial \beta_i \partial \beta_j} \right) = 2\mathbf{X}^T\mathbf{X}$

• $\mathbf{H}$ has the following two features:
  – symmetric matrix: $\mathbf{H}^T = \mathbf{H}$
  – positive-definite matrix: $\forall \mathbf{x} \ne \mathbf{0},\ \mathbf{x}^T \mathbf{H} \mathbf{x} > 0$

Therefore, $\boldsymbol{\beta}^* = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$ is the minimizer of $J(\boldsymbol{\beta})$; a numerical check is sketched below.
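A hedged NumPy sketch of the whole fit (synthetic data; the intercept column of ones is appended last, following the slide's layout of X):

```python
import numpy as np

rng = np.random.default_rng(2)
N, p = 50, 3
X_raw = rng.normal(size=(N, p))
X = np.column_stack([X_raw, np.ones(N)])     # append the intercept column
beta_true = np.array([1.0, -2.0, 0.5, 3.0])  # (beta_1..beta_p, beta_0)
y = X @ beta_true + 0.1 * rng.normal(size=N)

# Normal equations: beta* = (X^T X)^{-1} X^T y
beta_star = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_star)                              # close to beta_true

# Hessian H = 2 X^T X: symmetric, positive-definite for full-column-rank X
H = 2 * X.T @ X
print(np.allclose(H, H.T), np.all(np.linalg.eigvalsh(H) > 0))
```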
Analysis of residuals

$$\mathbf{y}^* = \mathbf{X}\boldsymbol{\beta}^*$$

• Substituting $\boldsymbol{\beta}^* = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$ into the above,
$$\mathbf{y}^* = \mathbf{X}\boldsymbol{\beta}^* = \mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y} \qquad \therefore \mathbf{y}^* = \mathcal{H}\mathbf{y} \quad (\mathcal{H}: \text{hat matrix})$$

• The vector of residuals $\mathbf{r}$ can then be expressed as follows:
$$\mathbf{r} = \mathbf{y} - \mathbf{y}^* = \mathbf{y} - \mathcal{H}\mathbf{y} = (\mathbf{I} - \mathcal{H})\mathbf{y}$$
$$\mathrm{Var}(\mathbf{r}) = \mathrm{Var}\big( (\mathbf{I} - \mathcal{H})\mathbf{y} \big) = (\mathbf{I} - \mathcal{H})\,\mathrm{Var}(\mathbf{y})\,(\mathbf{I} - \mathcal{H})^T$$
Analysis of residuals

$$\mathcal{H} = \mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T$$

The hat matrix $\mathcal{H}$ is a projection matrix, which satisfies the following equations:

1. Idempotent (projection): $\mathcal{H}^2 = \mathcal{H}$
$$\mathcal{H}^2 = \mathcal{H} \cdot \mathcal{H} = \mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T \cdot \mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T = \mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}(\mathbf{X}^T\mathbf{X})(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T = \mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T = \mathcal{H}$$
2. Symmetric (orthogonal projection): $\mathcal{H}^T = \mathcal{H}$
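Both properties are easy to confirm numerically (any full-column-rank X will do; the matrix below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.column_stack([rng.normal(size=(10, 2)), np.ones(10)])

Hat = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix X (X^T X)^{-1} X^T
print(np.allclose(Hat @ Hat, Hat))        # idempotent: H^2 = H
print(np.allclose(Hat, Hat.T))            # symmetric: H^T = H
```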
Analysis of residuals

$$\begin{pmatrix} y_1^* \\ \vdots \\ y_N^* \end{pmatrix} = \begin{pmatrix} x_{11} & \cdots & x_{1p} & 1 \\ \vdots & \ddots & \vdots & \vdots \\ x_{N1} & \cdots & x_{Np} & 1 \end{pmatrix} \begin{pmatrix} \beta_1^* \\ \vdots \\ \beta_p^* \\ \beta_0^* \end{pmatrix} = \beta_1^* \begin{pmatrix} x_{11} \\ \vdots \\ x_{N1} \end{pmatrix} + \cdots + \beta_p^* \begin{pmatrix} x_{1p} \\ \vdots \\ x_{Np} \end{pmatrix} + \beta_0^* \begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix}$$

This is a linear combination of the column vectors $\mathbf{x}_1, \cdots, \mathbf{x}_p, \mathbf{x}_{p+1} = \mathbf{1}$, so $\mathbf{y}^*$ lies in the $(p+1)$-dimensional subspace they span.
Analysis of residuals

[Figure: in the $N$-dimensional sample space, $\mathbf{y}^* = \mathcal{H}\mathbf{y}$ is the orthogonal projection of $\mathbf{y}$ onto the $(p+1)$-dimensional subspace spanned by the column vectors $\mathbf{x}_1, \cdots, \mathbf{x}_p, \mathbf{1}$.]
Analysis of residuals

$$\mathbf{y} = \mathbf{X}\boldsymbol{\beta}$$

• $\boldsymbol{\beta} = \mathbf{X}^- \mathbf{y}$, where $\mathbf{X}^-$ is the Moore-Penrose (M-P) generalized inverse.
  1. Unique solution: $p = N$
  2. Many solutions: $p > N$
  3. No solution: $p < N$

$$\mathbf{X}^- = \begin{cases} \mathbf{X}^T (\mathbf{X}\mathbf{X}^T)^{-1} & (p > N: \ \boldsymbol{\beta} = \mathbf{X}^-\mathbf{y} \text{ minimizes } \|\boldsymbol{\beta}\|) \\ (\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T & (p < N: \ \boldsymbol{\beta} = \mathbf{X}^-\mathbf{y} \text{ minimizes } \|\mathbf{y} - \mathbf{X}\boldsymbol{\beta}\|^2) \end{cases}$$
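numpy.linalg.pinv computes the Moore-Penrose pseudoinverse via the SVD; a brief sketch (an addition, not from the deck) confirming it matches both closed forms when the relevant Gram matrix is invertible:

```python
import numpy as np

rng = np.random.default_rng(4)

# Wide case (more unknowns than equations): minimum-norm solution
X_wide = rng.normal(size=(3, 6))
print(np.allclose(np.linalg.pinv(X_wide),
                  X_wide.T @ np.linalg.inv(X_wide @ X_wide.T)))

# Tall case (more equations than unknowns): least-squares solution
X_tall = rng.normal(size=(6, 3))
print(np.allclose(np.linalg.pinv(X_tall),
                  np.linalg.inv(X_tall.T @ X_tall) @ X_tall.T))
```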
