The Rényi entropy and
the uncertainty relations in quantum mechanics



               Iwo Bialynicki-Birula
           Center for Theoretical Physics
                  Warsaw, Poland


www.cft.edu.pl/~birula



                   November 2008
Heisenberg uncertainty relation

The standard formulation of the uncertainty relations for the
measurements of the position and momentum has the following form

$$\Delta x\,\Delta p \ge \frac{\hbar}{2}$$

$\Delta x$ and $\Delta p$ are the mean standard deviations

$$\Delta a = \sqrt{\langle(a - \langle a\rangle)^2\rangle}$$

$$(\Delta x)^2 = \int dx\,\rho(x)\,(x - \langle x\rangle)^2 \qquad (\Delta p)^2 = \int dp\,\tilde\rho(p)\,(p - \langle p\rangle)^2$$

This form of the uncertainty relation is simple to prove

The proof requires only the Schwarz inequality
and is usually included in the first course on quantum mechanics
Critique of the standard uncertainty relations

The mean standard deviation ∆a (dispersion) is widely used
in the statistical analysis of experiments that measure a
It is a reasonable measure of the spread when (as in measurements)
the distribution in question is of a simple "hump" type
For example, it is a very good characteristic of a Gaussian distribution
since it determines the half-width of this distribution
However, when the distribution of the values has more than one hump
the standard deviation loses some of its usefulness
especially in connection with the notion of uncertainty

         Examples of two states of a particle in one dimension
The first state describes a particle with a uniformly distributed probability
in a box of total length L
The second state describes a particle localized with equal probabilities
in two smaller boxes, each of length L/4
The probability distributions corresponding to these two states
  F=flat and C=clustered, are graphically represented as follows:


[Diagram: the box divided into four quarters I, II, III, IV;
state F is spread uniformly over I–IV, state C occupies only the outer quarters I and IV]
$$\text{State F:}\qquad \rho = \begin{cases} \dfrac{1}{L} & \text{inside the box,}\\[4pt] 0 & \text{outside the box,}\end{cases}$$

$$\text{State C:}\qquad \rho = \begin{cases} \dfrac{2}{L} & \text{inside the boxes,}\\[4pt] 0 & \text{outside the boxes.}\end{cases}$$
  In which case, F or C, is the uncertainty in the position greater?
According to our intuition, the uncertainty is greater in the case F
  In the case C we know more about the position; we know that the
particle is not in the regions II and III. However, when we calculate the
          standard deviation ∆x we obtain the opposite result:

Case F

$$\Delta x_F = \frac{L}{\sqrt{12}}$$

Case C

$$\Delta x_C = \sqrt{\frac{7}{4}}\,\frac{L}{\sqrt{12}}$$
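
A quick numerical cross-check (my addition, not part of the original slides): discretizing both densities on a grid reproduces the two values and confirms that the Δx criterion ranks the clustered state C as more uncertain.

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 100_001)
dx = x[1] - x[0]

rho_F = np.full_like(x, 1.0 / L)                                 # flat over the whole box
rho_C = np.where((x < L / 4) | (x >= 3 * L / 4), 2.0 / L, 0.0)   # the two outer quarters

for name, rho in (("F", rho_F), ("C", rho_C)):
    rho = rho / np.sum(rho * dx)                    # renormalize on the grid
    mean = np.sum(x * rho * dx)
    var = np.sum((x - mean) ** 2 * rho * dx)
    print(name, np.sqrt(var))
# F: 0.2887 = L/sqrt(12)   C: 0.3819 = sqrt(7/4)*L/sqrt(12)
```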
The second, somewhat more dramatic, example of a situation where
the standard deviation does not give a sensible measure of uncertainty
is provided by the following probability distribution.
                                State D




[Diagram: region I of size L(1 − 1/N), then a gap of distance NL, then region II of size L/N]




The probability density is constant in two regions I and II
separated by a large distance NL (N is a large number)
The region I has the size L(1 − 1/N) and the distant region II
has the size L/N
$$\text{State D:}\qquad \rho = \begin{cases} \dfrac{1}{L} & \text{in region I,}\\[4pt] \dfrac{1}{L} & \text{in region II,}\\[4pt] 0 & \text{elsewhere.}\end{cases}$$
For large N the standard deviation ∆x is approximately equal to:

Case D

$$\Delta x_D \sim \sqrt{1 + 12N}\,\frac{L}{\sqrt{12}}$$
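
A short sketch (my own check, placing region I at the origin and region II a distance NL away, as in the diagram): the exact standard deviation and the approximate formula above agree for large N.

```python
import numpy as np

L = 1.0

def uniform_moments(u, v):
    """Mass and first two moments of the constant density 1/L on [u, v]."""
    return (v - u) / L, (v**2 - u**2) / (2 * L), (v**3 - u**3) / (3 * L)

for N in (10, 100, 1000):
    mI, m1I, m2I = uniform_moments(0.0, L * (1 - 1 / N))        # region I
    mII, m1II, m2II = uniform_moments(N * L, N * L + L / N)     # region II, a distance NL away
    mean = m1I + m1II
    std = np.sqrt(m2I + m2II - mean**2)
    print(N, mI, std, np.sqrt(1 + 12 * N) * L / np.sqrt(12))
# the mass in region I approaches 1, yet the standard deviation grows like L*sqrt(N)
```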
   This expression tends to infinity with N in spite of the fact that
   the probability of finding the particle in the region I tends to 1

This example shows clearly what is wrong with the standard deviation
It gets very large contributions from distant regions
because these contributions enter with a large weight:
namely, the square of the distance from the mean value.
Information entropy as a measure of uncertainty

The best measure of information was introduced in 1948 by Shannon
in his paper "A Mathematical Theory of Communication"
The Shannon entropy closely resembles the physical entropy


$$H = -\sum_i p_i \ln p_i$$

where $p_i$ is the probability of the occurrence of the i-th event
or the a priori probability of the i-th message
In information theory one uses logarithms to base 2
H is then measured in bits
but a change of the base results only in a change of the scale

The information entropy may serve as a precise measure of uncertainty
   It has even been described already by Shannon as a measure of
                “information, choice or uncertainty”
H may indeed serve at the same time as a measure of information and
as a measure of uncertainty, because the two notions are directly related

              The information gained in the measurement
              removes the uncertainty about its outcome

           The information uncertainty is in many ways
  a much better measure of uncertainty than the standard deviation

The Shannon entropy has a simple intuitive interpretation
The average number of questions
needed to discover "the truth" (that is, to remove the uncertainty)
hidden in one of the boxes with the probability distribution $p_i$
is bounded from below by H
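
To illustrate (my addition), coarse-graining the earlier states F and C over the four quarter-boxes gives entropies that rank the two states the way intuition does, where Δx failed.

```python
import numpy as np

def shannon(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                              # convention: 0*ln(0) = 0
    return -np.sum(p * np.log(p))

# probabilities of finding the particle in the quarter-boxes I, II, III, IV
H_F = shannon([0.25, 0.25, 0.25, 0.25])       # flat state:      ln 4 ≈ 1.386
H_C = shannon([0.5, 0.0, 0.0, 0.5])           # clustered state: ln 2 ≈ 0.693
print(H_F, H_C)   # H_F > H_C: state F is more uncertain, exactly as intuition demands
```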
To apply the Shannon measure of uncertainty to quantum mechanics
 we must discretize the distributions of the position and momentum
Histograms

The probability density to find a quantum particle at x is given by
the modulus squared of the wave function $\rho(x) = |\psi(x)|^2$
The probability density to find the momentum p is given by
the modulus squared of the Fourier transform $\tilde\rho(p) = |\tilde\psi(p)|^2$

 In order to have a discrete set of probabilities we introduce the
    finite resolutions (bins) for the measurements of x and p
 To distinguish them from the statistical dispersions ∆x and ∆p
                they will be denoted by δx and δp

The probabilities $q_l$ and $p_k$ to find the values
of x and p in the l-th and the k-th bin are

$$q_l = \int_{x_l}^{x_l+\delta x} dx\,\rho(x) \qquad p_k = \int_{p_k}^{p_k+\delta p} dp\,\tilde\rho(p)$$

The values of $q_l$ and $p_k$ represent a histogram of measurements
Following D. Deutsch (1983) and H. M. Partovi (1983) we can
formulate the uncertainty relations between the positions and momenta
that restrict the available information about x and p measured by the
Shannon entropies $H^{(x)} = -\sum_l q_l \ln q_l$ and $H^{(p)} = -\sum_k p_k \ln p_k$
The best known (but not sharp) bound expressing
the uncertainty relations (I. Bialynicki-Birula 1984) is

$$H^{(x)} + H^{(p)} \ge 1 - \ln 2 - \ln\frac{\delta x\,\delta p}{h}$$


[Plot: the right-hand side (RHS) of the bound as a decreasing function of δx δp/h]
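
A numerical sketch (my addition): with ℏ = 1 and a Gaussian test state, the bin probabilities $q_l$ and $p_k$ follow from the error function, and the bound can be checked at several resolutions.

```python
import numpy as np
from math import erf, sqrt, log, pi

hbar = 1.0          # units with hbar = 1
sigma = 1.0         # width of the Gaussian test state psi(x) ~ exp(-x^2/(2*sigma^2))

def bin_probs(std, delta, nbins=4001):
    """Probabilities of a zero-mean normal density (standard deviation std) in bins of width delta."""
    edges = (np.arange(nbins + 1) - nbins / 2) * delta
    cdf = np.array([0.5 * (1.0 + erf(e / (std * sqrt(2.0)))) for e in edges])
    return np.diff(cdf)

def shannon(p):
    p = p[p > 1e-300]
    return float(-np.sum(p * np.log(p)))

for delta in (0.5, 0.1, 0.02):
    dx = dp = delta                                    # equal resolutions
    q = bin_probs(sigma / sqrt(2.0), dx)               # rho(x) has variance sigma^2/2
    pk = bin_probs(hbar / (sigma * sqrt(2.0)), dp)     # rho~(p) has variance hbar^2/(2*sigma^2)
    lhs = shannon(q) + shannon(pk)
    rhs = 1.0 - log(2.0) - log(dx * dp / (2.0 * pi * hbar))   # h = 2*pi*hbar
    print(f"dx=dp={delta}: H(x)+H(p) = {lhs:.4f} >= {rhs:.4f}")
# the bound holds at every resolution and is approached as the bins get finer
```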
The Rényi entropy

Is there an uncertainty relation expressed in terms of the Rényi entropy?


The Rényi entropy is a one-parameter extension of the Shannon entropy

$$H_\alpha = \frac{1}{1-\alpha}\ln\Big(\sum_k p_k^\alpha\Big)$$

The limit α → 1 produces the Shannon entropy; since $\sum_k p_k = 1$,

$$\frac{d}{d\alpha}\ln\Big(\sum_k p_k^\alpha\Big)\bigg|_{\alpha=1} = \sum_k p_k \ln p_k$$

so l'Hôpital's rule applied to the 0/0 limit gives $H_1 = -\sum_k p_k \ln p_k$
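
A minimal sketch of the definition and of the α → 1 limit (my addition):

```python
import numpy as np

def renyi(p, alpha):
    """H_alpha = ln(sum_k p_k^alpha) / (1 - alpha); Shannon entropy in the limit alpha -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(alpha - 1.0) < 1e-12:
        return -np.sum(p * np.log(p))         # the alpha -> 1 limit
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
for a in (0.5, 0.9, 0.99, 0.999, 1.0, 2.0):
    print(a, renyi(p, a))
# the values approach the Shannon entropy 1.75*ln(2) ≈ 1.2130 as alpha -> 1
```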
The Rényi entropy has many applications in information theory (coding and
cryptography), physics (particle production, diffusion), signal analysis,
nonlinear dynamics, fractals, etc. (150 000 hits in Google)
The Rényi entropy shares many properties with the Shannon entropy

• $H_\alpha$ is nonnegative
• $H_\alpha$ vanishes when the uncertainty is the smallest (only one $p_k$ is different from zero)
• $H_\alpha$ takes on the maximal value ln n when all n probabilities are equal
• $H_\alpha$ is extensive — it satisfies the following condition: if $p_{ij}$ is a product of
  two probability distributions $p_{ij} = p_i\,p'_j$ then

$$H_\alpha = \frac{1}{1-\alpha}\ln\Big(\sum_{ij} p_{ij}^\alpha\Big) = \frac{1}{1-\alpha}\ln\Big(\sum_i p_i^\alpha \sum_j {p'_j}^\alpha\Big) = H_\alpha + H'_\alpha$$

The Rényi entropies for the position and momentum are easily defined using
the same probabilities $p_k$ and $q_l$ as for the Shannon entropy

$$H_\alpha^{(x)} = \frac{1}{1-\alpha}\ln\sum_l q_l^\alpha \qquad H_\alpha^{(p)} = \frac{1}{1-\alpha}\ln\sum_k p_k^\alpha$$

What are the limitations on the values of $H_\alpha^{(x)}$ and $H_\alpha^{(p)}$?
Brief mathematical interlude
The (q, q′)-norm of T is the smallest k(q, q′) such that

$$\|T\psi\|_{q'} \le k(q, q')\,\|\psi\|_q \qquad \frac{1}{q} + \frac{1}{q'} = 1 \qquad q' \ge q$$

$$\|\psi\|_q = \left(\int dx\,|\psi(x)|^q\right)^{\frac{1}{q}}$$

K. I. Babenko (1961) and W. Beckner (1975)
found k(q, q′) for the Fourier transformation
Choosing the physical normalization of $\tilde\psi$ we obtain

$$\tilde\psi(p) = \frac{1}{\sqrt{2\pi\hbar}}\int dx\, e^{-ipx/\hbar}\,\psi(x)$$

$$\|\tilde\psi\|_{q'} \le \hbar^{\frac{1}{q'}-\frac{1}{2}}\left(\frac{q}{2\pi}\right)^{\frac{1}{2q}}\left(\frac{q'}{2\pi}\right)^{-\frac{1}{2q'}}\|\psi\|_q \qquad (I)$$

This inequality is saturated by all Gaussian wave functions
It is convenient to change q′ → 2α and q → 2β

$$\frac{1}{\alpha} + \frac{1}{\beta} = 2 \qquad \frac{\alpha}{\alpha-1} = \frac{\beta}{1-\beta} \qquad \alpha > 1 \implies \beta < 1$$

and rewrite the norms in the inequality (I) as follows

$$\|\tilde\psi\|_{2\alpha} = \left(\int dp\,|\tilde\rho(p)|^\alpha\right)^{\frac{1}{2\alpha}} \qquad \|\psi\|_{2\beta} = \left(\int dx\,|\rho(x)|^\beta\right)^{\frac{1}{2\beta}}$$

$$\rho(x) = |\psi(x)|^2 \quad \text{and} \quad \tilde\rho(p) = |\tilde\psi(p)|^2$$

are quantum mechanical probability densities

After squaring both sides of the inequality (I) we obtain (for α > β)

$$\|\tilde\rho\|_\alpha \le n(\alpha, \beta)\,\|\rho\|_\beta \qquad n(\alpha, \beta) = \hbar^{\frac{1}{\alpha}-1}\left(\frac{\alpha}{\pi}\right)^{-\frac{1}{2\alpha}}\left(\frac{\beta}{\pi}\right)^{\frac{1}{2\beta}} \qquad (II)$$
To make use of this mathematical inequality in the proof
of the physical uncertainty relations we have to
introduce into this analysis the probabilities $p_k$ and $q_l$
This is done by splitting the integrals into bins

$$\int dp\,(\tilde\rho(p))^\alpha = \sum_k \int_{p_k}^{p_k+\delta p} dp\,(\tilde\rho(p))^\alpha$$

and using the famous property of convex functions:
The value of the function at the average point
cannot exceed the average value of the function

In the present case I need a somewhat more general version of this
statement in the form of an integral Jensen inequality

$$\Phi\left(\int_D dx\, h(x)\,w(x)\right) \le \int_D dx\,\Phi(h(x))\,w(x)$$

which is valid for a convex function Φ, an arbitrary function h(x) and
a nonnegative weight function w(x) normalized to 1 ($\int_D dx\,w(x) = 1$)
The power $x^\alpha$ is a convex function for α > 1, hence

$$\left(\frac{p_k}{\delta p}\right)^{\alpha} = \left(\frac{\int_{p_k}^{p_k+\delta p} dp\,\tilde\rho(p)}{\delta p}\right)^{\alpha} \le \frac{\int_{p_k}^{p_k+\delta p} dp\,(\tilde\rho(p))^\alpha}{\delta p}$$

$$\delta p^{\,1-\alpha}\sum_k p_k^\alpha \le \int dp\,(\tilde\rho(p))^\alpha = (\|\tilde\rho\|_\alpha)^\alpha \qquad (A)$$

For concave functions we have: The average value of the function
cannot exceed the value of the function at the average point

$$(\|\rho\|_\beta)^\beta = \int dx\,(\rho(x))^\beta \le \delta x^{\,1-\beta}\sum_l q_l^\beta \qquad (B)$$

Raising (A) to the power 1/α and (B) to the power 1/β
and then using the basic inequality (II) we obtain

$$\delta p^{\frac{1-\alpha}{\alpha}}\left(\sum_k p_k^\alpha\right)^{\frac{1}{\alpha}} \le n(\alpha, \beta)\,\delta x^{\frac{1-\beta}{\beta}}\left(\sum_l q_l^\beta\right)^{\frac{1}{\beta}}$$

$$\frac{1-\alpha}{\alpha}\ln(\delta p) + \frac{1}{\alpha}\ln\sum_k p_k^\alpha \le \frac{1-\beta}{\beta}\ln(\delta x) + \ln n(\alpha, \beta) + \frac{1}{\beta}\ln\sum_l q_l^\beta$$
Final form of the uncertainty relation
From the last inequality, with the use of
α/(α − 1) = β/(1 − β), after some work we finally obtain

$$H_\beta^{(x)} + H_\alpha^{(p)} \ge -\frac{1}{2}\left(\frac{\ln\alpha}{1-\alpha} + \frac{\ln\beta}{1-\beta}\right) - \ln 2 - \ln\frac{\delta x\,\delta p}{h}$$

In the limit when α → 1 also β → 1
With the help of the l'Hôpital rule one obtains

$$\lim_{\alpha\to 1}\frac{\ln\alpha}{1-\alpha} = -1$$

The inequality for Rényi entropies reduces to the uncertainty relation
for the Shannon entropies proved by me twenty years ago

$$H^{(x)} + H^{(p)} \ge 1 - \ln 2 - \ln\frac{\delta x\,\delta p}{h}$$
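
A numerical check of the final bound (my addition, using the same Gaussian test state and binning as in the earlier Shannon sketch, with the conjugate pair β = α/(2α − 1)):

```python
import numpy as np
from math import erf, sqrt, log, pi

hbar = 1.0

def bin_probs(std, delta, nbins=4001):
    edges = (np.arange(nbins + 1) - nbins / 2) * delta
    cdf = np.array([0.5 * (1.0 + erf(e / (std * sqrt(2.0)))) for e in edges])
    return np.diff(cdf)

def renyi(p, a):
    p = p[p > 0]
    return log(np.sum(p ** a)) / (1.0 - a)

dx = dp = 0.1
q = bin_probs(1.0 / sqrt(2.0), dx)     # position bins of a Gaussian with sigma = 1
pk = bin_probs(1.0 / sqrt(2.0), dp)    # momentum bins (hbar = sigma = 1)
for alpha in (1.5, 2.0, 3.0):
    beta = alpha / (2.0 * alpha - 1.0)          # conjugate index: 1/alpha + 1/beta = 2
    lhs = renyi(q, beta) + renyi(pk, alpha)
    rhs = (-0.5 * (log(alpha) / (1.0 - alpha) + log(beta) / (1.0 - beta))
           - log(2.0) - log(dx * dp / (2.0 * pi * hbar)))
    print(f"alpha={alpha}, beta={beta:.3f}: {lhs:.4f} >= {rhs:.4f}")
```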
Uncertainty relations for N-level systems
For quantum systems described by vectors in the N-dimensional Hilbert
space the analog of the uncertainty relation for the Rényi entropies is

$$\frac{1}{1-\alpha}\ln\left(\sum_{k=1}^{N} \tilde\rho_k^{\,\alpha}\right) + \frac{1}{1-\beta}\ln\left(\sum_{l=1}^{N} \rho_l^{\,\beta}\right) \ge \ln N,$$

where $\tilde\rho_k = |\tilde a_k|^2$, $\rho_l = |a_l|^2$ and the amplitudes $\tilde a_k$ and $a_l$ are
connected by the discrete Fourier transformation

$$\tilde a_k = \frac{1}{\sqrt{N}}\sum_{l=1}^{N} \exp(2\pi i k l/N)\, a_l.$$

This uncertainty relation is saturated by the states that
are either localized in "position space"
(exactly one of the amplitudes $a_l$ is different from zero)
or localized in "momentum space"
(exactly one of the amplitudes $\tilde a_k$ is different from zero)
The bound does not depend on α and β
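
A NumPy sketch (my addition; I assume the same conjugate pair β = α/(2α − 1) as in the derivation above, which the slide does not state explicitly):

```python
import numpy as np

N = 8
alpha = 2.0
beta = alpha / (2.0 * alpha - 1.0)        # assumed conjugate pair: 1/alpha + 1/beta = 2

def renyi(p, a):
    p = p[p > 1e-15]
    return np.log(np.sum(p ** a)) / (1.0 - a)

def entropy_sum(a_vec):
    a_vec = a_vec / np.linalg.norm(a_vec)
    a_tilde = np.fft.fft(a_vec) / np.sqrt(N)   # unitary DFT (the sign convention does not affect |a~_k|^2)
    return renyi(np.abs(a_tilde) ** 2, alpha) + renyi(np.abs(a_vec) ** 2, beta)

localized = np.zeros(N, dtype=complex)
localized[3] = 1.0                                 # exactly one nonzero amplitude
rng = np.random.default_rng(0)
generic = rng.normal(size=N) + 1j * rng.normal(size=N)

print(entropy_sum(localized), np.log(N))           # equal: the bound is saturated
print(entropy_sum(generic) >= np.log(N) - 1e-12)   # True for a generic state
```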
Uncertainty relations for mixed states
The uncertainty relations for the Rényi entropies also
hold for all mixed states

This result is not obvious because the Rényi entropy is
not a convex function of the probability distributions for all values of α

The terms on the left-hand side of the uncertainty relation
may decrease as a result of mixing

In order to extend our results to mixed states
one has to go back to the original inequalities that were
used in the proof and show that they still hold
Uncertainty relations for continuous distributions
There also exist purely mathematical versions of
the uncertainty relations that do not involve
the experimental resolutions δx and δp of the measuring devices

$$\frac{1}{1-\alpha}\ln\int_{-\infty}^{\infty} dp\,(\tilde\rho(p))^\alpha + \frac{1}{1-\beta}\ln\int_{-\infty}^{\infty} dx\,(\rho(x))^\beta \ge -\frac{1}{2(1-\alpha)}\ln\frac{\alpha}{\pi} - \frac{1}{2(1-\beta)}\ln\frac{\beta}{\pi}$$

On the left-hand side of this inequality we have what might be called
the continuous or integral versions of the Rényi entropies
In order to derive this purely mathematical inequality, which includes
no reference to the finite resolution of the physical measurements,
I have dropped ℏ in the definition of the Fourier transform
This inequality has been independently proven by Zozor and Vignat
Analogous relations for the continuous Tsallis entropies
for x and p were obtained by Rajagopal
Uncertainty relations for continuous Shannon entropies
Assuming that the distributions ρ(x) and $\tilde\rho(p)$ are normalized,
in the limit when α → 1, β → 1 the formula for
the entropic uncertainty relation in terms of the Rényi entropies
yields the classic uncertainty relation for
the continuous Shannon entropies

$$-\int_{-\infty}^{\infty} dp\,\tilde\rho(p)\ln\tilde\rho(p) - \int_{-\infty}^{\infty} dx\,\rho(x)\ln\rho(x) \ge \ln(e\pi)$$

This inequality had been conjectured by Hirschman (1957)
and was later proved by Bialynicki-Birula and Mycielski (1975)
and by Beckner (1975)
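
A one-line check (my addition, using the closed form ½ ln(2πev) for the differential entropy of a normal density of variance v): the minimum-uncertainty Gaussian saturates this bound.

```python
import numpy as np

# differential entropy of a zero-mean normal density with variance v: 0.5*ln(2*pi*e*v)
def h_normal(v):
    return 0.5 * np.log(2.0 * np.pi * np.e * v)

# for the Gaussian wave packet psi(x) = pi**(-1/4) * exp(-x**2/2) (hbar dropped),
# both rho(x) and rho~(p) are normal densities with variance 1/2
print(h_normal(0.5) + h_normal(0.5), np.log(np.e * np.pi))   # equal: the bound is saturated
```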
References

1. I. Bialynicki-Birula, "Rényi entropy and the uncertainty relations", in Foundations of Probability and Physics, edited by G. Adenier, C. A. Fuchs, and A. Yu. Khrennikov (American Institute of Physics, Melville, 2007).

2. I. Bialynicki-Birula, "Formulation of the uncertainty relations in terms of the Rényi entropies", Phys. Rev. A 74, 052101 (2006).

Both references can be found on my home page: www.cft.edu.pl/~birula
Summary
I have shown that the Rényi entropies for canonically conjugate
physical variables obey inequalities that have
the interpretation of uncertainty relations

In order to use the Rényi entropies in their standard form
(i.e. using a discrete set of probabilities), I had to introduce
the finite resolutions δx and δp of the measuring devices

The resulting new uncertainty relations for the position and momentum
acquire an additional meaning not found in their standard version

The more accurately we wish to measure x and p
the stronger the limitations imposed by the uncertainty relation

The lower limit

$$-\frac{1}{2}\left(\frac{\ln\alpha}{1-\alpha} + \frac{\ln\beta}{1-\beta}\right) - \ln\frac{\delta x\,\delta p}{\pi\hbar}$$

increases
