Medical Images Compression: JPEG variations for DICOM standard
                                                                                Author: Jose Pablo Pinilla Gomez
DICOM (Digital Imaging and Communications in Medicine) is a standard managed by the Medical
Imaging & Technology Alliance to ensure the interoperability of medical systems, images, documents
and workflows. It was released in 1993 (and updated in 2011) to solve the problem of CT (Computed
Tomography) and MRI (Magnetic Resonance Imaging) images that could not be decoded by any means
other than those provided by the same manufacturer. This discordance between devices distorts
competition among medical equipment manufacturers, since the risk of incompatibility keeps health
care facilities from acquiring alternative machines. Although there are data “translators” to
interconnect systems built on different standards, their implementation increases the total cost
and delay of the system. A standardized format is therefore necessary for the implementation of a PACS
(Picture Archiving and Communication System) that compiles all kinds of acquirable images for
medical records. The image file format specified in the current DICOM standard supports the RLE (Run
Length Encoding), JPEG, JPEG-LS and JPEG 2000 compression and decompression schemes.


RLE is a lossless compression scheme that converts a repeating byte sequence into a two-byte code:
the negative representation of the number of bytes in the run followed by the value of the repeating
byte; this is called a Replica Run. When a sequence of non-repeating bytes is found, the resulting
code has one byte with the positive representation of the number of bytes in the sequence, followed
by the literal sequence (a Literal Run)1. RLE can yield good compression ratios for monochrome
images, but its behaviour is highly variable for RGB and YCbCr images. RLE is therefore often
preceded by other algorithms that improve its performance.
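
As an illustration, the following Python sketch implements the basic PackBits-style scheme described
above. It is only a minimal model of the idea: DICOM's actual RLE codec additionally splits the pixel
data into byte segments and imposes alignment rules that are omitted here.

    def rle_encode(data: bytes) -> bytes:
        """PackBits-style RLE sketch: replica runs become a negative count byte
        plus the repeated value; literal runs become a positive count byte plus
        the literal bytes. Runs are capped at 128 so counts fit in a signed byte."""
        out = bytearray()
        i = 0
        while i < len(data):
            run = 1
            while i + run < len(data) and data[i + run] == data[i] and run < 128:
                run += 1
            if run >= 2:                              # Replica Run
                out.append((-(run - 1)) & 0xFF)       # two's-complement count
                out.append(data[i])
                i += run
            else:                                     # Literal Run
                start = i
                while (i < len(data) and i - start < 128
                       and (i + 1 >= len(data) or data[i + 1] != data[i])):
                    i += 1
                out.append(i - start - 1)             # positive count minus one
                out.extend(data[start:i])
        return bytes(out)

    print(rle_encode(b"\x00" * 10 + b"\x01\x02\x03").hex())   # 'f70002010203'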


The Joint Photographic Experts Group created the JPEG standard in 1992. It relies on a five-step
process to compress pixel data, with a configurable quality percentage as a parameter. Processing
begins by changing the color space from the pixel's RGB (Red, Green and Blue) triplet to YCbCr
(Luminance, Blue-Chrominance and Red-Chrominance) values. Like many other image compression
schemes, JPEG uses the YCbCr color space to preserve the luminance values, which are more relevant
to the human eye, while discarding some of the chrominance information. After decompression, files
with “chroma subsampling” can still display images with almost unnoticeable changes. A
configurable amount of chrominance (hue and saturation) information is discarded by a subsequent
downsampling step in JPEG.
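
The sketch below shows the BT.601-style conversion typically used by JPEG encoders, followed by a
simple 4:2:0 chroma subsampling that averages each 2x2 block of chroma samples; the exact
subsampling factors are an encoder choice rather than something fixed by the standard.

    import numpy as np

    def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
        """Convert an HxWx3 RGB image (0-255) to YCbCr with BT.601/JFIF weights."""
        rgb = rgb.astype(np.float64)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y  =  0.299    * r + 0.587    * g + 0.114    * b
        cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
        cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
        return np.stack([y, cb, cr], axis=-1)

    def subsample_420(chroma: np.ndarray) -> np.ndarray:
        """4:2:0 subsampling: keep one averaged chroma sample per 2x2 pixel block."""
        h, w = chroma.shape
        return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))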


After subsampling, the next step is to divide each channel of the image (Y, Cb and Cr) into 8x8
matrices. Every 8x8 pixel block then goes through a Discrete Cosine Transform (DCT). The DCT
returns 64 values (coefficients), each measuring how strongly one of 64 predefined 8x8 patterns is
present in the block; weighted and summed, those patterns reproduce the original block, or a close
approximation of it. Each predefined pattern is called a “basis function” and is a two-dimensional
pattern built from cosine functions.
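
A compact way to compute the 2-D DCT of an 8x8 block is with the orthonormal DCT matrix, as in the
sketch below; a real encoder would typically level-shift the pixels by -128 first and use a fast
factorized DCT rather than plain matrix products.

    import numpy as np

    N = 8
    # Row u of T holds the u-th 1-D cosine basis vector (orthonormal DCT-II).
    u = np.arange(N).reshape(-1, 1)
    x = np.arange(N).reshape(1, -1)
    T = np.sqrt(2.0 / N) * np.cos((2 * x + 1) * u * np.pi / (2 * N))
    T[0, :] = np.sqrt(1.0 / N)

    def dct2(block: np.ndarray) -> np.ndarray:
        """2-D DCT of an 8x8 block: transform the rows, then the columns."""
        return T @ block @ T.T

    def idct2(coeffs: np.ndarray) -> np.ndarray:
        """Inverse 2-D DCT; T is orthogonal, so its transpose is its inverse."""
        return T.T @ coeffs @ T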


The resulting 8x8 matrix of coefficients is then run through a “Quantization” step, where every
value in the matrix is divided by a corresponding constant derived from predefined quantization
tables and the user's quality setting; the resulting values are rounded to the nearest integers. As
stated in the JPEG standard2, “The purpose of quantization is to achieve further compression by
representing DCT coefficients with no greater precision than is necessary to achieve the desired
image quality. Stated another way, the goal of this processing step is to discard information which is
not visually significant.” The outcome of quantizing the DCT coefficients is that smaller, unimportant
coefficients disappear and larger coefficients lose unnecessary precision.


1   DICOM Standard, Part 5: Data Structures and Encoding. 2011. Annex G, p. 101.
    http://medical.nema.org/Dicom/2011/11_05pu.pdf
2   JPEG Still Picture Compression Standard. 1991. p. 5. http://white.stanford.edu/~brian/psy221/reader/Wallace.JPEG.pdf
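
The sketch below illustrates the idea with the example luminance table from Annex K of the JPEG
specification and a quality scaling in the style of the IJG reference implementation; real encoders
also carry a separate chrominance table.

    import numpy as np

    # Example luminance quantization table from Annex K of the JPEG specification.
    BASE_LUMA_QTABLE = np.array([
        [16, 11, 10, 16,  24,  40,  51,  61],
        [12, 12, 14, 19,  26,  58,  60,  55],
        [14, 13, 16, 24,  40,  57,  69,  56],
        [14, 17, 22, 29,  51,  87,  80,  62],
        [18, 22, 37, 56,  68, 109, 103,  77],
        [24, 35, 55, 64,  81, 104, 113,  92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103,  99],
    ])

    def scaled_qtable(quality: int) -> np.ndarray:
        """Scale the base table for a 1-100 quality setting (IJG-style rule)."""
        scale = 5000 // quality if quality < 50 else 200 - 2 * quality
        return np.clip((BASE_LUMA_QTABLE * scale + 50) // 100, 1, 255)

    def quantize(coeffs: np.ndarray, qtable: np.ndarray) -> np.ndarray:
        """Divide each DCT coefficient by its table entry and round to an integer."""
        return np.round(coeffs / qtable).astype(np.int32)

    def dequantize(quantized: np.ndarray, qtable: np.ndarray) -> np.ndarray:
        """The decoder multiplies back; the rounding error is the information lost."""
        return quantized * qtable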


The final step in JPEG compression is “Coding”, which involves three sub-processes: DC coding, the
Zig-Zag Sequence and entropy coding. The “DC” coefficient, the top-left value in the matrix, is
typically the largest and is a measure of the average value of the 64 image samples. DC coding
replaces every block's DC coefficient with the difference between it and that of the previous block;
these values are often correlated and therefore produce small differences. The Zig-Zag Sequence is a
literal zig-zag reordering of the matrix that exploits the separation between “low-frequency” and
“high-frequency” coefficients; the result is that the non-zero (low-frequency) and zero
(high-frequency) values are grouped together, making the following compression steps more effective.
Entropy coding uses the Huffman method, which can replace the run of zeroes at the end of a block
with a single end-of-block symbol and assigns prefix codes* to the other symbols according to their
frequency of appearance; more frequent symbols are coded into shorter values.
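
The reordering and DC differencing can be sketched as below; Huffman coding of the resulting symbols
is left out for brevity.

    import numpy as np

    def zigzag_order(n: int = 8):
        """(row, col) visiting order of the JPEG zig-zag scan over an n x n block."""
        return sorted(((r, c) for r in range(n) for c in range(n)),
                      key=lambda rc: (rc[0] + rc[1],
                                      rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

    ZIGZAG = zigzag_order()

    def zigzag_scan(block: np.ndarray) -> np.ndarray:
        """Flatten a quantized 8x8 block so low-frequency coefficients come first."""
        return np.array([block[r, c] for r, c in ZIGZAG])

    def dc_differences(dc_values):
        """DPCM over blocks: each block stores its DC delta from the previous block."""
        deltas, prev = [], 0
        for dc in dc_values:
            deltas.append(dc - prev)
            prev = dc
        return deltas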


JPEG's algorithm is very fast, with an average decompression time of 0.9 seconds3, but it is overall a
lossy standard, and this shows in most of its steps. The color space transformation is an irreversible
process because the finite-precision arithmetic used by computers only approximates the real-valued
transform. Subsampling is a straightforward elimination of data, although this step is optional. Block
splitting is not: blocks at the image edges may require “filling” (padding) algorithms to complete the
8x8 matrices, which alters the decompressed result. Next is the DCT, which is not by itself a lossy
process, but forward quantization always is. Quantization is a many-to-one mapping, and therefore is
fundamentally lossy. It is the principal source of lossiness in DCT-based encoders.4


Medical Imaging applications often require high precision so that digital alterations cannot affect a
physician's judgment of radiology results. This does not mean that the compression schemes in
medical PACS have to be lossless, but they do need an appropriate level of fidelity. On the other hand,
fidelity is a very subjective qualification, which is why DICOM also introduced the lossless JPEG
scheme, as well as the lossless/near-lossless JPEG-LS and, later on, JPEG 2000, as alternatives to
the “baseline” JPEG standard5.


Lossless JPEG was added as a JPEG mode of operation in 1993. It replaces the DCT and the other lossy
steps of the original format (quantization, chroma subsampling) with a coding scheme called
Differential Pulse Code Modulation (DPCM). DPCM predicts each sample from its previously coded
neighbours; these predictions are generally accurate, which is why the difference between the
predicted and actual values tends to be small. This variation of the standard therefore stores these
differences and then compresses them even further using the Huffman method. The downside of
lossless JPEG is that it cannot achieve compression ratios comparable to those of baseline JPEG
compression.
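
A minimal sketch of the DPCM stage is shown below, using a few of the seven selectable lossless-JPEG
predictors (a = left neighbour, b = above, c = above-left); boundary initialization is simplified
compared with the standard.

    import numpy as np

    def dpcm_residuals(img: np.ndarray, predictor: int = 4) -> np.ndarray:
        """Residuals (sample minus prediction) for one lossless-JPEG predictor."""
        img = img.astype(np.int32)
        h, w = img.shape
        res = np.zeros_like(img)
        for y in range(h):
            for x in range(w):
                a = img[y, x - 1] if x > 0 else 0                  # left
                b = img[y - 1, x] if y > 0 else 0                  # above
                c = img[y - 1, x - 1] if x > 0 and y > 0 else 0    # above-left
                if predictor == 1:
                    pred = a
                elif predictor == 2:
                    pred = b
                elif predictor == 7:
                    pred = (a + b) // 2
                else:                                              # predictor 4
                    pred = a + b - c
                res[y, x] = img[y, x] - pred
        return res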



*   The prefix property states that no valid code word in the set is a prefix (start) of any other valid code word.
3   JPEG 2000 still image coding versus other standards. 2000. http://www.jpeg.org/public/wg1n1816.pdf
4   Schroeder, Mark D. JPEG Compression Algorithm and Associated Data Structures. 1997.
    http://akbar.marlboro.edu/~mahoney/courses/Fall01/computation/compression/jpeg/jpeg.html
5   The JPEG Committee. Medical Imaging. 2007. http://www.jpeg.org/apps/medical.html
In 1999 the Joint Photographic Experts Group released the JPEG-LS standard as a solution for better
lossless compression. It uses the prediction, RLE and context-modelling (dependence on surrounding
samples) coding techniques of the LOCO-I algorithm created by Hewlett-Packard (HP). This scheme
achieves better compression while keeping the integrity of the information in accordance with the
user's configuration6. The standard is therefore classified as lossless/near-lossless, where
near-lossless refers to a maximum difference between 1 and 5 between the original RGB values and the
decompressed versions7. JPEG-LS takes approximately the same time to decompress as lossless JPEG,
but the compression ratio achieved is better.
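
At the heart of LOCO-I's prediction is the median edge detection (MED) predictor, sketched below;
the context modelling, Golomb coding and near-lossless error quantization that surround it are
omitted.

    def med_predict(a: int, b: int, c: int) -> int:
        """LOCO-I / JPEG-LS median edge detection predictor.

        a = left neighbour, b = neighbour above, c = above-left neighbour.
        It picks a or b when it detects a vertical or horizontal edge and
        falls back to the planar estimate a + b - c otherwise."""
        if c >= max(a, b):
            return min(a, b)
        if c <= min(a, b):
            return max(a, b)
        return a + b - c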


The latest standard added to DICOM was JPEG 2000, which, in addition to having a lossless mode,
includes a set of features that are advantageous in Medical Imaging applications, such as regions of
interest, scalability (low-resolution previewing) and extensive metadata. The differences between
the baseline JPEG and JPEG 2000 algorithms reflect a new approach in which the objective of higher
compression is always accompanied by a lossless alternative. That is, the color space transformation
can be either irreversible or reversible, the reversible version using integer arithmetic to avoid
rounding errors; instead of block splitting, JPEG 2000 uses variable-size “tiles” that remove the
undesired block filling; the DCT is replaced by a Wavelet Transform (WT) with lossy and lossless
variations, the lossless variation again avoiding quantization; and a coding algorithm runs
afterwards. The WT differs from the DCT in that, instead of cosine functions, it uses another kind
of oscillating function called a wavelet. A wavelet's amplitude starts from zero, reaches its
maximum value and then fades back to zero, creating a “frequency pulse”. The effect of this change
in the compression algorithm is that the transform's coefficients are centred around zero (and
therefore easily coded), with only a few large coefficients. Nevertheless, JPEG 2000's algorithms
make image compression and decompression relatively slow, taking an average of 4.3 seconds to
decompress an image.
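
As an illustration of the reversible path, the sketch below applies one level of the integer
LeGall 5/3 lifting transform in one dimension, the wavelet JPEG 2000 uses for lossless coding (the
lossy path uses the CDF 9/7 wavelet). JPEG 2000 applies it separably on rows and columns and
recursively on the low-pass band, with proper symmetric boundary extension rather than the simple
clamping used here.

    def lift_53(signal):
        """One level of the reversible 5/3 lifting transform (1-D sketch).

        Returns (lowpass, highpass); integer arithmetic makes it exactly invertible."""
        x = list(signal)
        n = len(x)
        xe = lambda i: x[min(max(i, 0), n - 1)]              # clamped extension
        # Predict step: detail coefficients at odd positions.
        d = [xe(2 * k + 1) - (xe(2 * k) + xe(2 * k + 2)) // 2
             for k in range(n // 2)]
        de = lambda k: d[min(max(k, 0), len(d) - 1)]
        # Update step: smoothed coefficients at even positions.
        s = [xe(2 * k) + (de(k - 1) + de(k) + 2) // 4
             for k in range((n + 1) // 2)]
        return s, d

    lo, hi = lift_53([10, 12, 13, 11, 9, 8, 8, 30])
    print(lo, hi)   # most detail coefficients cluster around zero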


Although the DICOM standard limits image compression to the JPEG file formats (plus RLE), a wide
range of possibilities arises from the different JPEG versions and their quality configuration
capabilities; in addition, the JPEG committee is now collaborating with DICOM to further improve this
integration so that Medical Imaging needs are taken into consideration8. Still, when it comes to
image quality there is a lot of subjectivity in defining how lossy an image may be, given that
lossless algorithms cannot yield compression results as good as lossy schemes. The decision to
implement a lossy or lossless scheme, and how lossy it may be, therefore turns into a decision about
which features and what performance the application requires.


Lossless and near-lossless standards with good speed, compression ratios and new features have the
Medical Imaging industry as one of their targets. However, improving one of those features usually
means sacrificing performance in another. For example, the average compression ratios for the
lossless configurations of JPEG (2.09), JPEG-LS (2.98) and JPEG 2000 (2.50) differ, making JPEG-LS
the best choice for lossless compression, also because of its fast performance. If a certain amount of
quality can be compromised (near-lossless), the JPEG algorithm will give better results in
considerably less time. JPEG 2000, however, offers the “Medical-Imaging-oriented” features and can
outperform JPEG by matching its compression ratios at more quality settings, although this
superiority is toned down by the amount of time that it takes to decompress.



6   The JPEG Committee. Lossless JPEG. JPEG-LS. 2007. http://www.jpeg.org/jpeg/jpegls.html
7   Lossless, Near-lossless and Lossy Compression of PTM Images. HP. 2001.
    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.1635&rep=rep1&type=pdf
8   The JPEG Committee. Medical Imaging. 2007. http://www.jpeg.org/apps/medical.html
