Scientific Journal Impact Factor (SJIF): 1.711
International Journal of Modern Trends in Engineering and Research
www.ijmter.com
e-ISSN: 2349-9745
p-ISSN: 2393-8161
A Critical Review of Well Known Method For Image Compression
Nidhi A. Sodha, Hiral R. Shah
Department of Computer Engineering,
Noble Engineering College, Junagadh,
Gujarat 362001, India.
ABSTRACT: The growing popularity of and trust in digital photography is increasing its use for visual communication, but digital images require the storage of large quantities of data. Image compression is therefore a key technology for the transmission and storage of digital images. Compressing an image is significantly different from compressing raw binary data. Many techniques are available for image compression, but in some cases they reduce the quality and fidelity of the image. For this reason, two broad classes of techniques are distinguished: lossless and lossy image compression. This paper introduces various compression techniques that are applicable to different fields of image processing.
Keywords: Compression; Image Compression; Lossy Compression; Lossless Compression;
Encoding; Decoding; Redundancy
I. INTRODUCTION
Images are important documents nowadays. In many applications we need to compress images, and how we do so depends on the aim of the application, such as storing images in a database, picture archiving, TV, and videoconferencing. Image compression plays an important role in the transmission and storage of image data because of storage limitations. The main aim of image compression is to represent an image in as few bits as possible without losing the essential information content of the original image.
Many algorithms perform image compression in different ways; some are lossless and some are lossy. Lossless compression keeps exactly the same information as the original image, whereas lossy compression discards some of the original information while compressing the image. Image compression is simply the reduction of the amount of data required to represent an image. After compression we can calculate the compression ratio, which is defined as

Cr = N1 / N2

where N1 and N2 are the amounts of data in the original and compressed images, respectively. A higher compression ratio therefore corresponds to a greater reduction of data [1].
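As a small illustration of this definition, the following Python sketch computes Cr from file sizes on disk; the file names are hypothetical and not taken from the paper.

```python
import os

def compression_ratio(original_path: str, compressed_path: str) -> float:
    """Return Cr = N1 / N2, the ratio of original to compressed size in bytes."""
    n1 = os.path.getsize(original_path)    # N1: bytes in the original image
    n2 = os.path.getsize(compressed_path)  # N2: bytes in the compressed image
    return n1 / n2

# Hypothetical files: a raw bitmap and its compressed counterpart.
# cr = compression_ratio("scene_raw.bmp", "scene_compressed.jpg")
# print(f"Compression ratio Cr = {cr:.2f}")
```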
II. IMAGE COMPRESSION
Image compression is nothing but reducing the amount of data required to represent an image. It is achieved by taking advantage of redundant data: every image contains some redundancy, that is, duplicated data, where some pixels are duplicates of others or some pattern occurs frequently in the image. Compression is achieved when these redundancies are reduced or eliminated. Three basic types of data redundancy are exploited for compression:
1. Inter-Pixel Redundancy
Pixels in an image are not independent; they are correlated with their neighboring pixels. This redundancy can be exploited in many ways; one is to predict the value of a pixel from the values of its neighboring pixels, as sketched below.
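As a minimal sketch of this idea (not a method from the paper), the snippet below predicts each pixel in a row from its left neighbor and stores only the residuals, which cluster near zero for typical images and are therefore cheaper to encode:

```python
import numpy as np

def left_neighbor_residuals(row: np.ndarray) -> np.ndarray:
    """Predict each pixel from its left neighbor and return the residuals.

    The first pixel is kept as-is; every other entry is pixel[i] - pixel[i-1].
    The original row is recovered exactly with np.cumsum, so the step is lossless.
    """
    row = row.astype(np.int16)        # avoid uint8 wrap-around on subtraction
    residuals = np.empty_like(row)
    residuals[0] = row[0]
    residuals[1:] = row[1:] - row[:-1]
    return residuals

row = np.array([100, 101, 103, 103, 104, 200, 201], dtype=np.uint8)
res = left_neighbor_residuals(row)
print(res)              # [100   1   2   0   1  96   1]
print(np.cumsum(res))   # reconstructs the original row exactly
```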
2. Coding Redundancy
Here variable-length code words are used and stored in lookup tables (LUTs); the variable-length code words are chosen to match the statistics of the original image. This technique is always reversible. Huffman coding and arithmetic coding are examples of this technique; a small numerical illustration follows.
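The toy comparison below (an illustration, not the paper's own data) shows why matching code lengths to symbol statistics saves bits compared with a fixed 8-bit code; the particular prefix code is chosen by hand for the example:

```python
from collections import Counter

pixels = [0, 0, 0, 0, 0, 0, 128, 128, 128, 255]   # toy gray levels
freq = Counter(pixels)

# Hand-picked variable-length prefix code: the most frequent value
# receives the shortest codeword (hypothetical, for illustration only).
code = {0: "0", 128: "10", 255: "11"}

fixed_bits = 8 * len(pixels)
variable_bits = sum(freq[v] * len(code[v]) for v in freq)

print(f"fixed-length code   : {fixed_bits} bits")     # 80 bits
print(f"variable-length code: {variable_bits} bits")  # 6*1 + 3*2 + 1*2 = 14 bits
```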
3. Psycho-Visual Redundancy
Many experiments have shown that the human eye does not respond equally to all visual information; in an image, some pieces of information are more important than others. Nowadays most image coding algorithms exploit this type of redundancy.
The main purpose of a compression algorithm is to represent the given data at a lower bit rate [2]. There are several ways to evaluate a compression algorithm: its computational complexity, how much memory is required to implement it, how fast it runs on a given machine, its compression ratio, and how closely the reconstructed image resembles the original. A simple block diagram of an image compression system is shown in Fig. 1. The device that performs the compression task is known as the encoder; in the diagram, the encoder compresses the input image A into the compressed representation B, which is then passed to the decoder through a channel or a storage system. From the compressed data the decoder reconstructs (decompresses) an image C for the application. A real channel may be affected by noise, which would distort the image during transmission; here we assume an error-free channel. For lossless compression C is identical to A, whereas for lossy compression C differs from A.
Figure 1. Block diagram of image compression system
2.1. Lossless Compression
As the name itself indicates, the original image can be perfectly recovered using lossless compression techniques. These techniques are also known as entropy coding or noiseless compression. They introduce no noise into the image, and they use statistical or decomposition techniques to reduce redundancy. They are preferred for medical imaging, technical drawings, and similar applications. The following are some of the methods used for lossless compression:
1. Run-length encoding.
2. Entropy encoding.
3. Huffman encoding.
4. Arithmetic encoding.
1) Run Length Encoding:
Run-length encoding is a very simple form of image compression in which runs of data are stored as a single data value and a count, rather than as the original run. It is used for sequential data and is helpful for repetitive data. The technique replaces sequences of identical symbols (pixels), called runs. The run-length code for a grayscale image is a sequence of pairs {Vi, Ri}, where Vi is the intensity of a pixel and Ri is the number of consecutive pixels with that intensity, as shown in the example below. It is most useful on data that contains many such runs, for example simple graphic images such as icons, line drawings, and animations. It is not useful for files that do not have many runs, as it could greatly increase the file size. Run-length encoding performs lossless image compression [4] and is used in fax machines.
35 35 35 40 40 40 40 70 70 70  →  {35,3} {40,4} {70,3}
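A minimal run-length encoder and decoder matching the {Vi, Ri} notation above might look as follows; this is a sketch of the general idea, not the exact scheme used by any particular standard:

```python
def run_length_encode(pixels):
    """Return a list of (value, run_length) pairs for a 1-D pixel sequence."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1] = (p, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((p, 1))               # start a new run
    return runs

def run_length_decode(runs):
    """Expand (value, run_length) pairs back into the original sequence."""
    return [value for value, count in runs for _ in range(count)]

row = [35, 35, 35, 40, 40, 40, 40, 70, 70, 70]
runs = run_length_encode(row)
print(runs)                             # [(35, 3), (40, 4), (70, 3)]
assert run_length_decode(runs) == row   # lossless round trip
```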
2) Entropy Encoding:
In information theory an entropy encoding is a lossless data compression scheme that is independent
of the specific characteristics of the medium. One of the main types of entropy coding creates and
assigns a unique prefix-free code for each unique symbol that occurs in the input. These entropy
encoders then compress the image by replacing each fixed-length input symbol with the corresponding variable-length prefix-free output codeword.
3) Huffman Encoding:
In computer science and information theory, Huffman coding is an entropy encoding algorithm used for lossless data compression. It was developed by David A. Huffman, and today it is often used as a "back-end" to other compression methods [8]. The term refers
to the use of a variable-length code table for encoding a source symbol where the variable-length
code table has been derived in a particular way based on the estimated probability of occurrence for
each possible value of the source symbol. The pixels in the image are treated as symbols. The
symbols which occur more frequently are assigned a smaller number of bits, while the symbols that
occur less frequently are assigned a relatively larger number of bits. Huffman code is a prefix code.
This means that the (binary) code of any symbol is not the prefix of the code of any other symbol.
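The sketch below builds such a variable-length code table for a small set of pixel values with Python's heapq module; it is a minimal illustration of the Huffman construction described above, not an optimized codec, and the toy pixel list is hypothetical:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Return a dict mapping each symbol to its Huffman codeword (a bit string)."""
    freq = Counter(symbols)
    # Heap entries: (frequency, tie-breaker, {symbol: partial codeword}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate case: only one symbol
        return {s: "0" for s in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

pixels = [35, 35, 35, 40, 40, 40, 40, 70, 70, 70, 90]   # hypothetical pixel values
code = huffman_code(pixels)
print(code)                                  # frequent values get short codewords
bits = sum(len(code[p]) for p in pixels)
print(bits, "bits instead of", 8 * len(pixels))
```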
4) Arithmetic Coding:
Arithmetic coding is a form of entropy encoding used in lossless data compression. Normally, a
string of characters such as the words "hello there" is represented using a fixed number of bits per
character, as in the ASCII code. When a string is converted to arithmetic encoding, frequently used characters are stored with fewer bits and less frequently occurring characters are stored with more bits, resulting in fewer bits used in total. Arithmetic coding differs from other forms of entropy
encoding such as Huffman coding [10] in that rather than separating the input into component
symbols and replacing each with a code, arithmetic coding encodes the entire message into a single
number.
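A toy floating-point sketch of this interval-narrowing idea follows; practical arithmetic coders work with integer arithmetic and renormalization, so this is only an illustration, and the probability table is invented for the example:

```python
def arithmetic_encode(message, probs):
    """Narrow the interval [0, 1) once per symbol and return a number inside it.

    `probs` maps each symbol to its probability. Plain Python floats are used,
    so this toy version only works for short messages before precision runs out.
    """
    # Cumulative probability ranges, e.g. {"h": (0.0, 0.2), "e": (0.2, 0.4), ...}
    ranges, cum = {}, 0.0
    for sym, p in probs.items():
        ranges[sym] = (cum, cum + p)
        cum += p

    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        sym_low, sym_high = ranges[sym]
        high = low + span * sym_high
        low = low + span * sym_low
    return (low + high) / 2   # any value in [low, high) identifies the whole message

probs = {"h": 0.2, "e": 0.2, "l": 0.4, "o": 0.2}   # assumed symbol probabilities
print(arithmetic_encode("hello", probs))            # one number for the entire string
```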
2.2 LOSSY COMPRESSION TECHNIQUES
Lossy schemes achieve much higher compression ratios than lossless schemes. With a lossy scheme the decompressed image is not identical to the original image, but it is reasonably close to it, and such schemes are widely used. Lossy methods are especially suitable for natural images such as photographs, in applications where a minor loss of fidelity is acceptable in exchange for a substantial reduction in bit rate. Lossy compression that produces imperceptible differences may be called visually lossless. The following methods are used in lossy compression:
1. Chroma subsampling
2. Transform coding
3. Fractal Compression
1) Chroma subsampling
Chroma subsampling takes advantage of the fact that the human eye perceives spatial changes in brightness more sharply than changes in color, by averaging or dropping some of the chrominance information in the image; that is, it exploits the human visual system's lower acuity for color differences than for luminance [1]. It is mainly used in video encoding and JPEG encoding. Chroma subsampling stores color information at a lower resolution than intensity information. The overwhelming majority of graphics programs perform 2x2 chroma subsampling, which breaks the image into 2x2 pixel blocks and stores only the average color information for each block. This process introduces two kinds of errors.
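A minimal NumPy sketch of the 2x2 averaging described above, applied to a single chrominance plane whose height and width are assumed to be even:

```python
import numpy as np

def subsample_chroma_2x2(chroma: np.ndarray) -> np.ndarray:
    """Average every 2x2 block of a chrominance plane (4:2:0-style subsampling).

    Assumes the plane has even height and width; returns a half-resolution plane.
    """
    h, w = chroma.shape
    blocks = chroma.astype(np.float32).reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

cb = np.arange(16, dtype=np.uint8).reshape(4, 4)   # a toy 4x4 chroma plane
print(subsample_chroma_2x2(cb))                    # 2x2 plane of block averages
```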
2) Transform coding
Transform coding is a type of compression used for natural data such as photographic images; combined with quantization of the coefficients, it yields a lower-quality approximation of the original image. It is the core technique used in JPEG. Transform coding converts spatial image pixel values into transform coefficient values. Since the transform itself is a linear process in which no information is lost, the number of coefficients produced is equal to the number of pixels transformed. Many types of transforms have been tried for picture coding, including the Fourier, Karhunen-Loève, Walsh-Hadamard, lapped orthogonal, and discrete cosine (DCT) transforms and, more recently, wavelets.
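As a sketch of the transform step alone (quantization and entropy coding, where the actual loss occurs, are omitted), the 8x8 block DCT used by JPEG can be computed with SciPy:

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block: np.ndarray) -> np.ndarray:
    """2-D type-II DCT of a block with orthonormal scaling, as used in JPEG."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs: np.ndarray) -> np.ndarray:
    """Inverse 2-D DCT; without quantization the block is recovered exactly."""
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

block = np.random.randint(0, 256, (8, 8)).astype(np.float32) - 128  # level-shifted pixels
coeffs = dct2(block)
print(np.allclose(idct2(coeffs), block, atol=1e-3))  # True: the transform itself is lossless
```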
3) Fractal Compression
Fractal compression is a lossy compression technique used for digital images. As the name indicates, it is based on fractals. The approach works well for natural images and textures. It relies on the fact that parts of an image often resemble other parts of the same image; the method converts these parts into mathematical data called "fractal codes", which are used to recreate the encoded image.
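A highly simplified sketch of this block-matching idea is given below: each small "range" block is matched against downscaled "domain" blocks by a least-squares contrast/brightness fit, and only the match parameters are stored. Real fractal coders also search rotations and flips and use far faster search strategies; everything here (block sizes, the brute-force search) is an assumption made for illustration.

```python
import numpy as np

def fractal_codes(img: np.ndarray, range_size: int = 4):
    """Return (range_pos, domain_pos, scale, offset) tuples for each range block.

    Assumes a grayscale image whose dimensions are multiples of 2 * range_size.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    d = 2 * range_size                       # domain blocks are twice the range size

    # Downscale each non-overlapping domain block to range size by 2x2 averaging.
    domains = {}
    for y in range(0, h - d + 1, d):
        for x in range(0, w - d + 1, d):
            block = img[y:y + d, x:x + d]
            domains[(y, x)] = block.reshape(range_size, 2, range_size, 2).mean(axis=(1, 3))

    codes = []
    for ry in range(0, h, range_size):
        for rx in range(0, w, range_size):
            r = img[ry:ry + range_size, rx:rx + range_size]
            best = None
            for (dy, dx), dblk in domains.items():
                # Least-squares contrast s and brightness o for r ≈ s * dblk + o.
                dvar = dblk.var()
                s = 0.0 if dvar == 0 else ((dblk - dblk.mean()) * (r - r.mean())).mean() / dvar
                o = r.mean() - s * dblk.mean()
                err = ((s * dblk + o - r) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, (dy, dx), s, o)
            codes.append(((ry, rx), best[1], best[2], best[3]))
    return codes

img = np.random.randint(0, 256, (16, 16))   # toy 16x16 grayscale image
print(len(fractal_codes(img)))              # one fractal code per 4x4 range block
```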
III. CONCLUSION
This paper surveys different image compression techniques, which are mainly classified into two major categories: lossy compression and lossless compression. As their names suggest, lossless techniques decode the image without any loss of information, whereas lossy techniques incur a certain loss of information from the data representing the image. Both have their own applications: lossy compression is used for multimedia data, while lossless compression is used for text or data files such as bank records and text articles. Sometimes it is helpful to keep one lossless master file from which compressed files for different applications can be produced.
REFERENCES
1) Nelson M. The Data Compression Book. 2nd ed. New York: M&T Books, 1995.
2) Khalid S. Introduction to Data Compression. 2nd ed. New York: Elsevier, 2005.
3) David Jeff Jackson and Sidney Joel Hannah, "Comparative Analysis of Image Compression Techniques," System Theory 1993, Proceedings SSST '93, 25th Southeastern Symposium, pp. 513-517, 7-9 March 1993.
4) Tzong-Jer Chen and Keh-Shih Chuang, "A Pseudo Lossless Image Compression Method," IEEE, pp. 610-615, 2010.
5) Mridul Kumar Mathur, Seema Loonker and Dr. Dheeraj Saxena, "Lossless Huffman Coding Technique for Image Compression and Reconstruction Using Binary Trees," IJCTA, pp. 76-79, 2012.
6) V. K. Padmaja and Dr. B. Chandrasekhar, "Literature Review of Image Compression Algorithm," IJSER, Volume 3, pp. 1-6, 2012.
7) David Jeff Jackson and Sidney Joel Hannah, "Comparative Analysis of Image Compression Techniques," System Theory 1993, Proceedings SSST '93, 25th Southeastern Symposium, pp. 513-517, 7-9 March 1993.
8) Anitha S, "2D Image Compression Technique - A Survey," International Journal of Scientific & Engineering Research, Volume 2, Issue 7, pp. 1-7, July 2011.
9) Dr. B. Eswara Reddy and K. Venkata Narayana, "A Lossless Image Compression Using Traditional and Lifting Based Wavelets," Signal & Image Processing: An International Journal (SIPIJ), Vol. 3, No. 2, April 2012.
10) Jagadish H. Pujar and Lohit M. Kadlaskar, "A New Lossless Method of Image Compression and Decompression Using Huffman Coding Techniques," JATIT, pp. 18-22, 2012.
11) S. Sahami and M. G. Shayesteh, "Bi-level image compression technique using neural networks," IET Image Processing, Vol. 6, Iss. 5, pp. 496-506, 2012.
12) S. Dharanidharan, S. B. Manoojkumaar and D. Senthilkumar, "Modified International Data Encryption Algorithm Using in Image Compression Techniques," IJESIT, pp. 186-191, 2013.