Compression of Compound Images Using Wavelet Transform
P.S.Jagadeesh Kumar
University of Cambridge, United Kingdom
Abstract
Compression of compound images is essential for remote control software such as virtual network computing (VNC), which allows a person at a remote computer to view and interact with another computer across the network as if sitting in front of it. A smart display device acts as a portable screen with an 802.11b wireless connection to a nearby desktop PC, enabling people to surf the web or browse pictures stored on the desktop PC. The challenge is that a huge amount of real-time computer screen video data must be transmitted over cable or wireless networks: one 800×600 frame of true-colour screen image has a size of 1.44 MB, and 85 frames/sec produces more than 100 MB of data per second. In this paper, wavelet sub-band coding based lossless compression of compound images is discussed, and the implementation shows excellent visual quality of text in compressed computer screen images.
Keywords
Compound image, Computer screen image, Image compression, Wavelet sub-band coding, Image segmentation
I. Introduction
Digital Image Processing is the use of computer algorithms to
perform image processing on digital images. Digital image
processing has many advantages over analog image processing.
It allows a much wider range of algorithms to be applied to the
input data and can avoid problems such as the build-up of noise
and signal distortion during processing. Since images are defined
over two dimensions, digital image processing may be modelled
in the form of multidimensional systems. It allows the use of
much more complex algorithms for image processing and can
offer both more sophisticated performance at simple tasks, and
the implementation of methods which would be impossible by
analog means.
II. Compound Image
A compound image is an image that contains data of various
types such as text, graphics and pictures. Each of these data
types has different statistical properties and is characterized by a different level of distortion that a human observer can notice. The sensitivity of human eyes to natural images and to text is different. The quality requirement of compound image coding differs from general image coding because users cannot accept the quality if text is not clear enough to recognize. Most of the efforts in image compression until now have been in developing new algorithms that achieve better compression at the cost of a considerable increase in complexity. A classroom setup with wireless projectors and a presentation computer, providing the flexibility to sit anywhere in the room without a cable connection to the laptop, is another important application of compound image compression.
III. Image Compression
The objective of image compression is to reduce irrelevance and redundancy of the image data in order to be able to store or transmit data in an efficient form. There are two types:
1. Lossless compression
2. Lossy compression
A. Lossless compression
Lossless compression is preferred for archival purposes and often
for medical imaging, technical drawings, clip art, or comics. This
is because lossy compression methods, especially when used
at low bit rates, introduce compression artifacts. Lossless data
compression is used in many applications. For example, it is used
in the ZIP file format and in the UNIX tool gzip. It is also often
used as a component within lossy data compression technologies.
Typical examples are executable programs, text documents and
source code. Some image file formats, like PNG or GIF, use only lossless compression, while others, like TIFF and MNG, may use either lossless or lossy methods.
B. Lossy compression
Lossy methods are especially suitable for natural images such
as photographs in applications where minor loss of fidelity is
acceptable to achieve a substantial reduction in bit rate. The
lossy compression that produces imperceptible differences may
be called visually lossless. The procedure aims to minimize the
amount of data that needs to be held, handled, and/or transmitted
by a computer. Typically, a substantial amount of data can be
discarded before the result is sufficiently degraded to be noticed
by the user. Lossy compression is most commonly used to
compress multimedia data (audio, video, still images), especially
in applications such as streaming media and internet telephony.
By contrast, lossless compression is required for text and data
files, such as bank records, text articles, etc.
IV. Block Classification
Bandwidth is a very important limiting factor in the application of image segmentation. Several segmentation schemes require morphological analysis of the different regions and multiple passes over the image being segmented. However, each pass normally requires loading data from slow to fast memory, which is a slow process. Segmentation solutions based on multiple passes are therefore much slower or more costly than what can be expected by, for instance, counting the number of operations. Thus, an ideal solution would use a single pass to decide on the type of each image region. Such a solution would be very difficult with arbitrary shapes of segmentation regions, but it is feasible if only a pre-defined shape is considered. For example, rectangular blocks decide the image type based only on the properties of the pixels inside the block. Such techniques are called block classification. This technique is theoretically sub-optimal, since it must classify all pixels in the block in the same manner, even if the block contains the boundary between two regions. Another potential problem is that it does not consider the pixels in the block's neighbourhood.
Some factors mitigate the sub-optimal performance of a block-based scheme around region boundaries. First, compound images now have fairly high resolutions. If the block size is small enough, one cannot expect the image type to change for every block. Therefore, boundary blocks constitute only a small fraction of the image. It is easy to identify those boundary blocks by doing a simple analysis at the block level. Such analysis does not require much bandwidth, because the number of blocks is much smaller than the number of pixels, and it is required only when some transition is found. Using those techniques, one can obtain a segmentation that is good enough for image compression, allowing high text quality, with a complexity that is practically the same as required by one-pass segmentation. Furthermore, the resulting segmentation has very attractive features. For instance, many image compression methods are efficient on rectangular regions but do not work well on arbitrary regions.
Fig. 1: One-pass block classification using neighbourhood analysis
Fig. 1 shows how the block classification is applied to an individual block. First, the block is classified according to the distribution of its pixels. Next, the class of the neighbouring blocks is analysed. At this stage, the algorithm identifies whether a block lies on the boundary between two regions, or changes the classification if its confidence is low. To avoid adding the burden of analysis to the decoder, the final classification is added to the compressed stream. Finally, the block is compressed according to the identified type. The identification of boundaries is very important because it can change the compression parameters for those blocks. For instance, different quantization can be set when moving from a lossy to a lossless region. By introducing extra buffering, it is possible to use the block neighbourhood analysis to merge blocks that are classified in the same manner.
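As a rough Python illustration of this one-pass scheme, the sketch below refines an initial per-block labelling using only the four neighbouring blocks; the confidence values, the 0.6 cut-off and the label names are assumptions made for illustration, not details taken from the paper.

def refine_labels(labels, confidence, low_conf=0.6):
    # labels and confidence are dicts keyed by (row, col) block index.
    refined = dict(labels)
    for (r, c), label in labels.items():
        neighbours = [labels.get(p)
                      for p in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))]
        neighbours = [n for n in neighbours if n is not None]
        if neighbours and any(n != label for n in neighbours):
            refined[(r, c)] = 'boundary'   # block straddles two regions
        elif neighbours and confidence[(r, c)] < low_conf:
            # a low-confidence block inherits the majority class around it
            refined[(r, c)] = max(set(neighbours), key=neighbours.count)
    return refined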
V. Segmentation
The goal of image segmentation is to cluster pixels into salient image regions, i.e., regions corresponding to individual surfaces, objects, or natural parts of objects. Segmentation can be used for object recognition, occlusion boundary estimation within motion or stereo systems, image compression, image editing, or image database look-up. In this paper, bottom-up image segmentation is considered. For input, primarily image brightness is considered, although similar techniques can be used with colour, motion, and/or stereo disparity information. Segmentation subdivides an image into its constituent regions or objects. The level to which the subdivision is carried depends on the problem being solved. That is, segmentation should stop when the objects of interest in an application have been isolated. In the automated inspection of electronic assemblies, interest lies in analysing images of the products with the objective of determining the presence or absence of specific anomalies, such as missing components or broken connection paths. There is no point in carrying segmentation past the level of detail required to identify those elements. Segmentation of nontrivial images is one of the most difficult tasks in image processing. Segmentation accuracy determines the eventual success or failure of computerized analysis procedures. For this reason, considerable care should be taken to improve the probability of rugged segmentation. In some situations, such as industrial inspection applications, at least some measure of control over the environment is possible. In others, as in remote sensing, user control over image acquisition is limited principally to the choice of imaging sensors. Segmentation algorithms for monochrome images are generally based on one of two basic properties of image intensity values: discontinuity and similarity. In the first category, the approach is to partition an image based on abrupt changes in intensity, such as edges in an image. The principal approaches in the second category are based on partitioning an image into regions that are similar according to a set of predefined criteria. The segmentation approaches used in compound document compression can be grouped into four classes:
• Object-based segmentation
• Layer-based segmentation
• Block-based segmentation
• Image-coding based segmentation
A. Object Based Segmentation
In this case, a page is divided into regions, where each region follows exact object boundaries. An object may be a photograph, a graphical object, a letter, etc. In principle, this method may provide the best compression, since it provides the best match between a data type and the compression method most suitable for that data type. In reality, the best compression may not be achievable, for the following reasons. Coding the object boundaries requires extra bits, and the typical algorithms used for lossy image compression are designed to operate on rectangular objects. They can operate on objects with non-rectangular boundaries, but the compression performance will suffer. Complexity is another drawback of this method, since precise image segmentation may require the use of very sophisticated segmentation algorithms.
B. Layer Based Segmentation
This approach can be regarded as a simplified version of full object-based segmentation. The original page is divided into rectangular layers, where each layer can have one or more objects, and "mask" planes. A mask plane tells which pixels of a particular layer should be included in the final composite page. Each layer is compressed with a specific compression method. The advantages of this approach are simplified segmentation and a better match between layer boundaries and the compression algorithms. Standard, off-the-shelf compression methods can be easily incorporated into this structure. The drawbacks of this method are: mismatch between the compression method used for a particular layer and the data types, mismatch between the object boundaries and the compressed region boundaries, and an intrinsic redundancy, due to the fact that the same parts of the original image appear in several layers. The layer-based approaches are mainly proposed to efficiently compress compound images. They segment an image into a foreground layer, a background layer and a mask. The colours of text are represented in the foreground layer and their positions in the mask. Both foreground and background layers are compressed by approaches similar to those for natural images.
C. Block Based Segmentation
Block-based segmentation is a simplified version of full object segmentation. Each region follows approximate object boundaries and is made of rectangular blocks. The size of the blocks may vary within the same region to better approximate the actual object boundary. The advantages of this approach are: simplified segmentation, a better match between region boundaries and the compression algorithms, and the lack of the redundancy which may be present in the layer-based approach. The potential drawbacks are the potential loss in compression performance compared to true object-based segmentation and the need to slightly modify the off-the-shelf algorithms to work on non-rectangular regions. Note that the segmentation performed in this case is done with the purpose of optimizing the compression performance and may not be appropriate for other uses, such as OCR, image enhancement, etc. The block-based approaches do not need accurate segmentation. Thus, some simple characteristics of images, such as the number of colours, the histogram and the gradient, are used for classification. Blocks can then be categorized into different types, such as text, graphics, natural image, and so on. A new method is proposed to represent a text/graphics block by several base colours and an index map, as sketched below.
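A minimal Python sketch of this base-colour/index-map representation follows; the cap of four base colours and the nearest-colour mapping are illustrative assumptions, not the paper's exact method.

import numpy as np

def base_colours_and_index_map(block, max_colours=4):
    # block is an H x W x 3 array; keep the most frequent colours as the
    # base colours and map every pixel to its nearest base colour.
    pixels = block.reshape(-1, block.shape[-1]).astype(float)
    colours, inverse = np.unique(pixels, axis=0, return_inverse=True)
    counts = np.bincount(inverse.ravel())
    base = colours[np.argsort(counts)[::-1][:max_colours]]
    dist = np.linalg.norm(pixels[:, None, :] - base[None, :, :], axis=2)
    index_map = dist.argmin(axis=1).reshape(block.shape[:2])
    return base, index_map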
D. Image Coding Based Segmentation
These approaches adopt conventional image coding schemes but improve the bit allocation between text/graphics and natural image areas, because the text/graphics areas are often blurred after compression. Thus, the quantization steps in text/graphics areas are decreased and more bits are allocated to them. For a fixed budget, this correspondingly decreases the bits available for coding the natural image areas. Consequently, the overall quality after compression is still not good.
VI. Wavelet Transform
Unlike the Fourier transform, whose basis functions are sinusoids, wavelet transforms are based on small waves, called wavelets, of varying frequency and limited duration. In 1987, wavelets were first shown to be the foundation of a powerful new approach to signal processing and analysis called multiresolution theory. Multiresolution theory incorporates and unifies techniques from a variety of disciplines, including subband coding from signal processing, quadrature mirror filtering from digital speech recognition, and pyramidal image processing. Another important imaging technique with ties to multiresolution analysis is subband coding. In this coding, an image is decomposed into a set of band-limited components called subbands, which can be reassembled to reconstruct the original image without error.
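This error-free reassembly is easy to verify in practice. The following Python sketch, assuming the PyWavelets (pywt) package rather than anything used in the paper, decomposes an image into its four subbands and reconstructs it exactly:

import numpy as np
import pywt  # PyWavelets, assumed for this illustration

image = np.random.rand(64, 64)                     # stand-in for a screen image
LL, (LH, HL, HH) = pywt.dwt2(image, 'haar')        # four band-limited subbands
restored = pywt.idwt2((LL, (LH, HL, HH)), 'haar')  # reassemble the subbands
assert np.allclose(image, restored)                # reconstruction without error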
A. Wavelet Functions
A wavelet function Ψ(x), together with its integer translates and binary scalings, spans the difference between any two adjacent scaling subspaces Vj and Vj+1:

Ψ(x) = ∑n hΨ(n) √2 φ(2x − n)

where the hΨ(n) are called the wavelet function coefficients and hΨ is the wavelet vector.
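For example, for the Haar wavelet the wavelet vector has only two non-zero coefficients, hΨ(0) = 1/√2 and hΨ(1) = −1/√2, so the expansion above reduces to the familiar step-shaped wavelet Ψ(x) = φ(2x) − φ(2x − 1).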
B. Discrete Wavelet Transform
Like the Fourier series expansion, the wavelet series expansion of the previous section maps a function of a continuous variable into a sequence of coefficients. If the function being expanded is a sequence of numbers, like samples of a continuous function f(x), the resulting coefficients are called the discrete wavelet transform (DWT) of f(x). For this case, the series expansion becomes

Wφ(j0,k) = (1/√M) ∑x f(x) φj0,k(x)   (approximation coefficients) …(1)

WΨ(j,k) = (1/√M) ∑x f(x) Ψj,k(x)   (detail coefficients) …(2)

VII. System Design
The system design contains two phases:
• Segmentation phase
• Coding phase
A. Segmentation Phase
The compound image is split into 8×8 blocks of pixels, and a 2-D DWT is applied to each 8×8 block. After applying the DWT, four types of coefficients are obtained: approximation coefficients (LL subband), horizontal coefficients (LH subband), vertical coefficients (HL subband) and diagonal coefficients (HH subband). The mean and standard deviation of the LH, HL and HH subbands are computed, and a threshold value is assigned for each of the three subbands. The standard deviation of each of the LH, HL and HH subbands is compared against the corresponding threshold, and on this basis the 8×8 blocks are separated into text/graphics blocks and picture/background blocks.
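A compact sketch of this classification rule is given below, again assuming PyWavelets; the threshold values are placeholder tuning parameters, not figures taken from the paper.

import numpy as np
import pywt  # assumed for this sketch

def classify_block(block, t_lh, t_hl, t_hh):
    # Label one 8x8 block from the standard deviations of its detail
    # subbands; text/graphics blocks carry strong high-frequency energy.
    LL, (LH, HL, HH) = pywt.dwt2(block.astype(float), 'haar')
    if np.std(LH) > t_lh or np.std(HL) > t_hl or np.std(HH) > t_hh:
        return 'text/graphics'
    return 'picture/background'

def segment(image, thresholds=(8.0, 8.0, 4.0)):  # placeholder thresholds
    # Split a grayscale image into 8x8 blocks and classify each one.
    h, w = image.shape
    return {(y, x): classify_block(image[y:y + 8, x:x + 8], *thresholds)
            for y in range(0, h - 7, 8) for x in range(0, w - 7, 8)}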
B. Coding Phase
A wavelet coding algorithm is applied to compress each text/graphics block, and a lossy JPEG coding algorithm is applied to compress each picture/background block. After compression, decompression is performed for each text/graphics block and picture/background block. The combined decompressed blocks are used to reconstruct the original input image with high visual quality of text, as shown in Fig. 2.
Fig. 2: System design
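To make the coding phase concrete, here is a toy dispatch in Python: picture/background blocks go through a real lossy JPEG round trip via Pillow, while wavelet_encode is a hypothetical stand-in for the lossless wavelet coder.

import io
import numpy as np
from PIL import Image

def jpeg_roundtrip(block, quality=75):
    # Lossy JPEG coding followed by decoding (picture/background path).
    buf = io.BytesIO()
    Image.fromarray(block.astype(np.uint8)).save(buf, format='JPEG', quality=quality)
    return np.asarray(Image.open(buf))

def code_block(block, label, wavelet_encode):
    # Route each block to the coder chosen for its class;
    # wavelet_encode is a hypothetical lossless wavelet coder.
    if label == 'text/graphics':
        return wavelet_encode(block)
    return jpeg_roundtrip(block)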
VIII. Testing
A. Definition of Testing
Testing can be described as a process used for revealing defects
in software, and for establishing that the software has attained a
specified degree of quality with respect to selected attributes.
B. Unit testing
The principal goal of unit testing is to ensure that each individual software unit is functioning according to its specification. Good testing practice calls for unit tests that are planned and public.
C. Integration testing
Integration testing for procedural code has two major goals: to detect defects that occur on the interfaces of units, and to assemble the individual units into working subsystems and, finally, a complete system that is ready for system test.
D. Block Classification Testing
The classified 8×8 blocks of the compound image are tested using the precision rate and recall rate. The recall rate is defined as the ratio of correctly detected text/graphics blocks to the sum of correctly detected text/graphics blocks plus false negatives. False negatives are those blocks in the image which are actually text characters but have not been detected by the algorithm. The precision rate is defined as the ratio of correctly detected text/graphics blocks to the sum of correctly detected blocks plus false positives. False positives are those blocks in the image which are actually not characters of a text but have been detected by the algorithm as text blocks.
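Both rates follow directly from the block counts; a minimal Python helper:

def precision_recall(true_pos, false_pos, false_neg):
    # true_pos:  correctly detected text/graphics blocks
    # false_pos: non-text blocks reported as text by the algorithm
    # false_neg: text blocks the algorithm missed
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall

For example, 75 correct detections with 25 false positives and 3 false negatives give a precision rate of 75.00% and a recall rate of about 96.15%.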
Table 1: Precision rate and recall rate (%) obtained using the three block classification schemes (image size 800×1280)

Image        Colour based             TCL based                Proposed DWT based
             Precision   Recall       Precision   Recall       Precision   Recall
Web page 1   60.87       60.91        71.43       87.40        75.00       89.35
Web page 2   85.45       97.92        75.00       78.50        88.89       98.40
ppt 1        59.00       100.00       93.40       96.80        96.70       98.89
ppt 2        90.62       100.00       94.00       95.40        97.73       99.46
Average      73.99       89.71        83.46       89.53        89.58       96.53
E. Compression Testing
The compressed and decompressed text/graphics blocks and picture/background blocks can be tested using the compression ratio. In digital image processing, the compression ratio is defined as the ratio of the size of the original image to the size of the compressed image. The proposed system provides a competitive compression ratio.

Compression ratio = (size of original image) / (size of compressed image)
The peak signal-to-noise ratio, often abbreviated PSNR, is an engineering term for the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation. Because many signals have a very wide dynamic range, PSNR is usually expressed in terms of the logarithmic decibel scale. The PSNR is most commonly used as a measure of the quality of reconstruction of lossy compression codecs (e.g., for image compression). The signal in this case is the original data, and the noise is the error introduced by compression. When comparing compression codecs it is used as an approximation to human perception of reconstruction quality; therefore, in some cases one reconstruction may appear to be closer to the original than another even though it has a lower PSNR (a higher PSNR would normally indicate that the reconstruction is of higher quality). For two m×n monochrome images I and K, where one of the images is considered a noisy approximation of the other, the mean squared error (MSE) is defined as:

MSE = (1/(m·n)) ∑i=0..m−1 ∑j=0..n−1 [I(i,j) − K(i,j)]²

The PSNR is defined as:

PSNR = 10 · log10(MAXI² / MSE)

Here, MAXI is the maximum possible pixel value of the image.
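Both measures are straightforward to reproduce; the Python sketch below implements the formulas above, with 255 as MAXI on the assumption of 8-bit images.

import numpy as np

def compression_ratio(original_bytes, compressed_bytes):
    # Compression ratio = size of original image / size of compressed image.
    return original_bytes / compressed_bytes

def psnr(original, reconstructed, max_val=255.0):
    # PSNR in dB between an image and its reconstruction (8-bit assumed).
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)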
F. Functional testing
Functional tests at the system level are used to ensure that the behaviour of the system adheres to the requirements specification. Functional tests are black box in nature. The focus is on the inputs and proper outputs for each function. Improper and illegal inputs must also be handled by the system, and system behaviour under these circumstances must be observed. All functions must be tested.
IX. Conclusion and Future Enhancement
A. Experimental Results
The proposed algorithm was implemented on an Intel Core 2 Duo 1.66 GHz machine using MATLAB 7.0, and several gray and colour computer screen images of various sizes were used to demonstrate its performance. First the results of block classification and then the results of computer screen image compression are discussed.
B. Future Works
For the proposed DWT-based algorithm, the average precision rate is 89.58% and the average recall rate is 96.53%. Hence, it leads to effective coding for text or graphics blocks as well as picture or background blocks. Wavelet coding provides excellent visual quality of text in computer screen images when compared with other compound images. Future work will aim to improve the efficiency of the lossless coding of text/graphics regions.
Table 2: Compression ratio and PSNR of various images

Image             Original size (KB)   Compressed size (KB)   Compression ratio   PSNR (dB)
compscreen1.tif   109                  17.1                   7.4:1               28.86
compscreen2.tif   122                  15.9                   7.7:1               27.39
text.tif          134                  17.1                   6.8:1               19.25
picture.tif       157                  14.9                   5.5:1               24.42
compound.tif      160                  14.3                   7.2:1               27.13
Fig. 3: Computer screen image
Fig. 4: 8×8 blocks of computer screen image
Fig. 5: Approximation coefficients (LL subband) of computer screen image
Fig. 6: Horizontal coefficients (LH subband) of computer screen image
Fig. 7: Vertical coefficients (HL subband) of computer screen image
Fig. 8: Diagonal coefficients (HH subband) of computer screen image
Fig. 9: Text/graphics blocks of computer screen image using DWT
Fig. 10: Compressed text/graphics blocks of computer screen image using wavelet coding
Fig. 11: Picture/background blocks of computer screen image using DWT