NORTHWESTERN POLYTECHNICAL UNIVERSITY
SCHOOL OF ELECTRONICS AND INFORMATION
DEPARTMENT OF COMMUNICATION
MAJOR: COMMUNICATION AND INFORMATION SYSTEMS
ASSIGNMENT OF SOURCE CODING THEORY AND APPLICATION
LECTURER: Associate Prof. WAN Shuai
DONE by Gaspard GASHEMA
Date: 21st March, 2013
ASSIGNMENT #1
We are asked to show that for a discrete source X, the entropy is maximized when the
output symbols are equally probable.
ANSWER
To solve this problem, let us consider tossing a coin, in two cases.
In the first case, the probabilities of coming up heads or tails are known, but the coin is not necessarily fair.
In the second case, the probabilities of coming up heads or tails are known and the coin is fair.
For these two cases, the entropy of the unknown result of the next toss is maximized only if the coin is fair (that is, if heads and tails both have probability 1/2); otherwise the entropy is not maximized. The fair coin is the situation of maximum uncertainty, since it is most difficult to predict the outcome of the next toss; each toss of the coin then delivers a full 1 bit of information, as we will see below.
Consider a source that emits a sequence of statistically independent letters, where each
output letter is either 0 with probability q or 1 with probability 1-q.
The entropy of this source is
H(X) ≡ H(q) = −q log2 q − (1 − q) log2 (1 − q)
As the graph of entropy against probability of outcomes below shows, the maximum value of the entropy function occurs at q = 0.5, where H(0.5) = 1.
Figure: Graph of entropy against probability of outcomes
As the above graph shows, the entropy of the coin (fair case) is maximum only when q = 0.5. Remember that the two probabilities are then equal, i.e. q = 1/2 and p = 1 − q = 0.5. The graph also shows the entropy value for this system: it is 1 bit/letter.
However, if we know the coin is not fair, but comes up heads or tails with probabilities p
and q, where p ≠ q, then there is less uncertainty. Every time it is tossed, one side is more
likely to come up than the other. The reduced uncertainty is quantified in a lower entropy:
on average each toss of the coin delivers less than a full 1 bit of information.
For this case, let us calculate the entropy when p = 0.8. As q = 1 − p, we have q = 1 − 0.8 = 0.2.
The entropy for this case is
H(X) ≡ H(q) = −q log2 q − (1 − q) log2 (1 − q) = −0.2 log2 0.2 − 0.8 log2 0.8 ≈ 0.72 bits/letter,
which is indeed less than the full 1 bit obtained for the fair coin.
The extreme case is that of a double-headed coin that never comes up tails, or a double-
tailed coin that never results in a head. Then there is no uncertainty. The entropy is zero:
each toss of the coin delivers no information.
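The statements above can be checked numerically. The following is a minimal MATLAB sketch (not part of the original answer) that reproduces the entropy curve and evaluates it for the three coins discussed: the fair coin, the p = 0.8 coin, and the (almost) double-headed coin.

% Binary entropy H(q) = -q*log2(q) - (1-q)*log2(1-q), in bits
Hb = @(q) -q.*log2(q) - (1 - q).*log2(1 - q);
q = 0.001:0.001:0.999;
plot(q, Hb(q)); xlabel('q'); ylabel('H(q) [bits]');   % reproduces the figure above
Hb(0.5)      % fair coin                 -> 1 bit
Hb(0.2)      % unfair coin with p = 0.8  -> about 0.72 bit
Hb(1e-9)     % nearly double-headed coin -> close to 0 (exactly 0 in the limit q -> 0)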
Consider now a fair die with four faces, all equally probable: p(1) = p(2) = p(3) = p(4) = 1/4.
The entropy of {e1, …, en} is maximized when p1 = p2 = … = pn = 1/n, giving H(e1, …, en) = log2 n.
No symbol is “better” than another or contains more information; 2^k equally probable symbols must be represented by k bits.
The entropy of {e1, …, en} is minimized when p1 = 1 and p2 = … = pn = 0, giving H(e1, …, en) = 0.
Therefore, the entropy of the fair die is calculated as follows:
H(X) = −[1/4*log2(1/4) + 1/4*log2(1/4) + 1/4*log2(1/4) + 1/4*log2(1/4)] = −(4/4)*log2(1/4) = −log2(1/4) = log2 4 = 2 bits/symbol
Let us see the value of this entropy when the die is unfair, with probabilities p(1) = 1/2, p(2) = 1/4, p(3) = p(4) = 1/8.
The entropy for this case is H(X) = −[1/2*log2(1/2) + 1/4*log2(1/4) + 1/8*log2(1/8) + 1/8*log2(1/8)] = 7/4 bits/symbol.
As these two examples show, the entropy in the fair (equiprobable) case is greater than in the unfair case.
For the generalization, consider a set of n possible outcomes (events) x1, x2, …, xn with equal probability.
The entropy of the source for these outcomes is
H(X) = −Σi p(xi) log2 p(xi),
with the base of the logarithm equal to 2.
Since p(xi) = 1/n, i.e. the output symbols are equally probable, the formula becomes
H(X) = −Σi (1/n) log2(1/n) = log2 n.
This is the largest value the entropy of an n-symbol source can take, as any unequal assignment of probabilities gives a smaller value (the examples above illustrate this). Thus the entropy is maximized when the output symbols are equally probable.
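As a small numerical check (a MATLAB sketch, not part of the original answer), the general entropy formula can be evaluated for the distributions used above and for a uniform distribution of any size n:

H = @(p) -sum(p(p > 0) .* log2(p(p > 0)));   % entropy in bits; zero-probability terms contribute 0
H([1/4 1/4 1/4 1/4])      % fair four-sided die    -> 2 bits/symbol
H([1/2 1/4 1/8 1/8])      % unfair die             -> 1.75 bits/symbol
n = 8;  H(ones(1, n)/n)   % uniform over n symbols -> log2(n) = 3 bits
H([1 0 0 0])              % one certain outcome    -> 0 bits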
ASSIGNMENT #2
The problem consists of writing a program to implement Run length coding for
1100000000011000000101011111111.
ANSWER
To solve this problem, I have used MATLAB as the programming language.
Run length coding
% Run Length Encoder
% EE113D Project
function encoded = RLE_encode(input)
len = size(input, 2);        % number of symbols (avoids shadowing the built-in length)
run_length = 1;
encoded = [];
for i = 2:len
    if input(i) == input(i-1)
        run_length = run_length + 1;                  % current run continues
    else
        encoded = [encoded input(i-1) run_length];    % emit (symbol, run length) pair
        run_length = 1;                               % start a new run
    end
end
if len > 1
    % Add the last symbol and its run length to the output
    encoded = [encoded input(len) run_length];
else
    % Special case if the input is of length 1
    encoded = [input(1) 1];
end
After saving this MATLAB code in a file named “RLE_encode.m” (the file name must match the function name), run it from the MATLAB command window as follows:
>> RLE_encode([1 1 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 1 0 1 1 1 1 1 1 1 1])
ans =
1 2 0 9 1 2 0 6 1 1 0 1 1 1 0 1 1 8
This means (1)2, (0)9, (1)2, (0)6, (1)1, (0)1, (1)1, (0)1, (1)8, which reproduces the given bit string 1100000000011000000101011111111.
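To check that the encoding is reversible, a complementary decoder can be written in the same style (a sketch, not required by the assignment): it expands each (symbol, run length) pair back into a run of identical symbols.

% Run Length Decoder (companion to RLE_encode above)
function decoded = RLE_decode(encoded)
decoded = [];
for k = 1:2:numel(encoded)
    symbol = encoded(k);        % value of the run
    count  = encoded(k+1);      % length of the run
    decoded = [decoded repmat(symbol, 1, count)];
end

For any non-empty row vector x, isequal(RLE_decode(RLE_encode(x)), x) should return true.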
ASSIGNMENT #3
The problem consists of finding or writing JPEG and JPEG 2000 algorithms and running both on my own photo at the same compression ratio.
ANSWER
N.B:
 The original photo is in JPEG format.
 Compression for both the JPEG and JPEG 2000 formats is 1:2, i.e. 50%.
To answer this question, two different software packages were used: Adobe Photoshop (PS) CS6 and JPEG 2000 Studio. Adobe Photoshop CS6 was used to compress the original photo (image) into JPEG format, while JPEG 2000 Studio was used to compress the same original image into JPEG 2000 format.
There were two steps to this problem:
The first step consists of taking the original photo, compressing it, and then saving it in JPEG format using Adobe Photoshop CS6 (PS).
The size of the image before compression is 2.31 MB.
If you read this image into MATLAB with the command imread('file name of original image') and then type the command whos in the MATLAB command window, the size information of the original image looks like this:
Name Size Bytes Class Attributes
ans 1944x2592x3 15116544 uint8
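For reference, the two commands described above are (the file name is a placeholder):

img = imread('original.jpg');   % read the original photo into a uint8 array
whos img                        % reports the size, bytes and class of the array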
Information of Image after compression
1. Compression using Adobe Photoshop
After compressing the original image and saving it in JPEG format, its size became 115 KB. Using the same MATLAB commands, the information of the image after compression is:
Name Size Bytes Class Attributes
ans 972x1296x3 3779136 uint8
NB: As mentioned at the beginning of this answer, the compression ratio is 1:2 (i.e. 50%), and I chose medium quality to compress the image with this method.
2. Compression using JPEG2000 format
When I used the JPEG 2000 Studio software to compress the original image, its size after compression became 295 KB.
The information of the image in the MATLAB command window:
N.B: When compressing the image with the two different software packages mentioned above, I tried as far as possible to obtain images that were not degraded too much. My objective is to compare the quality of the output images from the two different software packages.
Discussions and Conclusion
From the information about the compressed images in the two cases above, it is clear that the size of the image after compression using Adobe Photoshop is smaller than that obtained after compression using JPEG 2000 Studio. Comparing the two sizes as a ratio, 295/115 ≈ 2.57, i.e. roughly 3. This means that, starting from the original image of 2.31 MB, the result of compressing and saving it with Adobe Photoshop CS6 (in JPEG format) is roughly 3 times smaller than that obtained after compression with JPEG 2000 Studio. On the other hand, the quality of the image in JPEG 2000 is better than that obtained after compression in JPEG format.
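As a rough sketch (assuming hypothetical file names; psnr and imresize require the Image Processing Toolbox), the file sizes and objective quality of the two compressed results could be compared in MATLAB as follows:

orig = imread('original.jpg');        % 2.31 MB original photo (placeholder name)
jpg  = imread('photoshop_out.jpg');   % JPEG result from Adobe Photoshop
jp2  = imread('studio_out.jp2');      % JPEG 2000 result from JPEG 2000 Studio

d1 = dir('photoshop_out.jpg');  d2 = dir('studio_out.jp2');
fprintf('JPEG file: %d bytes, JPEG 2000 file: %d bytes\n', d1.bytes, d2.bytes);

% The Photoshop output above was saved at half the original resolution,
% so it is resized back before an objective comparison such as PSNR.
jpg = imresize(jpg, [size(orig, 1) size(orig, 2)]);
fprintf('PSNR JPEG:      %.2f dB\n', psnr(jpg, orig));
fprintf('PSNR JPEG 2000: %.2f dB\n', psnr(jp2, orig));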
ASSIGNMENT #4
We are required to find a video stream with errors and try to explain how the errors propagate and how they stop.
To explain how errors propagate in a video stream, let us first take a quick look at how a typical video encoder works, since the decoder has to undo exactly these steps. The schematic block diagram of a typical video encoder, shown in the figure below, explains this in detail.
For video coding, a frame is divided into MBs of 16x16 pixels. For each MB, motion
estimation finds the best match from the reference frame(s) by minimizing the difference
between the current MB and the candidate MBs (from the reference frame). These
residual MBs form a residual frame that is essentially the difference between the current
frame and the corresponding motion compensated predicted frame. Simultaneously,
motion vectors (MVs) encode the locations of the reference-frame blocks that have been used to predict each MB in the current frame. The residual frame is then transformed through the DCT or an integer transform, and quantized. Usually, the quantized coefficients and the MVs are coded with variable length codes (VLCs). VLCs (e.g., Huffman codes, arithmetic codes) achieve a higher compression ratio than fixed length codes, and hence VLC schemes have been used in almost all coding standards to encode the various syntax elements. For every coded frame, the encoder transmits the transformed coefficients, motion vectors and some header information essential for decoding. Some frames in the video sequence, known as intra-frames, are coded without using motion estimation/compensation.
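To make the motion-estimation step concrete, here is a minimal MATLAB sketch (illustrative only; variable names are assumptions) of full-search block matching for one 16x16 MB: the chosen motion vector is the displacement, within a ±8-pixel window, that minimizes the sum of absolute differences (SAD) between the current MB and a candidate block in the reference frame.

% cur, ref: luma frames (uint8 matrices); (r0, c0): top-left pixel of the current MB
function best_mv = motion_estimate(cur, ref, r0, c0)
mb = 16; win = 8;                                  % MB size and search window (+/- pixels)
block = double(cur(r0:r0+mb-1, c0:c0+mb-1));
best_sad = inf; best_mv = [0 0];
for dr = -win:win
    for dc = -win:win
        r = r0 + dr; c = c0 + dc;
        if r < 1 || c < 1 || r+mb-1 > size(ref,1) || c+mb-1 > size(ref,2)
            continue;                              % candidate falls outside the reference frame
        end
        cand = double(ref(r:r+mb-1, c:c+mb-1));
        sad = sum(abs(block(:) - cand(:)));        % matching cost
        if sad < best_sad
            best_sad = sad;
            best_mv = [dr dc];                     % best displacement found so far
        end
    end
end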
Figure: Block diagram of a typical video encoder
When the compressed video bitstream is transmitted over a communication channel, it is subjected to channel errors/noise. Generally, forward error correction (FEC) codes are used to protect the data against channel errors. FEC is effective against random errors, but inadequate in the case of long-duration burst errors. For this reason, hierarchical quadrature amplitude modulation (QAM) has been studied for channel modulation. This scheme provides unequal protection to the bits depending on their priority level. Symbols with the same high-priority bits are assigned to the same QAM constellation cluster, and any two neighbouring points in a cluster differ in only one bit. This scheme provides a significant PSNR improvement when the channel SNR is low.
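As an illustration (a sketch only, not taken from the text above), hierarchical 16-QAM can be built by letting the two high-priority bits pick the quadrant and the two low-priority bits pick the point inside it; a factor alpha > 1 pulls the four clusters apart and so gives the high-priority bits extra noise margin.

% hp, lp: 2-element bit vectors (high- and low-priority); alpha > 1 sets the cluster spacing
function s = hier16qam(hp, lp, alpha)
quad = (1 - 2*hp(1)) + 1j*(1 - 2*hp(2));   % quadrant selected by the HP bits
fine = (1 - 2*lp(1)) + 1j*(1 - 2*lp(2));   % point within the cluster, selected by the LP bits
s = alpha*quad + fine;                     % transmitted constellation point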
To handle the errors, the following stages are required :
Error detection and localization
Resynchronization
Error concealment
Error detection is done with the help of the video syntax and/or semantics. When a violation of the video semantics/syntax is observed, the decoder reports an error and tries to resynchronize at the next start code, which is typically a long codeword different from any combination of other possible codes. In some video coding standards (e.g., MPEG-4), which use reversible variable length codes (RVLCs), part of the data is recovered from the corrupted packet by carrying out decoding in the backward direction. Otherwise, the corrupted packets are simply discarded and the lost region of the video frame is concealed.
The error concealment schemes try to minimize the visual artifacts due to errors, and can be grouped into two categories: intra-frame interpolation and inter-frame interpolation. In intra-frame interpolation, the values of missing pixels are estimated from the surrounding pixels of the same frame, without using temporal information. In inter-frame interpolation, on the other hand, the missing region is reconstructed from the corresponding region(s) of the reference frame(s); if a motion vector is missing, it can be estimated from the motion vectors of the surrounding regions, as sketched below. Note how the errors propagate and stop: because of motion-compensated prediction, an error in one frame spreads to every later frame that references the damaged area (and, within a frame, from the error position up to the next resynchronization point); the propagation stops only when the affected area is refreshed by intra-coded MBs or an intra-frame, or is successfully concealed.
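The following minimal MATLAB sketch (illustrative only; variable names are assumptions) shows this kind of inter-frame concealment: a lost 16x16 MB is replaced by the block that the median of its neighbours' motion vectors points to in the reference frame.

% cur, ref: luma frames (uint8); mv: per-MB motion vector field (rows x cols x 2)
% (row, col): index of the lost MB
function cur = conceal_mb(cur, ref, mv, row, col)
mb = 16;
neigh = [];                                        % motion vectors of the available neighbours
if row > 1,          neigh = [neigh; squeeze(mv(row-1, col, :))']; end
if row < size(mv,1), neigh = [neigh; squeeze(mv(row+1, col, :))']; end
if col > 1,          neigh = [neigh; squeeze(mv(row, col-1, :))']; end
if col < size(mv,2), neigh = [neigh; squeeze(mv(row, col+1, :))']; end
if isempty(neigh)
    est = [0 0];                                   % no neighbours: fall back to the co-located block
else
    est = median(neigh, 1);                        % median motion vector of the neighbours
end
r0 = (row-1)*mb + 1;  c0 = (col-1)*mb + 1;         % top-left pixel of the lost MB
rr = min(max(r0 + round(est(1)), 1), size(ref,1) - mb + 1);
cc = min(max(c0 + round(est(2)), 1), size(ref,2) - mb + 1);
cur(r0:r0+mb-1, c0:c0+mb-1) = ref(rr:rr+mb-1, cc:cc+mb-1);   % copy the predictor into the frame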
N.B: You can watch a video showing such errors at the following link:
http://www.youtube.com/watch?v=SSPv-xzMwOI
Apart from the errors discussed above, the following are common streaming-video problems.
1. Website Issues
Many of the popular video streaming websites, including YouTube, use Adobe Flash
Player. Confirm that your computer is using the latest version of Adobe Flash Player. To
do this, go to the Adobe website and download the appropriate uninstaller for your
computer. Once the current installation has been uninstalled, go back to the Adobe
website and install the most recent version of the plugin.
2. Firewall Issues
Firewalls may block streaming video ports. If you are having problems watching
streaming videos via a desktop application such as Windows Media Player, and you
normally have to use a proxy server to gain Web access, you may have to enable your
media player to also use the proxy server. This option can be found in the Options or
Preferences menu of your particular media player.
3. Internet Connection
The speed and stability of an Internet connection has a direct effect on the quality of
streaming video. A slow connection will result in choppy video and audio, while an
unstable Internet connection can result in video stopping suddenly. Try to view streaming
video using only a fast connection such as broadband cable or DSL. If you are using a
wireless broadband (Wi-Fi) connection, make sure you have a strong signal.
4. Software Issues
Running out-of-date software can result in problems watching streaming video. Other
common issues include buffering problems or acceleration problems. Consult your
application's help file to learn how to update your media viewer to the latest version, and
to learn how to adjust the buffer or video acceleration.
5. Hardware Issues
The quality of your streaming video is dependent on the power of your graphics and
sound cards. Older computers may not have robust enough video or sound cards to
handle modern streaming video. If you have an older machine, try updating the drivers
for your video and sound cards to ensure the best quality.