An Improved Parametric Bit Rate Model for Frame-level Rate Control in Video Coding

Research Paper / Jan 2011


An Improved Parametric Bit Rate Model for

Frame-level Rate Control in Video Coding

Zhifeng Chen∗, Serhad Doken∗, and Dapeng Wu†

∗InterDigital, Inc., 781 Third Avenue, King of Prussia, Pennsylvania, 19406 USA

†University of Florida, Gainesville, Florida 32611 USA

Abstract

In a wireless video communication system, the video encoder needs to dynamically control the

coding parameters based on the instantaneous video statistics and channel condition to achieve the best

video quality. An accurate bit rate model can help the encoder to achieve accurate bit rate control and

good rate-distortion (R-D) performance. In this paper, we improve the bit rate model by modeling the

component of run-level mapping plus entropy coding as the process of choosing different codebooks for

different quantized transform coefficients. We also compensate the mismatch between the true histogram

and the assumed Laplacian distribution in a parametric model by utilizing the estimation deviation of

previous frames. The experimental results show that our method achieves more accurate estimation of bit

rate than existing models. We then apply our bit rate model to frame-level rate control in the H.264/AVC

JM reference software. The experimental results show that our rate control algorithm achieves better

R-D performance than the existing rate control algorithm in JM.

I. INTRODUCTION

Most practical video compression algorithms reduce spatial and temporal redundancy via

transform coding and motion estimation, respectively. However, the degree of redundancy, and

therefore the resulting rate for a given distortion, can fluctuate widely from scene to scene. For

example, scenes with high motion content will require more bits than more stationary ones [1]. In

real-time video communications, the end-to-end delay for transmitting video data needs to be very

small, particularly in two-way interactive applications such as video calls or videoconferencing.

If the encoded video is transmitted through a fixed-rate channel, the coded bits are placed

into a small buffer and a finite number of bits can be sent from the buffer during each frame

interval [2]. Given the end-to-end delay constraint and the buffer fullness, the video encoder

needs to dynamically control the coding parameters (or an operating point in an R-D sense)

based on the instantaneous video statistics and channel bandwidth. The instantaneous coding

parameters control problem becomes even more important and challenging under a variable

bit-rate channel, e.g. fading channels in 3G/LTE systems.

November 8, 2010 DRAFT


In general, the coding parameters involve three aspects, that is, 1) spatial domain parameter,

e.g., spatial sampling rate, 2) temporal domain parameter, e.g., frame rate, and 3) coefficient

domain parameter, e.g., quantization step size. In most typical video encoders, e.g., H.263/264

and MPEG-2/4 encoders, usually only the coefficient domain parameter is adjusted in the encoder

to meet the delay and bit rate constraints.1 Since the video statistics vary between frames and

within each frame, a desirable method is to choose different quantization step sizes for encoding

different frames or different regions within one frame (or basic units), which highlights the

importance of the bit allocation problem. Under certain end-to-end delay constraints, the bit

allocation and quantization step size control problems, which together are called the encoder

rate control problem, can be deployed at different levels, e.g., GOP level, frame level, and basic

unit level. These different-level rate control methods are indeed adopted in the H.264/AVC JM

reference software [3], [4], [5].

For the bit allocation problem, a rate-distortion model is required to minimize the overall dis-

tortion under an overall bit constraint. For the quantization step size control problem, the encoder

needs to do a one-to-one mapping between the quantization step size and allocated bits given

the video statistics. In a practical encoder design, solving both of these problems demands

prior knowledge of bit rate as a function of video statistics and coding parameters. Plenty of

bit rate models have been developed in existing literature. Most of the existing works derive bit

rate as a function of video statistics and quantization step size [2], [6], [7], [8], while others

model bit rate as a function of video statistics and other parameters [9]. In general, these models

come from either experimental observation [9], [7], [10], [11] or parametric modeling [12], [8],

[13]. However, both approaches have limitations. The experimental modeling usually induces

some model parameters which can only be estimated from previous frames. Therefore, the model

accuracy depends not only on the statistics and coding parameters but also on the estimation

accuracy of those model parameters. However, in theory, the instantaneous frame bit rate should

be independent of previous frames given instantaneous frame statistics and coding parameters.

In addition, the estimation error of those model parameters may have a significant impact on the

model accuracy, which can be observed in the H.264/AVC JM reference software [3] and will be

explained in detail in the experimental section of this paper. On the other hand, the parametric

modeling has the following two limitations: 1) the assumed residual probability distribution,

e.g., Laplacian distribution, may deviate significantly from the true histogram; 2) the implicit

assumption of all transform coefficients being identically distributed is not valid if run-length

coding is conducted before the entropy coding as in most practical encoders. Since the model-

1In a hybrid video encoder with block-based coding, the reference points and prediction modes can also be adjusted accordingly

for a given quantization step size using rate-distortion optimization.


selection problem may often be more important than having an optimized algorithm [1], simply

applying these parametric models to a real encoder may result in poor R-D performance. To

compensate the inaccuracy of those parametric models, Refs. [14], [8] introduce some model

parameters, which are determined by heuristic values.

In this paper, we improve the bit rate model by modeling the component of run-level mapping

plus entropy coding as the process of choosing different codebooks for different quantized

transform coefficients. We also compensate the mismatch between the true histogram and the

assumed Laplacian distribution in the parametric model by utilizing the estimation deviation

of previous frames. The experimental results show that our method achieves a more accurate

estimate of bit rate compared to existing models. We then apply our bit rate model to frame-level

rate control in the H.264/AVC JM reference software [3]. The experimental results show that our

rate control algorithm achieves better R-D performance than the existing rate control algorithm

in JM.

The rest of this paper is organized as follows. Section II derives our bit rate model as a function

of video statistics and quantization step size. Section III shows the experimental results, which

demonstrate both the higher accuracy of our model and the better R-D performance achieved

with it over existing models. Section IV concludes the paper.

II. BIT RATE MODELING FOR A HYBRID VIDEO CODER WITH BLOCK-BASED CODING

SCHEME

In this section, we first derive residual bit rate as a function of video statistics and quanti-

zation step size, and then design the estimation algorithm for model parameters with practical

consideration.

A. Derivation of residual rate function

1) The Entropy of Quantized Transform Coefficients for i.i.d. Zero-mean Laplacian Source

under Uniform Quantizer: For transform coefficients with independent and identically distributed

(i.i.d.) zero-mean Laplacian distribution, the probability density function (pdf) is f(x) = (\lambda/2) \cdot e^{-\lambda |x|}, where \lambda = \sqrt{2}/\sigma and \sigma is the standard deviation. For the uniform quantizer with quantization step size Q and quantization offset \theta_2, the probability of zero after quantization is

P_0 = 2 \int_0^{Q(1-\theta_2)} p(x)\,dx = 1 - e^{-\theta_1 (1-\theta_2)},  (1)

and the probability of level n after quantization is

P_n = \int_{Q(n-\theta_2)}^{Q(n+1-\theta_2)} p(x)\,dx = \frac{1}{2} \left(1 - e^{-\theta_1}\right) \cdot e^{\theta_1 \theta_2} \cdot e^{-\lambda Q n},  (2)

November 8, 2010 DRAFT

4

where

\theta_1 = \frac{\sqrt{2} \cdot Q}{\sigma}.  (3)

As a result, the entropy is

H = -P_0 \log_2 P_0 - 2 \sum_{n=1}^{\infty} P_n \log_2 P_n = -P_0 \log_2 P_0 + (1 - P_0) \cdot \left( \frac{\theta_1 \log_2 e}{1 - e^{-\theta_1}} - \log_2\left(1 - e^{-\theta_1}\right) - \theta_1 \theta_2 \log_2 e + 1 \right).  (4)
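As a numerical sanity check on (1)-(4), the closed form can be compared against a direct summation over quantization levels. The sketch below is illustrative only (not the paper's code); the dead-zone offset value theta2 = 1/6 is an assumed example.

```python
import math

def laplacian_quant_entropy(sigma, Q, theta2=1/6):
    """Closed-form entropy (4) of a zero-mean Laplacian source with standard
    deviation sigma, uniformly quantized with step size Q and offset theta2."""
    theta1 = math.sqrt(2) * Q / sigma                 # eq. (3)
    P0 = 1 - math.exp(-theta1 * (1 - theta2))         # eq. (1)
    return -P0 * math.log2(P0) + (1 - P0) * (
        theta1 * math.log2(math.e) / (1 - math.exp(-theta1))
        - math.log2(1 - math.exp(-theta1))
        - theta1 * theta2 * math.log2(math.e) + 1)

def laplacian_quant_entropy_direct(sigma, Q, theta2=1/6, nmax=2000):
    """Direct evaluation of -P0*log2(P0) - 2*sum_n Pn*log2(Pn) using eq. (2)."""
    theta1 = math.sqrt(2) * Q / sigma
    P0 = 1 - math.exp(-theta1 * (1 - theta2))
    H = -P0 * math.log2(P0)
    for n in range(1, nmax):
        Pn = 0.5 * (1 - math.exp(-theta1)) * math.exp(theta1 * theta2) \
             * math.exp(-theta1 * n)                  # eq. (2)
        if Pn <= 0.0:  # underflow: remaining tail is negligible
            break
        H -= 2 * Pn * math.log2(Pn)
    return H
```

Both routes agree to within floating-point precision, and the entropy decreases as Q grows, as expected for coarser quantization.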

2) Improvement with a run-length model: In a video encoder, the quantized transform coefficients are actually not i.i.d. Although we may assume the DCT or integer transform [15] largely de-correlates neighboring pixels, different transform coefficients have very different variances. For example, in a 4x4 integer transform, the 16 coefficients show decreasing variance along the well-known zigzag scan order used in H.264. As a result, the higher-frequency coefficients have a higher probability of being zero after quantization, while the lower-frequency coefficients remain more random across levels even after quantization. These characteristics are exploited by the run-level mapping after the zigzag scan to further increase the compressibility for entropy coding. We may regard the component of run-level mapping plus entropy coding as choosing different codebooks for different quantized transform coefficients. From information theory, we know that entropy is a concave function of the distribution (Theorem 2.7.3 in Ref. [16]). Therefore, not considering the mixture of 16 coefficients with different variances will overestimate the entropy of the quantized transform coefficients [12], [8], [13].

To derive the joint entropy for 16 coefficients with different variances, we need to model the variance relationship among those 16 coefficients. Through extensive experiments, we find an interesting phenomenon2, that is, the variance is approximately a function of position in the two-dimensional transform domain as follows

\sigma^2_{(x,y)} = 2^{-(x+y)} \cdot \sigma_0^2,  (5)

where x and y are the position in the two-dimensional transform domain, and \sigma_0^2 is the variance of the coefficient at position (0, 0).

By using this property, we can derive the variance \sigma^2_{(x,y)} for all positions given the average variance \sigma^2. For a 4x4 integer transform with average variance \sigma^2, the variance for each transform

2This phenomenon is found from samples in one frame or one GOP for CIF sequences, i.e., the number of samples is larger than 101376.


coefficient can be calculated by (5) as

\sigma^2 = \frac{1}{16} \sum_{x=0}^{3} \sum_{y=0}^{3} \sigma^2_{(x,y)} = \frac{225}{1024} \cdot \sigma_0^2.  (6)

Therefore, we have

\sigma^2_{(x,y)} = 2^{-(x+y)} \cdot \frac{1024}{225} \cdot \sigma^2.  (7)
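Under the model (5)-(7), all 16 per-position variances follow from the average variance alone. A minimal sketch (illustrative, not the paper's code):

```python
def coeff_variances(avg_var):
    """Per-position variances for a 4x4 transform from the average variance,
    using eq. (7): sigma^2_(x,y) = 2^-(x+y) * (1024/225) * sigma^2."""
    return {(x, y): 2.0 ** -(x + y) * 1024.0 / 225.0 * avg_var
            for x in range(4) for y in range(4)}
```

Averaging the 16 values recovers the input average variance exactly, which checks the 225/1024 normalization in (6).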

Then, the estimated joint entropy of the 16 non-identically distributed transform coefficients, after compensating with the run-length coding model, is

H_{rlc} = \frac{1}{16} \sum_{x=0}^{3} \sum_{y=0}^{3} H_{(x,y)},  (8)

where H_{(x,y)} is the entropy for coefficient position (x, y), and can be calculated by (7), (1), (3) and (4) with its own \sigma^2_{(x,y)} and \theta_{1,(x,y)}.
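Combining (4) with the per-position variances of (7) gives the run-length-compensated estimate (8). The sketch below is illustrative (theta2 = 1/6 is an assumed offset); it also exhibits the concavity argument above: averaging the per-position entropies yields a lower value than the i.i.d. estimate at the same average variance.

```python
import math

def H_quant(sigma, Q, theta2=1/6):
    # closed-form entropy (4) of a quantized zero-mean Laplacian source
    t1 = math.sqrt(2) * Q / sigma
    P0 = 1 - math.exp(-t1 * (1 - theta2))
    return -P0 * math.log2(P0) + (1 - P0) * (
        t1 * math.log2(math.e) / (1 - math.exp(-t1))
        - math.log2(1 - math.exp(-t1)) - t1 * theta2 * math.log2(math.e) + 1)

def H_rlc(avg_var, Q, theta2=1/6):
    # eq. (8): average the entropies of the 16 positions, each with the
    # position-dependent variance given by eq. (7)
    total = 0.0
    for x in range(4):
        for y in range(4):
            var_xy = 2.0 ** -(x + y) * 1024.0 / 225.0 * avg_var
            total += H_quant(math.sqrt(var_xy), Q, theta2)
    return total / 16.0
```

For example, with an average variance of 100 and Q = 4, H_rlc comes out noticeably below H_quant(10, 4), consistent with the overestimation argument.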

3) Improvement by considering the model mismatch: The assumed residual probability distribution, e.g., the Laplacian distribution, may deviate significantly from the true histogram, especially when the number of samples is not sufficient. Therefore, we need to compensate the mismatch between the true residual histogram and the assumed Laplacian distribution to obtain a better estimate. Denote H_l as the entropy under the Laplacian assumption, H_t as the entropy under the true histogram, and \nu = H_l / H_t. In a video sequence, the changes of residual statistics and quantization step size between adjacent frames have almost the same effect on H_l and H_t. Therefore, we may use the previous frame's statistics to compensate the estimate from (8). Assuming the ratio between H_l^k and H_t^k approximates \nu^{k-1}, i.e., H_l^k / H_t^k = H_l^{k-1} / H_t^{k-1}, we obtain the compensated estimate

\hat{H}^k = \frac{H_t^{k-1} \cdot H_l^k}{H_l^{k-1}}.  (9)

(8) and (9) significantly improve the estimation accuracy of residual entropy as shown in

Fig. 1.
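The compensation (9) itself is a one-line ratio update; a sketch with hypothetical variable names:

```python
def compensated_entropy(H_l_curr, H_l_prev, H_t_prev):
    """Eq. (9): scale the current Laplacian-model entropy estimate by the
    previous frame's ratio of true to model entropy, assuming that ratio
    changes little between adjacent frames."""
    return H_t_prev * H_l_curr / H_l_prev
```

If the model overestimated the previous frame's entropy by 25%, the current estimate is deflated by the same factor.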

B. Parameter estimation with practical quantization step size control consideration

1) Encoding bit rate estimation for the H.264 encoder: For a hybrid video coder with a block-based coding scheme, e.g., an H.264 encoder, the encoded bit rate R consists of residual bits R_{resi}, motion information bits R_{mv}, prediction mode bits R_{mode}, and syntax bits R_{syntax}. That is,

R^k = \hat{H}^k \cdot N_{resolution} \cdot N_{fps} + R^k_{mv} + R^k_{mode} + R^k_{syntax},  (10)

where N_{resolution} is the normalized video resolution considering color components, and N_{fps} is the frame rate in frames per second. Compared to R^k_{resi}, the terms R^k_{mv}, R^k_{mode}, and R^k_{syntax} are less affected by Q. Therefore, R^k_{mv}, R^k_{mode}, and R^k_{syntax} can be estimated from the statistics of the previous frames.
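Equation (10) then composes the frame-level estimate directly; a minimal sketch with hypothetical values (152064 is the color-normalized pixel count of a 4:2:0 CIF frame, 352*288*1.5):

```python
def frame_rate_estimate(H_hat, n_resolution, n_fps, r_mv, r_mode, r_syntax):
    """Eq. (10): residual entropy per pixel times resolution and frame rate,
    plus side-information rates carried over from previous frames."""
    return H_hat * n_resolution * n_fps + r_mv + r_mode + r_syntax
```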


2) Estimation of \sigma^2 in the k-th frame: To control Q^k by (3) before the actual encoding, the residual variance \sigma^2 in the k-th frame should be estimated first. A simple method is to estimate it by \sigma^2 in the (k-1)-th frame. This method is valid for most P-frames with inter prediction. However, for some scene-change frames, such an estimate may be very inaccurate. Therefore, we need to treat scene-change frames differently. Note that in a practical video encoder with rate-distortion optimization (RDO), most macroblocks in a scene-change frame would be encoded in intra prediction mode; that is, there are no motion information bits in (10). On the other hand, \sigma^2 cannot simply be estimated from the (k-1)-th frame. Note that the residual variance after intra-mode prediction should be less than the frame difference from the previous frame. We can approximately estimate \sigma^2 for the scene-change frame by

(\sigma^2)^k = \frac{(f^k - \hat{f}^{k-1})^2}{C^k},  (11)

where f^k denotes the original pixel values in the k-th frame, \hat{f}^{k-1} denotes the reconstructed pixel values in the (k-1)-th frame, and C^k is a normalizing factor, which is estimated from the normalizing factors in previous scene-change frames.

Another special case is the P-frame right after a scene-change frame. In these P-frames, \sigma^2 may apparently be much smaller than that of the previous frame. In such a case, we may estimate \sigma^2 by

(\sigma^2)^k = \min\left( (\sigma^2)^{k-1}, \frac{(f^k - \hat{f}^{k-1})^2}{C^k} \right).  (12)

3) Practical consideration of the Laplacian assumption: Statistically speaking, (8) is only valid for sufficiently large sample sizes. When there are not enough samples or the sample variance is very small, e.g., Q > 3\sigma, the Laplacian assumption for individual coefficients is not accurate. In such cases, we may use (4) as the estimate instead of (8). That is,

H^k = \begin{cases} \text{estimated by (8)}, & Q \le 3\sigma, \\ \text{estimated by (4)}, & \text{otherwise}. \end{cases}  (13)
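The switch in (13) can be expressed directly; here H_iid and H_rlc stand for any estimators implementing (4) and (8):

```python
def entropy_estimate(sigma, Q, H_iid, H_rlc):
    """Eq. (13): trust the per-coefficient (run-length) model only while the
    per-coefficient Laplacian assumption is reasonable (Q <= 3*sigma);
    otherwise fall back to the i.i.d. estimate."""
    return H_rlc(sigma, Q) if Q <= 3 * sigma else H_iid(sigma, Q)
```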

4) Practical consideration of encoder settings: In the R-D sense, if there are only a few coefficients with small values in one macroblock (MB) to be coded, they may be discarded, as in the H.264/AVC JM reference software [3]. For example, sum_cnt_nonz is the accumulation of coeff_cost over a whole macroblock. If sum_cnt_nonz <= LUMA_COEFF_COST = 5 for the whole MB, all nonzero coefficients are discarded for the MB and the reconstructed block is set equal to the prediction. Therefore, the skip mode may be chosen more often, and the resulting bit rate becomes slightly less than that calculated from (4) for a Laplacian source. Note that this is a pure encoder issue and hence does not have any implication on the standard.
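The discarding heuristic described above can be sketched as follows (illustrative only; the real JM logic derives coeff_cost per coefficient from run-length tables, which is omitted here):

```python
LUMA_COEFF_COST = 5  # JM threshold; set to 0 to disable discarding

def maybe_discard_block(levels, coeff_costs):
    """If the accumulated coefficient cost over a macroblock is at or below
    the threshold, drop all nonzero coefficients so the reconstruction
    falls back to the prediction (possibly enabling skip mode)."""
    if sum(coeff_costs) <= LUMA_COEFF_COST:
        return [0] * len(levels)
    return list(levels)
```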


Note that in Refs. [14], [8], the authors try to improve their model accuracy by taking the skip mode into consideration. They claim that both P_0 and P_n in their analytical formula should be normalized by the probability of skip blocks, which indeed decreases the estimated bit rate for a given \sigma^2 and Q for a Laplacian source. However, this is not valid in theory, since (4) is the lower bound for the i.i.d. case. In fact, the probability of skip blocks depends on the preset value of LUMA_COEFF_COST3. As a result, we should analyze the bit rates of skip blocks and non-skip blocks separately, weighted by their probabilities, rather than simply normalizing P_0 and P_n by the probability of non-skip blocks. In addition, P_s/P_0 is a function of Q both in theory and by simulation. However, Refs. [14], [8] use a constant P_s/P_0. Since the preset values of LUMA_COEFF_COST and other similar parameters are a pure encoder issue, we set them equal to zero in order to compare the bit rate model accuracy between our model and existing models.

III. EXPERIMENTAL RESULTS

In this section, we first verify the accuracy of our proposed bit rate model. Then, we compare

the R-D performance of the frame-level quantization step size control algorithm with our model

to existing frame-level rate control algorithms4.

A. Experimental Setup

The JM16.0 encoder and decoder [3] are used in the experiments. All the tested video

sequences are in CIF resolution at 30fps. Each video sequence is encoded for its first 100

frames where the first frame is an I-frame and the following frames are P-frames. The encoder

setting is given as below: the number of reference frames is 3; B slices are not included; only

4x4 transform is used; CABAC is enabled for entropy coding; for all rate control algorithms,

the first frame uses a fixed quantization parameter (QP), i.e., QP=28.

B. Model Accuracy

Fig. 1 shows the true residual bit rate and estimated residual bit rate for ‘foreman’ and ‘mobile’

for the first 20 frames in order to have a distinguishable comparison. In Fig. 1, ‘True bpp’ means

the true bit per pixel (bpp) produced by the JM16.0 encoder; ‘without rlc’ means bpp estimated

by (4); ‘with rlc’ means bpp estimated by (8); ‘without compensation’ means bpp estimated

by (13); ‘with compensation’ means bpp estimated by (13) and (9); ‘Rho-domain’ means bpp

estimated by Refs. [17], [18]; ‘Xiang’s model’ means bpp estimated by Refs. [14], [8].

3There are some other similar parameter settings, e.g., CHROMA_COEFF_COST, LUMA_MB_COEFF_COST and LUMA_8x8_COEFF_COST in the reference software.

4For single-pass rate control algorithms, these are indeed quantization step size control algorithms.


[Figure: two panels, 'foreman-cif-800000bps' and 'mobile-cif-800000bps', plotting bits per pixel (bpp) vs. frame index for frames 2-20. Curves: True bpp; Estimated bpp without compensation; with compensation; without rlc; with rlc; by Rho-domain; by Xiang's model.]

Fig. 1. bpp vs. Frame index: (a) foreman, (b) mobile.

We can see that the estimation accuracy is improved by (8) when true bpp is relatively large.

However, when true bpp is small, ‘without rlc’ gives higher estimation accuracy. By utilizing

the statistics of the previous frame from (9), the estimation accuracy is further improved. We

also find that ‘Rho-domain’ is accurate at low bpp; however, it is not accurate at high bpp. For

‘Xiang’s model’, the estimated bpp is smaller than the true bpp in most cases. Note that we also

want to compare the bit rate model used in JM16.0. However, due to the estimation error of its

model parameters, the first few frames may abnormally underestimate the quantization step size

Q. Therefore, the rate control algorithm in JM16.0 uses three parameters, i.e., RCMinQPPSlice,

RCMaxQPPSlice and RCMaxQPChange, to reduce the effect of the estimation error. Their

default values are 8, 42, 4, respectively. However, we believe a good rate control algorithm

should depend mainly on the model accuracy rather than those manually chosen thresholds.

When those parameters are set as 0, 51, 51, the estimated QP could even be 0 in the first few

frames. That is, the first few frames consume most of the allocated bits, and there are only a few

bits available for the remaining frames in JM. Therefore, we do not test its model accuracy in

this subsection. Instead, we will plot the R-D performance for it in Section III-C.

C. Performance Comparison

Fig. 2 shows Y-component PSNR vs. bit rate for ‘foreman’ and ‘mobile’5. We test three

different settings of (RCMinQPPSlice, RCMaxQPPSlice, RCMaxQPChange), i.e., (8, 42, 4), (0,

51, 16) and (0, 51, 51) for JM and our proposed rate control algorithms. We see that the R-

D performance of our model is better than, or at least similar to, that of JM for all the cases

5In our rate control algorithm, only the frame-level rate control is used. Therefore, we average the overall bit rate over all

frames except the first frame since it uses a constant QP.


compared. To be more specific, when RCMinQPPSlice, RCMaxQPPSlice and RCMaxQPChange

are set to the default values, i.e., 8, 42, 4, JM’s rate control algorithm performs almost the same

as our model. However, when those parameters are set to 0, 51, and 51, our model performs

much better than JM’s rate control algorithm. From Fig. 2, we see that the PSNR at 600kbps is

even lower than the PSNR at 400kbps for ‘JM’. This is because the first few frames consume

most of the bits at 600kbps due to the estimation error of the model parameters; therefore, the overall

PSNR becomes worse.

[Figure: two panels, 'RD-foreman-cif-Y' and 'RD-mobile-cif-Y', plotting Y-PSNR (dB) vs. bit rate (bit/sec). Curves: JM-(8,42,4); proposed-(8,42,4); JM-(0,51,16); proposed-(0,51,16); JM-(0,51,51); proposed-(0,51,51).]

Fig. 2. PSNR vs. Bit rate: (a) foreman, (b) mobile.

From Fig. 2, we also find that the R-D performance of the proposed model without the QP

constraint is very similar to that of the proposed model with the QP constraint. In fact, the bit

rate achieved by the proposed model without the QP constraint is more accurate than the bit

rate achieved by the proposed model with the QP constraint. Actually, there is no reason to

control the QP change to be within a certain limit. Instead, it is much more important to control

the distortion level change to be within a certain limit. In other words, QP can be changed

dramatically to accommodate the residual statistics change from frame to frame; and this can

be achieved with an accurate bit rate model and distortion model.

IV. CONCLUSION

In this paper, we improved the bit rate model by modeling the component of run-level

mapping plus entropy coding as choosing different codebooks for different quantized transform

coefficients. We also compensated the mismatch between the true histogram and the assumed

Laplacian distribution in a parametric model by utilizing the estimation deviation of previous

frames. We considered several practical factors in model parameter estimation for the design of

a quantization step size control algorithm in practical video encoders. The experimental results

showed that 1) our method achieves more accurate estimation of bit rate than existing models;

and 2) the rate control algorithm with our model achieves better R-D performance than the


existing rate control algorithm in the H.264/AVC JM reference software. In our future work, we

will use the same compensation technique for a parametric distortion model and apply both the

bit rate model and distortion model to solving the R-D optimized bit allocation problem.

REFERENCES

[1] A. Ortega and K. Ramchandran, “Rate-distortion methods for image and video compression,” IEEE Signal Processing

Magazine, vol. 15, no. 6, pp. 23–50, 1998.

[2] J. Ribas-Corbera and S. Lei, “Rate control in DCT video coding for low-delay communications,” IEEE Transactions on

Circuits and Systems for Video Technology, vol. 9, no. 1, pp. 172–185, 1999.

[3] “H.264/AVC reference software JM16.0,” Jul. 2009. [Online]. Available: http://iphome.hhi.de/suehring/tml/download

[4] S. Ma, Z. Li, and F. Wu, “Proposed draft of adaptive rate control,” in Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T

VCEG, Doc. JVT-H017r3, 8th Meeting, Geneva, 2003, pp. 20–26.

[5] Z. Li, W. Gao, F. Pan, S. Ma, K. Lim, G. Feng, X. Lin, S. Rahardja, H. Lu, and Y. Lu, “Adaptive rate control for H.264,” Journal of Visual Communication and Image Representation, vol. 17, no. 2, pp. 376–406, 2006.

[6] T. Chiang and Y. Zhang, “A new rate control scheme using quadratic rate distortion model,” IEEE Transactions on Circuits

and Systems for Video Technology, vol. 7, no. 1, pp. 246–250, 1997.

[7] S. Ma, W. Gao, and Y. Lu, “Rate-distortion analysis for H.264/AVC video coding and its application to rate control,”

IEEE Transactions on Circuits and Systems for Video Technology, vol. 15, no. 12, p. 1533, 2005.

[8] X. Li, N. Oertel, A. Hutter, and A. Kaup, “Laplace distribution based Lagrangian rate distortion optimization for hybrid

video coding,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 19, no. 2, pp. 193–205, 2009.

[9] Z. He and S. Mitra, “Optimum bit allocation and accurate rate control for video coding via ρ-domain source modeling,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 12, no. 10, 2002.

[10] H. Lee, T. Chiang, and Y. Zhang, “Scalable rate control for MPEG-4 video,” IEEE Transactions on Circuits and Systems

for Video Technology, vol. 10, no. 6, pp. 878–894, 2000.

[11] K. Yang, A. Jacquin, and N. Jayant, “A normalized rate-distortion model for H.263-compatible codecs and its application to quantizer selection,” in Proc. IEEE International Conference on Image Processing (ICIP), 1997, p. 41.

[12] H. Hang and J. Chen, “Source model for transform video coder and its application. I. Fundamental theory,” 1997.

[13] F. Moscheni, F. Dufaux, and H. Nicolas, “Entropy criterion for optimal bit allocation between motion and prediction error information,” in Proc. SPIE Conf. Visual Communications and Image Processing (VCIP), 1993, pp. 235–242.

[14] X. Li, N. Oertel, A. Hutter, and A. Kaup, “Rate-distortion optimized frame level rate control for H.264/AVC,” 2009.

[15] ITU-T Series H: Audiovisual and Multimedia Systems, Advanced video coding for generic audiovisual services, Nov. 2007.

[16] T. M. Cover and J. A. Thomas, Elements of Information Theory. Wiley-Interscience, 1991.

[17] Z. He, J. Cai, and C. W. Chen, “Joint source channel rate-distortion analysis for adaptive mode selection and rate control

in wireless video coding,” IEEE Transactions on Circuits and System for Video Technology, special issue on wireless video,

vol. 12, pp. 511–523, Jun. 2002.

[18] Z. He, Y. Kim, and S. Mitra, “Object-level bit allocation and scalable rate control for MPEG-4 video coding,” in MPEG-4.

2001 Proceedings of Workshop and Exhibition on. IEEE, 2002, pp. 63–66.
