Enhanced Streaming services in 3GPP systems
Research Paper / Jan 2010
Debashish Purkayastha, Kamel Shaheen, InterDigital Communications Corporation.
Abstract—3GPP Packet Switched Streaming Service (PSS) is a standard for audio/video streaming to handheld devices. This paper describes the service for non-IMS and IMS-based frameworks, the relevant network architecture, and the functional components. These services are based on the RTSP/RTCP/RTP protocols. To improve the streaming service, adaptive streaming is introduced using the same set of protocols. This is a server-based solution where clients need to send all information and the server makes the decision. Recent trends show that HTTP-based streaming is gaining popularity for reasons such as NAT/firewall traversal, client control, and reuse of existing servers. 3GPP introduces Adaptive HTTP Streaming as an alternative solution. It standardizes the server-to-client interface and the data description, known as the Media Presentation Description (MPD). The paper briefly compares the performance of the different streaming protocols, and describes a generic model to be applied to HTTP adaptive streaming. In order to make HTTP streaming more useful, a key feature extension, bookmarking, is described. Bookmarking allows the user to store the exact position in the HTTP stream where media playback was paused/stopped, and later resume from that position. Client-based and server-based bookmarking methods are described.
Index Terms—3GPP PSS, adaptive HTTP, MPD, bookmarking
The enhancements to 3G systems with the introduction of HSPA and LTE have made more bandwidth available to users and operators. This makes it possible to offer new and improved services. Streaming is one such service being developed for 3G systems. MMS is a service used to send and receive multimedia clips along with messages. Streaming can be viewed as the next level of service for 3G users, where they can view multimedia content while it is still being transported. In 3G, this is termed the Packet Switched Streaming Service (PSS).
Figure 1 shows the basic entities involved in the streaming service and how they connect. Clients initiate the service and connect to the selected content server. Content servers can provide stored content or generate live content, e.g., video from a concert. User profile and terminal capability data can be stored on a network server that will be contacted at the initial set-up. The user profile provides the streaming service with the user's preferences. Terminal capabilities are used by the streaming service to decide whether or not the client is capable of receiving the streamed content.
Portals are servers that allow convenient access to streamed media content. For instance, a portal might offer content browse and search facilities. In the simplest case, it is a Web/WAP-page with a list of links to streaming content. The content itself is usually stored on content servers, which can be located elsewhere in the network.
Fig. 1. 3GPP PSS architecture
3GPP has defined the architecture and protocols for the PSS system. It has defined PSS for IMS as well as for Non-IMS systems. This paper describes details of the PSS within IMS and non-IMS frameworks.
PSS systems, as defined within 3GPP, use the RTSP protocol for control, and RTP/RTCP for media. Gradually, HTTP is gaining popularity as a streaming protocol, and is being introduced, with some specific modifications, as the main protocol for providing the streaming service. One of the most important developments is “Adaptive HTTP Streaming.” This paper describes the details of the Adaptive HTTP protocol, why it is important, and its advantages. Finally, the paper concludes with some possible extensions to adaptive HTTP streaming, such as bookmarking, that support advanced VCR-like features, e.g., PLAY, PAUSE, and RESUME.
PSS SYSTEM DESCRIPTION
Streaming is a mechanism whereby media content can be rendered at the same time that it is being transmitted to the client over the data network.
The 3GPP PSS system allows media to be streamed over a wireless network. PSS also encompasses the composition of media objects, thereby allowing compelling multimedia services to be provisioned. For instance, a mobile cinema ticketing application could allow the user to view film trailers.
PSS has been defined within the context of IMS, as well as non-IMS. The following sections describe the IMS and non-IMS PSS services.
Non-IMS PSS and MBMS
Fig. 2. Non-IMS PSS and MBMS Architecture
This diagram represents a PSS system within a non-IMS framework.
The sources consist of all multimedia content in streaming or file form; e.g., live encoders process feeds from TV or music radio channels.
The PSS server performs control and streaming delivery functions on a Unicast access type.
The BM-SC (Broadcast-Multicast Service Centre) performs control and streaming/download delivery functions in a hybrid Unicast/Multicast/Broadcast access type.
The core network and RAN enable mobility, and provide IP connectivity over Unicast/Multicast/Broadcast bearers between servers and clients.
The PSS & MBMS client, located in the UE (User Equipment), performs service selection and initiation, and receives and presents the content to the user.
The PSS client interfaces to the PSS server transparently through the packet-switched network. The PSS client can discover the PSS services via multiple means, such as browsing. The session description protocol is SDP. The session control protocol is RTSP. The transport protocol is RTP.
The MBMS client interfaces to the BM-SC via layer-3 protocols defined between the UE and the GGSN (Gateway GPRS Support Node), and between the GGSN and the BM-SC (the Gmb interface).
The PSS and MBMS client interfaces via the radio interface to the RAN (Radio Access Network) and the CN (Core Network).
IMS-based PSS and MBMS
Fig. 3. IMS-based PSS Architecture
The PSS within the IMS framework introduces a new logical function, the PSS Adapter. It performs bi-directional protocol translation between SIP and RTSP to offer control of PSS servers: it proxies RTSP messaging from the UE and performs SIP/RTSP translation towards the PSS server. These functions can be incorporated into the SCF (Service Control Function, part of the IMS infrastructure), the PSS server, or a new stand-alone entity. The PSS Adapter also supports the terminating User Agent (UA) functional entity.
The PSS Server (Packet Switch Streaming Server) is responsible for media control and media delivery functions.
The IMS-based PSS system uses SIP for initial signalling and setup. After the initial phase, the UE uses RTSP for streaming control and RTP for streaming data.
Adaptive Streaming in PSS
In order to provide the best content quality with interrupt-free playback, adaptive streaming is introduced in the PSS system. The goal is to keep the receiver buffer as full as possible without overflowing or underflowing it. Transmitting at a bit rate higher than the available link can transport causes congestion-induced packet loss. If the player consumes data at a higher rate than it is being received, the buffer underflows.
The solution for adaptive streaming is based on server side control. Clients send RTCP messages to the PSS server about the receiver buffer status and reception characteristics. Based on these reports, the server determines the bit rate to be used for the streamed media.
This solution provides the server with excellent capabilities to adapt the media stream, since it knows how the content is encoded, and the processing cost for the mobile client is low. The drawback of this approach is that client-related information needs to be sent to the server periodically throughout the whole session.
HTTP in PSS
The HTTP protocol is gradually becoming popular as a multimedia streaming protocol.
Datagram protocols, such as the User Datagram Protocol (UDP), send the media stream as a series of small packets. This is simple and efficient; however, there is no mechanism within the protocol to guarantee delivery. The RTSP/RTCP protocols add control and feedback mechanisms on top of UDP delivery.
Reliable protocols, such as the Transmission Control Protocol (TCP), guarantee correct delivery of each bit in the media stream. However, this is accomplished with a system of timeouts and retries that makes the protocol complex to implement. It also means that when there is data loss on the network, the media stream stalls while the protocol handlers detect the loss and retransmit the missing data. HTTP is a stateless protocol designed to work over TCP. This inherently makes HTTP unsuitable for multimedia streaming.
Work is being done to adapt HTTP for streaming services over the internet, e.g., HTTP live streaming, adaptive HTTP streaming, bi-directional HTTP, etc. These modifications to HTTP, together with its capability to traverse NATs and firewalls, are making HTTP streaming very popular. HTTP streaming also allows reuse of existing infrastructure such as HTTP servers and clients. There are many examples of this trend today, with community video sharing sites that employ only HTTP progressive download for media delivery. HTTP-based media streaming is considered in 3GPP PSS systems as an alternative to the RTSP/RTP/RTCP-based system. In the latest developments, 3GPP is considering developing “Adaptive HTTP Streaming” for the PSS system.
HTTP Adaptive Streaming
A common form of media delivery on the Web today is progressive download, which is nothing more than a simple file download from an HTTP Web server. Progressive download is supported by most media players and platforms, including Adobe Flash, Silverlight, and Windows Media Player. The term "progressive" stems from the fact that most player clients allow the media file to be played back while the download is still in progress, before the entire file has been fully written to disk (typically to the Web browser cache). Clients that support the HTTP 1.1 specification can also seek to positions in the media file that haven't been downloaded yet by performing byte-range requests to the Web server.
Adaptive streaming is a hybrid delivery method that acts like streaming, but is based on HTTP progressive download. In a typical adaptive streaming implementation, the video/audio source is cut into many short segments ("chunks") and encoded to the desired delivery format. Chunks are typically 2 to 4 seconds long. At the video codec level, this typically means that each chunk is cut along video GOP (Group of Pictures) boundaries (each chunk starts with a key frame), and has no dependencies on past or future chunks/GOPs. This allows each chunk to be decoded independently of other chunks. The encoded chunks are hosted on an HTTP Web server. A client requests the chunks from the Web server in a linear fashion, and downloads them using plain HTTP progressive download. As the chunks are downloaded to the client, the client plays back the sequence of chunks in linear order. Because the chunks are carefully encoded without any gaps or overlaps between them, the chunks play back as a seamless video. The "adaptive" part of the solution comes into play when the video/audio source is encoded at multiple bit rates, generating multiple chunks of various sizes for each 2 to 4 seconds of video. The client can now choose between chunks of different sizes. Because Web servers usually deliver data as fast as network bandwidth allows them to, the client can easily estimate user bandwidth and decide to download larger or smaller chunks ahead of time. The size of the playback/download buffer is fully customizable.
HTTP Adaptive Streaming is a client-based solution, because the client chooses the media chunk to be downloaded based on the available bandwidth, buffer status, etc. Compared to the earlier server-based solutions, it eliminates the requirement for the client to continuously send information about the network characteristics to the server. As a result, it is more dynamic and adapts quickly to changing conditions.
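A minimal sketch of this client-driven adaptation loop in Python follows; the representation bitrates, the throughput-estimation window, and the safety-factor heuristic are illustrative assumptions, not part of the 3GPP specification:

```python
# Hypothetical set of encoded bitrates available on the server (kb/s).
REPRESENTATIONS_KBPS = [32, 56, 80, 90]

def estimate_throughput_kbps(recent_downloads):
    """Estimate throughput from recent chunk downloads.
    recent_downloads: list of (bytes_received, seconds_taken) tuples."""
    total_bits = sum(b * 8 for b, _ in recent_downloads)
    total_time = sum(t for _, t in recent_downloads)
    return (total_bits / total_time) / 1000 if total_time > 0 else 0

def select_representation(recent_downloads, safety_factor=0.8):
    """Pick the highest bitrate fitting within a safety margin of the
    estimated throughput; fall back to the lowest representation."""
    budget = estimate_throughput_kbps(recent_downloads) * safety_factor
    candidates = [r for r in REPRESENTATIONS_KBPS if r <= budget]
    return max(candidates) if candidates else min(REPRESENTATIONS_KBPS)
```

For example, a 40,000-byte chunk downloaded in 4 s implies 80 kb/s of throughput; with the 0.8 safety factor the client would request the 56 kb/s representation next.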
Fig. 4. 3GPP HTTP Adaptive Streaming
3GPP defines the interface for HTTP Adaptive Streaming as shown in the above diagram. The media streaming uses two data structures:
The Media Presentation Description (MPD)
One or several alternate data representations, i.e., actual media content
It is assumed that the UE has access to a Media Presentation Description (MPD). The MPD describes all of the media chunks that make up the whole media. The MPD contains information about the other representations of the same media encoded at different bit rates. An MPD provides sufficient information for the HTTP streaming client to provide the streaming service, such as the URL from which the media can be downloaded, the encoding rate, the sequence number, etc. It also provides the timing relation among the chunks to maintain time synchronization. 3GPP is standardizing an MPD structure which can be universally recognized by all media clients and servers. Figure 5 shows the structure of an MPD.
Fig. 5. 3GPP definition of MPD for HTTP Adaptive Streaming
The HTTP streaming client requests and downloads media presentation data, which is used to present the streaming service to the user. Using the MPD, the streaming client can form HTTP GET request messages to download the media.
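As an illustration, the MPD-driven request formation might look like the following Python sketch. The dictionary layout and URL pattern are assumptions made for illustration; the actual MPD is a standardized XML document with a richer structure:

```python
# Hypothetical, simplified stand-in for an MPD: each representation
# (keyed by bit rate in kb/s) lists its segment URLs in play order.
mpd = {
    "representations": {
        32: ["http://example.com/video_32k_{:04d}.3gp".format(i) for i in range(1, 4)],
        90: ["http://example.com/video_90k_{:04d}.3gp".format(i) for i in range(1, 4)],
    }
}

def segment_url(mpd, bitrate_kbps, segment_index):
    """Return the URL an HTTP GET would target for segment
    `segment_index` (1-based, matching the 1..n_r indexing used in
    the text) of the chosen representation."""
    return mpd["representations"][bitrate_kbps][segment_index - 1]
```

A client switching rates simply requests the next index from a different representation, e.g., `segment_url(mpd, 32, 2)` after downloading segment 1 at 90 kb/s.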
The alternative representations referred to in the MPD consist of one or several files. These can be 3GP files (the 3GPP multimedia container file format) or parts of 3GP files, referred to as segments, containing media data. A media presentation contains R alternative representations, each uniquely identified by an index r. Furthermore, each representation contains n_r segments. These segments are indexed from 1 to n_r and sequential indexing is applied.
T. Schierl et al. developed a numerical model for adaptive multimedia streaming using RTSP/RTP/RTCP. The analytical model consists of a buffer in the network and one on the client. The key performance characteristic is the status of the client buffer. The adaptive algorithm manages the client buffer to prevent underflow and overflow. RTCP Receiver Reports (RR), with the buffer report extension and NACK reports, are used to collect client feedback on buffer status and time-related information. The Highest Received Sequence Number (HRSN) and the Next Sequence Number to be decoded (NSN) are used for determining the buffer levels of the network and the client.
The server learns about the buffer status, say B, from the RTCP RR. There is a pre-agreed threshold, T. If B < T, the average receiving rate (R) at the client is estimated, and the transmission is scaled down to a lower data-rate track R(x), so that R(x) < R. The time window over which R is averaged is the time span between two RTCP RRs sent by the client. It is assumed that the RRs and Sender Reports (SRs) are sent at fixed time intervals. The exact receiving time at the client is recalculated using the parameter ‘delay since last Sender Report (DLSR)’, which is the time difference between sending the current RR and receiving the last RTCP SR, and the ‘last SR timestamp (LSR)’.
The “average receiving rate, R” at the client is determined as the number of octets received between two consecutive RRs, divided by the length of that time interval (corrected using the DLSR and LSR values).
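In symbols, one consistent formulation is (writing $N_n$ for the octets received between the $(n-1)$-th and $n$-th RR, and $t_n$ for the client-side receiving time of the $n$-th RR, corrected with the DLSR and LSR values):

```latex
R = \frac{N_n}{t_n - t_{n-1}}
```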
Based on this model, P. Frojdh et al. evaluated the performance characteristics of adaptive streaming. The result from their work is shown in Figure 6.
Fig. 6. Performance characteristics of 3GPP RTSP adaptive streaming
For adaptive streaming, a video sequence was encoded with H.264/AVC at four different rates: 32, 56, 80, and 90 kb/s. For non-adaptive streaming, a fixed video rate of 90 kb/s was used. During the pre-buffering period, the buffer fill levels increase. After media play-out starts and the link rate drops to 64 kb/s, the buffer fill levels rapidly decrease. At around 23 s, a client buffer underrun occurs.
This results in a re-buffering event, where the media presentation is stopped for a certain time while the client buffer fill level recovers. Later, the media presentation continues. For adaptive streaming, continuous play-out of audio and video is provided.
P. de Cuetos et al. developed another numerical model for adaptive streaming, using multiple versions of the same file over TCP.
Fig. 7. Model for TCP based streaming
In this analysis, the client buffer status is also considered the key performance characteristic. Two versions of the same video, encoded at a low rate r_l and a high rate r_h, are streamed from the server. The server sees a TCP-friendly bandwidth of X(t); B_l(t) and B_h(t) denote the pre-fetched buffer status at time t for the two versions. The server begins by streaming the low-rate version. To minimize the risk of buffer starvation, the server should not switch unless a reserve of future data exists in either of the two versions. When X(t) exceeds r_l, the buffer starts building up, i.e., data is pre-fetched into the client storage. The server switches to the high-rate version only when the buffered data at the client is enough to avoid buffer starvation during the next C seconds. The server switches to the high-rate version at time s if:
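Under the stated assumption (the buffered data, plus what arrives during the next $C$ seconds at the average available bandwidth $\bar{X}$, must cover play-out at the higher rate $r_h$), a plausible form of the switching condition at time $s$ is:

```latex
B(s) + \bar{X}(s)\,C \;\ge\; r_h\,C
```

where $B(s)$ denotes the data buffered at the client at time $s$.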
These two works are very similar and do not take into consideration the details of the transport protocol, such as UDP/TCP. In both models, the decision is made at the server, based on reports received from the client. We generalize this approach and apply it to our case of adaptive HTTP streaming. We also assume a TCP-friendly bandwidth, and the availability of multiple versions of the same media at the server, encoded at different rates. The client makes the decision to switch to a different rate and sends the request to the server. The conditions under which the client makes the decision are:
Switch to a rate higher than r, when the buffer content B > the estimated buffer accumulation at the current receiving rate in the next T seconds
Switch to a rate lower than r, when the buffer content B < the estimated buffer accumulation at the current receiving rate in the next T seconds
T denotes the time measured at the client side to receive all the packets within a single TCP window. The average receiving rate r at the client can be expressed as r = rcvdPacket / T, where rcvdPacket denotes the data received by the client within the TCP window.
The “sequence number” field in the TCP header can be used by the client to keep track of the received packets. The server sets the sequence number for the first byte in each TCP segment. The client starts from that sequence number and increments it by the number of bytes in the segment to predict the sequence number of the next TCP segment. Say the client's window size is 3 kbytes and each segment contains 1 kbyte. If the second segment received by the client does not start at the expected sequence number, e.g., it is expected at 1001 but starts at 2000, the client knows that it has lost about 1 kbyte of data. From the server's perspective, it has already sent data equal to the window size. The client starts calculating the average receiving rate when it detects packet loss. The client maintains a timestamp for the first and last segments received, which provides the value for T. This is a fair approximation of the TCP time window used by the server to send all the bytes.
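The sequence-number bookkeeping described above can be sketched as follows (Python; the segment sizes, timestamps, and tuple format are illustrative):

```python
def receive_window(segments):
    """segments: list of (seq_number, payload_bytes, timestamp) in
    arrival order. Detects gaps in the TCP sequence numbers, counts
    lost bytes, and estimates the average receiving rate
    r = received_bytes / T, where T spans the first to the last
    segment of the window."""
    expected = segments[0][0]
    received = lost = 0
    for seq, size, _ in segments:
        if seq > expected:            # gap: bytes [expected, seq) were lost
            lost += seq - expected
        received += size
        expected = seq + size         # predict the next sequence number
    t_window = segments[-1][2] - segments[0][2]
    rate = received / t_window if t_window > 0 else 0.0
    return received, lost, rate
```

For the 3-kbyte-window example in the text, a 1-kbyte segment expected at sequence number 1001 but arriving at 2001 registers roughly 1 kbyte of lost data.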
The generalized model developed so far is very similar to the RTCP/RTP and TCP models. In both previous cases, the estimation of buffer accumulation was done at the server, based on client reports. In the case of Adaptive HTTP Streaming, it is done instantaneously on the client side, based on TCP sequence numbers. Thus, the time window over which the received packets are averaged is different in the two cases. In the client-based reporting models, the time window includes an additional amount of time equal to the time needed to generate and send the report to the server. The server takes a decision for a situation that occurred in the past, which is not accounted for.
If we apply the generic model developed so far to RTSP-based streaming, most of the characteristics remain the same. In order to apply the same model to the Adaptive HTTP Streaming case and compare performance characteristics, we subtract the extra time spent in reporting/communicating with the server. This time can be calculated, based on the RTT calculation, by the TCP timestamping mechanism. This helps us reuse and re-estimate the performance characteristics described above.
We go back to the performance metrics developed, apply the modified model, and re-estimate (a) the video rate and (b) the reported video buffer time for adaptive HTTP streaming (see Figure 8).
Fig. 8. Performance estimation for HTTP Adaptive Streaming
The video rate (red dotted line) is switched down earlier than in the previous model, because the client sends the request sooner than the server would take to identify the condition and act on it.
The video rate is not switched down as far as with the original approach, because the estimated rate r is higher: its denominator, the time window T, is now shorter.
The “reported video buffer time” (blue dotted line) follows the “video rate” curve and is ahead on the time scale compared to the original model. The video buffer recovers/builds faster, minimizing “low video” play time. This estimation shows that Adaptive HTTP Streaming provides smoother video transitions, quick recovery from channel degradation, and a better user experience.
Next Steps in HTTP Streaming
In order to develop HTTP as a fully functional streaming protocol, it is desirable to support all the RTSP/VCR-like features such as Play, Pause, Resume, and Stop. This requires the capability to “bookmark” the HTTP stream. Though this is not yet considered by 3GPP, it is worth starting to think about these features. The next few sections describe the various methods by which bookmarking of HTTP streams can be done, managed, and stored in collaboration with the service provider/user.
In order to position the bookmark exactly, the byte position in the stream needs to be located and addressed. For the progressive download case, the locator can be within the complete stream. For the adaptive streaming case, on the other hand, the units to be processed are the stream “chunks.”
Bookmarking of HTTP stream
The bookmark is assumed to be the number of bytes already played by the media player. The bookmark may be extended to other units of measurement, e.g., time-based, chunk-based, etc.
The HTTP client/player may restart the download in different ways, depending on the bookmark format:
The bookmark can be the first byte to be downloaded (byte number) after the last byte played by the media player, e.g., assuming a chunk of 500 bytes with a pause at byte 300, download and resumption of play start from bytes 301–800. It is assumed that the HTTP client receives this information from the application, e.g., the media player.
The bookmark can also be treated as the current chunk (range). A portion of the streaming data may be re-played/re-sent to the media player, e.g., assuming a chunk of 500 bytes with a pause at byte 300, re-download bytes 1–500 again.
The “pause time” can also be considered the bookmark. The download resumes at the same time point where the pause occurred, e.g., assuming a URI of 10 seconds and a pause time of 3 seconds, re-download the same URI and resume playing at 3 seconds. This may also be the time relative to the whole stream (the sum of all URI durations). It is assumed that the HTTP client receives this information from the application, e.g., the media player.
The algorithm to deduce the byte number or start time can be part of the media player or the HTTP client. Other information elements which may be needed in addition to the bookmark are: the current URI (where media has been paused), the playlist (containing the list of URIs to play, if it exists), the pause time (the actual time when the pause occurred), and the bookmark format (byte-number or time-based).
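A sketch of such a deduction step follows (Python; the constant-bitrate assumption and the chunk size are illustrative assumptions, since a real player would consult the 3GP container's timing information):

```python
def time_bookmark_to_byte(pause_time_s, bitrate_bps):
    """Map a time-based bookmark to the first byte still to be played,
    assuming a constant-bitrate stream."""
    return int(pause_time_s * bitrate_bps / 8)

def byte_bookmark_to_chunk(byte_pos, chunk_bytes):
    """Resolve a byte-number bookmark into (chunk index, offset within
    the chunk); chunks are 0-based in this sketch."""
    return byte_pos // chunk_bytes, byte_pos % chunk_bytes
```

For instance, a pause at 3 s in an 80 kb/s stream maps to byte 30,000; with 500-byte chunks, byte 800 falls at offset 300 of the second chunk.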
Bookmarking of the HTTP stream can be done in various ways. It can be fully Client-based or Server-based.
A client-based solution determines the bookmark location by itself. If required, the client may request information from the server. The bookmark is then stored locally in the client.
In server-based solutions, the client sends commands to the server to store a bookmark. The server can determine the exact position of the bookmark and store it in a database associated with the user ID.
Figure 9 describes the client-based procedure for HTTP stream bookmarking. Since most of the bookmark generation process happens in the client, there is much less interaction between the client and the server.
Fig. 9. Client based bookmarking procedure
The user presses the PLAY button and the normal HTTP streaming procedure starts. The media player sends an HTTP GET request and the server sends the media chunks; the media player sends a GET request for each chunk.
When the user presses the PAUSE button, the client stops playing the media and starts the bookmark generation procedure. It stops fetching media chunks and determines the current media chunk where it stopped. The media player knows the byte range that has been played. This combination of media URI and byte range is stored by the media player as the bookmark. Alternatively, the media player can store the entire downloaded media chunk, the played time with regard to the current URI, or the total time played as the bookmark.
Later, the user resumes the media session by pressing the RESUME button. The client searches for the stored bookmark and finds the bookmark information, which includes the mediaURI and the byte range. The client can start downloading a media chunk using the mediaURI information in the bookmark. Once the media chunk is received, the client does not start playing from the beginning: based on the stored byte-range information, the player positions the start point appropriately within the chunk.
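The client-based procedure can be sketched as follows (Python; the in-memory dictionary stands in for the client's persistent local storage, and resuming via an HTTP Range header is one possible realization):

```python
local_bookmarks = {}   # stands in for the client's persistent storage

def on_pause(session_id, media_uri, bytes_played):
    """PAUSE: store the mediaURI plus the byte range already played."""
    local_bookmarks[session_id] = {"mediaURI": media_uri,
                                   "bytesPlayed": bytes_played}

def on_resume(session_id):
    """RESUME: look up the bookmark and build the HTTP request that
    re-fetches the media; the player then seeks to the stored offset."""
    bm = local_bookmarks[session_id]
    headers = {"Range": "bytes={}-".format(bm["bytesPlayed"])}
    return bm["mediaURI"], headers
```

The returned URI and `Range` header would be used in an ordinary HTTP GET, so no server-side changes are needed for this variant.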
A server-based bookmarking procedure uses the media server or the HTTP server to control and store all bookmarking-related procedures and information. The bookmark information is saved on the server side and retrieved later. The client uses the bookmark information to resume the download. The advantage of this is that the bookmark information is stored in a central place, accessible to other media clients. This enables scenarios where the user wants to resume playback on a device other than the original one. This solution can also be used with Inter-Unit Transfer (IUT), since another device may retrieve the bookmark information from the server and continue the download. The drawback of this approach is that, since HTTP is stateless, additional logic is required on the server side to store such state information.
Fig. 10. Server based bookmarking procedure
Figure 10 shows the generation and storing of the bookmark information in the media server. The user watching a streaming session presses the PAUSE button. The media player stops playing at that point and stores the identification of the current media chunk being played; this is described as the “mediaURI.” To achieve finer granularity, it is also proposed that the player store the next byte after the current byte, or the elapsed time since the start of play; this information is denoted the “markerURI.” Since HTTP is stateless, additional logic is proposed on the server side, which can receive all this information and store it in a database for future retrieval; this is denoted the “bookmarkURI.” The media player requesting to store this bookmark information is aware of the new resource and shall be able to use it during retrieval. Along with all this information, the current “playlistURI” is also sent to the server.
The HTTP method “PUT” is used to upload all the bookmark information described above to the server. A resource is created/updated on the HTTP server; the “Request-URI” identifies the entity (resource), and the entity contains the bookmark information. The “requestURI” is sent back to the media player as part of the HTTP response, and will be used later when the user wants to resume the paused media session.
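The server-side logic implied here can be sketched as follows (Python; the dictionary models the bookmark database, and the requestURI naming scheme is an assumption for illustration):

```python
bookmark_db = {}   # models the server-side bookmark database

def handle_put(user_id, playlist_uri, media_uri, marker_uri):
    """Handle the HTTP PUT: create/update the bookmark resource and
    return the requestURI that is sent back in the HTTP response."""
    request_uri = "http://example.com/bookmarks/{}".format(user_id)
    bookmark_db[request_uri] = {
        "playlistURI": playlist_uri,
        "mediaURI": media_uri,
        "markerURI": marker_uri,
    }
    return request_uri

def handle_get(request_uri):
    """Handle the later HTTP GET of the requestURI: return the stored
    bookmark entity so the client can resume playback."""
    return bookmark_db[request_uri]
```

Because the bookmark is keyed by a URI rather than by connection state, any authorized device that learns the requestURI can retrieve the entity and resume, which is what enables the multi-device and IUT scenarios.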
Fig. 11. Server based bookmarking retrieval
The user resumes the media session from the bookmarked location by pressing the RESUME button. The media player requests all the bookmark information by sending the “requestURI” as part of an HTTP GET. The media server sends the stored bookmark entity back to the media player. The media player analyses the information received from the server and determines the specific chunk of data to be downloaded from the playlist. It also determines the exact offset within the chunk from which to download and start playing. The determination of the offset within the chunk can be based on information such as the bytes played until the pause command or the time elapsed when playback paused, available via the “requestURI.” The media player uses the HTTP method “GET + Range” to download specific chunks of data (resume download). Play is resumed from the exact offset after the download is done.
The “requestURI” has to be known by the media player trying to resume the session stopped by the user. If the device retrieving the information is the same one that saved it, the “requestURI” is already known. If a different device performs the retrieval, the “requestURI” needs to be made known to the new device. This information may be conveyed in various ways, such as inter-device communication, a locally configured “requestURI,” a user profile retrieved from the server or elsewhere, or retrieval from the server based on a session identifier.
HTTP adaptive streaming is an important technology for the 3GPP PSS service that enables smooth, uninterrupted playback on client devices. The unavailability of VCR-like features may be an obstacle to its wide adoption. The capability enhancements described above incorporate VCR-like features that make HTTP streaming comparable to other streaming technologies and can enhance its widespread adoption.
REFERENCES
H. Montes et al., “Deployment of IP Multimedia Streaming Services in Third-Generation Mobile Networks,” IEEE Wireless Commun., vol. 9, no. 5, Oct. 2002, pp. 84–92.
I. Elsen et al., “Streaming Technology in 3G Mobile Communication Systems,” IEEE Computer, vol. 34, Sept. 2001.
3GPP TS 26.234, “Transparent End-To-End Packet-Switched Streaming Service (PSS): Protocols and Codecs (Release 9),” http://www.3gpp.org/ftp/Specs/archive/26_series/26.234/26234-910.zip.
B. Vandalore et al., “A Survey of Application Layer Techniques for Adaptive Streaming of Multimedia,” J. Real-Time Sys., Special Issue on Adaptive Multimedia, Jan. 2000.
D. Wu et al., “Streaming Video over the Internet: Approaches and Directions,” IEEE Trans. Circuits Sys. Video Tech., vol. 11, no. 3, Mar. 2001, pp. 282–300.
3GPP TS 22.146, “Technical Specification Group Services and System Aspects; Multimedia Broadcast/Multicast Service; Stage 1,” http://www.3gpp.org/ftp/Specs/archive/22_series/22.146/22146-660.zip.
P. Frojdh, U. Horn, M. Kampmann, A. Nohlgren, and M. Westerlund, “Adaptive Streaming within the 3GPP Packet Switched Streaming Service,” IEEE Network, pp. 34–40, April 2006.
T. Schierl, M. Kampmann, and T. Wiegand, “3GPP Compliant Adaptive Wireless Video Streaming Using H.264/AVC,” IEEE Int'l. Conf. Image Proc., Genova, Italy, Sept. 2005.
P. de Cuetos, D. Saparilla, and K. W. Ross, “Adaptive Streaming of Stored Video in a TCP-Friendly Context: Multiple Versions or Multiple Layers?” Proc. International Packet Video Workshop, Kyongju, Korea, May 2001.
H. Schulzrinne, A. Rao, and R. Lanphier, “Real-Time Streaming Protocol (RTSP),” IETF RFC 2326, Apr. 1998.
M. Handley and V. Jacobson, “SDP: Session Description Protocol,” IETF RFC 2327, Apr. 1998.
H. Schulzrinne et al., “RTP: A Transport Protocol for Real-Time Applications,” IETF RFC 3550, July 2003.
T. Friedman, R. Caceres, and A. Clark, “RTP Control Protocol Extended Reports (RTCP XR),” IETF RFC 3611, Nov. 2003.
J. Peisa and M. Meyer, “An Analytical Model for TCP File Transfers over UMTS,” Int'l. Conf. 3G Wireless and Beyond 2001, San Francisco, CA, May 2001.
N. Baldo et al., “RTCP Feedback Based Transmission Rate Control for 3G Wireless Multimedia Streaming,” Int'l. Symp. Pers., Indoor and Mobile Radio Commun. 2004, Barcelona, Spain, Sept. 2004.
3GPP TS 26.233, “Transparent End-to-End Packet Switched Streaming Service (PSS); General Description (Release 9),” http://www.3gpp.org/ftp/Specs/archive/26_series/26.233/26233-900.zip.
3GPP TS 26.237, “IP Multimedia Subsystem (IMS) based Packet Switch Streaming (PSS) and Multimedia Broadcast/Multicast Service (MBMS) User Service; Protocols (Release 9),” http://www.3gpp.org/ftp/Specs/archive/26_series/26.237/26237-910.zip.