FFmpeg Protocols Documentation


1 Description

This document describes the input and output protocols provided by the libavformat library.

2 Protocol Options

The libavformat library provides some generic global options, which can be set on all the protocols. In addition each protocol may support so-called private options, which are specific for that component.

Options may be set by specifying -option value in the FFmpeg tools, or by setting the value explicitly in the AVFormatContext options or using the libavutil/opt.h API for programmatic use.

The list of supported options follows:

Set a ","-separated list of allowed protocols. "ALL" matches all protocols. Protocols prefixed by "-" are disabled. All protocols are allowed by default but protocols used by an another protocol (nested protocols) are restricted to a per protocol subset.

3 Protocols

Protocols are configured elements in FFmpeg that enable access to resources that require specific protocols.

When you configure your FFmpeg build, all the supported protocols are enabled by default. You can list all available ones using the configure option "--list-protocols".

You can disable all the protocols using the configure option "--disable-protocols", and selectively enable a protocol using the option "--enable-protocol=PROTOCOL", or you can disable a particular protocol using the option "--disable-protocol=PROTOCOL".

The option "-protocols" of the ff* tools will display the list of supported protocols.

All protocols accept the following options:

Maximum time to wait for (network) read/write operations to complete, in microseconds.

A description of the currently available protocols follows.

3.1 async

Asynchronous data filling wrapper for input stream.

Fill data in a background thread, to decouple I/O operation from demux thread.

async:URL
async:http://host/resource
async:cache:http://host/resource

3.2 bluray

Read BluRay playlist.

The accepted options are:

BluRay angle

Start chapter (1...N)

Playlist to read (BDMV/PLAYLIST/?????.mpls)

Examples:

Read the longest playlist from BluRay mounted to /mnt/bluray:

bluray:/mnt/bluray

Read angle 2 of playlist 4 from BluRay mounted to /mnt/bluray, start from chapter 2:

-playlist 4 -angle 2 -chapter 2 bluray:/mnt/bluray

3.3 cache

Caching wrapper for input stream.

Cache the input stream to a temporary file. It brings seeking capability to live streams.

3.4 concat

Physical concatenation protocol.

Read and seek from many resources in sequence as if they were a unique resource.

A URL accepted by this protocol has the syntax:

concat:URL1|URL2|...|URLN

where URL1, URL2, ..., URLN are the URLs of the resources to be concatenated, each one possibly specifying a distinct protocol.

For example, to read a sequence of files split1.mpeg, split2.mpeg and split3.mpeg with ffplay, use the command:

ffplay concat:split1.mpeg\|split2.mpeg\|split3.mpeg

Note that you may need to escape the character "|" which is special for many shells.
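The protocol's behavior is simple byte-level concatenation. As a rough Python illustration (plain local files stand in for URLs; this is a sketch, not the libavformat implementation, which also supports seeking across the joined resources):

```python
def read_concat(paths):
    """Read several resources back to back, as one contiguous byte stream."""
    chunks = []
    for path in paths:
        with open(path, "rb") as f:
            chunks.append(f.read())
    return b"".join(chunks)
```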

3.5 crypto

AES-encrypted stream reading protocol.

The accepted options are:

Set the AES decryption key binary block from given hexadecimal representation.

Set the AES decryption initialization vector binary block from given hexadecimal representation.
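The key and iv options above take a hexadecimal text representation of a binary block. The equivalent conversion, sketched in Python (the 128-bit key value shown is purely illustrative):

```python
def hex_to_block(hexstr: str) -> bytes:
    """Decode a hexadecimal key/iv representation into its binary block."""
    return bytes.fromhex(hexstr)

# Illustrative 128-bit AES key given as 32 hex digits:
key = hex_to_block("00112233445566778899aabbccddeeff")
assert len(key) == 16  # AES-128 key size in bytes
```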

Accepted URL formats:

crypto:URL
crypto+URL

3.6 data

Data in-line in the URI. See http://en.wikipedia.org/wiki/Data_URI_scheme.

For example, to convert a GIF file given inline with ffmpeg:

ffmpeg -i "data:image/gif;base64,R0lGODdhCAAIAMIEAAAAAAAA//8AAP//AP///////////////ywAAAAACAAIAAADF0gEDLojDgdGiJdJqUX02iB4E8Q9jUMkADs=" smiley.png

3.7 file

File access protocol.

Read from or write to a file.

A file URL can have the form:

file:filename

where filename is the path of the file to read.

A URL that does not have a protocol prefix will be assumed to be a file URL. Depending on the build, a URL that looks like a Windows path with the drive letter at the beginning will also be assumed to be a file URL (usually not the case in builds for unix-like systems).

For example, to read from a file with ffmpeg, use the command:

ffmpeg -i file:input.mpeg output.mpeg

This protocol accepts the following options:

Truncate existing files on write, if set to 1. A value of 0 prevents truncating. Default value is 1.

Set I/O operation maximum block size, in bytes. Default value is , which results in not limiting the requested block size. Setting this value reasonably low improves user termination request reaction time, which is valuable for files on slow medium.

3.8 ftp

FTP (File Transfer Protocol).

Read from or write to remote resources using FTP protocol.

The following syntax is required:

ftp://[user[:password]@]server[:port]/path/to/remote/resource.mpeg
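The URL shape above can be decomposed with Python's standard urllib.parse module; the field names below follow urllib's API, not libavformat's. A small sketch:

```python
from urllib.parse import urlsplit

def split_ftp_url(url):
    """Break an ftp:// URL into (user, password, host, port, path)."""
    u = urlsplit(url)
    return u.username, u.password, u.hostname, u.port, u.path
```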

This protocol accepts the following options.

Set timeout in microseconds of socket I/O operations used by the underlying low level operation. By default it is set to -1, which means that the timeout is not specified.

Password used when logging in as anonymous user. Typically an e-mail address should be used.

Control seekability of connection during encoding. If set to 1 the resource is supposed to be seekable, if set to 0 it is assumed not to be seekable. Default value is 0.

NOTE: Protocol can be used as output, but it is recommended not to do so, unless special care is taken (tests, customized server configuration etc.). Different FTP servers behave in different ways during seek operations. ff* tools may produce incomplete content due to server limitations.

This protocol accepts the following options:

If set to 1, the protocol will retry reading at the end of the file, allowing reading files that still are being written. In order for this to terminate, you either need to use the rw_timeout option, or use the interrupt callback (for API users).

3.9 gopher

Gopher protocol.

3.10 hls

Read Apple HTTP Live Streaming compliant segmented stream as a uniform one. The M3U8 playlists describing the segments can be remote HTTP resources or local files, accessed using the standard file protocol. The nested protocol is declared by specifying "+proto" after the hls URI scheme name, where proto is either "file" or "http".

hls+http://host/path/to/remote/resource.m3u8
hls+file://path/to/local/resource.m3u8

Using this protocol is discouraged - the hls demuxer should work just as well (if not, please report the issues) and is more complete. To use the hls demuxer instead, simply use the direct URLs to the m3u8 files.

3.11 http

HTTP (Hyper Text Transfer Protocol).

This protocol accepts the following options:

Control seekability of connection. If set to 1 the resource is supposed to be seekable, if set to 0 it is assumed not to be seekable, if set to -1 it will try to autodetect if it is seekable. Default value is -1.

If set to 1 use chunked Transfer-Encoding for posts, default is 1.

Set a specific content type for the POST messages or for listen mode.

Set HTTP proxy to tunnel through, e.g. http://example.com:1234.

Set custom HTTP headers, can override built in default headers. The value must be a string encoding the headers.

Use persistent connections if set to 1, default is 0.

Set custom HTTP post data.

Set the Referer header. Include ’Referer: URL’ header in HTTP request.

Override the User-Agent header. If not specified the protocol will use a string describing the libavformat build. ("Lavf/<version>")

This is a deprecated option; you can use user_agent instead.

Set timeout in microseconds of socket I/O operations used by the underlying low level operation. By default it is set to -1, which means that the timeout is not specified.

If set then eof is treated like an error and causes reconnection, this is useful for live / endless streams.

If set then even streamed/non seekable streams will be reconnected on errors.

Sets the maximum delay in seconds after which to give up reconnecting.

Export the MIME type.

Exports the HTTP response version number. Usually "1.0" or "1.1".

If set to 1 request ICY (SHOUTcast) metadata from the server. If the server supports this, the metadata has to be retrieved by the application by reading the icy_metadata_headers and icy_metadata_packet options. The default is 1.

If the server supports ICY metadata, this contains the ICY-specific HTTP reply headers, separated by newline characters.

If the server supports ICY metadata, and icy was set to 1, this contains the last non-empty metadata packet sent by the server. It should be polled at regular intervals by applications interested in mid-stream metadata updates.

Set the cookies to be sent in future requests. The format of each cookie is the same as the value of a Set-Cookie HTTP response field. Multiple cookies can be delimited by a newline character.

Set initial byte offset.

Try to limit the request to bytes preceding this offset.

When used as a client option it sets the HTTP method for the request.

When used as a server option it sets the HTTP method that is going to be expected from the client(s). If the expected and the received HTTP method do not match the client will be given a Bad Request response. When unset the HTTP method is not checked for now. This will be replaced by autodetection in the future.

If set to 1 enables experimental HTTP server. This can be used to send data when used as an output option, or read data from a client with HTTP POST when used as an input option. If set to 2 enables experimental multi-client HTTP server. This is not yet implemented in ffmpeg.c and thus must not be used as a command line option.

# Server side (sending):
ffmpeg -i somefile.ogg -c copy -listen 1 -f ogg http://:

# Client side (receiving):
ffmpeg -i http://: -c copy somefile.ogg

# Client can also be done with wget:
wget http://: -O somefile.ogg

# Server side (receiving):
ffmpeg -listen 1 -i http://: -c copy somefile.ogg

# Client side (sending):
ffmpeg -i somefile.ogg -chunked_post 0 -c copy -f ogg http://:

# Client can also be done with wget:
wget --post-file=somefile.ogg http://:

3.11.1 HTTP Cookies

Some HTTP requests will be denied unless cookie values are passed in with the request. The cookies option allows these cookies to be specified. At the very least, each cookie must specify a value along with a path and domain. HTTP requests that match both the domain and path will automatically include the cookie value in the HTTP Cookie header field. Multiple cookies can be delimited by a newline.

The required syntax to play a stream specifying a cookie is:

ffplay -cookies "nlqptid=nltid=tsn; path=/; domain=somedomain.com;" http://somedomain.com/somestream.m3u8
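The matching rule described above (a cookie is attached only when both its domain and path match the request URL) can be sketched in Python; this is a simplified model of cookie semantics, not libavformat's implementation:

```python
def cookie_applies(cookie_domain, cookie_path, host, path):
    """Return True when a cookie's domain and path both match the request."""
    domain_ok = host == cookie_domain or host.endswith("." + cookie_domain)
    return domain_ok and path.startswith(cookie_path)
```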

3.12 Icecast

Icecast protocol (stream to Icecast servers)

This protocol accepts the following options:

Set the stream genre.

Set the stream name.

Set the stream description.

Set the stream website URL.

Set if the stream should be public. The default is 0 (not public).

Override the User-Agent header. If not specified a string of the form "Lavf/<version>" will be used.

Set the Icecast mountpoint password.

Set the stream content type. This must be set if it is different from audio/mpeg.

This enables support for Icecast versions < 2.4.0, which do not support the HTTP PUT method but the SOURCE method.

icecast://[[:]@]:/

3.13 mmst

MMS (Microsoft Media Server) protocol over TCP.

3.14 mmsh

MMS (Microsoft Media Server) protocol over HTTP.

The required syntax is:

mmsh://[:][/][/]

3.15 md5

MD5 output protocol.

Computes the MD5 hash of the data to be written, and on close writes this to the designated output or stdout if none is specified. It can be used to test muxers without writing an actual file.

Some examples follow.

# Write the MD5 hash of the encoded AVI file to the file output.avi.md5.
ffmpeg -i input.flv -f avi -y md5:output.avi.md5

# Write the MD5 hash of the encoded AVI file to stdout.
ffmpeg -i input.flv -f avi -y md5:

Note that some formats (typically MOV) require the output protocol to be seekable, so they will fail with the MD5 output protocol.
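What the md5: protocol computes can be sketched with Python's hashlib: hash every chunk written to the output, then emit the hex digest on close. This is a model of the behavior, not the protocol's code:

```python
import hashlib

def md5_of_stream(chunks):
    """Incrementally hash a sequence of byte chunks, like an md5: output."""
    h = hashlib.md5()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()
```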

3.16 pipe

UNIX pipe access protocol.

Read and write from UNIX pipes.

The accepted syntax is:

pipe:[number]

number is the number corresponding to the file descriptor of the pipe (e.g. 0 for stdin, 1 for stdout, 2 for stderr). If number is not specified, by default the stdout file descriptor will be used for writing, stdin for reading.

For example, to read from stdin with ffmpeg:

cat test.wav | ffmpeg -i pipe:0
# ...this is the same as...
cat test.wav | ffmpeg -i pipe:

For writing to stdout with ffmpeg:

ffmpeg -i test.wav -f avi pipe:1 | cat > test.avi
# ...this is the same as...
ffmpeg -i test.wav -f avi pipe: | cat > test.avi

This protocol accepts the following options:

Set I/O operation maximum block size, in bytes. Default value is , which results in not limiting the requested block size. Setting this value reasonably low improves user termination request reaction time, which is valuable if data transmission is slow.

Note that some formats (typically MOV), require the output protocol to be seekable, so they will fail with the pipe output protocol.

3.17 prompeg

Pro-MPEG Code of Practice #3 Release 2 FEC protocol.

The Pro-MPEG CoP#3 FEC is a 2D parity-check forward error correction mechanism for MPEG-2 Transport Streams sent over RTP.

This protocol must be used in conjunction with the rtp_mpegts muxer and the rtp protocol.

The required syntax is:

-f rtp_mpegts -fec prompeg=option=val[:option=val]... rtp://:

The destination UDP ports are port + 2 for the column FEC stream and port + 4 for the row FEC stream.

This protocol accepts the following options:

The number of columns (4-20, LxD <= 100)

The number of rows (4-20, LxD <= 100)

Example usage:

-f rtp_mpegts -fec prompeg=l=8:d=4 rtp://:
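The 2D parity idea behind Pro-MPEG CoP#3 can be reduced to its core: XOR-ing the packets of one row or column of the L x D matrix yields an FEC packet that can rebuild a single lost packet in that row or column. This toy model ignores the actual RTP/FEC packet format entirely:

```python
def xor_packets(packets):
    """XOR equal-length packets byte-wise, producing a parity packet."""
    out = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            out[i] ^= byte
    return bytes(out)

# Recovery: a lost packet equals the XOR of the parity packet with all
# the surviving packets in the same row/column.
```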

3.18 rtmp

Real-Time Messaging Protocol.

The Real-Time Messaging Protocol (RTMP) is used for streaming multimedia content across a TCP/IP network.

The required syntax is:

rtmp://[:@][:][/][/][/]

The accepted parameters are:

An optional username (mostly for publishing).

An optional password (mostly for publishing).

The address of the RTMP server.

The number of the TCP port to use (by default is 1935).

It is the name of the application to access. It usually corresponds to the path where the application is installed on the RTMP server (e.g. /ondemand/, /flash/live/, etc.). You can override the value parsed from the URI through the rtmp_app option, too.

It is the path or name of the resource to play with reference to the application; it may be prefixed by "mp4:". You can override the value parsed from the URI through the rtmp_playpath option, too.

Act as a server, listening for an incoming connection.

Maximum time to wait for the incoming connection. Implies listen.

Additionally, the following parameters can be set via command line options (or in code via AVOptions):

Name of application to connect on the RTMP server. This option overrides the parameter specified in the URI.

Set the client buffer time in milliseconds. The default is 3000.

Extra arbitrary AMF connection parameters, parsed from a string, e.g. like B:1 S:authMe O:1 NN:code:1.23 NS:flag:ok O:0. Each value is prefixed by a single character denoting the type, B for Boolean, N for number, S for string, O for object, or Z for null, followed by a colon. For Booleans the data must be either 0 or 1 for FALSE or TRUE, respectively. Likewise for Objects the data must be 0 or 1 to end or begin an object, respectively. Data items in subobjects may be named, by prefixing the type with 'N' and specifying the name before the value (i.e. NB:myFlag:1). This option may be used multiple times to construct arbitrary AMF sequences.
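A loose parser for the type-prefixed scalar values described above (B, N, S, Z) can be sketched as follows; object (O) and named-subobject handling is omitted to keep the sketch short, and the function name is illustrative, not an FFmpeg API:

```python
def parse_amf_value(item: str):
    """Parse one type-prefixed AMF parameter like "B:1", "N:1.23", "S:x", "Z:"."""
    kind, _, data = item.partition(":")
    if kind == "B":
        return data == "1"   # 0/1 map to FALSE/TRUE
    if kind == "N":
        return float(data)   # numbers
    if kind == "S":
        return data          # strings, kept verbatim
    if kind == "Z":
        return None          # null
    raise ValueError("unsupported AMF type prefix: " + kind)
```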

Version of the Flash plugin used to run the SWF player. The default is LNX 9,0,124,2. (When publishing, the default is FMLE/3.0 (compatible; <libavformat version>).)

Number of packets flushed in the same request (RTMPT only). The default is 10.

Specify that the media is a live stream. No resuming or seeking in live streams is possible. The default value is any, which means the subscriber first tries to play the live stream specified in the playpath. If a live stream of that name is not found, it plays the recorded stream. The other possible values are live and recorded.

URL of the web page in which the media was embedded. By default no value will be sent.

Stream identifier to play or to publish. This option overrides the parameter specified in the URI.

Name of live stream to subscribe to. By default no value will be sent. It is only sent if the option is specified or if rtmp_live is set to live.

SHA256 hash of the decompressed SWF file (32 bytes).

Size of the decompressed SWF file, required for SWFVerification.

URL of the SWF player for the media. By default no value will be sent.

URL to player swf file, compute hash/size automatically.

URL of the target stream. Defaults to proto://host[:port]/app.

For example, to read with ffplay a multimedia resource named "sample" from the application "vod" from an RTMP server "myserver":

ffplay rtmp://myserver/vod/sample

To publish to a password protected server, passing the playpath and app names separately:

ffmpeg -re -i <input> -f flv -rtmp_playpath some/long/path -rtmp_app long/app/name rtmp://username:password@myserver/

3.19 rtmpe

Encrypted Real-Time Messaging Protocol.

The Encrypted Real-Time Messaging Protocol (RTMPE) is used for streaming multimedia content within standard cryptographic primitives, consisting of Diffie-Hellman key exchange and HMACSHA256, generating a pair of RC4 keys.

3.20 rtmps

Real-Time Messaging Protocol over a secure SSL connection.

The Real-Time Messaging Protocol (RTMPS) is used for streaming multimedia content across an encrypted connection.

3.21 rtmpt

Real-Time Messaging Protocol tunneled through HTTP.

The Real-Time Messaging Protocol tunneled through HTTP (RTMPT) is used for streaming multimedia content within HTTP requests to traverse firewalls.

3.22 rtmpte

Encrypted Real-Time Messaging Protocol tunneled through HTTP.

The Encrypted Real-Time Messaging Protocol tunneled through HTTP (RTMPTE) is used for streaming multimedia content within HTTP requests to traverse firewalls.

3.23 rtmpts

Real-Time Messaging Protocol tunneled through HTTPS.

The Real-Time Messaging Protocol tunneled through HTTPS (RTMPTS) is used for streaming multimedia content within HTTPS requests to traverse firewalls.

3.24 libsmbclient

libsmbclient permits one to manipulate CIFS/SMB network resources.

The following syntax is required:

smb://[[domain:]user[:password@]]server[/share[/path[/file]]]

This protocol accepts the following options.

Set timeout in milliseconds of socket I/O operations used by the underlying low level operation. By default it is set to -1, which means that the timeout is not specified.

Truncate existing files on write, if set to 1. A value of 0 prevents truncating. Default value is 1.

Set the workgroup used for making connections. By default workgroup is not specified.

For more information see: http://www.samba.org/.

3.25 libssh

Secure File Transfer Protocol via libssh

Read from or write to remote resources using SFTP protocol.

The following syntax is required:

sftp://[user[:password]@]server[:port]/path/to/remote/resource.mpeg

This protocol accepts the following options.

Set timeout of socket I/O operations used by the underlying low level operation. By default it is set to -1, which means that the timeout is not specified.

Truncate existing files on write, if set to 1. A value of 0 prevents truncating. Default value is 1.

Specify the path of the file containing the private key to use during authorization. By default libssh searches for keys in the ~/.ssh/ directory.

Example: Play a file stored on a remote server.

ffplay sftp://user:password@server_address:22/home/user/resource.mpeg

3.26 librtmp rtmp, rtmpe, rtmps, rtmpt, rtmpte

Real-Time Messaging Protocol and its variants supported through librtmp.

Requires the presence of the librtmp headers and library during configuration. You need to explicitly configure the build with "--enable-librtmp". If enabled this will replace the native RTMP protocol.

This protocol provides most client functions and a few server functions needed to support RTMP, RTMP tunneled in HTTP (RTMPT), encrypted RTMP (RTMPE), RTMP over SSL/TLS (RTMPS) and tunneled variants of these encrypted types (RTMPTE, RTMPTS).

The required syntax is:

://[:][/][/]

where the scheme is one of the strings "rtmp", "rtmpt", "rtmpe", "rtmps", "rtmpte", "rtmpts" corresponding to each RTMP variant, and the remaining fields have the same meaning as specified for the RTMP native protocol. The options field contains a list of space-separated options of the form key=val.

See the librtmp manual page (man 3 librtmp) for more information.

For example, to stream a file in real-time to an RTMP server using ffmpeg:

ffmpeg -re -i myfile -f flv rtmp://myserver/live/mystream

To play the same stream using ffplay:

ffplay "rtmp://myserver/live/mystream live=1"

3.27 rtp

Real-time Transport Protocol.

The required syntax for an RTP URL is: rtp://[:][?=...]

specifies the RTP port to use.

The following URL options are supported:

Set the TTL (Time-To-Live) value (for multicast only).

Set the remote RTCP port to .

Set the local RTP port to .

Set the local RTCP port to .

Set max packet size (in bytes) to .

Do a connect() on the UDP socket (if set to 1) or not (if set to 0).

List allowed source IP addresses.

List disallowed (blocked) source IP addresses.

Send packets to the source address of the latest received packet (if set to 1) or to a default remote address (if set to 0).

Set the local RTP port to .

This is a deprecated option. Instead, should be used.

Important notes:

  1. If is not set the RTCP port will be set to the RTP port value plus 1.
  2. If (the local RTP port) is not set any available port will be used for the local RTP and RTCP ports.
  3. If (the local RTCP port) is not set it will be set to the local RTP port value plus 1.
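The port-defaulting rules above can be written out explicitly; the function and argument names here are descriptive placeholders, not actual option names:

```python
def resolve_rtp_ports(rtp_port, rtcp_port=None, local_rtp=None, local_rtcp=None):
    """Apply the RTP/RTCP port-defaulting notes listed above."""
    if rtcp_port is None:
        rtcp_port = rtp_port + 1       # note 1: remote RTCP defaults to RTP + 1
    if local_rtp is not None and local_rtcp is None:
        local_rtcp = local_rtp + 1     # note 3 (note 2: OS picks when unset)
    return rtp_port, rtcp_port, local_rtp, local_rtcp
```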

3.28 rtsp

Real-Time Streaming Protocol.

RTSP is not technically a protocol handler in libavformat, it is a demuxer and muxer. The demuxer supports both normal RTSP (with data transferred over RTP; this is used by e.g. Apple and Microsoft) and Real-RTSP (with data transferred over RDT).

The muxer can be used to send a stream using RTSP ANNOUNCE to a server supporting it (currently Darwin Streaming Server and Mischa Spiegelmock’s RTSP server).

The required syntax for a RTSP url is:

rtsp://[:]/

Options can be set on the ffmpeg/ffplay command line, or set in code via AVOptions or in avformat_open_input.

The following options are supported.

Do not start playing the stream immediately if set to 1. Default value is 0.

Set RTSP transport protocols.

It accepts the following values:

‘udp’

Use UDP as lower transport protocol.

‘tcp’

Use TCP (interleaving within the RTSP control channel) as lower transport protocol.

‘udp_multicast’

Use UDP multicast as lower transport protocol.

‘http’

Use HTTP tunneling as lower transport protocol, which is useful for passing proxies.

Multiple lower transport protocols may be specified, in that case they are tried one at a time (if the setup of one fails, the next one is tried). For the muxer, only the ‘tcp’ and ‘udp’ options are supported.

Set RTSP flags.

The following values are accepted:

‘filter_src’

Accept packets only from negotiated peer address and port.

‘listen’

Act as a server, listening for an incoming connection.

‘prefer_tcp’

Try TCP for RTP transport first, if TCP is available as RTSP RTP transport.

Default value is ‘none’.

Set media types to accept from the server.

The following flags are accepted:

‘video’
‘audio’
‘data’

By default it accepts all media types.

Set minimum local UDP port. Default value is 5000.

Set maximum local UDP port. Default value is 65000.

Set maximum timeout (in seconds) to wait for incoming connections.

A value of -1 means infinite (default). This option implies the rtsp_flags option set to ‘listen’.

Set number of packets to buffer for handling of reordered packets.

Set socket TCP I/O timeout in microseconds.

Override User-Agent header. If not specified, it defaults to the libavformat identifier string.

When receiving data over UDP, the demuxer tries to reorder received packets (since they may arrive out of order, or packets may get lost totally). This can be disabled by setting the maximum demuxing delay to zero (via the max_delay field of AVFormatContext).

When watching multi-bitrate Real-RTSP streams with ffplay, the streams to display can be chosen with -vst n and -ast n for video and audio respectively, and can be switched on the fly by pressing v and a.

3.28.1 Examples

The following examples all make use of the ffplay and ffmpeg tools.

  • Watch a stream over UDP, with a max reordering delay of 0.5 seconds:
    ffplay -max_delay 500000 -rtsp_transport udp rtsp://server/video.mp4
  • Watch a stream tunneled over HTTP:
    ffplay -rtsp_transport http rtsp://server/video.mp4
  • Send a stream in realtime to a RTSP server, for others to watch:
    ffmpeg -re -i -f rtsp -muxdelay 0.1 rtsp://server/live.sdp
  • Receive a stream in realtime:
    ffmpeg -rtsp_flags listen -i rtsp://ownaddress/live.sdp

3.29 sap

Session Announcement Protocol (RFC 2974). This is not technically a protocol handler in libavformat, it is a muxer and demuxer. It is used for signalling of RTP streams, by announcing the SDP for the streams regularly on a separate port.

3.29.1 Muxer

The syntax for a SAP url given to the muxer is:

sap://[:][?]

The RTP packets are sent to the destination address on the given port, or to port 5004 if no port is specified. The options are given as a "&"-separated list. The following options are supported:

Specify the destination IP address for sending the announcements to. If omitted, the announcements are sent to the commonly used SAP announcement multicast address 224.2.127.254 (sap.mcast.net), or ff0e::2:7ffe if the destination is an IPv6 address.

Specify the port to send the announcements on, defaults to 9875 if not specified.

Specify the time to live value for the announcements and RTP packets, defaults to 255.

If set to 1, send all RTP streams on the same port pair. If zero (the default), all streams are sent on unique ports, with each stream on a port 2 numbers higher than the previous. VLC/Live555 requires this to be set to 1, to be able to receive the stream. The RTP stack in libavformat for receiving requires all streams to be sent on unique ports.

Example command lines follow.

To broadcast a stream on the local subnet, for watching in VLC:

ffmpeg -re -i -f sap sap://224.0.0.255?same_port=1

Similarly, for watching in ffplay:

ffmpeg -re -i -f sap sap://224.0.0.255

And for watching in ffplay, over IPv6:

ffmpeg -re -i -f sap sap://[ff0e::1:2:3:4]

3.29.2 Demuxer

The syntax for a SAP url given to the demuxer is:

sap://[address][:port]

address is the multicast address to listen for announcements on; if omitted, the default 224.2.127.254 (sap.mcast.net) is used. port is the port that is listened on, 9875 if omitted.

The demuxer listens for announcements on the given address and port. Once an announcement is received, it tries to receive that particular stream.

Example command lines follow.

To play back the first stream announced on the normal SAP multicast address:

ffplay sap://

To play back the first stream announced on the default IPv6 SAP multicast address:

ffplay sap://[ff0e::2:7ffe]

3.30 sctp

Stream Control Transmission Protocol.

The accepted URL syntax is:

sctp://:[?]

The protocol accepts the following options:

If set to any value, listen for an incoming connection. Outgoing connection is done by default.

Set the maximum number of streams. By default no limit is set.

3.31 srtp

Secure Real-time Transport Protocol.

The accepted options are:

Select input and output encoding suites.

Supported values:

‘aes_cm_128_hmac_sha1_80’
‘srtp_aes128_cm_hmac_sha1_80’
‘aes_cm_128_hmac_sha1_32’
‘srtp_aes128_cm_hmac_sha1_32’

Set input and output encoding parameters, which are expressed by a base64-encoded representation of a binary block. The first 16 bytes of this binary block are used as master key, the following 14 bytes are used as master salt.

3.32 subfile

Virtually extract a segment of a file or another stream. The underlying stream must be seekable.

Accepted options:

Start offset of the extracted segment, in bytes.

End offset of the extracted segment, in bytes. If set to 0, extract till end of file.

Examples:

Extract a chapter from a DVD VOB file (start and end sectors obtained externally and multiplied by 2048):

subfile,,start,153391104,end,268142592,,:/media/dvd/VIDEO_TS/VTS_08_1.VOB
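The byte offsets in the DVD example come from multiplying sector numbers by the 2048-byte DVD sector size (dividing the example's offsets back gives sectors 74898 and 130929). The conversion, as a trivial sketch:

```python
SECTOR_SIZE = 2048  # DVD logical sector size in bytes

def sector_to_byte(sector: int) -> int:
    """Convert a DVD sector number to a byte offset for subfile start/end."""
    return sector * SECTOR_SIZE
```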

Play an AVI file directly from a TAR archive:

subfile,,start,183241728,end,366490624,,:archive.tar

Play an MPEG-TS file from start offset till end:

subfile,,start,32815239,end,0,,:video.ts

3.33 tee

Writes the output to multiple protocols. The individual outputs are separated by "|".

tee:file://path/to/local/this.avi|file://path/to/local/that.avi

3.34 tcp

Transmission Control Protocol.

The required syntax for a TCP url is:

tcp://:[?]

options contains a list of &-separated options of the form key=val.

The list of supported options follows.

Listen for an incoming connection. Default value is 0.

Set raise error timeout, expressed in microseconds.

This option is only relevant in read mode: if no data arrived in more than this time interval, raise error.

Set listen timeout, expressed in milliseconds.

Set receive buffer size, expressed in bytes.

Set send buffer size, expressed in bytes.

Set TCP_NODELAY to disable Nagle’s algorithm. Default value is 0.

The following example shows how to set up a listening TCP connection with ffmpeg, which is then accessed with ffplay:

ffmpeg -i -f tcp://:?listen
ffplay tcp://:

3.35 tls

Transport Layer Security (TLS) / Secure Sockets Layer (SSL)

The required syntax for a TLS/SSL url is:

tls://:[?]

The following parameters can be set via command line options (or in code via AVOptions):

A file containing certificate authority (CA) root certificates to treat as trusted. If the linked TLS library contains a default this might not need to be specified for verification to work, but not all libraries and setups have defaults built in. The file must be in OpenSSL PEM format.

If enabled, try to verify the peer that we are communicating with. Note, if using OpenSSL, this currently only makes sure that the peer certificate is signed by one of the root certificates in the CA database, but it does not validate that the certificate actually matches the host name we are trying to connect to. (With other backends, the host name is validated as well.)

This is disabled by default since it requires a CA database to be provided by the caller in many cases.

A file containing a certificate to use in the handshake with the peer. (When operating as server, in listen mode, this is more often required by the peer, while client certificates only are mandated in certain setups.)

A file containing the private key for the certificate.

If enabled, listen for connections on the provided port, and assume the server role in the handshake instead of the client role.

Example command lines:

To create a TLS/SSL server that serves an input stream:

ffmpeg -i -f tls://:?listen&cert=&key=

To play back a stream from the TLS/SSL server using ffplay:

ffplay tls://:

3.36 udp

User Datagram Protocol.

The required syntax for a UDP URL is:

udp://:[?]

options contains a list of &-separated options of the form key=val.

In case threading is enabled on the system, a circular buffer is used to store the incoming data, which allows one to reduce loss of data due to UDP socket buffer overruns. The fifo_size and overrun_nonfatal options are related to this buffer.

The list of supported options follows.

Set the UDP maximum socket buffer size in bytes. This is used to set either the receive or send buffer size, depending on what the socket is used for. Default is 64KB. See also fifo_size.

If set to nonzero, the output will have the specified constant bitrate if the input has enough packets to sustain it.

When used in conjunction with bitrate, this specifies the maximum number of bits in packet bursts.

Override the local UDP port to bind with.

Choose the local IP address. This is useful e.g. if sending multicast and the host has multiple interfaces, where the user can choose which interface to send on by specifying the IP address of that interface.

Set the size in bytes of UDP packets.

Explicitly allow or disallow reusing UDP sockets.

Set the time to live value (for multicast only).

Initialize the UDP socket with connect(). In this case, the destination address can’t be changed with ff_udp_set_remote_url later. If the destination address isn’t known at the start, this option can be specified in ff_udp_set_remote_url, too. This allows finding out the source address for the packets with getsockname, and makes writes return with AVERROR(ECONNREFUSED) if "destination unreachable" is received. For receiving, this gives the benefit of only receiving packets from the specified peer address/port.

sources=address[,address]
Only receive packets sent to the multicast group from one of the specified sender IP addresses.

block=address[,address]
Ignore packets sent to the multicast group from the specified sender IP addresses.

fifo_size=units
Set the UDP receiving circular buffer size, expressed as a number of packets with size of 188 bytes. If not specified, it defaults to 7*4096.

overrun_nonfatal=1|0
Survive in case of UDP receiving circular buffer overrun. Default value is 0.

timeout=timeout
Set raise error timeout, expressed in microseconds.

This option is only relevant in read mode: if no data arrived in more than this time interval, raise error.

broadcast=1|0
Explicitly allow or disallow UDP broadcasting.

Note that broadcasting may not work properly on networks having broadcast storm protection.
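Allowing broadcast corresponds to the SO_BROADCAST socket option; without it, sending to a broadcast address fails with a permission error on most systems. A minimal standard-library sketch (not FFmpeg code):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Enable broadcasting on this socket (the analogue of broadcast=1).
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
enabled = sock.getsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST)
print(enabled)  # nonzero once enabled
sock.close()
```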

3.36.1 Examples

  • Use ffmpeg to stream over UDP to a remote endpoint:
    ffmpeg -i input -f format udp://hostname:port
  • Use ffmpeg to stream in mpegts format over UDP using 188-sized UDP packets, using a large input buffer:
    ffmpeg -i input -f mpegts udp://hostname:port?pkt_size=188&buffer_size=65535
  • Use ffmpeg to receive over UDP from a remote endpoint:
    ffmpeg -i udp://[multicast-address]:port ...

3.37 unix

Unix local socket.

The required syntax for a Unix socket URL is:

unix://filepath

The following parameters can be set via command line options (or in code via AVOptions):

timeout
Timeout in ms.

listen
Create the Unix socket in listening mode.
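The two roles can be demonstrated with Python's standard socket module: one AF_UNIX socket created in listening mode, one connecting as a client. This is an illustrative sketch independent of FFmpeg; the socket path is an arbitrary temporary name.

```python
import os
import socket
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.sock")

# The listening side (the analogue of listen=1).
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(1)

# The connecting side; settimeout is a rough analogue of the
# timeout option.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.settimeout(5.0)
client.connect(path)
conn, _ = server.accept()

client.sendall(b"ping")
msg = conn.recv(4)
print(msg)  # b'ping'

for s in (client, conn, server):
    s.close()
os.unlink(path)
```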

4 See Also

ffmpeg, ffplay, ffprobe, libavformat

5 Authors

The FFmpeg developers.

For details about the authorship, see the Git history of the project (git://source.ffmpeg.org/ffmpeg), e.g. by typing the command git log in the FFmpeg source directory, or browsing the online repository at http://source.ffmpeg.org.

Maintainers for the specific components are listed in the file MAINTAINERS in the source code tree.

This document was generated on March 10, 2018 using makeinfo.


Network Working Group H. Schulzrinne Request for Comments: 2326 Columbia U. Category: Standards Track A. Rao Netscape R. Lanphier RealNetworks April 1998 Real Time Streaming Protocol (RTSP) Status of this Memo This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the "Internet Official Protocol Standards" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited. Copyright Notice Copyright (C) The Internet Society (1998). All Rights Reserved. Abstract The Real Time Streaming Protocol, or RTSP, is an application-level protocol for control over the delivery of data with real-time properties. RTSP provides an extensible framework to enable controlled, on-demand delivery of real-time data, such as audio and video. Sources of data can include both live data feeds and stored clips. This protocol is intended to control multiple data delivery sessions, provide a means for choosing delivery channels such as UDP, multicast UDP and TCP, and provide a means for choosing delivery mechanisms based upon RTP (RFC 1889). Table of Contents * 1 Introduction ................................................. 5 + 1.1 Purpose ............................................... 5 + 1.2 Requirements .......................................... 6 + 1.3 Terminology ........................................... 6 + 1.4 Protocol Properties ................................... 9 + 1.5 Extending RTSP ........................................ 11 + 1.6 Overall Operation ..................................... 11 + 1.7 RTSP States ........................................... 12 + 1.8 Relationship with Other Protocols ..................... 13 * 2 Notational Conventions ....................................... 14 * 3 Protocol Parameters .......................................... 
14 + 3.1 RTSP Version .......................................... 14Schulzrinne, et. al. Standards Track [Page 1]
RFC 2326 Real Time Streaming Protocol April 1998 + 3.2 RTSP URL .............................................. 14 + 3.3 Conference Identifiers ................................ 16 + 3.4 Session Identifiers ................................... 16 + 3.5 SMPTE Relative Timestamps ............................. 16 + 3.6 Normal Play Time ...................................... 17 + 3.7 Absolute Time ......................................... 18 + 3.8 Option Tags ........................................... 18 o 3.8.1 Registering New Option Tags with IANA .......... 18 * 4 RTSP Message ................................................. 19 + 4.1 Message Types ......................................... 19 + 4.2 Message Headers ....................................... 19 + 4.3 Message Body .......................................... 19 + 4.4 Message Length ........................................ 20 * 5 General Header Fields ........................................ 20 * 6 Request ...................................................... 20 + 6.1 Request Line .......................................... 21 + 6.2 Request Header Fields ................................. 21 * 7 Response ..................................................... 22 + 7.1 Status-Line ........................................... 22 o 7.1.1 Status Code and Reason Phrase .................. 22 o 7.1.2 Response Header Fields ......................... 26 * 8 Entity ....................................................... 27 + 8.1 Entity Header Fields .................................. 27 + 8.2 Entity Body ........................................... 28 * 9 Connections .................................................. 28 + 9.1 Pipelining ............................................ 28 + 9.2 Reliability and Acknowledgements ...................... 28 * 10 Method Definitions .......................................... 29 + 10.1 OPTIONS .............................................. 
30 + 10.2 DESCRIBE ............................................. 31 + 10.3 ANNOUNCE ............................................. 32 + 10.4 SETUP ................................................ 33 + 10.5 PLAY ................................................. 34 + 10.6 PAUSE ................................................ 36 + 10.7 TEARDOWN ............................................. 37 + 10.8 GET_PARAMETER ........................................ 37 + 10.9 SET_PARAMETER ........................................ 38 + 10.10 REDIRECT ............................................ 39 + 10.11 RECORD .............................................. 39 + 10.12 Embedded (Interleaved) Binary Data .................. 40 * 11 Status Code Definitions ..................................... 41 + 11.1 Success 2xx .......................................... 41 o 11.1.1 250 Low on Storage Space ...................... 41 + 11.2 Redirection 3xx ...................................... 41 + 11.3 Client Error 4xx ..................................... 42 o 11.3.1 405 Method Not Allowed ........................ 42 o 11.3.2 451 Parameter Not Understood .................. 42 o 11.3.3 452 Conference Not Found ...................... 42Schulzrinne, et. al. Standards Track [Page 2]
RFC 2326 Real Time Streaming Protocol April 1998 o 11.3.4 453 Not Enough Bandwidth ...................... 42 o 11.3.5 454 Session Not Found ......................... 42 o 11.3.6 455 Method Not Valid in This State ............ 42 o 11.3.7 456 Header Field Not Valid for Resource ....... 42 o 11.3.8 457 Invalid Range ............................. 43 o 11.3.9 458 Parameter Is Read-Only .................... 43 o 11.3.10 459 Aggregate Operation Not Allowed .......... 43 o 11.3.11 460 Only Aggregate Operation Allowed ......... 43 o 11.3.12 461 Unsupported Transport .................... 43 o 11.3.13 462 Destination Unreachable .................. 43 o 11.3.14 551 Option not supported ..................... 43 * 12 Header Field Definitions .................................... 44 + 12.1 Accept ............................................... 46 + 12.2 Accept-Encoding ...................................... 46 + 12.3 Accept-Language ...................................... 46 + 12.4 Allow ................................................ 46 + 12.5 Authorization ........................................ 46 + 12.6 Bandwidth ............................................ 46 + 12.7 Blocksize ............................................ 47 + 12.8 Cache-Control ........................................ 47 + 12.9 Conference ........................................... 49 + 12.10 Connection .......................................... 49 + 12.11 Content-Base ........................................ 49 + 12.12 Content-Encoding .................................... 49 + 12.13 Content-Language .................................... 50 + 12.14 Content-Length ...................................... 50 + 12.15 Content-Location .................................... 50 + 12.16 Content-Type ........................................ 50 + 12.17 CSeq ................................................ 50 + 12.18 Date ................................................ 
50 + 12.19 Expires ............................................. 50 + 12.20 From ................................................ 51 + 12.21 Host ................................................ 51 + 12.22 If-Match ............................................ 51 + 12.23 If-Modified-Since ................................... 52 + 12.24 Last-Modified........................................ 52 + 12.25 Location ............................................ 52 + 12.26 Proxy-Authenticate .................................. 52 + 12.27 Proxy-Require ....................................... 52 + 12.28 Public .............................................. 53 + 12.29 Range ............................................... 53 + 12.30 Referer ............................................. 54 + 12.31 Retry-After ......................................... 54 + 12.32 Require ............................................. 54 + 12.33 RTP-Info ............................................ 55 + 12.34 Scale ............................................... 56 + 12.35 Speed ............................................... 57 + 12.36 Server .............................................. 57Schulzrinne, et. al. Standards Track [Page 3]
RFC 2326 Real Time Streaming Protocol April 1998 + 12.37 Session ............................................. 57 + 12.38 Timestamp ........................................... 58 + 12.39 Transport ........................................... 58 + 12.40 Unsupported ......................................... 62 + 12.41 User-Agent .......................................... 62 + 12.42 Vary ................................................ 62 + 12.43 Via ................................................. 62 + 12.44 WWW-Authenticate .................................... 62 * 13 Caching ..................................................... 62 * 14 Examples .................................................... 63 + 14.1 Media on Demand (Unicast) ............................ 63 + 14.2 Streaming of a Container file ........................ 65 + 14.3 Single Stream Container Files ........................ 67 + 14.4 Live Media Presentation Using Multicast .............. 69 + 14.5 Playing media into an existing session ............... 70 + 14.6 Recording ............................................ 71 * 15 Syntax ...................................................... 72 + 15.1 Base Syntax .......................................... 72 * 16 Security Considerations ..................................... 73 * A RTSP Protocol State Machines ................................. 76 + A.1 Client State Machine .................................. 76 + A.2 Server State Machine .................................. 77 * B Interaction with RTP ......................................... 79 * C Use of SDP for RTSP Session Descriptions ..................... 80 + C.1 Definitions ........................................... 80 o C.1.1 Control URL .................................... 80 o C.1.2 Media streams .................................. 81 o C.1.3 Payload type(s) ................................ 81 o C.1.4 Format-specific parameters ..................... 
81 o C.1.5 Range of presentation .......................... 82 o C.1.6 Time of availability ........................... 82 o C.1.7 Connection Information ......................... 82 o C.1.8 Entity Tag ..................................... 82 + C.2 Aggregate Control Not Available ....................... 83 + C.3 Aggregate Control Available ........................... 83 * D Minimal RTSP implementation .................................. 85 + D.1 Client ................................................ 85 o D.1.1 Basic Playback ................................. 86 o D.1.2 Authentication-enabled ......................... 86 + D.2 Server ................................................ 86 o D.2.1 Basic Playback ................................. 87 o D.2.2 Authentication-enabled ......................... 87 * E Authors' Addresses ........................................... 88 * F Acknowledgements ............................................. 89 * References ..................................................... 90 * Full Copyright Statement ....................................... 92Schulzrinne, et. al. Standards Track [Page 4]
RFC 2326 Real Time Streaming Protocol April 19981 Introduction1.1 Purpose The Real-Time Streaming Protocol (RTSP) establishes and controls either a single or several time-synchronized streams of continuous media such as audio and video. It does not typically deliver the continuous streams itself, although interleaving of the continuous media stream with the control stream is possible (see Section 10.12). In other words, RTSP acts as a "network remote control" for multimedia servers. The set of streams to be controlled is defined by a presentation description. This memorandum does not define a format for a presentation description. There is no notion of an RTSP connection; instead, a server maintains a session labeled by an identifier. An RTSP session is in no way tied to a transport-level connection such as a TCP connection. During an RTSP session, an RTSP client may open and close many reliable transport connections to the server to issue RTSP requests. Alternatively, it may use a connectionless transport protocol such as UDP. The streams controlled by RTSP may use RTP [1], but the operation of RTSP does not depend on the transport mechanism used to carry continuous media. The protocol is intentionally similar in syntax and operation to HTTP/1.1 [2] so that extension mechanisms to HTTP can in most cases also be added to RTSP. However, RTSP differs in a number of important aspects from HTTP: * RTSP introduces a number of new methods and has a different protocol identifier. * An RTSP server needs to maintain state by default in almost all cases, as opposed to the stateless nature of HTTP. * Both an RTSP server and client can issue requests. * Data is carried out-of-band by a different protocol. (There is an exception to this.) * RTSP is defined to use ISO 10646 (UTF-8) rather than ISO 8859-1, consistent with current HTML internationalization efforts [3]. * The Request-URI always contains the absolute URI. 
Because of backward compatibility with a historical blunder, HTTP/1.1 [2] carries only the absolute path in the request and puts the host name in a separate header field. This makes "virtual hosting" easier, where a single host with one IP address hosts several document trees. Schulzrinne, et. al. Standards Track [Page 5]
RFC 2326 Real Time Streaming Protocol April 1998 The protocol supports the following operations: Retrieval of media from media server: The client can request a presentation description via HTTP or some other method. If the presentation is being multicast, the presentation description contains the multicast addresses and ports to be used for the continuous media. If the presentation is to be sent only to the client via unicast, the client provides the destination for security reasons. Invitation of a media server to a conference: A media server can be "invited" to join an existing conference, either to play back media into the presentation or to record all or a subset of the media in a presentation. This mode is useful for distributed teaching applications. Several parties in the conference may take turns "pushing the remote control buttons." Addition of media to an existing presentation: Particularly for live presentations, it is useful if the server can tell the client about additional media becoming available. RTSP requests may be handled by proxies, tunnels and caches as in HTTP/1.1 [2]. 1.2 Requirements The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [4]. 1.3 Terminology Some of the terminology has been adopted from HTTP/1.1 [2]. Terms not listed here are defined as in HTTP/1.1. Aggregate control: The control of the multiple streams using a single timeline by the server. For audio/video feeds, this means that the client may issue a single play or pause message to control both the audio and video feeds. Conference: a multiparty, multimedia presentation, where "multi" implies greater than or equal to one. Schulzrinne, et. al. Standards Track [Page 6]
RFC 2326 Real Time Streaming Protocol April 1998 Client: The client requests continuous media data from the media server. Connection: A transport layer virtual circuit established between two programs for the purpose of communication. Container file: A file which may contain multiple media streams which often comprise a presentation when played together. RTSP servers may offer aggregate control on these files, though the concept of a container file is not embedded in the protocol. Continuous media: Data where there is a timing relationship between source and sink; that is, the sink must reproduce the timing relationship that existed at the source. The most common examples of continuous media are audio and motion video. Continuous media can be real-time (interactive), where there is a "tight" timing relationship between source and sink, or streaming (playback), where the relationship is less strict. Entity: The information transferred as the payload of a request or response. An entity consists of metainformation in the form of entity-header fields and content in the form of an entity- body, as described in Section 8. Media initialization: Datatype/codec specific initialization. This includes such things as clockrates, color tables, etc. Any transport- independent information which is required by a client for playback of a media stream occurs in the media initialization phase of stream setup. Media parameter: Parameter specific to a media type that may be changed before or during stream playback. Media server: The server providing playback or recording services for one or more media streams. Different media streams within a presentation may originate from different media servers. A media server may reside on the same or a different host as the web server the presentation is invoked from. Schulzrinne, et. al. Standards Track [Page 7]
RFC 2326 Real Time Streaming Protocol April 1998 Media server indirection: Redirection of a media client to a different media server. (Media) stream: A single media instance, e.g., an audio stream or a video stream as well as a single whiteboard or shared application group. When using RTP, a stream consists of all RTP and RTCP packets created by a source within an RTP session. This is equivalent to the definition of a DSM-CC stream([5]). Message: The basic unit of RTSP communication, consisting of a structured sequence of octets matching the syntax defined in Section 15 and transmitted via a connection or a connectionless protocol. Participant: Member of a conference. A participant may be a machine, e.g., a media record or playback server. Presentation: A set of one or more streams presented to the client as a complete media feed, using a presentation description as defined below. In most cases in the RTSP context, this implies aggregate control of those streams, but does not have to. Presentation description: A presentation description contains information about one or more media streams within a presentation, such as the set of encodings, network addresses and information about the content. Other IETF protocols such as SDP (RFC 2327 [6]) use the term "session" for a live presentation. The presentation description may take several different formats, including but not limited to the session description format SDP. Response: An RTSP response. If an HTTP response is meant, that is indicated explicitly. Request: An RTSP request. If an HTTP request is meant, that is indicated explicitly. RTSP session: A complete RTSP "transaction", e.g., the viewing of a movie. A session typically consists of a client setting up a transport mechanism for the continuous media stream (SETUP), starting the stream with PLAY or RECORD, and closing the Schulzrinne, et. al. Standards Track [Page 8]
RFC 2326 Real Time Streaming Protocol April 1998 stream with TEARDOWN. Transport initialization: The negotiation of transport information (e.g., port numbers, transport protocols) between the client and the server. 1.4 Protocol Properties RTSP has the following properties: Extendable: New methods and parameters can be easily added to RTSP. Easy to parse: RTSP can be parsed by standard HTTP or MIME parsers. Secure: RTSP re-uses web security mechanisms. All HTTP authentication mechanisms such as basic (RFC 2068 [2, Section 11.1]) and digest authentication (RFC 2069 [8]) are directly applicable. One may also reuse transport or network layer security mechanisms. Transport-independent: RTSP may use either an unreliable datagram protocol (UDP) (RFC768 [9]), a reliable datagram protocol (RDP, RFC 1151, not widely used [10]) or a reliable stream protocol such as TCP (RFC 793 [11]) as it implements application-level reliability. Multi-server capable: Each media stream within a presentation can reside on a different server. The client automatically establishes several concurrent control sessions with the different media servers. Media synchronization is performed at the transport level. Control of recording devices: The protocol can control both recording and playback devices, as well as devices that can alternate between the two modes ("VCR"). Separation of stream control and conference initiation: Stream control is divorced from inviting a media server to a conference. The only requirement is that the conference initiation protocol either provides or can be used to create a unique conference identifier. In particular, SIP [12] or H.323 [13] may be used to invite a server to a conference. Schulzrinne, et. al. Standards Track [Page 9]
RFC 2326 Real Time Streaming Protocol April 1998 Suitable for professional applications: RTSP supports frame-level accuracy through SMPTE time stamps to allow remote digital editing. Presentation description neutral: The protocol does not impose a particular presentation description or metafile format and can convey the type of format to be used. However, the presentation description must contain at least one RTSP URI. Proxy and firewall friendly: The protocol should be readily handled by both application and transport-layer (SOCKS [14]) firewalls. A firewall may need to understand the SETUP method to open a "hole" for the UDP media stream. HTTP-friendly: Where sensible, RTSP reuses HTTP concepts, so that the existing infrastructure can be reused. This infrastructure includes PICS (Platform for Internet Content Selection [15,16]) for associating labels with content. However, RTSP does not just add methods to HTTP since the controlling continuous media requires server state in most cases. Appropriate server control: If a client can start a stream, it must be able to stop a stream. Servers should not start streaming to clients in such a way that clients cannot stop the stream. Transport negotiation: The client can negotiate the transport method prior to actually needing to process a continuous media stream. Capability negotiation: If basic features are disabled, there must be some clean mechanism for the client to determine which methods are not going to be implemented. This allows clients to present the appropriate user interface. For example, if seeking is not allowed, the user interface must be able to disallow moving a sliding position indicator. An earlier requirement in RTSP was multi-client capability. However, it was determined that a better approach was to make sure that the protocol is easily extensible to the multi-client scenario. Stream identifiers can be used by several control streams, so that "passing the remote" would be possible. 
The protocol would not address how several clients negotiate access; this is left to either a "social protocol" or some other floor Schulzrinne, et. al. Standards Track [Page 10]
RFC 2326 Real Time Streaming Protocol April 1998 control mechanism. 1.5 Extending RTSP Since not all media servers have the same functionality, media servers by necessity will support different sets of requests. For example: * A server may only be capable of playback thus has no need to support the RECORD request. * A server may not be capable of seeking (absolute positioning) if it is to support live events only. * Some servers may not support setting stream parameters and thus not support GET_PARAMETER and SET_PARAMETER. A server SHOULD implement all header fields described in Section 12. It is up to the creators of presentation descriptions not to ask the impossible of a server. This situation is similar in HTTP/1.1 [2], where the methods described in [H19.6] are not likely to be supported across all servers. RTSP can be extended in three ways, listed here in order of the magnitude of changes supported: * Existing methods can be extended with new parameters, as long as these parameters can be safely ignored by the recipient. (This is equivalent to adding new parameters to an HTML tag.) If the client needs negative acknowledgement when a method extension is not supported, a tag corresponding to the extension may be added in the Require: field (see Section 12.32). * New methods can be added. If the recipient of the message does not understand the request, it responds with error code 501 (Not implemented) and the sender should not attempt to use this method again. A client may also use the OPTIONS method to inquire about methods supported by the server. The server SHOULD list the methods it supports using the Public response header. * A new version of the protocol can be defined, allowing almost all aspects (except the position of the protocol version number) to change. 1.6 Overall Operation Each presentation and media stream may be identified by an RTSP URL. 
The overall presentation and the properties of the media the presentation is made up of are defined by a presentation description file, the format of which is outside the scope of this specification. The presentation description file may be obtained by the client using Schulzrinne, et. al. Standards Track [Page 11]
RFC 2326 Real Time Streaming Protocol April 1998 HTTP or other means such as email and may not necessarily be stored on the media server. For the purposes of this specification, a presentation description is assumed to describe one or more presentations, each of which maintains a common time axis. For simplicity of exposition and without loss of generality, it is assumed that the presentation description contains exactly one such presentation. A presentation may contain several media streams. The presentation description file contains a description of the media streams making up the presentation, including their encodings, language, and other parameters that enable the client to choose the most appropriate combination of media. In this presentation description, each media stream that is individually controllable by RTSP is identified by an RTSP URL, which points to the media server handling that particular media stream and names the stream stored on that server. Several media streams can be located on different servers; for example, audio and video streams can be split across servers for load sharing. The description also enumerates which transport methods the server is capable of. Besides the media parameters, the network destination address and port need to be determined. Several modes of operation can be distinguished: Unicast: The media is transmitted to the source of the RTSP request, with the port number chosen by the client. Alternatively, the media is transmitted on the same reliable stream as RTSP. Multicast, server chooses address: The media server picks the multicast address and port. This is the typical case for a live or near-media-on-demand transmission. Multicast, client chooses address: If the server is to participate in an existing multicast conference, the multicast address, port and encryption key are given by the conference description, established by means outside the scope of this specification. 
1.7 RTSP States RTSP controls a stream which may be sent via a separate protocol, independent of the control channel. For example, RTSP control may occur on a TCP connection while the data flows via UDP. Thus, data delivery continues even if no RTSP requests are received by the media Schulzrinne, et. al. Standards Track [Page 12]
RFC 2326 Real Time Streaming Protocol April 1998 server. Also, during its lifetime, a single media stream may be controlled by RTSP requests issued sequentially on different TCP connections. Therefore, the server needs to maintain "session state" to be able to correlate RTSP requests with a stream. The state transitions are described in Section A. Many methods in RTSP do not contribute to state. However, the following play a central role in defining the allocation and usage of stream resources on the server: SETUP, PLAY, RECORD, PAUSE, and TEARDOWN. SETUP: Causes the server to allocate resources for a stream and start an RTSP session. PLAY and RECORD: Starts data transmission on a stream allocated via SETUP. PAUSE: Temporarily halts a stream without freeing server resources. TEARDOWN: Frees resources associated with the stream. The RTSP session ceases to exist on the server. RTSP methods that contribute to state use the Session header field (Section 12.37) to identify the RTSP session whose state is being manipulated. The server generates session identifiers in response to SETUP requests (Section 10.4). 1.8 Relationship with Other Protocols RTSP has some overlap in functionality with HTTP. It also may interact with HTTP in that the initial contact with streaming content is often to be made through a web page. The current protocol specification aims to allow different hand-off points between a web server and the media server implementing RTSP. For example, the presentation description can be retrieved using HTTP or RTSP, which reduces roundtrips in web-browser-based scenarios, yet also allows for standalone RTSP servers and clients which do not rely on HTTP at all. However, RTSP differs fundamentally from HTTP in that data delivery takes place out-of-band in a different protocol. HTTP is an asymmetric protocol where the client issues requests and the server responds. In RTSP, both the media client and media server can issue requests. 
RTSP requests are also not stateless; they may set parameters and continue to control a media stream long after the Schulzrinne, et. al. Standards Track [Page 13]
RFC 2326 Real Time Streaming Protocol April 1998 request has been acknowledged. Re-using HTTP functionality has advantages in at least two areas, namely security and proxies. The requirements are very similar, so having the ability to adopt HTTP work on caches, proxies and authentication is valuable. While most real-time media will use RTP as a transport protocol, RTSP is not tied to RTP. RTSP assumes the existence of a presentation description format that can express both static and temporal properties of a presentation containing several media streams. 2 Notational Conventions Since many of the definitions and syntax are identical to HTTP/1.1, this specification only points to the section where they are defined rather than copying it. For brevity, [HX.Y] is to be taken to refer to Section X.Y of the current HTTP/1.1 specification (RFC 2068 [2]). All the mechanisms specified in this document are described in both prose and an augmented Backus-Naur form (BNF) similar to that used in [H2.1]. It is described in detail in RFC 2234 [17], with the difference that this RTSP specification maintains the "1#" notation for comma-separated lists. In this memo, we use indented and smaller-type paragraphs to provide background and motivation. This is intended to give readers who were not involved with the formulation of the specification an understanding of why things are the way that they are in RTSP. 3 Protocol Parameters3.1 RTSP Version [H3.1] applies, with HTTP replaced by RTSP. 3.2 RTSP URL The "rtsp" and "rtspu" schemes are used to refer to network resources via the RTSP protocol. This section defines the scheme-specific syntax and semantics for RTSP URLs. rtsp_URL = ( "rtsp:" | "rtspu:" ) "//" host [ ":" port ] [ abs_path ] host = <A legal Internet host domain name of IP address (in dotted decimal form), as defined by Section 2.1Schulzrinne, et. al. Standards Track [Page 14]
               of RFC 1123 [18]>
   port     = *DIGIT

abs_path is defined in [H3.2.1].

Note that fragment and query identifiers do not have a well-defined meaning at this time, with the interpretation left to the RTSP server.

The scheme rtsp requires that commands are issued via a reliable protocol (within the Internet, TCP), while the scheme rtspu identifies an unreliable protocol (within the Internet, UDP).

If the port is empty or not given, port 554 is assumed. The semantics are that the identified resource can be controlled by RTSP at the server listening for TCP (scheme "rtsp") connections or UDP (scheme "rtspu") packets on that port of host, and the Request-URI for the resource is rtsp_URL. The use of IP addresses in URLs SHOULD be avoided whenever possible (see RFC 1924 [19]).

A presentation or a stream is identified by a textual media identifier, using the character set and escape conventions [H3.2] of URLs (RFC 1738 [20]). URLs may refer to a stream or an aggregate of streams, i.e., a presentation. Accordingly, requests described in Section 10 can apply to either the whole presentation or an individual stream within the presentation. Note that some request methods can only be applied to streams, not presentations, and vice versa.

For example, the RTSP URL

   rtsp://media.example.com:554/twister/audiotrack

identifies the audio stream within the presentation "twister", which can be controlled via RTSP requests issued over a TCP connection to port 554 of host media.example.com.

Also, the RTSP URL

   rtsp://media.example.com:554/twister

identifies the presentation "twister", which may be composed of audio and video streams.

This does not imply a standard way to reference streams in URLs. The presentation description defines the hierarchical relationships in the presentation and the URLs for the individual streams. A presentation description may name a stream "a.mov" and the whole presentation "b.mov".
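The Section 3.2 rules above (scheme choice, the 554 default port, opaque paths) can be sketched in Python; the function name and the returned dictionary layout are this example's inventions, not part of the protocol:

```python
from urllib.parse import urlsplit

def parse_rtsp_url(url):
    """Split an RTSP URL per Section 3.2, applying the default port 554."""
    parts = urlsplit(url)
    if parts.scheme not in ("rtsp", "rtspu"):
        raise ValueError("not an RTSP URL: %s" % url)
    return {
        "scheme": parts.scheme,
        # "rtsp" requires a reliable transport (TCP); "rtspu" is unreliable (UDP)
        "transport": "TCP" if parts.scheme == "rtsp" else "UDP",
        "host": parts.hostname,
        # if the port is empty or not given, port 554 is assumed
        "port": parts.port if parts.port is not None else 554,
        # the path is opaque to the client; no file system layout is implied
        "path": parts.path,
    }
```

For example, parse_rtsp_url("rtsp://media.example.com/twister/audiotrack") yields host "media.example.com", port 554 and path "/twister/audiotrack".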
The path components of the RTSP URL are opaque to the client and do not imply any particular file system structure for the server. This decoupling also allows presentation descriptions to be used with non-RTSP media control protocols simply by replacing the scheme in the URL.

3.3 Conference Identifiers

Conference identifiers are opaque to RTSP and are encoded using standard URI encoding methods (i.e., LWS is escaped with %). They can contain any octet value. The conference identifier MUST be globally unique. For H.323, the conferenceID value is to be used.

   conference-id = 1*xchar

Conference identifiers are used to allow RTSP sessions to obtain parameters from multimedia conferences the media server is participating in. These conferences are created by protocols outside the scope of this specification, e.g., H.323 [13] or SIP [12]. Instead of the RTSP client explicitly providing transport information, for example, it asks the media server to use the values in the conference description instead.

3.4 Session Identifiers

Session identifiers are opaque strings of arbitrary length. Linear white space must be URL-escaped. A session identifier MUST be chosen randomly and MUST be at least eight octets long to make guessing it more difficult. (See Section 16.)

   session-id = 1*( ALPHA | DIGIT | safe )

3.5 SMPTE Relative Timestamps

A SMPTE relative timestamp expresses time relative to the start of the clip. Relative timestamps are expressed as SMPTE time codes for frame-level access accuracy. The time code has the format hours:minutes:seconds:frames.subframes, with the origin at the start of the clip. The default smpte format is "SMPTE 30 drop" format, with a frame rate of 29.97 frames per second. Other SMPTE codes MAY be supported (such as "SMPTE 25") through the use of alternative "smpte-type" values. The "frames" field in the time value can assume the values 0 through 29.
The difference between 30 and 29.97 frames per second is handled by dropping the first two frame indices (values 00 and 01) of every minute, except every tenth minute. If the frame value is zero, it may be omitted. Subframes are measured in one-hundredth of a frame.
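The drop-frame arithmetic just described can be written out as a small Python sketch (the function name is this example's, not the RFC's): count frames at the nominal 30 fps, then subtract the two frames dropped in every minute that is not a multiple of ten.

```python
def smpte30drop_to_frames(h, m, s, f):
    """Absolute frame index of an "SMPTE 30 drop" timecode (Section 3.5).

    Frame values 00 and 01 of every minute are skipped, except every
    tenth minute, so the timecode tracks the true 29.97 fps rate.
    """
    total_minutes = h * 60 + m
    # frame count if the rate really were 30 fps
    nominal = (h * 3600 + m * 60 + s) * 30 + f
    # two frames dropped per minute, except minutes divisible by ten
    dropped = 2 * (total_minutes - total_minutes // 10)
    return nominal - dropped
```

At timecode 0:10:00:00 this yields 17982 frames, which matches ten minutes of real time at 29.97 frames per second.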
   smpte-range = smpte-type "=" smpte-time "-" [ smpte-time ]
   smpte-type  = "smpte" | "smpte-30-drop" | "smpte-25"
                 ; other timecodes may be added
   smpte-time  = 1*2DIGIT ":" 1*2DIGIT ":" 1*2DIGIT
                 [ ":" 1*2DIGIT ] [ "." 1*2DIGIT ]

Examples:

   smpte=10:12:33:20-
   smpte=10:07:33-
   smpte=10:07:00-10:07:33:05.01
   smpte-25=10:07:00-10:07:33:05.01

3.6 Normal Play Time

Normal play time (NPT) indicates the stream absolute position relative to the beginning of the presentation. The timestamp consists of a decimal fraction. The part left of the decimal may be expressed in either seconds or hours, minutes, and seconds. The part right of the decimal point measures fractions of a second. The beginning of a presentation corresponds to 0.0 seconds. Negative values are not defined. The special constant now is defined as the current instant of a live event. It may be used only for live events.

NPT is defined as in DSM-CC: "Intuitively, NPT is the clock the viewer associates with a program. It is often digitally displayed on a VCR. NPT advances normally when in normal play mode (scale = 1), advances at a faster rate when in fast scan forward (high positive scale ratio), decrements when in scan reverse (high negative scale ratio) and is fixed in pause mode. NPT is (logically) equivalent to SMPTE time codes." [5]

   npt-range  = ( npt-time "-" [ npt-time ] ) | ( "-" npt-time )
   npt-time   = "now" | npt-sec | npt-hhmmss
   npt-sec    = 1*DIGIT [ "." *DIGIT ]
   npt-hhmmss = npt-hh ":" npt-mm ":" npt-ss [ "." *DIGIT ]
   npt-hh     = 1*DIGIT  ; any positive number
   npt-mm     = 1*2DIGIT ; 0-59
   npt-ss     = 1*2DIGIT ; 0-59

Examples:

   npt=123.45-125
   npt=12:05:35.3-
   npt=now-

The syntax conforms to ISO 8601. The npt-sec notation is optimized for automatic generation, the npt-hhmmss notation for consumption by human readers. The "now" constant allows clients to request to
receive the live feed rather than the stored or time-delayed version. This is needed since neither absolute time nor zero time are appropriate for this case.

3.7 Absolute Time

Absolute time is expressed as ISO 8601 timestamps, using UTC (GMT). Fractions of a second may be indicated.

   utc-range = "clock" "=" utc-time "-" [ utc-time ]
   utc-time  = utc-date "T" utc-clock "Z"
   utc-date  = 8DIGIT                    ; < YYYYMMDD >
   utc-clock = 6DIGIT [ "." fraction ]   ; < HHMMSS.fraction >

Example for November 8, 1996 at 14h37 and 20 and a quarter seconds UTC:

   19961108T143720.25Z

3.8 Option Tags

Option tags are unique identifiers used to designate new options in RTSP. These tags are used in Require (Section 12.32) and Proxy-Require (Section 12.27) header fields.

Syntax:

   option-tag = 1*xchar

The creator of a new RTSP option should either prefix the option with a reverse domain name (e.g., "com.foo.mynewfeature" is an apt name for a feature whose inventor can be reached at "foo.com"), or register the new option with the Internet Assigned Numbers Authority (IANA).

3.8.1 Registering New Option Tags with IANA

When registering a new RTSP option, the following information should be provided:

* Name and description of option. The name may be of any length, but SHOULD be no more than twenty characters long. The name MUST NOT contain any spaces, control characters or periods.

* Indication of who has change control over the option (for example, IETF, ISO, ITU-T, other international standardization bodies, a consortium or a particular company or group of companies);
* A reference to a further description, if available, for example (in order of preference) an RFC, a published paper, a patent filing, a technical report, documented source code or a computer manual;

* For proprietary options, contact information (postal and email address);

4 RTSP Message

RTSP is a text-based protocol and uses the ISO 10646 character set in UTF-8 encoding (RFC 2279 [21]). Lines are terminated by CRLF, but receivers should be prepared to also interpret CR and LF by themselves as line terminators.

Text-based protocols make it easier to add optional parameters in a self-describing manner. Since the number of parameters and the frequency of commands is low, processing efficiency is not a concern. Text-based protocols, if done carefully, also allow easy implementation of research prototypes in scripting languages such as Tcl, Visual Basic and Perl.

The 10646 character set avoids tricky character set switching, but is invisible to the application as long as US-ASCII is being used. This is also the encoding used for RTCP. ISO 8859-1 translates directly into Unicode with a high-order octet of zero. ISO 8859-1 characters with the most-significant bit set are represented as 1100001x 10xxxxxx. (See RFC 2279 [21])

RTSP messages can be carried over any lower-layer transport protocol that is 8-bit clean.

Requests contain methods, the object the method is operating upon and parameters to further describe the method. Methods are idempotent, unless otherwise noted. Methods are also designed to require little or no state maintenance at the media server.

4.1 Message Types

See [H4.1]

4.2 Message Headers

See [H4.2]

4.3 Message Body

See [H4.3]
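The Section 4 requirement that receivers tolerate bare CR or bare LF as line terminators can be sketched as a normalizing splitter (a hypothetical helper, not a routine defined by the RFC):

```python
def split_rtsp_head(text):
    """Split the head of an RTSP message into lines.

    Lines are CRLF-terminated, but per Section 4 a receiver should also
    accept bare CR or bare LF; normalize all three to LF before splitting.
    """
    return text.replace("\r\n", "\n").replace("\r", "\n").split("\n")
```

A strict sender would emit only CRLF; this tolerance applies to the receiving side.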
4.4 Message Length

When a message body is included with a message, the length of that body is determined by one of the following (in order of precedence):

1. Any response message which MUST NOT include a message body (such as the 1xx, 204, and 304 responses) is always terminated by the first empty line after the header fields, regardless of the entity-header fields present in the message. (Note: An empty line consists of only CRLF.)

2. If a Content-Length header field (Section 12.14) is present, its value in bytes represents the length of the message-body. If this header field is not present, a value of zero is assumed.

3. By the server closing the connection. (Closing the connection cannot be used to indicate the end of a request body, since that would leave no possibility for the server to send back a response.)

Note that RTSP does not (at present) support the HTTP/1.1 "chunked" transfer coding (see [H3.6]) and requires the presence of the Content-Length header field.

Given the moderate length of presentation descriptions returned, the server should always be able to determine its length, even if it is generated dynamically, making the chunked transfer encoding unnecessary. Even though Content-Length must be present if there is any entity body, the rules ensure reasonable behavior even if the length is not given explicitly.

5 General Header Fields

See [H4.5], except that Pragma, Transfer-Encoding and Upgrade headers are not defined:

   general-header = Cache-Control ; Section 12.8
                  | Connection    ; Section 12.10
                  | Date          ; Section 12.18
                  | Via           ; Section 12.43

6 Request

A request message from a client to a server or vice versa includes, within the first line of that message, the method to be applied to the resource, the identifier of the resource, and the protocol version in use.
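The Section 4.4 precedence rules might be sketched as follows; the helper name and the convention of lower-cased header names are assumptions of this example:

```python
def body_length(headers, status_code=None):
    """Message-body length per the Section 4.4 precedence rules.

    `headers` maps lower-cased field names to values; `status_code` is
    given for responses and omitted for requests. Rule 3 (server closing
    the connection) is left to the transport layer in this sketch.
    """
    # rule 1: 1xx, 204 and 304 responses never include a message body
    if status_code is not None and (
            100 <= status_code < 200 or status_code in (204, 304)):
        return 0
    # rule 2: Content-Length gives the length in bytes...
    if "content-length" in headers:
        return int(headers["content-length"])
    # ...and its absence means a length of zero
    return 0
```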
   Request = Request-Line            ; Section 6.1
             *( general-header       ; Section 5
              | request-header       ; Section 6.2
              | entity-header )      ; Section 8.1
             CRLF
             [ message-body ]        ; Section 4.3

6.1 Request Line

   Request-Line = Method SP Request-URI SP RTSP-Version CRLF

   Method = "DESCRIBE"      ; Section 10.2
          | "ANNOUNCE"      ; Section 10.3
          | "GET_PARAMETER" ; Section 10.8
          | "OPTIONS"       ; Section 10.1
          | "PAUSE"         ; Section 10.6
          | "PLAY"          ; Section 10.5
          | "RECORD"        ; Section 10.11
          | "REDIRECT"      ; Section 10.10
          | "SETUP"         ; Section 10.4
          | "SET_PARAMETER" ; Section 10.9
          | "TEARDOWN"      ; Section 10.7
          | extension-method

   extension-method = token

   Request-URI = "*" | absolute_URI

   RTSP-Version = "RTSP" "/" 1*DIGIT "." 1*DIGIT

6.2 Request Header Fields

   request-header = Accept            ; Section 12.1
                  | Accept-Encoding   ; Section 12.2
                  | Accept-Language   ; Section 12.3
                  | Authorization     ; Section 12.5
                  | From              ; Section 12.20
                  | If-Modified-Since ; Section 12.23
                  | Range             ; Section 12.29
                  | Referer           ; Section 12.30
                  | User-Agent        ; Section 12.41

Note that in contrast to HTTP/1.1 [2], RTSP requests always contain the absolute URL (that is, including the scheme, host and port) rather than just the absolute path.
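A request following the Section 6 grammar can be serialized in a few lines of Python; make_request and its argument layout are illustrative only. Note that the Request-URI is the absolute URL, and that a Content-Length header accompanies any body (Section 4.4):

```python
def make_request(method, url, cseq, headers=None, body=""):
    """Serialize an RTSP/1.0 request per the Section 6 grammar (sketch).

    The Request-URI is the absolute URL ("*" for server-wide requests),
    lines are CRLF-terminated, and a blank line separates the headers
    from the optional body.
    """
    lines = ["%s %s RTSP/1.0" % (method, url), "CSeq: %d" % cseq]
    for name, value in (headers or {}).items():
        lines.append("%s: %s" % (name, value))
    if body:
        lines.append("Content-Length: %d" % len(body))
    return "\r\n".join(lines) + "\r\n\r\n" + body
```

For instance, make_request("OPTIONS", "*", 1) reproduces the shape of the OPTIONS example in Section 10.1.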
HTTP/1.1 requires servers to understand the absolute URL, but clients are supposed to use the Host request header. This is purely needed for backward-compatibility with HTTP/1.0 servers, a consideration that does not apply to RTSP.

The asterisk "*" in the Request-URI means that the request does not apply to a particular resource, but to the server itself, and is only allowed when the method used does not necessarily apply to a resource. One example would be:

   OPTIONS * RTSP/1.0

7 Response

[H6] applies except that HTTP-Version is replaced by RTSP-Version. Also, RTSP defines additional status codes and does not define some HTTP codes. The valid response codes and the methods they can be used with are defined in Table 1.

After receiving and interpreting a request message, the recipient responds with an RTSP response message.

   Response = Status-Line             ; Section 7.1
              *( general-header       ; Section 5
               | response-header      ; Section 7.1.2
               | entity-header )      ; Section 8.1
              CRLF
              [ message-body ]        ; Section 4.3

7.1 Status-Line

The first line of a Response message is the Status-Line, consisting of the protocol version followed by a numeric status code, and the textual phrase associated with the status code, with each element separated by SP characters. No CR or LF is allowed except in the final CRLF sequence.

   Status-Line = RTSP-Version SP Status-Code SP Reason-Phrase CRLF

7.1.1 Status Code and Reason Phrase

The Status-Code element is a 3-digit integer result code of the attempt to understand and satisfy the request. These codes are fully defined in Section 11. The Reason-Phrase is intended to give a short textual description of the Status-Code. The Status-Code is intended for use by automata and the Reason-Phrase is intended for the human user. The client is not required to examine or display the Reason-Phrase.
The first digit of the Status-Code defines the class of response. The last two digits do not have any categorization role. There are 5 values for the first digit:

* 1xx: Informational - Request received, continuing process
* 2xx: Success - The action was successfully received, understood, and accepted
* 3xx: Redirection - Further action must be taken in order to complete the request
* 4xx: Client Error - The request contains bad syntax or cannot be fulfilled
* 5xx: Server Error - The server failed to fulfill an apparently valid request

The individual values of the numeric status codes defined for RTSP/1.0, and an example set of corresponding Reason-Phrases, are presented below. The reason phrases listed here are only recommended - they may be replaced by local equivalents without affecting the protocol. Note that RTSP adopts most HTTP/1.1 [2] status codes and adds RTSP-specific status codes starting at x50 to avoid conflicts with newly defined HTTP status codes.
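Because the first digit carries the class, a receiver can fall back to the x00 code of that class for any code it does not recognize (the behavior Section 7.1.1 requires). A sketch, with the known-code set abbreviated to a few entries for brevity:

```python
# Abbreviated subset of the RTSP/1.0 codes; a real table would list
# every code from Table 1.
KNOWN_CODES = {100, 200, 201, 250, 300, 400, 404, 454, 455, 500, 501, 551}

def effective_status(code):
    """Map an unrecognized status code to the x00 code of its class.

    E.g. an unknown 431 is handled like 400 Bad Request. The class is
    simply the first digit of the three-digit code.
    """
    return code if code in KNOWN_CODES else (code // 100) * 100
```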
   Status-Code = "100" ; Continue
               | "200" ; OK
               | "201" ; Created
               | "250" ; Low on Storage Space
               | "300" ; Multiple Choices
               | "301" ; Moved Permanently
               | "302" ; Moved Temporarily
               | "303" ; See Other
               | "304" ; Not Modified
               | "305" ; Use Proxy
               | "400" ; Bad Request
               | "401" ; Unauthorized
               | "402" ; Payment Required
               | "403" ; Forbidden
               | "404" ; Not Found
               | "405" ; Method Not Allowed
               | "406" ; Not Acceptable
               | "407" ; Proxy Authentication Required
               | "408" ; Request Time-out
               | "410" ; Gone
               | "411" ; Length Required
               | "412" ; Precondition Failed
               | "413" ; Request Entity Too Large
               | "414" ; Request-URI Too Large
               | "415" ; Unsupported Media Type
               | "451" ; Parameter Not Understood
               | "452" ; Conference Not Found
               | "453" ; Not Enough Bandwidth
               | "454" ; Session Not Found
               | "455" ; Method Not Valid in This State
               | "456" ; Header Field Not Valid for Resource
               | "457" ; Invalid Range
               | "458" ; Parameter Is Read-Only
               | "459" ; Aggregate Operation Not Allowed
               | "460" ; Only Aggregate Operation Allowed
               | "461" ; Unsupported Transport
               | "462" ; Destination Unreachable
               | "500" ; Internal Server Error
               | "501" ; Not Implemented
               | "502" ; Bad Gateway
               | "503" ; Service Unavailable
               | "504" ; Gateway Time-out
               | "505" ; RTSP Version Not Supported
               | "551" ; Option Not Supported
               | extension-code
   extension-code = 3DIGIT

   Reason-Phrase = *<TEXT, excluding CR, LF>

RTSP status codes are extensible. RTSP applications are not required to understand the meaning of all registered status codes, though such understanding is obviously desirable. However, applications MUST understand the class of any status code, as indicated by the first digit, and treat any unrecognized response as being equivalent to the x00 status code of that class, with the exception that an unrecognized response MUST NOT be cached. For example, if an unrecognized status code of 431 is received by the client, it can safely assume that there was something wrong with its request and treat the response as if it had received a 400 status code. In such cases, user agents SHOULD present to the user the entity returned with the response, since that entity is likely to include human-readable information which will explain the unusual status.

   Code  Reason                            Method
   100   Continue                          all
   200   OK                                all
   201   Created                           RECORD
   250   Low on Storage Space              RECORD
   300   Multiple Choices                  all
   301   Moved Permanently                 all
   302   Moved Temporarily                 all
   303   See Other                         all
   305   Use Proxy                         all
   400   Bad Request                       all
   401   Unauthorized                      all
   402   Payment Required                  all
   403   Forbidden                         all
   404   Not Found                         all
   405   Method Not Allowed                all
   406   Not Acceptable                    all
   407   Proxy Authentication Required     all
   408   Request Timeout                   all
   410   Gone                              all
   411   Length Required                   all
   412   Precondition Failed               DESCRIBE, SETUP
   413   Request Entity Too Large          all
   414   Request-URI Too Long              all
   415   Unsupported Media Type            all
   451   Invalid Parameter                 SETUP
   452   Illegal Conference Identifier     SETUP
   453   Not Enough Bandwidth              SETUP
   454   Session Not Found                 all
   455   Method Not Valid In This State    all
   456   Header Field Not Valid            all
   457   Invalid Range                     PLAY
   458   Parameter Is Read-Only            SET_PARAMETER
   459   Aggregate Operation Not Allowed   all
   460   Only Aggregate Operation Allowed  all
   461   Unsupported Transport             all
   462   Destination Unreachable           all
   500   Internal Server Error             all
   501   Not Implemented                   all
   502   Bad Gateway                       all
   503   Service Unavailable               all
   504   Gateway Timeout                   all
   505   RTSP Version Not Supported        all
   551   Option Not Supported              all

   Table 1: Status codes and their usage with RTSP methods

7.1.2 Response Header Fields

The response-header fields allow the request recipient to pass additional information about the response which cannot be placed in the Status-Line. These header fields give information about the server and about further access to the resource identified by the Request-URI.
   response-header = Location           ; Section 12.25
                   | Proxy-Authenticate ; Section 12.26
                   | Public             ; Section 12.28
                   | Retry-After        ; Section 12.31
                   | Server             ; Section 12.36
                   | Vary               ; Section 12.42
                   | WWW-Authenticate   ; Section 12.44

Response-header field names can be extended reliably only in combination with a change in the protocol version. However, new or experimental header fields MAY be given the semantics of response-header fields if all parties in the communication recognize them to be response-header fields. Unrecognized header fields are treated as entity-header fields.

8 Entity

Request and Response messages MAY transfer an entity if not otherwise restricted by the request method or response status code. An entity consists of entity-header fields and an entity-body, although some responses will only include the entity-headers.

In this section, both sender and recipient refer to either the client or the server, depending on who sends and who receives the entity.

8.1 Entity Header Fields

Entity-header fields define optional metainformation about the entity-body or, if no body is present, about the resource identified by the request.

   entity-header = Allow            ; Section 12.4
                 | Content-Base     ; Section 12.11
                 | Content-Encoding ; Section 12.12
                 | Content-Language ; Section 12.13
                 | Content-Length   ; Section 12.14
                 | Content-Location ; Section 12.15
                 | Content-Type     ; Section 12.16
                 | Expires          ; Section 12.19
                 | Last-Modified    ; Section 12.24
                 | extension-header

   extension-header = message-header

The extension-header mechanism allows additional entity-header fields to be defined without changing the protocol, but these fields cannot be assumed to be recognizable by the recipient. Unrecognized header fields SHOULD be ignored by the recipient and forwarded by proxies.
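The header categories of Sections 5, 7.1.2 and 8.1, together with the rule that unrecognized fields are treated as entity-header fields, suggest a simple classifier; the function and the lower-casing convention are this example's assumptions:

```python
GENERAL_HEADERS = {"cache-control", "connection", "date", "via"}
RESPONSE_HEADERS = {"location", "proxy-authenticate", "public", "retry-after",
                    "server", "vary", "www-authenticate"}

def classify_response_header(name):
    """Bucket a header field appearing in an RTSP response.

    Anything not recognized as a general-header or response-header is
    treated as an entity-header field, which is where both the Content-*
    fields and unrecognized extension-headers land.
    """
    n = name.lower()
    if n in GENERAL_HEADERS:
        return "general"
    if n in RESPONSE_HEADERS:
        return "response"
    return "entity"   # includes Content-* fields and extension-headers
```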
8.2 Entity Body

See [H7.2]

9 Connections

RTSP requests can be transmitted in several different ways:

* persistent transport connections used for several request-response transactions;
* one connection per request/response transaction;
* connectionless mode.

The type of transport connection is defined by the RTSP URI (Section 3.2). For the scheme "rtsp", a persistent connection is assumed, while the scheme "rtspu" calls for RTSP requests to be sent without setting up a connection.

Unlike HTTP, RTSP allows the media server to send requests to the media client. However, this is only supported for persistent connections, as the media server otherwise has no reliable way of reaching the client. Also, this is the only way that requests from media server to client are likely to traverse firewalls.

9.1 Pipelining

A client that supports persistent connections or connectionless mode MAY "pipeline" its requests (i.e., send multiple requests without waiting for each response). A server MUST send its responses to those requests in the same order that the requests were received.

9.2 Reliability and Acknowledgements

Requests are acknowledged by the receiver unless they are sent to a multicast group. If there is no acknowledgement, the sender may resend the same message after a timeout of one round-trip time (RTT). The round-trip time is estimated as in TCP (RFC 1123) [18], with an initial round-trip value of 500 ms. An implementation MAY cache the last RTT measurement as the initial value for future connections.

If a reliable transport protocol is used to carry RTSP, requests MUST NOT be retransmitted; the RTSP application MUST instead rely on the underlying transport to provide reliability. If both the underlying reliable transport such as TCP and the RTSP application retransmit requests, it is possible that each packet loss results in two retransmissions.
The receiver cannot typically take advantage of the application-layer retransmission since the
transport stack will not deliver the application-layer retransmission before the first attempt has reached the receiver. If the packet loss is caused by congestion, multiple retransmissions at different layers will exacerbate the congestion.

If RTSP is used over a small-RTT LAN, standard procedures for optimizing initial TCP round trip estimates, such as those used in T/TCP (RFC 1644) [22], can be beneficial.

The Timestamp header (Section 12.38) is used to avoid the retransmission ambiguity problem [23, p. 301] and obviates the need for Karn's algorithm.

Each request carries a sequence number in the CSeq header (Section 12.17), which is incremented by one for each distinct request transmitted. If a request is repeated because of lack of acknowledgement, the request MUST carry the original sequence number (i.e., the sequence number is not incremented).

Systems implementing RTSP MUST support carrying RTSP over TCP and MAY support UDP. The default port for the RTSP server is 554 for both UDP and TCP.

A number of RTSP packets destined for the same control end point may be packed into a single lower-layer PDU or encapsulated into a TCP stream. RTSP data MAY be interleaved with RTP and RTCP packets. Unlike HTTP, an RTSP message MUST contain a Content-Length header whenever that message contains a payload. Otherwise, an RTSP packet is terminated with an empty line immediately following the last message header.

10 Method Definitions

The method token indicates the method to be performed on the resource identified by the Request-URI. The method is case-sensitive. New methods may be defined in the future. Method names may not start with a $ character (ASCII 0x24) and must be a token. Methods are summarized in Table 2.
   method         direction    object  requirement
   DESCRIBE       C->S         P,S     recommended
   ANNOUNCE       C->S, S->C   P,S     optional
   GET_PARAMETER  C->S, S->C   P,S     optional
   OPTIONS        C->S, S->C   P,S     required (S->C: optional)
   PAUSE          C->S         P,S     recommended
   PLAY           C->S         P,S     required
   RECORD         C->S         P,S     optional
   REDIRECT       S->C         P,S     optional
   SETUP          C->S         S       required
   SET_PARAMETER  C->S, S->C   P,S     optional
   TEARDOWN       C->S         P,S     required

   Table 2: Overview of RTSP methods, their direction, and what objects
   (P: presentation, S: stream) they operate on

Notes on Table 2: PAUSE is recommended, but not required in that a fully functional server can be built that does not support this method, for example, for live feeds. If a server does not support a particular method, it MUST return "501 Not Implemented" and a client SHOULD not try this method again for this server.

10.1 OPTIONS

The behavior is equivalent to that described in [H9.2]. An OPTIONS request may be issued at any time, e.g., if the client is about to try a nonstandard request. It does not influence server state.

Example:

   C->S: OPTIONS * RTSP/1.0
         CSeq: 1
         Require: implicit-play
         Proxy-Require: gzipped-messages

   S->C: RTSP/1.0 200 OK
         CSeq: 1
         Public: DESCRIBE, SETUP, TEARDOWN, PLAY, PAUSE

Note that these are necessarily fictional features (one would hope that we would not purposefully overlook a truly useful feature just so that we could have a strong example in this section).
10.2 DESCRIBE

The DESCRIBE method retrieves the description of a presentation or media object identified by the request URL from a server. It may use the Accept header to specify the description formats that the client understands. The server responds with a description of the requested resource. The DESCRIBE reply-response pair constitutes the media initialization phase of RTSP.

Example:

   C->S: DESCRIBE rtsp://server.example.com/fizzle/foo RTSP/1.0
         CSeq: 312
         Accept: application/sdp, application/rtsl, application/mheg

   S->C: RTSP/1.0 200 OK
         CSeq: 312
         Date: 23 Jan 1997 15:35:06 GMT
         Content-Type: application/sdp
         Content-Length: 376

         v=0
         o=mhandley 2890844526 2890842807 IN IP4 126.16.64.4
         s=SDP Seminar
         i=A Seminar on the session description protocol
         u=http://www.cs.ucl.ac.uk/staff/M.Handley/sdp.03.ps
         e=mjh@isi.edu (Mark Handley)
         c=IN IP4 224.2.17.12/127
         t=2873397496 2873404696
         a=recvonly
         m=audio 3456 RTP/AVP 0
         m=video 2232 RTP/AVP 31
         m=whiteboard 32416 UDP WB
         a=orient:portrait

The DESCRIBE response MUST contain all media initialization information for the resource(s) that it describes. If a media client obtains a presentation description from a source other than DESCRIBE and that description contains a complete set of media initialization parameters, the client SHOULD use those parameters and not then request a description for the same media via RTSP. Additionally, servers SHOULD NOT use the DESCRIBE response as a means of media indirection.

Clear ground rules need to be established so that clients have an unambiguous means of knowing when to request media initialization information via DESCRIBE, and when not to. By forcing a DESCRIBE
response to contain all media initialization for the set of streams that it describes, and discouraging use of DESCRIBE for media indirection, we avoid looping problems that might result from other approaches.

Media initialization is a requirement for any RTSP-based system, but the RTSP specification does not dictate that this must be done via the DESCRIBE method. There are three ways that an RTSP client may receive initialization information:

* via RTSP's DESCRIBE method;

* via some other protocol (HTTP, email attachment, etc.);

* via the command line or standard input (thus working as a browser helper application launched with an SDP file or other media initialization format).

In the interest of practical interoperability, it is highly recommended that minimal servers support the DESCRIBE method, and highly recommended that minimal clients support the ability to act as a "helper application" that accepts a media initialization file from standard input, command line, and/or other means that are appropriate to the operating environment of the client.

10.3 ANNOUNCE

The ANNOUNCE method serves two purposes:

When sent from client to server, ANNOUNCE posts the description of a presentation or media object identified by the request URL to a server. When sent from server to client, ANNOUNCE updates the session description in real-time. If a new media stream is added to a presentation (e.g., during a live presentation), the whole presentation description should be sent again, rather than just the additional components, so that components can be deleted.

Example:

   C->S: ANNOUNCE rtsp://server.example.com/fizzle/foo RTSP/1.0
         CSeq: 312
         Date: 23 Jan 1997 15:35:06 GMT
         Session: 47112344
         Content-Type: application/sdp
         Content-Length: 332

         v=0
         o=mhandley 2890844526 2890845468 IN IP4 126.16.64.4
         s=SDP Seminar
         i=A Seminar on the session description protocol
         u=http://www.cs.ucl.ac.uk/staff/M.Handley/sdp.03.ps
         e=mjh@isi.edu (Mark Handley)
         c=IN IP4 224.2.17.12/127
         t=2873397496 2873404696
         a=recvonly
         m=audio 3456 RTP/AVP 0
         m=video 2232 RTP/AVP 31

   S->C: RTSP/1.0 200 OK
         CSeq: 312

10.4 SETUP

The SETUP request for a URI specifies the transport mechanism to be used for the streamed media. A client can issue a SETUP request for a stream that is already playing to change transport parameters, which a server MAY allow. If it does not allow this, it MUST respond with error "455 Method Not Valid In This State". For the benefit of any intervening firewalls, a client must indicate the transport parameters even if it has no influence over these parameters, for example, where the server advertises a fixed multicast address.

Since SETUP includes all transport initialization information, firewalls and other intermediate network devices (which need this information) are spared the more arduous task of parsing the DESCRIBE response, which has been reserved for media initialization.

The Transport header specifies the transport parameters acceptable to the client for data transmission; the response will contain the transport parameters selected by the server.

   C->S: SETUP rtsp://example.com/foo/bar/baz.rm RTSP/1.0
         CSeq: 302
         Transport: RTP/AVP;unicast;client_port=4588-4589

   S->C: RTSP/1.0 200 OK
         CSeq: 302
         Date: 23 Jan 1997 15:35:06 GMT
         Session: 47112344
         Transport: RTP/AVP;unicast;
                    client_port=4588-4589;server_port=6256-6257

The server generates session identifiers in response to SETUP requests. If a SETUP request to a server includes a session identifier, the server MUST bundle this setup request into the
RFC 2326 Real Time Streaming Protocol April 1998 existing session or return error "459 Aggregate Operation Not Allowed" (see Section 11.3.10). 10.5 PLAY The PLAY method tells the server to start sending data via the mechanism specified in SETUP. A client MUST NOT issue a PLAY request until any outstanding SETUP requests have been acknowledged as successful. The PLAY request positions the normal play time to the beginning of the range specified and delivers stream data until the end of the range is reached. PLAY requests may be pipelined (queued); a server MUST queue PLAY requests to be executed in order. That is, a PLAY request arriving while a previous PLAY request is still active is delayed until the first has been completed. This allows precise editing. For example, regardless of how closely spaced the two PLAY requests in the example below arrive, the server will first play seconds 10 through 15, then, immediately following, seconds 20 to 25, and finally seconds 30 through the end. C->S: PLAY rtsp://audio.example.com/audio RTSP/1.0 CSeq: 835 Session: 12345678 Range: npt=10-15 C->S: PLAY rtsp://audio.example.com/audio RTSP/1.0 CSeq: 836 Session: 12345678 Range: npt=20-25 C->S: PLAY rtsp://audio.example.com/audio RTSP/1.0 CSeq: 837 Session: 12345678 Range: npt=30- See the description of the PAUSE request for further examples. A PLAY request without a Range header is legal. It starts playing a stream from the beginning unless the stream has been paused. If a stream has been paused via PAUSE, stream delivery resumes at the pause point. If a stream is playing, such a PLAY request causes no further action and can be used by the client to test server liveness. Schulzrinne, et. al. Standards Track [Page 34]
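As a rough illustration of the SETUP and PLAY exchanges above, the sketch below serializes RTSP/1.0 requests and splits a Transport header into its transport specification and parameters. This is not part of the RFC; the helper names (`build_request`, `parse_transport`) and the hard-coded header value are illustrative assumptions only.

```python
# Illustrative sketch, not an RFC-defined API: build an RTSP/1.0 request
# and parse a Transport header like the one in the SETUP example above.

def build_request(method, url, cseq, headers):
    """Serialize an RTSP/1.0 request; RTSP header lines use CRLF."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    lines += [f"{name}: {value}" for name, value in headers.items()]
    return "\r\n".join(lines) + "\r\n\r\n"

def parse_transport(header_value):
    """Split 'RTP/AVP;unicast;client_port=4588-4589' into spec + params."""
    parts = header_value.split(";")
    params = {}
    for part in parts[1:]:
        key, _, value = part.partition("=")
        # Valueless parameters such as 'unicast' become boolean flags.
        params[key] = value if value else True
    return parts[0], params

setup = build_request("SETUP", "rtsp://example.com/foo/bar/baz.rm", 302,
                      {"Transport": "RTP/AVP;unicast;client_port=4588-4589"})

spec, params = parse_transport(
    "RTP/AVP;unicast;client_port=4588-4589;server_port=6256-6257")
# spec is "RTP/AVP"; params holds client_port and server_port ranges.
```

A real client would send these requests over the same TCP connection with monotonically increasing CSeq values, which is also how the pipelined PLAY requests in the example are ordered.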
The Range header may also contain a time parameter. This parameter
specifies a time in UTC at which the playback should start. If the
message is received after the specified time, playback is started
immediately. The time parameter may be used to aid in synchronization
of streams obtained from different sources.

For an on-demand stream, the server replies with the actual range that
will be played back. This may differ from the requested range if
alignment of the requested range to valid frame boundaries is required
for the media source. If no range is specified in the request, the
current position is returned in the reply. The unit of the range in
the reply is the same as that in the request.

After playing the desired range, the presentation is automatically
paused, as if a PAUSE request had been issued.

The following example plays the whole presentation starting at SMPTE
time code 0:10:20 until the end of the clip. The playback is to start
at 15:36 on 23 Jan 1997.

  C->S: PLAY rtsp://audio.example.com/twister.en RTSP/1.0
        CSeq: 833
        Session: 12345678
        Range: smpte=0:10:20-;time=19970123T153600Z

  S->C: RTSP/1.0 200 OK
        CSeq: 833
        Date: 23 Jan 1997 15:35:06 GMT
        Range: smpte=0:10:22-;time=19970123T153600Z

For playing back a recording of a live presentation, it may be
desirable to use clock units:

  C->S: PLAY rtsp://audio.example.com/meeting.en RTSP/1.0
        CSeq: 835
        Session: 12345678
        Range: clock=19961108T142300Z-19961108T143520Z

  S->C: RTSP/1.0 200 OK
        CSeq: 835
        Date: 23 Jan 1997 15:35:06 GMT

A media server only supporting playback MUST support the npt format
and MAY support the clock and smpte formats.
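The three Range formats used in the PLAY examples above (npt, smpte with a deferred UTC start, and clock) can be composed mechanically. The sketch below is illustrative only; the helper names are assumptions, not RFC-defined functions, and no validation of the time-code syntax is attempted.

```python
# Illustrative helpers (not from the RFC) for composing Range header
# values in the three formats shown in the PLAY examples above.

def npt_range(start, end=None):
    """Normal play time in seconds, e.g. npt=10-15 or the open npt=30-."""
    return f"npt={start}-{end if end is not None else ''}"

def smpte_range(start, utc_time=None):
    """Open-ended SMPTE range, optionally deferred to a UTC start time."""
    value = f"smpte={start}-"
    if utc_time:
        value += f";time={utc_time}"
    return value

def clock_range(start, end):
    """Absolute wall-clock range, e.g. for replaying a recorded event."""
    return f"clock={start}-{end}"

print(npt_range(10, 15))    # npt=10-15
print(npt_range(30))        # npt=30-
print(smpte_range("0:10:20", "19970123T153600Z"))
print(clock_range("19961108T142300Z", "19961108T143520Z"))
```

Per the last paragraph above, a playback-only server is required to understand only the npt form, so a cautious client would prefer npt unless it knows the server accepts smpte or clock ranges.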
10.6 PAUSE
