<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd" [
  <!ENTITY RFC2119 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2119.xml">
  <!ENTITY RFC5888 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5888.xml">
  <!ENTITY RFC4566 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4566.xml">
  <!ENTITY RFC3550 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3550.xml">
  <!ENTITY RFC3261 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3261.xml">
  <!ENTITY RFC7205 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.7205.xml">
  <!ENTITY RFC5234 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5234.xml">
  <!ENTITY RFC8126 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.8126.xml">
  <!ENTITY RFC8446 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.8446.xml">
  <!ENTITY RFC8550 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.8550.xml">


<!--
<!ENTITY RFC3524 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3524.xml">
<!ENTITY RFC8445 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.8445.xml">
<!ENTITY RFC5583 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5583.xml">
<!ENTITY RFC5956 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5956.xml">
-->

]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- used by XSLT processors -->
<!-- OPTIONS, known as processing instructions (PIs) go here. -->
<!-- For a complete list and description of PIs,
     please see http://xml.resource.org/authoring/README.html. -->
<!-- Below are generally applicable PIs that most I-Ds might want to use. -->
<?rfc strict="yes" ?>
<!-- give errors regarding ID-nits and DTD validation -->
<!-- control the table of contents (ToC): -->
<?rfc toc="yes"?>
<!-- generate a ToC -->
<?rfc tocdepth="2"?>
<!-- the number of levels of subsections in ToC. default: 3 -->
<!-- control references: -->
<?rfc symrefs="yes"?>
<!-- use symbolic references tags, i.e, [RFC2119] instead of [1] -->
<?rfc sortrefs="yes" ?>
<!-- sort the reference entries alphabetically -->
<!-- control vertical white space:
     (using these PIs as follows is recommended by the RFC Editor) -->
<?rfc compact="yes" ?>
<!-- do not start each main section on a new page -->
<?rfc subcompact="no" ?>
<!-- keep one blank line between list items -->
<!-- end of popular PIs -->
<rfc category="std" docName="draft-abhishek-mmusic-overlay-grouping-00" ipr="trust200902">
   <front>
      <title abbrev="Overlay Group Semantic">SDP Overlay Grouping Framework for Immersive Telepresence Media Streams</title>
      <!-- Authors -->
      <author fullname="Rohit Abhishek" initials="R." surname="Abhishek">
         <organization>Tencent</organization>
         <address>
            <postal>
               <street>2747 Park Blvd</street>
               <city>Palo Alto</city>
               <region />
               <code>94588</code>
               <country>USA</country>
            </postal>
            <email>rabhishek@rabhishek.com</email>
         </address>
      </author>
      <!--
     
-->
      <date year="2020" />
      <area>art</area>
    <workgroup>mmusic</workgroup>
      <!-- <keyword/> -->
      <!-- <keyword/> -->
      <!-- <keyword/> -->
      <!-- <keyword/> -->
      <abstract>
         <t>
            This document defines semantics that allow signalling of a new SDP group, "OL", for overlays in an immersive telepresence session.
          The "OL" attribute can be used by the application to relate all the overlay media streams so that they can be rendered as overlays on top of the immersive video. The overlay grouping semantics are required when the overlay media data is carried separately and transported via different protocols.
            <!-- It can be added to the any particular media stream indicating the stream to be superimposed on top of the immersive video. -->
         </t>
      </abstract>
   </front>
   <middle>
      <section title="Introduction">
         <t>
            Telepresence
            <xref target="RFC7205" />
            can be described as a technology that gives a person the experience of "being present" at a remote location in both video and audio telepresence sessions, so as to enhance the user's sense of realism and presence
            <xref target="TS26.223" />.
            SDP
            <xref target="RFC4566" />
            is predominantly used to describe the format of multimedia communication sessions for telepresence conferencing. Such sessions use open standards such as RTP
            <xref target="RFC3550" />
            and SIP
            <xref target="RFC3261" />.
         </t>
         <t>An SDP session description may contain more than one media line, each identified by an "m=" line. Each "m=" line denotes a single media stream. If multiple media lines are present in a session, a receiver needs to identify the relationships between them.</t>
         <t>
            An overlay media stream can be defined as a piece of visual media that can be rendered over an immersive video, an image, or a viewport
            <xref target="ISO23090" />.
            When an overlay is transmitted, its media stream needs to be uniquely identified across the multiple SDP descriptions exchanged with different receivers, so that the stream can be identified in terms of its role in the session irrespective of its media type and transport protocol.
         </t>
         <t>In an immersive telepresence session, one media stream is streamed as an immersive stream, whereas other media streams are overlaid on top of the immersive video/image. An end user can stream more than one overlay, subject to its decoding capacity. When multiple overlay streams are transmitted within a session, the receiving application needs to be able to relate the media streams to each other. This can be achieved with the SDP grouping framework, using the "group" attribute to group different "m" lines in a session.
            However, the current SDP signalling framework does not provide such grouping semantics for overlays.</t>
         <t>
            This document defines new SDP group semantics for grouping overlays when an immersive media stream is transmitted for telepresence conferencing.
            An SDP session description consists of one or more media lines, known as "m" lines, each of which can be identified by a token carried in a "mid" attribute. The session description carries a session-level "group" attribute that groups different media lines using defined group semantics. The semantics defined in this memo are to be used in conjunction with
            <xref target="RFC5888" />,
            titled "The Session Description Protocol (SDP) Grouping Framework".
         </t>
      </section>
      
      
      
      <section anchor="Discussion" title="Discussion Venue for this draft">
         <t>
             (Note to RFC Editor: if this document ever reaches you, please
              remove this section.)</t>

         <t>Substantial discussion of this document should take place on the MMUSIC
            working group mailing list (mmusic@ietf.org). Subscription and archive
            details are at https://www.ietf.org/mailman/listinfo/mmusic.
         </t>
      </section>
      
      
      
      <section anchor="Terminology" title="Terminology">
         <t>
            The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
               "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
               this document are to be interpreted as described in <xref target="RFC2119" />.
         </t>
      </section>
      <!-- section anchor="Definition" title="Definition" -->
      <!--
      <section anchor="Motivation" title="Motivation for new Overlay Label">
    <t>
In an immersive telepresence session, one media  is streamed as an immersive stream whereas other media streams are overlaid on top of the immersive video/image. An end user can stream more than one overlay subject to its decoding capacity.
    </t>
    <t>
        Currently, SDP <xref target="RFC4566" /> does not provide any grouping label for overlays for a CLUE session in a teleconference (eg. , slides, video, image) in a form which can be understood by the application. Although the overlays can be signalled by the server and the end user can see and read the content of the media streams, it is up to the application to determine what to do with them <xref target="RFC5888"/>. When multiple overlays are signalled by SDP, it is necessary to group all overlays within a session description to relate them to each other.
         </t>
      </section>
      -->
      <section anchor="Overview" title="Overview of Operation">
         <t>
            This section provides a non-normative overview of the SDP overlay group semantics.
             An immersive stream for a telepresence session may involve one or more conference rooms equipped with a 360-degree camera, with remote users streaming via head-mounted displays. "Participant cameras" are used to capture the conference participants, whereas "presentation cameras" or "content cameras" can be used for document display
            <xref target="RFC7205" />
            .
            <!-- CLUE framework describes the protocols for spatially related multi media streams in an IP Multimedia session
             <xref target="I-D.ietf-clue-data-model-schema"/>
             <xref target="I-D.ietf-clue-datachannel"/>
             <xref target="I-D.ietf-clue-framework"/>
             <xref target="I-D.ietf-clue-protocol"/>
             <xref target="I-D.ietf-clue-rtp-mapping"/>
             <xref target="I-D.ietf-clue-signaling"/>.
             -->
            <!--Rohit: -->
            The remote participant can stream any of the available immersive videos in the session as the background, whereas other available streams, such as a presentation stream or 2D video from any other room or participant, can be used as an overlay on top of the immersive video/image.
         </t>
         <!-- Talk about SDP limitations ...https://datatracker.ietf.org/wg/mmusic/about/ -->
         <t>
            A user with a head-mounted display may stream more than one overlay in a single SDP session.
            These overlay streams are transmitted via "m" lines in the SDP session description. Each "m" line in the session description is identified by a token carried in the "mid" attribute. When multiple overlay streams are transmitted within a session, the receiving application needs to be able to relate the media streams to each other. This is achieved by using the SDP grouping framework
            <xref target="RFC5888" />.
            The session description carries a session-level "group" attribute for the overlays, which groups different "m" lines using the overlay ("OL") group semantics.
         </t>
         <!--
         <figure>
           <preamble></preamble>

           <artwork><![CDATA[
v=0
o=Alice 292742730 29277831 IN IP4 131.163.72.4
c=IN IP4 131.164.74.2
t=0 0
a=group:OL 1 2
m=video 30000 RTP/AVP 31
a=mid:1
m=video 30002 RTP/AVP 31
a=mid:2
           ]]></artwork>

         <postamble></postamble>
       </figure>
 -->
      </section>
      <section anchor="OverlayStreamGroup" title="Overlay Stream Group Identification Attribute">
         <t>
            The "mid" media stream identification attribute is used to identify the overlay media streams within a session description. In an overlay group, the media lines MAY carry different media content.
            Its formatting in SDP
            <xref target="RFC4566" />
            is described by the following Augmented Backus-Naur Form (ABNF)
            <xref target="RFC5234" />
            :
         </t>
         <figure>
            <preamble />
            <artwork><![CDATA[mid-attribute = "a=mid:" identification-tag
identification-tag = token
                     ; token is defined in RFC4566]]></artwork>
            <postamble />
         </figure>
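         <t>
            As a non-normative illustration, an implementation might extract the identification-tag with a small parser. The following Python sketch (the function name and the regular expression are the author's illustration, not part of this memo) matches the ABNF above, using the "token" character set from RFC 4566:
         </t>
         <figure>
            <preamble />
            <artwork><![CDATA[
import re

# "token" characters per RFC 4566:
# %x21 / %x23-27 / %x2A-2B / %x2D-2E / %x30-39 / %x41-5A / %x5E-7E
TOKEN = r"[\x21\x23-\x27\x2a\x2b\x2d\x2e\x30-\x39\x41-\x5a\x5e-\x7e]+"
MID_RE = re.compile(r"^a=mid:(" + TOKEN + r")$")

def parse_mid(line):
    """Return the identification-tag of an "a=mid:" line, or None."""
    m = MID_RE.match(line)
    return m.group(1) if m else None
            ]]></artwork>
            <postamble />
         </figure>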
         <t>This document defines the new "OL" group semantics, used to identify overlay media streams within a session description and to group the media streams for the different overlays together within a session. An application that receives a session description containing "m" lines grouped together using the "OL" semantics MUST overlay the corresponding media streams on top of the immersive media stream.</t>
      </section>
      <section anchor="groupuse" title="Use of group and mid">
         <t>
            All group and mid attributes MUST follow the rules defined in
            <xref target="RFC5888" />. The "mid" attribute should be used for all "m" lines within a session description. If any "m" line in a session description has no "mid" attribute, the application MUST NOT perform any media-line grouping. If an identification-tag in an "a=group" line does not map to any "m" line, it MUST be ignored.
         </t>
         <figure>
            <preamble />
            <artwork><![CDATA[group-attribute ="a=group:" semantics
                  *(SP identification-tag)
semantics = "OL" / semantics-extension
semantics-extension = token
                      ; token is defined in RFC4566]]></artwork>
            <postamble />
         </figure>
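         <t>
            Non-normatively, an "a=group" line matching the ABNF above can be split into its semantics token and list of identification-tags. The Python sketch below (the helper name is the author's illustration, not part of this memo) shows one way to do so:
         </t>
         <figure>
            <preamble />
            <artwork><![CDATA[
import re

# "token" characters per RFC 4566
TOKEN = r"[\x21\x23-\x27\x2a\x2b\x2d\x2e\x30-\x39\x41-\x5a\x5e-\x7e]+"
GROUP_RE = re.compile(r"^a=group:(" + TOKEN + r")((?: " + TOKEN + r")*)$")

def parse_group(line):
    """Split an "a=group:" line into (semantics, identification-tags)."""
    m = GROUP_RE.match(line)
    if m is None:
        return None
    return m.group(1), m.group(2).split()
            ]]></artwork>
            <postamble />
         </figure>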
      </section>
      <section anchor="OL" title="Example of OL">
         <t>The following two examples show session descriptions for overlays in an immersive telepresence conference. The "group" line indicates that the "m" lines with tokens 1 and 2 are grouped for the purpose of overlays and are intended to be overlaid on top of the immersive video.</t>
         
         <t>In the first example shown below, two overlays are being transmitted. The first media stream (mid:1) carries a video stream, and the second (mid:2) carries an audio stream.</t>
         <figure>
            <preamble />
            <artwork><![CDATA[
    v=0
    o=Alice 292742730 29277831 IN IP4 233.252.0.74
    c=IN IP4 233.252.0.79
    t=0 0
    a=group:OL 1 2
    m=video 30000 RTP/AVP 31
    a=mid:1
    m=audio 30002 RTP/AVP 0
    a=mid:2
            ]]></artwork>
            <postamble />
         </figure>
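         <t>
            The grouping rules of the previous sections can be applied to the example above. The following Python sketch (non-normative; the function name and structure are the author's illustration) collects the "mid" values that an application would treat as overlays, ignoring identification-tags that map to no "m" line and skipping grouping entirely when an "m" line lacks a "mid" attribute:
         </t>
         <figure>
            <preamble />
            <artwork><![CDATA[
def overlay_mids(sdp):
    """Return the mids grouped under "OL" in an SDP session description.

    Non-normative sketch: tags with no matching m-line are ignored,
    and grouping is skipped entirely if any m-line lacks a mid.
    """
    mids, tags = [], []
    media_seen, has_mid = False, True
    for line in sdp.splitlines():
        if line.startswith("m="):
            if media_seen and not has_mid:
                return []          # a previous m-line had no mid
            media_seen, has_mid = True, False
        elif line.startswith("a=mid:"):
            has_mid = True
            mids.append(line[len("a=mid:"):])
        else:
            parts = line.split()
            if parts and parts[0] == "a=group:OL":
                tags.extend(parts[1:])
    if media_seen and not has_mid:
        return []                  # the last m-line had no mid
    return [tag for tag in tags if tag in mids]
            ]]></artwork>
            <postamble />
         </figure>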
         
         
<t>The second example, below, adds the "content" attribute to the media streams that are transmitted for overlay purposes.</t>
         
    
         <figure>
            <preamble />
            <artwork><![CDATA[
    v=0
    o=Alice 292742730 29277831 IN IP4 233.252.0.74
    c=IN IP4 233.252.0.79
    t=0 0
    a=group:OL 1 2
    m=video 30000 RTP/AVP 31
    a=content:slides
    a=mid:1
    m=video 30002 RTP/AVP 31
    a=content:speaker
    a=mid:2
            ]]></artwork>
            <postamble />
         </figure>
    
    
    
    
         
         <!--
         a= group:OL 1 2
         m=slide: 30000 RTP/AVP 31 a=mid:1
         m= video 30000 RTP/AVP 31 a=mid:2
         m= video 30000 RTP/AVP 31 a=mid:3
         -->
      </section>
      <section anchor="Security" title="Security Considerations">
         <t>
            All security considerations as defined in
            <xref target="RFC5888" />
            apply.
         </t>
         <t>Using the "group" parameter with the "OL" semantics defined here, an entity that managed to modify the session descriptions exchanged between the participants to establish a multimedia session could force the participants to send a copy of the media to any destination of its choosing.</t>
         <t>
            Integrity mechanisms provided by protocols used to exchange session descriptions and media encryption can be used to prevent this attack. In SIP, Secure/Multipurpose Internet Mail Extensions (S/MIME)
            <xref target="RFC8550" />
            and Transport Layer Security (TLS)
            <xref target="RFC8446" />
            can be used to protect session description exchanges in an end-to-end and a hop-by-hop fashion, respectively.
         </t>
      </section>
      <section anchor="IANA" title="IANA Considerations">
         <!-- Summary of the feature indicated  -->
         <t>The following contact information shall be used for all registrations included here:</t>
         <figure>
            <preamble />
            <artwork><![CDATA[Contact:         Rohit Abhishek
                 email: rabhishek@rabhishek.com
                 tel  : +1-816-585-7500]]></artwork>
            <postamble />
         </figure>
         <t>This document defines new SDP group semantics for overlays in an immersive telepresence session.
                 This attribute can be used by the application to group all the overlays in a session.
            Semantics values to be used with this framework should be registered by IANA following the Standards Action policy
            <xref target="RFC8126" />. This document adds new group semantics to the registry defined in <xref target="RFC5888" />.
         </t>
         <t>The following semantics need to be registered by IANA in the "Semantics for the "group" SDP Attribute" registry under "SDP Parameters".</t>
         <figure>
            <preamble />
            <artwork><![CDATA[Semantics             Token          Reference
----------------------------------------------
Overlay               OL              RFCXXXX]]></artwork>
            <postamble />
         </figure>
         <t>
            The "OL" attribute is used to group different media streams to be rendered as overlays. Its format is defined in
            <xref target="OverlayStreamGroup" />
            .
         </t>
         
         <!--
         
         Detailed description of the feature value meaning,
          and of the format and meaning of the feature tag values
          for the alternative results.
         
         
         
         The feature tag is intended primarily for use in the following
         applications, protocols, services, or negotiation mechanisms:
         
         -->
         <t>The IANA Considerations section of the RFC MUST include the following information, which appears in the IANA registry along with the RFC number of the publication.</t>
         <t>
            <list style="symbols">
               <t>A brief description of the semantics.</t>
               <t>Token to be used within the "group" attribute. This token may be of any length, but SHOULD be no more than four characters long.</t>
               <t>Reference to a standards track RFC.</t>
            </list>
         </t>
         <!--
      <figure>
        <preamble>The following are the current entries in the registry:</preamble>
        <artwork><![CDATA[
 Semantics                         Token   Reference
 
 Lip Synchronization                LS     [RFC5888]
 Flow Identification                FID    [RFC5888]
 Single Reservation Flow            SRF    [RFC3524]
 Alternative Network Address Types  ANAT   [RFC8445]
 Forward Error Correction           FEC    [RFC5956]
 Decoding Dependency                DDP    [RFC5583]
]]></artwork>
      <postamble></postamble>
    </figure>
   -->
      </section>
   </middle>
   <back>
      <references title="Normative References">&RFC2119;
      &RFC5888;
      &RFC4566;
      &RFC3550;
      &RFC3261;
      
      &RFC5234;
      &RFC8126;
      &RFC8446;
      &RFC8550;
      
      
      <!--
      &RFC3524;
      &RFC8445;
      &RFC5583;
      &RFC5956;
      
      -->

      
      </references>
      <references title="Informative References">
          &RFC7205;
         <reference anchor="ISO23090">
            <front>
               <title>Information technology — Coded representation of immersive media — Part 2: Omnidirectional MediA Format (OMAF) 2nd Edition</title>
               <author initials="" surname="" fullname="">
                  <organization />
               </author>
               <date month="February" year="2020" />
               <abstract>
                  <t />
               </abstract>
            </front>
            <seriesInfo name="ISO" value="ISO 23090-2:2020(E)" />
            <format type="TXT" target="https://www.iso.org/standard/73310.html" />
         </reference>
         
         
         <reference anchor="TS26.223">
            <front>
               <title>3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Telepresence using the IP Multimedia Subsystem (IMS); Media Handling and Interaction</title>
               <author initials="" surname="" fullname="">
                  <organization />
               </author>
               <date month="March" year="2020" />
               <abstract>
                  <t>This Technical Specification has been produced by the 3 rd Generation Partnership Project (3GPP).</t>
               </abstract>
            </front>
            <seriesInfo name="3GPP" value="TS26.223" />
            <format type="TXT" target="https://www.3gpp.org/ftp//Specs/archive/26_series/26.223/" />
         </reference>
         
         
         <!--
         <reference anchor="I-D.ietf-clue-framework">
            <front>
               <title>Framework for Telepresence Multi-Streams</title>
               <author initials="M" surname="Duckworth" fullname="Mark Duckworth">
                  <organization />
               </author>
               <author initials="A" surname="Pepperell" fullname="Andrew Pepperell">
                  <organization />
               </author>
               <author initials="S" surname="Wenger" fullname="Stephan Wenger">
                  <organization />
               </author>
               <date month="January" day="8" year="2016" />
               <abstract>
                  <t>This document defines a framework for a protocol to enable devices in a telepresence conference to interoperate. The protocol enables communication of information about multiple media streams so a sending system and receiving system can make reasonable decisions about transmitting, selecting and rendering the media streams. This protocol is used in addition to SIP signaling and SDP negotiation for setting up a telepresence session.</t>
               </abstract>
            </front>
            <seriesInfo name="Internet-Draft" value="draft-ietf-clue-framework-25" />
            <format type="TXT" target="http://www.ietf.org/internet-drafts/draft-ietf-clue-framework-25.txt" />
         </reference>
         
         
         <reference anchor="I-D.ietf-clue-signaling">
            <front>
               <title>CLUE Signaling</title>
               <author initials="P" surname="Kyzivat" fullname="Paul Kyzivat">
                  <organization />
               </author>
               <author initials="L" surname="Xiao" fullname="Lennard Xiao">
                  <organization />
               </author>
               <author initials="C" surname="Groves" fullname="Christian Groves">
                  <organization />
               </author>
               <author initials="R" surname="Hansen" fullname="Robert Hansen">
                  <organization />
               </author>
               <date month="August" day="5" year="2015" />
               <abstract>
                  <t>This document specifies how CLUE-specific signaling such as the CLUE protocol [I-D.ietf-clue-protocol] and the CLUE data channel [I-D.ietf-clue-datachannel] are used with each other and with existing signaling mechanisms such as SIP and SDP to produce a telepresence call.</t>
               </abstract>
            </front>
            <seriesInfo name="Internet-Draft" value="draft-ietf-clue-signaling-06" />
            <format type="TXT" target="http://www.ietf.org/internet-drafts/draft-ietf-clue-signaling-06.txt" />
         </reference>
         
         
         
         <reference anchor="I-D.ietf-clue-datachannel">
            <front>
               <title>CLUE Protocol data channel</title>
               <author initials="C" surname="Holmberg" fullname="Christer Holmberg">
                  <organization />
               </author>
               <date month="September" day="7" year="2015" />
               <abstract>
                  <t>This document defines how to use the WebRTC data channel mechanism in order to realize a data channel, referred to as a CLUE data channel, for transporting CLUE protocol messages between two CLUE entities. The document defines how to describe the SCTPoDTLS association used to realize the CLUE data channel using the Session Description Protocol (SDP), and defines usage of SDP-based "SCTP over DTLS" data channel negotiation mechanism for establishing a CLUE data channel. Details and procedures associated with the CLUE protocol, and the SDP Offer/Answer procedures for negotiating usage of a CLUE data channel, are outside the scope of this document.</t>
               </abstract>
            </front>
            <seriesInfo name="Internet-Draft" value="draft-ietf-clue-datachannel-10" />
            <format type="TXT" target="http://www.ietf.org/internet-drafts/draft-ietf-clue-datachannel-10.txt" />
         </reference>
         
         
         <reference anchor="I-D.ietf-clue-data-model-schema">
            <front>
               <title>An XML Schema for the CLUE data model</title>
               <author initials="R" surname="Presta" fullname="Roberta Presta">
                  <organization />
               </author>
               <author initials="S" surname="Romano" fullname="Simon Romano">
                  <organization />
               </author>
               <date month="June" day="29" year="2015" />
               <abstract>
                  <t>This document provides an XML schema file for the definition of CLUE data model types.</t>
               </abstract>
            </front>
            <seriesInfo name="Internet-Draft" value="draft-ietf-clue-data-model-schema-10" />
            <format type="TXT" target="http://www.ietf.org/internet-drafts/draft-ietf-clue-data-model-schema-10.txt" />
         </reference>
         
         
         
         <reference anchor="I-D.ietf-clue-protocol">
            <front>
               <title>CLUE protocol</title>
               <author initials="R" surname="Presta" fullname="Roberta Presta">
                  <organization />
               </author>
               <author initials="S" surname="Romano" fullname="Simon Romano">
                  <organization />
               </author>
               <date month="October" day="19" year="2015" />
               <abstract>
                  <t>The CLUE protocol is an application protocol conceived for the description and negotiation of a CLUE telepresence session. The design of the CLUE protocol takes into account the requirements and the framework defined, respectively, in [I-D.ietf-clue-framework] and [RFC7262]. The companion document [I-D.ietf-clue-signaling] delves into CLUE signaling details, as well as on the SIP/SDP session establishment phase. CLUE messages flow upon the CLUE data channel, based on reliable and ordered SCTP over DTLS transport, as described in [I-D.ietf-clue-datachannel]. Message details, together with the behavior of CLUE Participants acting as Media Providers and/or Media Consumers, are herein discussed.</t>
               </abstract>
            </front>
            <seriesInfo name="Internet-Draft" value="draft-ietf-clue-protocol-06" />
            <format type="TXT" target="http://www.ietf.org/internet-drafts/draft-ietf-clue-protocol-06.txt" />
         </reference>
         
         
         <reference anchor="I-D.ietf-clue-rtp-mapping">
            <front>
               <title>Mapping RTP streams to CLUE media captures</title>
               <author initials="R" surname="Even" fullname="Roni Even">
                  <organization />
               </author>
               <author initials="J" surname="Lennox" fullname="Jonathan Lennox">
                  <organization />
               </author>
               <date month="October" day="18" year="2015" />
               <abstract>
                  <t>This document describes how the Real Time transport Protocol (RTP) is used in the context of the CLUE protocol. It also describes the mechanisms and recommended practice for mapping RTP media streams defined in SDP to CLUE media captures.</t>
               </abstract>
            </front>
            <seriesInfo name="Internet-Draft" value="draft-ietf-clue-rtp-mapping-05" />
            <format type="TXT" target="http://www.ietf.org/internet-drafts/draft-ietf-clue-rtp-mapping-05.txt" />
            <format type="PDF" target="http://www.ietf.org/internet-drafts/draft-ietf-clue-rtp-mapping-05.pdf" />
         </reference>
         
         -->
         
         
         
         
      </references>
   </back>
</rfc>
