<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd" [
  <!ENTITY RFC2119 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2119.xml">
  <!ENTITY RFC5888 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5888.xml">
  <!ENTITY RFC8866 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.8866.xml">
  <!ENTITY RFC3550 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3550.xml">
  <!ENTITY RFC3261 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3261.xml">
  <!ENTITY RFC7205 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.7205.xml">
  <!ENTITY RFC5234 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5234.xml">
  <!ENTITY RFC8126 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.8126.xml">
  <!ENTITY RFC8446 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.8446.xml">
  <!ENTITY RFC8550 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.8550.xml">
  <!ENTITY RFC8174 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.8174.xml">
<!ENTITY RFC8845 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.8845.xml">
<!ENTITY RFC8859 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.8859.xml">



<!--
<!ENTITY RFC3524 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3524.xml">
<!ENTITY RFC8445 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.8445.xml">
<!ENTITY RFC5583 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5583.xml">
<!ENTITY RFC5956 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5956.xml">
-->

]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- used by XSLT processors -->
<!-- OPTIONS, known as processing instructions (PIs) go here. -->
<!-- For a complete list and description of PIs,
     please see http://xml.resource.org/authoring/README.html. -->
<!-- Below are generally applicable PIs that most I-Ds might want to use. -->
<?rfc strict="yes" ?>
<!-- give errors regarding ID-nits and DTD validation -->
<!-- control the table of contents (ToC): -->
<?rfc toc="yes"?>
<!-- generate a ToC -->
<?rfc tocdepth="2"?>
<!-- the number of levels of subsections in ToC. default: 3 -->
<!-- control references: -->
<?rfc symrefs="yes"?>
<!-- use symbolic references tags, i.e, [RFC2119] instead of [1] -->
<?rfc sortrefs="yes" ?>
<!-- sort the reference entries alphabetically -->
<!-- control vertical white space:
     (using these PIs as follows is recommended by the RFC Editor) -->
<?rfc compact="yes" ?>
<!-- do not start each main section on a new page -->
<?rfc subcompact="no" ?>
<!-- keep one blank line between list items -->
<!-- end of popular PIs -->
<rfc category="std" docName="draft-abhishek-mmusic-superimposition-grouping-02" ipr="trust200902">
   
   <front>
      <title abbrev="Superimposition Group Semantic">SDP Superimposition Grouping framework</title>
      <!-- Authors -->
      <author fullname="Rohit Abhishek" initials="R." surname="Abhishek">
         <organization>Tencent</organization>
         <address>
            <postal>
               <street>2747 Park Blvd</street>
               <city>Palo Alto</city>
               <region />
               <code>94588</code>
               <country>USA</country>
            </postal>
            <email>rabhishek@rabhishek.com</email>
         </address>
      </author>
      
      
      <author fullname="Stephan Wenger" initials="S." surname="Wenger">
         <organization>Tencent</organization>
         <address>
            <postal>
               <street>2747 Park Blvd</street>
               <city>Palo Alto</city>
               <region />
               <code>94588</code>
               <country>USA</country>
            </postal>
            <email>stewe@stewe.org</email>
         </address>
      </author>
      <!--
     
-->
      <date year="2021" />
      <area>art</area>
    <workgroup>mmusic</workgroup>
      <!-- <keyword/> -->
      <!-- <keyword/> -->
      <!-- <keyword/> -->
      <!-- <keyword/> -->
      <abstract>
         <t>
             This document defines semantics that allow signaling a new SDP group, "supim", for superimposed media in an SDP session. 
			 The "supim" semantics can be used by an application to relate fully or partly superimposed visual media streams so that they can be rendered as overlays on top of one or more background visual media streams. 
			 The superimposition grouping semantics are helpful when the media streams are separate and transported in different sessions.

            <!-- It can be added to the any particular media stream indicating the stream to be superimposed on top of the immersive video. -->
         </t>
      </abstract>
   </front>
 
   <middle>
      <section title="Introduction">
		  
		  <t>
              This document defines semantics that allow signaling a new SDP group, "supim", for superimposed media in an SDP session. 
 			 The "supim" semantics can be used by an application to relate fully or partly superimposed visual media streams so that they can be rendered as overlays on top of one or more background visual media streams. 
 			 The superimposition grouping semantics are helpful when the media streams are separate and transported in different sessions.
		  </t>
		  
         <t>
             Media superimposition herein is defined to be a visual media stream (video/image/text) that is fully or partly superimposed on top of an already existing visual media stream such that the resulting foreground and background media can be displayed simultaneously. Superimposition can be recursive in that visual media that is superimposed against its background can, in turn, be the background of another superimposed visual media. The superimposed visual media displayed over a background media content may be anywhere between opaque and transparent. 
			 
	Examples of applications for video superimposition include real-time multi-party gaming, where these superimposed media may be used to provide additional details or stats about each player, or multi-party teleconferencing where visual media from users in the teleconference may be superimposed over a background media or over each other.	
			 
             </t>
             
             <t>
                 This document describes new SDP group semantics for grouping superimposed media in an SDP session.	An SDP session description consists of one or more media descriptions, known as "m" lines, each of which can be identified by a token carried in a "mid" attribute.	A session-level "group" attribute groups different media lines using defined group semantics.	The semantics defined in this memo are to be used in conjunction with "The Session Description Protocol (SDP) Grouping Framework" <xref target="RFC5888" />. <!--The transparency of the superimposed media is currently out of this draft's scope.-->
             </t>
             
             <t>			  
				 We have studied the existing specifications, including the CLUE framework <xref target="RFC8845" /> and work in MPEG, and found that they do not cover our intended application space; please refer to <xref target="ExistingSpecification"/> for details. The superimposition grouping described below enables a compliant receiver/renderer implementation to learn the relative relevance of the visual media as chosen by the sender(s) and to reflect that relevance through superimposition when needed.  
				 

                </t>

          

          
          
      </section>
      
      
    <!--
      <section anchor="Discussion" title="Discussion Venue for this draft">
         <t>
             (Note to RFC Editor - if this document ever reaches you, please
              remove this section) </t>


           <t>   Substantial discussion of this document should take place on the MMUSIC
              working group mailing list ( mmusic@ietf.org).  Subscription and archive
              details are at https://www.ietf.org/mailman/listinfo/mmusic.
         </t>
      </section>
      -->
    
    
    
      
      
      <section anchor="Terminology" title="Terminology">
         <t>
	         The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
	         NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED",
	         "MAY", and "OPTIONAL" in this document are to be interpreted as
	         described in BCP 14 <xref target="RFC2119" /> <xref target="RFC8174" /> when, and only when, they
	         appear in all capitals, as shown here.  
         </t>
      </section>



      <section anchor="media_superimpostion" title="Media Superimposition in SDP">
        
			
			<t>	
				
                SDP is predominantly used to describe the format of multimedia communication sessions. Many SDP-based systems use open standards such as RTP <xref target="RFC3550"/> for media transport and SIP <xref target="RFC3261"/> for session setup and control. An SDP session description may contain more than one media description, each identified by an "m" line and denoting a single media stream. If multiple visual media lines are present in a session, rendering aspects, including the possible superimposition (foreground/background) relationship at the rendering device, are at present undefined.	This memo introduces a mechanism that makes certain rendering information available.	
				 The rendering information herein is limited to the foreground/background relationship of each grouped medium to the other media streams, expressed through a layer order value and, optionally, a transparency value. Where, spatially, the media is rendered is not covered by this memo and is in many application scenarios a function of the user interface. 
					 An example is shown in <xref target="media_superimposition"/>, where three foreground media streams have been superimposed over a background media stream, with Media B being partly superimposed over Media C.
           
		             <figure anchor="media_superimposition" title="An example of media superimposition">
		             <preamble></preamble>
		             <artwork><![CDATA[
			 _____________________________________
			| =================                   |
			| ==== Media A ====                   |
			| =================                   |
			| =================                   |
			|                   +++++++++++++++++ |
			|                   ++++ Media B ++++ |
			|       ############+++++++++++++++++ |
			|       ############+++++++++++++++++ |
			|       #### Media C ####             |	
			|       #################             |		
			|_____________________________________|
             
		             ]]></artwork>
		                <postamble></postamble> </figure>
			
					 
				 Of course, assuming sufficient screen real estate, a renderer may not have to rely on superimposition mechanisms at all: when enough screen real estate is available, a valid display strategy may well be to show all media without overlap and hence without superimposition.  However, when the screen real estate becomes insufficient, the information provided by the mechanisms defined in this memo can be used to order (in the sense of foreground to background) the visual media according to a hierarchy chosen by the sender or a MANE (media-aware network element), based on their application knowledge.
				 
			 </t>
				 
				 
	                <t>
	
          
	             When multiple superimposed streams are transmitted within a session, the receiver needs to be able to relate the media streams to each other. This is achieved by the SDP grouping framework <xref target="RFC5888" />, which uses the "group" attribute to group different "m" lines in a session. By using the new superimposition group semantics defined in this memo, a group's media streams can be uniquely identified across multiple SDP descriptions exchanged with different receivers, thereby identifying the streams in terms of their role in the session irrespective of their media type and transport protocol. The superimposed streams within the group may be multiplexed based on the guidelines defined in <xref target="draft-ietf-avtcore-multiplex-guidelines-12" />.
	             </t>
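	                <t>
	                As a non-normative illustration (the function name and behavior below are an example sketch, not part of the protocol), a receiver could extract the "mid" tokens of a "supim" group from a session description as follows:
	                </t>
	                <figure>
	                <artwork><![CDATA[
def supim_mids(sdp):
    # Return the mid tokens listed on the first "a=group:supim"
    # line of an SDP session description, or [] when no such
    # group is present.
    for line in sdp.splitlines():
        if line.startswith("a=group:supim "):
            return line[len("a=group:supim "):].split()
    return []
	                ]]></artwork>
	                </figure>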
 
      </section>





         
      <section anchor="SuperStreamGroup" title="Superimposition Group Identification Attribute">
         <t>
            The "superimposition media stream identification" attribute, "supim", is used to identify the relationship of superimposed media streams within a session description. 
			In a superimposition group, the media lines MAY have different media formats. There is no defined behavior for the rendering of non-visual media grouped in a superimposition group. 
			
			It is assumed that all media streams that need to be time-synchronized are time-synchronized.
			
			<!-- The media streams MAY or MAY NOT be time-synchronized. If time-synchronization is REQUIRED, lipsynch (LS) group attribute as defined in <xref target="RFC5888" /> maybe used. -->
             Its formatting follows <xref target="RFC5888" /> in the use of the 'mid' attribute to identify the media line to be included in the superimposition.
             </t>
             
 <!--
             Its formatting in SDP <xref target="RFC8866" /> is described by the following
            Augmented Backus-Naur Form (ABNF) <xref target="RFC5234" />:

         </t>
         <figure>
            <preamble />
            <artwork><![CDATA[mid-attribute = "a=mid:" identification-tag
identification-tag = token
                     ; token is defined in RFC8866]]></artwork>
            <postamble />
         </figure>
 -->
         <t>  The "supim" semantics are used for grouping the foreground and the background media streams for the purpose of composition, with the foreground media superimposed over the background media stream.
            A media player that implements this extension and receives a session description containing "m" lines grouped together using "supim" semantics is able to superimpose the foreground media streams on top of the background media stream where they overlap. 
			
			Non-supporting devices treat these media streams as independent media streams.
</t>
         <!-- This document defines a standard semantics: Overlay. Semantics extensions follow the Standards Action policy [RFC8126]. -->
      </section>
      


      
      
   
      
      
      
      
      
      <section anchor="groupuse" title="Use of group and mid">
         <t>
             All group and mid attributes MUST follow the rules defined in <xref target="RFC5888" />. The "mid" attribute MUST be used for all "m" lines covering visual media within a session description for which a foreground/background relationship is to be defined. 
			 The foreground/background relationship of visual media within a session description that is not covered in a group is undefined. Multiple "supim" groups MUST NOT be used within one session.
			 If the identification-tags associated with "a=group" lines do not map to any "m" lines, the identification-tags MUST be ignored.
         </t>
         <figure>
            <preamble />
            <artwork><![CDATA[
    semantics =/ "supim"
                 ; semantics extension as defined in RFC5888
]]></artwork>
            <postamble />
         </figure>
      </section>
      
      
      
      <section anchor="superimpostion" title=' "superimposition" Attribute for Superimposition Group Identification Attribute'>
          <t>
          This memo defines a new media-level attribute, "superimposition", with the following ABNF <xref target="RFC5234"/>. The identification-tag is defined in <xref target="RFC5888" />.
          </t>
          
          <figure>
             <preamble />
             <artwork><![CDATA[
	superimposition-attribute =
		"superimposition:" super-opt *(SP super-opt)
	super-opt = super-trans / super-layer
	super-trans = "transparency:" super-trans-val
	super-layer = "layer:" super-layer-val
	super-trans-val = signed-integer ; range [-128, 127]
	super-layer-val = signed-integer ; range [0, 255]

	signed-integer =
		<zero-based-integer defined in RFC8866>
			/ "-" <integer defined in RFC8866>
	attribute = <attribute defined in RFC8866>
	attribute =/ superimposition-attribute

                      ]]></artwork>
             <postamble />
          </figure>
          
          <t>
             The transparency of the media stream is identified by the super-trans-val value in the super-trans option. The value MUST be an ASCII representation of an 8-bit signed integer with values between -128 and 127, with linear weighting between the two extremes. A value of -128 means the media stream is opaque, and the highest value of 127 means it is fully transparent. Further details of the interpretation are left open to the implementer. The layering order of the media stream is identified by super-layer-val. It MUST be an integer value between 0 and n, where the value 0 represents the deepest background layer. For each k within 1..n, a reconstructed sample of the k-th media is superimposed (while perhaps applying a super-trans-val value) on the composition of the 0th to (k-1)-th reconstructed samples in the same spatial position. Each "m" line in a session MUST NOT contain more than one instance of each super-opt.
          <!-- A lower order means the media stream is a background media and higher value means the media stream is a foreground media. -->
          </t>
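          <t>
          As a non-normative sketch (the function name, return type, and error handling are illustrative choices, not part of the protocol), an implementation could parse the attribute value according to the ABNF above as follows:
          </t>
          <figure>
             <artwork><![CDATA[
def parse_superimposition(value):
    # Parse e.g. "transparency:-128 layer:0" into a dict,
    # enforcing the ranges from the ABNF comments and the
    # at-most-one-instance-per-option rule.
    ranges = {"transparency": (-128, 127), "layer": (0, 255)}
    opts = {}
    for opt in value.split(" "):
        name, sep, num = opt.partition(":")
        if not sep or name not in ranges or name in opts:
            raise ValueError("bad super-opt: " + opt)
        n = int(num)
        lo, hi = ranges[name]
        if not lo <= n <= hi:
            raise ValueError("value out of range: " + opt)
        opts[name] = n
    return opts
             ]]></artwork>
          </figure>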
          
          
          </section>
      
      

      <section anchor="supim" title="Example of Supim">
         <t>The following example shows a session description for superimposed media streams in
             an SDP session. The "group" line indicates that the "m" lines with tokens 1, 2 and 3 are grouped for the purpose of
             superimposition. </t>
         
         <t>In the example shown below, three media streams are transmitted for superimposition. The background media stream and the foreground media streams are grouped together using "supim". All media streams are videos carrying a "superimposition" attribute. The media stream with layer order value 0 is intended as the background.
         </t>
        

         <figure>
            <preamble />
            <artwork><![CDATA[
    v=0
    o=Alice 292742730 29277831 IN IP4 233.252.0.74
    c=IN IP4 233.252.0.79
    t=0 0
    a=group:supim 1 2 3
    m=video 30000 RTP/AVP 31
    a=mid:1
    a=superimposition:transparency:-128 layer:0
    m=video 30002 RTP/AVP 31
    a=mid:2
    a=superimposition:transparency:35 layer:1
    m=video 30003 RTP/AVP 31
    a=mid:3
    a=superimposition:transparency:75 layer:2
            ]]></artwork>
            <postamble />
         </figure>
   
  <t> The transparency value is used for composing the foreground with the background media <xref target="Wiki.Alpha-compositing" />. This value does not itself define the transparency of each pixel; rather, it is applied to each pixel within a frame and defines the factor by which the transparency of each pixel is to be increased or decreased. The "layer" value is relevant when two or more media streams are to be composed. When the transparency value of the foreground is -128, the composed image will be the foreground image, as it is displayed as opaque. Similarly, if the transparency value for the foreground media is 127, the resulting image will be the background media, as the foreground media stream is presented fully transparent, hence invisible. 
  The details of the weighting of foreground and background sample values based on a given super-trans-val are left to the implementation, beyond the abstract definition that a value of -128 means opaque and a value of 127 means transparent, with the weighting implemented such that it is visually linear for the values in between. We do not define a weighting formula in this specification, as such formulae would depend on many factors, including the colorspace and the sampling structure of the media.  </t>
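  <t>
  For illustration only: the specification deliberately leaves the weighting formula open, so the linear mapping below is merely one possible interpretation of the -128 (opaque) to 127 (transparent) scale, sketched for a single sample value per pixel:
  </t>
  <figure>
     <artwork><![CDATA[
def alpha_from_trans(trans_val):
    # One possible linear mapping of super-trans-val to an
    # opacity factor: -128 -> 1.0 (opaque), 127 -> 0.0
    # (fully transparent).
    return (127 - trans_val) / 255.0

def compose(bg, fg, trans_val):
    # Per-sample source-over blend of the foreground samples
    # onto the co-located background samples.
    a = alpha_from_trans(trans_val)
    return [a * f + (1.0 - a) * b for f, b in zip(fg, bg)]
     ]]></artwork>
  </figure>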
    </section>
      
    
    
      
      
      
      <section anchor="ExistingSpecification" title="Relationship with Existing Specifications (informative)">
          <t> Edt. Note: maybe we remove this section later once there is a general understanding of why the existing specifications in their current form are unsuitable. The CLUE framework <xref target="RFC8845" /> is the IETF's chosen technology for applications that require defining multiple "captures" (camera views) and their geo-spatial relationship to each other.	However, information pertaining to display/rendering is outside of CLUE's scope. While many CLUE-capable receivers infer appropriate rendering strategies from the information offered by CLUE, the CLUE framework has generally assumed non-overlapped rendering of the transmitted and reconstructed video streams from the multiple captures, often on different physical rendering devices.  For these reasons, we concluded that the CLUE framework neither supports the application we contemplate in this memo, nor would it be sensible to enhance the CLUE specifications with rendering-related mechanisms. 

There are certain technologies from standards bodies such as MPEG <xref target="MPEG-4" />, often described as "scene descriptions", that can to a certain extent address the applications we contemplate. We evaluated the technologies we are aware of and concluded that something different is required. We base this conclusion on a) the complexity of these mechanisms, and b) their design as a metadata media stream, which in the IETF context would be conveyed in RTP sessions or similar, rather than as a static or semi-static stream description that is best conveyed at session setup or renegotiation using SDP. </t>


      </section>
      
      
      <section anchor="Security" title="Security Considerations">
         <t>
            All security considerations as defined in
            <xref target="RFC5888" />
            apply.
         </t>
         <t>Using the "group" parameter with "supim" semantics, an entity that managed to modify the session descriptions exchanged between the participants to establish a multimedia session could force the participants to send a copy of the media to any destination of its choosing.</t>
         <t>
            Integrity mechanisms provided by protocols used to exchange session descriptions and media encryption can be used to prevent this attack. In SIP, Secure/Multipurpose Internet Mail Extensions (S/MIME)
            <xref target="RFC8550" />
            and Transport Layer Security (TLS)
            <xref target="RFC8446" />
            can be used to protect session description exchanges in an end-to-end and a hop-by-hop fashion, respectively.
         </t>
      </section>
      
      
      
      <section anchor="IANA" title="IANA Considerations">
         <!-- Summary of the feature indicated  -->
         <t>   The following contact information shall be used for all registrations included here:</t>
         <figure>
            <preamble />
            <artwork><![CDATA[
      Rohit Abhishek  <rabhishek@rabhishek.com>
      Stephan Wenger <stewe@stewe.org>
      The IETF MMUSIC working group <mmusic@ietf.org> or its successor
                                             as designated by the IESG.
            ]]></artwork>
        <postamble />
     </figure>
   
   
         <t>   This document defines a new SDP group semantics value for media superimposition in an
             SDP session.  This value can be used by the
             application to group the foreground and the background media streams to be superimposed together in a session.
            Semantics values to be used with this framework should be registered by IANA following the Standards Action policy
            <xref target="RFC8126" />. This document adds a new group semantics value to the "sdp-parameters" registry group defined in <xref target="RFC5888" /> <xref target="RFC8859" />.
         </t>
         <t>IANA is requested to register the following semantics value in the "sdp-parameters" registry.</t>
         <figure>
            <preamble />
            <artwork><![CDATA[
Semantics             Token          Reference
----------------------------------------------
Superimposition       supim          RFCXXXX]]></artwork>
            <postamble />
         </figure>
         <t>
             The "supim" attribute is used to group different media streams to be superimposed together with one background media stream and the rest foreground streams. Its format is defined in
            <xref target="SuperStreamGroup" />.
         </t>
         
         
         <t>
             IANA is requested to register the semantics value for SDP media-level attribute "superimposition" for "sdp-attributes(media-level only)". The registration procedure  in <xref target="RFC8866" /> applies.
         </t>
         
         
                 <figure>
              <preamble>  SDP Attribute ("sdp-attributes(media level only)"):    </preamble>
                    <artwork><![CDATA[
      Attribute name: superimposition
      Long form: superimposition
      Type of name: att-field
      Type of attribute: media level only
      Subject to charset: no
      Purpose: Signal layering order and transparency for
               superimposed media streams
      Reference: RFCXXXX
      Values: super-trans-val, super-layer-val
                    ]]></artwork>
                                <postamble />
                             </figure>
        
             
         
         <!--
         
         
         
        
         <t>The IANA Considerations section of the RFC MUST include the following information, which appears in the IANA registry along with the RFC number of the publication.</t>
         <t>
            <list style="symbols">
               <t>A brief description of the semantics.</t>
               <t>Token to be used within the "group" attribute. This token may be of any length, but SHOULD be no more than four characters long.</t>
               <t>Reference to a standards track RFC.</t>
            </list>
         </t>
       
   -->
      </section>
      
      
      <section anchor="Acknowledgements" title="Acknowledgements">
         <t>
           The authors would like to thank Christer Holmberg and Paul Kyzivat for reviewing the draft and providing key ideas.
         </t>
      </section>
      
      
      
      
   </middle>
   <back>
      <references title="Normative References">&RFC2119;
      &RFC5888;
      &RFC8866;
      &RFC3550;
      &RFC3261;
      &RFC5234;
      &RFC8126;
      &RFC8446;
      &RFC8550;
	  &RFC8174;
	  &RFC8859;
      
      
      <!--
      &RFC3524;
      &RFC8445;
      &RFC5583;
      &RFC5956;
      
      -->

      
      </references>
      <references title="Informative References">
          &RFC8845;
         <!-- &RFC7205; -->
          
          <reference anchor="draft-ietf-avtcore-multiplex-guidelines-12">
             <front>
                <title>Guidelines for using the Multiplexing Features of RTP to Support
                    Multiple Media Streams</title>
                <author initials="M" surname="Westerlund" fullname="Magnus Westerlund">
                   <organization />
                </author>
                <author initials="B" surname="Burman" fullname="Bo Burman">
                   <organization />
                </author>
                <author initials="C" surname="Perkins" fullname="Colin Perkins">
                   <organization />
                </author>
                <author initials="H" surname="Alvestrand" fullname="Harald Tveit Alvestrand">
                   <organization />
                </author>
                <author initials="R" surname="Even" fullname="Roni Even">
                   <organization />
                </author>
                
                
                <date month="June" day="16" year="2020" />
                <abstract>
                   <t>   The Real-time Transport Protocol (RTP) is a flexible protocol that
                       can be used in a wide range of applications, networks, and system
                       topologies.  That flexibility makes for wide applicability, but can
                       complicate the application design process.  One particular design
                       question that has received much attention is how to support multiple
                       media streams in RTP.  This memo discusses the available options and
                       design trade-offs, and provides guidelines on how to use the
                       multiplexing features of RTP to support multiple media streams.</t>
                </abstract>
             </front>
             <seriesInfo name="Internet-Draft" value="draft-ietf-avtcore-multiplex-guidelines-12" />
             <format type="TXT" target="https://tools.ietf.org/html/draft-ietf-avtcore-multiplex-guidelines-12.txt" />
          </reference>
          

          
          
          
          <reference anchor="Wiki.Alpha-compositing" target="https://en.wikipedia.org/wiki/Alpha_compositing">
            <front>
              <title>Alpha compositing</title>
              <author/>
              <date/>
            </front>
          </reference>
          
		  
		  
          <reference anchor="MPEG-4" target="https://mpeg.chiariglione.org/standards/mpeg-4/scene-description-and-application-engine">
            <front>
              <title>MPEG-4 Scene Description and Application Engine</title>
              <author/>
              <date/>
            </front>
          </reference>
          
          
          
          
      
          
          
          
          
          
          
          
         
         
         
      </references>
   </back>
</rfc>
