<?xml version="1.0" encoding="US-ASCII"?>
<!-- This template is for creating an Internet Draft using xml2rfc,
     which is available here: http://xml.resource.org. -->
<!DOCTYPE rfc SYSTEM "rfc2629.dtd" [
<!-- One method to get references from the online citation libraries.
     There has to be one entity for each item to be referenced.
     An alternate method (rfc include) is described in the references. -->

<!ENTITY RFC2119 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2119.xml">
<!ENTITY RFC4566 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4566.xml">
<!ENTITY RFC5646 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5646.xml">
<!-- <!ENTITY I-D.tomkinson-multilangcontent PUBLIC "" "http://xml.resource.org/public/rfc/bibxml3/reference.I-D.tomkinson-multilangcontent.xml"> -->
]>

<?xml-stylesheet type='text/xsl' href='rfc2629.xslt' ?>
<!-- used by XSLT processors -->
<!-- For a complete list and description of processing instructions (PIs),
     please see http://xml.resource.org/authoring/README.html. -->
<!-- Below are generally applicable Processing Instructions (PIs) that most I-Ds might want to use.
     (Here they are set differently than their defaults in xml2rfc v1.32) -->
<?rfc strict="yes" ?>
<!-- give errors regarding ID-nits and DTD validation -->
<!-- control the table of contents (ToC) -->
<?rfc toc="yes"?>
<!-- generate a ToC -->
<?rfc tocdepth="4"?>
<!-- the number of levels of subsections in ToC. default: 3 -->
<!-- control references -->
<?rfc symrefs="yes"?>
<!-- use symbolic reference tags, i.e., [RFC2119] instead of [1] -->
<?rfc sortrefs="yes" ?>
<!-- sort the reference entries alphabetically -->
<!-- control vertical white space
     (using these PIs as follows is recommended by the RFC Editor) -->
<?rfc compact="yes" ?>
<!-- do not start each main section on a new page -->
<?rfc subcompact="no" ?>
<!-- keep one blank line between list items -->
<!-- end of list of popular I-D processing instructions -->
<rfc 
     category="std"
     docName="draft-ietf-slim-negotiating-human-language-02"
     ipr="trust200902"
    >
  <!-- category values: std, bcp, info, exp, and historic
     ipr values: full3667, noModification3667, noDerivatives3667
     you can add the attributes updates="NNNN" and obsoletes="NNNN"
     they will automatically be output with "(if approved)" -->

  <!-- ***** FRONT MATTER ***** -->

  <front>
    <!-- The abbreviated title is used in the page header - it is only necessary if the
         full title is longer than 39 characters -->

    <title abbrev="Negotiating Human Language">Negotiating Human Language in Real-Time Communications
    </title>

    <author fullname="Randall Gellens" initials="R." 
            surname="Gellens">
      <organization>Core Technology Consulting</organization>

      <address>

        <email>rg+ietf@randy.pensive.org</email>
      </address>
    </author>

    <date  year="2016" />

    <!-- If the month and year are both specified and are the current ones, xml2rfc will fill
         in the current day for you. If only the current year is specified, xml2rfc will fill
     in the current day and month for you. If the year is not the current one, it is
     necessary to specify at least a month (xml2rfc assumes day="1" if not specified for the
     purpose of calculating the expiry date).  With drafts it is normally sufficient to
     specify just the year. -->

    <!-- Meta-data Declarations -->

    <area>ART</area>

    <workgroup>Network Working Group</workgroup>

    <keyword>SDP</keyword>
    <keyword>language</keyword>
    <keyword>human language</keyword>
    <keyword>SIP</keyword>
    <keyword>SLIM</keyword>

    <!-- Keywords will be incorporated into HTML output
         files in a meta tag but they have no effect on text or nroff
         output. If you submit your draft to the RFC Editor, the
         keywords will be used for the search engine. -->

    <abstract>
      <t>
      Users have various human (natural) language needs, abilities, and preferences regarding spoken, written, and signed languages.  When establishing interactive communication ("calls"), there needs to be a way to negotiate (communicate and match) the caller's language and media needs with the capabilities of the called party.  This is especially important for emergency calls, where a call can be handled by a call taker capable of communicating with the user, or a translator or relay operator can be bridged into the call during setup; it also applies to non-emergency calls (for example, calls to a company call center).
      </t>
      <t>
      This document describes the need and a solution using new SDP stream attributes.
      </t>

    </abstract>

  </front>

  <middle>

    <section title="Introduction">
    <t>
    A mutually comprehensible language is helpful for human communication.  This document addresses the real-time, interactive side of the issue.  A companion document on language selection in email <xref target="draft-tomkinson-multilangcontent"/> addresses the non-real-time side.
    </t>

      <t>
      When setting up interactive communication sessions (using SIP or other protocols), human (natural) language and media modality (voice, video, text) negotiation may be needed.  Unless the caller and callee know each other, or there is contextual or out-of-band information from which the language(s) and media modalities can be determined, the spoken, signed, or written languages need to be negotiated based on the caller's needs and the callee's capabilities.  This need applies to both emergency and non-emergency calls.  For various reasons, including the ability to establish multiple streams using different media (e.g., voice, text, video), it makes sense to use a per-stream negotiation mechanism, in this case, SDP.
      </t>
      <t>
      This approach has a number of benefits, including that it is generic (it applies to all interactive communications negotiated using SDP) and not limited to emergency calls.  In some cases such a facility isn't needed, because the language is known from the context (such as when a caller places a call to a sign language relay center, or to a friend or colleague).  But it is clearly useful in many other cases.  For example, someone calling a company call center or a Public Safety Answering Point (PSAP) should be able to indicate which signed, written, and/or spoken languages are preferred, the callee should be able to indicate its capabilities in this area, and the call can then proceed using the language(s) and media forms in common.
      </t>
      
      <t>
      Since this is a protocol mechanism, the user equipment (UE client) needs to know the user's preferred languages; a reasonable technique could include a configuration mechanism with a default of the language of the user interface.  In some cases, a UE could tie language and media preferences, such as a preference for a video stream using a signed language and/or a text or audio stream using a written/spoken language.
      </t>
      <t>
      Including the user's human (natural) language preferences in the session establishment negotiation is independent of the use of a relay service and is transparent to a voice service provider.  For example, assume a user within the United States who speaks Spanish but not English places a voice call.  The call could be an emergency call or perhaps to an airline reservation desk.  The language information is transparent to the voice service provider, but is part of the session negotiation between the UE and the terminating entity.  In the case of a call to, e.g., an airline, the call could be automatically handled by a Spanish-speaking agent.  In the case of an emergency call, the Emergency Services IP network (ESInet) and the PSAP may choose to take the language and media preferences into account when determining how to process the call.
      </t>

    <t>
    By treating language as another attribute that is negotiated along with other aspects of a media stream, it becomes possible to accommodate a range of users' needs and called party facilities.  For example, some users may be able to speak several languages, but have a preference.  Some called parties may support some of those languages internally but require the use of a translation service for others, or may have a limited number of call takers able to use certain languages.  Another example would be a user who is able to speak but is deaf or hard-of-hearing and requires a voice stream plus a text stream (known as voice carry over).  Making language a media attribute allows the standard session negotiation mechanism to handle this by providing the information and mechanism for the endpoints to make appropriate decisions.
    </t>

    <t>
    Regarding relay services, in the case of an emergency call requiring a sign language such as ASL, there are two common approaches: the caller initiates the call to a relay center, or the caller places the call directly to emergency services (e.g., 911 in the U.S. or 112 in Europe).  (In a variant of the second case, the voice service provider invokes a relay service as well as emergency services.)  In the first case, the negotiated language information is ancillary and supplemental.  In the second case (without the variant), the ESInet and/or PSAP may take the need for sign language into account and bridge in a relay center.  In this case, the ESInet and PSAP have all the standard information available (such as location) but are able to bridge in the relay earlier in the call processing.
    </t>

    <t>
    By making this facility part of the end-to-end negotiation, the question of which entity provides or engages the relay service becomes separate from the call processing mechanics; if the caller directs the call to a relay service then the human language negotiation facility provides extra information to the relay service but calls will still function without it; if the caller directs the call to emergency services, then the ESInet/PSAP are able to take the user's human language needs into account, e.g., by assigning to a specific queue or call taker or bridging in a relay service or translator.
    </t>
    
    <t>
    The term "negotiation" is used here rather than "indication" because human language (spoken/written/signed) can be negotiated in the same way as the forms of media (audio/text/video) and the codecs are.
    For example, consider a non-emergency call, such as a user calling an airline reservation center: the user may speak several languages, perhaps with a preference for one or a few, while the airline reservation center supports a fixed set of languages.  Negotiation should select the user's most preferred language that the call center supports.  Both sides should be aware of which language was negotiated.  This is conceptually similar to the way other aspects of each media stream are negotiated using SDP (e.g., media type and codecs).
    </t>
    </section>
    
    <section title="Terminology">
      <t> The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD
        NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as
        described in <xref target="RFC2119">RFC 2119</xref>.</t>
    </section>

    <section title="Expected Use">
    <t>
    This facility may be used by NENA and 3GPP.  NENA has already referenced it in NENA 08-01 (i3 Stage 3 version 2) in describing attributes of calls presented to an ESInet, and may add further details in that or other documents.  3GPP may reference this mechanism in general call handling and emergency call handling.  Some Change Requests (CRs) introduced in 3GPP SA1 have anticipated this functionality being provided within SDP.
    </t>
    </section>
    
<!-- 
    <section title="Example Use Cases">
        <section title="Emergency Call from English Speaker in Spain">
        <t>
        Someone who speaks only English is visiting Spain and places an emergency (112) call.  The call offers an audio stream using English.  The ESInet and PSAP have policy-based routing rules that take into account the SDP language request when deciding how to route and process the call.  The ESInet routes the call to a PSAP within Spain where an English-speaking call taker is available, and the PSAP selects an English-speaking call taker to handle the call.  The PSAP answers the offer with an audio stream using English.  The call is established with an audio stream; the caller and call taker communicate in English.
        </t>
        <t>
        Alternatively, the ESInet routes the call to a cooperating PSAP within the U.K.  The PSAP answers the offer with an audio stream using English.  The call is established with an audio stream; the caller and call taker communicate in English.  (This approach is similar to that envisioned in REACH112 Total Conversation.)
        </t>
        </section>
        <section title="Emergency Call from Spanish/English Speaker in France">
        <t>
        Someone who speaks both Spanish and English (but prefers Spanish) is visiting France and places an emergency (112) call.  The call offers an audio stream listing first Spanish (meaning most preferred) and then English.  The ESInet and PSAP have policy-based routing rules that take into account the SDP language request when deciding how to route and process the call.  The ESInet routes the call to a PSAP within France where a Spanish-speaking call taker is available, and the PSAP selects a Spanish-speaking call taker to handle the call.  The PSAP answers the offer with an audio stream listing Spanish.  The call is established with an audio stream; the caller and call taker communicate in Spanish.
        </t>
        <t>
        Alternatively, the ESInet routes the call to a cooperating PSAP in Spain or England.  (This approach is similar to that envisioned in REACH112 Total Conversation.)
        </t>
        <t>
        Alternatively, there is no ESInet or the ESInet does not take language into account in its PBR.  The call is routed to a PSAP in France.  The PSAP ignores the language information in the SDP offer, and answers the offer with an audio stream with no language or with French.  The UE continues the call anyway.  The call taker answers in French, the user tries speaking Spanish and perhaps English.  The call taker bridges in a translation service or transfers the call to a multilingual call taker.
        </t>
        </section>
        <section title="Call to Call Center from Russian Speaker in U.S.">
        <t>
        A Russian speaker is visiting the U.S. and places a call to her airline reservation desk to inquire about her return flight.  The airline call processing system takes into account the SDP language request and decides to route the call to its call center within Russia.
        </t>
        <t>Alternatively, if the airline call processing system does not look at SDP, it uses the SIP "hint" if present.
        </t>
        </section>
        <section title="Emergency Call from speech-impaired caller in the U.S.">
        <t>
        Someone who uses English but is speech-impaired places an emergency (911) call.  The call offers an audio stream listing English and a real-time text stream also using English.  The ESInet and PSAP have policy-based routing rules that take into account the SDP language and media requests when deciding how to route and process the call.  The ESInet routes the call to a PSAP with real-time text capabilities.  The PSAP answers the offer with an audio stream listing English and a real-time text stream listing English.  The call is established with an audio and a real-time text stream; the caller and call taker communicate in English using voice from the call-taker to the caller and text from the caller to the call taker.  The audio stream is two-way, allowing the call taker to hear background sounds.
        </t>
        </section>
        <section title="Emergency Call from deaf caller in the U.S.">
        <t>
        A deaf caller who uses American Sign Language (ASL) places an emergency (911) call.  The call offers a video stream listing ASL and an audio stream with no language indicated.  The ESInet and PSAP have policy-based routing rules that take into account the SDP language and media needs when deciding how to route and process the call.  The ESInet routes the call to a PSAP.  The PSAP answers the offer with an audio stream listing English and a video stream listing ASL.  The PSAP bridges in a sign language interpreter.  The call is established with an audio and a video stream.
        </t>
        </section>
    </section>
 -->

    <section title="Desired Semantics">
    <t>
    The desired solution is a media attribute that may be used
    within an offer to indicate the preferred language of each
    media stream, and within an answer to indicate the accepted
    language.  When multiple values are included for a media stream
    within an offer, the languages are listed in order of
    preference.
    </t>
    
    <t>
    (While conversations among multilingual people sometimes involve multiple languages, the complexity of negotiating multiple simultaneous languages within an interactive media stream outweighs the usefulness of this as a general facility.)
    </t>
    </section>

    <section anchor="existing" title="The existing 'lang' attribute">
    
    <t>
    RFC 4566 <xref target="RFC4566"/> specifies a 'lang' attribute which appears similar to what is needed here.  However, in considering its use, the group felt its semantics were ambiguous, noted that there was little evidence of its use (and thus less likelihood of conflict or confusion in defining new attributes), and saw value in being able to specify language per direction (sending and receiving).  This document therefore defines two new attributes.
    </t>

    </section>

    <section title="Proposed Solution">
    <t>
    An SDP attribute seems the natural choice for negotiating the human (natural) language of an interactive media stream.  The attribute value should be a language tag per RFC 5646 <xref target="RFC5646"/>.
    </t>
    
    <section title="Rationale">
    <t>The decision to base the proposal at the media negotiation level, and specifically to use SDP, came after significant debate and discussion.  From an engineering standpoint, it is possible to meet the objectives using a variety of mechanisms, but none is perfect.  None of the proposed alternatives was clearly better technically in enough ways to win over proponents of the others, and none was clearly so bad technically as to be easily rejected.  As is often the case in engineering, choosing the solution is a matter of balancing trade-offs, and ultimately more a matter of taste than technical merit.  The two main proposals were to use SDP and SIP.  SDP has the advantage that the language is negotiated with the media to which it applies, while SIP has the issue that the languages expressed may not match the SDP media negotiated (for example, a session could negotiate video at the SIP level but fail to negotiate any video media stream at the SDP layer).
    </t>
    <t>The mechanism described here for SDP can be adapted to media negotiation protocols other than SDP.
    </t>

    </section>
    
    <section anchor="new" title="New 'humintlang-send' and 'humintlang-recv' attributes">
    
    <t>
    Rather than re-use 'lang', we define two new media-level attributes starting with 'humintlang' (short for "human interactive language") to negotiate which human language is used in each (interactive) media stream.  There are two attributes, one ending in "-send" and the other in "-recv":
    </t>

    <t>
    <list>
        <t>
        a=humintlang-send:&lt;language tag&gt;
        </t>
        <t>
        a=humintlang-recv:&lt;language tag&gt;
        </t>
    </list>
    </t>

        <t>
        Each can appear multiple times in an offer for a media stream.
        </t>
        <t>
        In an offer, 'humintlang-send' indicates the language(s) the offerer is willing to use when sending on the stream, and 'humintlang-recv' indicates the language(s) the offerer is willing to use when receiving on the stream.  The values constitute a list of languages in preference order (first is most preferred).  When a media stream is intended for use in one direction only (such as a speech-impaired user sending using text and receiving using audio), either 'humintlang-send' or 'humintlang-recv' MAY be omitted.  When a media stream is not primarily intended for language (for example, a video or audio stream intended for background only), both SHOULD be omitted.  Otherwise, both SHOULD have the same values in the same order.  The two SHOULD NOT be set to languages that are difficult to match together (e.g., specifying a desire to send audio in Hungarian and receive audio in Portuguese will make it difficult to successfully complete the call).
        </t>
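        <t>
        For example, an audio stream offered by a caller who prefers Spanish but can also use English might carry the following attributes (a hypothetical illustration; the port and payload type are arbitrary):
        </t>
        <figure>
        <artwork><![CDATA[
    m=audio 49170 RTP/AVP 0
    a=humintlang-send:es
    a=humintlang-send:en
    a=humintlang-recv:es
    a=humintlang-recv:en
]]></artwork>
        </figure>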
        <t>
        In an answer, 'humintlang-send' is the accepted language the answerer will send (which in most cases is one of the languages in the offer's 'humintlang-recv'), and 'humintlang-recv' is the accepted language the answerer expects to receive (which in most cases is one of the languages in the offer's 'humintlang-send').
        </t>
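        <t>
        For example, an answer accepting Spanish on an audio stream might include (a hypothetical illustration; the port and payload type are arbitrary):
        </t>
        <figure>
        <artwork><![CDATA[
    m=audio 49172 RTP/AVP 0
    a=humintlang-send:es
    a=humintlang-recv:es
]]></artwork>
        </figure>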
        <t>
        Each value MUST be a language tag per RFC 5646 <xref target="RFC5646"/>.  RFC 5646 describes mechanisms for matching language tags.  While RFC 5646 provides mechanisms for increasingly fine-grained distinctions, in the interest of maximum interoperability for real-time interactive communications, each 'humintlang-send' and 'humintlang-recv' value SHOULD be restricted to the largest granularity of language tags; in other words, it is RECOMMENDED to specify only a primary language subtag and NOT to include additional subtags (e.g., for region or dialect) unless the languages might be mutually incomprehensible without them.
        </t>
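        <t>
        For example, to negotiate Spanish it would ordinarily be sufficient to offer the primary language subtag 'es' rather than a region-specific tag such as 'es-MX' (a hypothetical illustration):
        </t>
        <figure>
        <artwork><![CDATA[
    a=humintlang-send:es
    a=humintlang-recv:es
]]></artwork>
        </figure>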
        <t>
        In an offer, each language tag value MAY have an asterisk appended as the last character (after the language tag itself).  The asterisk indicates a request by the caller to not fail the call if there is no language in common.  See <xref target="advisory"/> for more information and discussion.
        </t>
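        <t>
        For example, a caller requesting Spanish while asking that the call not fail if no language matches might offer (a hypothetical illustration):
        </t>
        <figure>
        <artwork><![CDATA[
    a=humintlang-send:es*
    a=humintlang-recv:es*
]]></artwork>
        </figure>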
        <t>
        When placing an emergency call, and in any other case where the language cannot be assumed from context, each media stream in an offer primarily intended for human language communication SHOULD specify both (or in some cases, one of) the 'humintlang-send' and 'humintlang-recv' attributes.
        </t>
        <t>
        Note that while signed language tags are used with a video stream to indicate sign language, a spoken language tag for a video stream in parallel with an audio stream with the same spoken language tag indicates a request for a supplemental video stream to see the speaker.
        </t>
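        <t>
        For example, a video stream offered for signing in American Sign Language (whose primary language subtag is 'ase') might include (a hypothetical illustration; port and payload values are arbitrary):
        </t>
        <figure>
        <artwork><![CDATA[
    m=video 51372 RTP/AVP 31
    a=humintlang-send:ase
    a=humintlang-recv:ase
]]></artwork>
        </figure>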
        
        <t>
        Clients acting on behalf of end users are expected to set one or both of the 'humintlang-send' and 'humintlang-recv' attributes on each media stream primarily intended for human communication in an offer when initiating an outgoing session, and to either ignore or take into consideration the attributes when receiving incoming calls, based on local configuration and capabilities.  Systems acting on behalf of call centers and PSAPs are expected to take the values into account when processing inbound calls.
        </t>
    </section>
    
    <section anchor="advisory" title="Advisory vs Required">
    <t>
    One important consideration with this mechanism is whether the call should fail when the callee does not support any of the languages requested by the caller.
    </t>
    <t>
    In order to provide the maximum likelihood of a successful communication session, especially in the case of emergency calling, the mechanism defined here provides a way for the caller to indicate a preference for the call failing or succeeding when there is no language in common.  However, the callee is not required to honor this preference.  For example, a PSAP MAY choose to attempt the call even with no language in common, while a corporate call center MAY choose to fail the call.
    </t>
    <t>The mechanism for indicating this preference is that, in an offer, if the last character of any of the 'humintlang-recv' or 'humintlang-send' values is an asterisk, this indicates a request to not fail the call (similar to SIP Accept-Language syntax).  Either way, the called party MAY ignore this, e.g., for the emergency services use case, a PSAP will likely not fail the call.
    </t>
    </section>
    
    <section title="Silly States">
    <t>
    It is possible to specify a "silly state" where the language specified does not make sense for the media type, such as specifying a signed language for an audio media stream.
    </t>
    
    <t>
    An offer MUST NOT be created where the language does not make sense for the media type.  If such an offer is received, the receiver MAY reject the media, ignore the language specified, or attempt to interpret the intent (e.g., if American Sign Language is specified for an audio media stream, this might be interpreted as a desire to use spoken English). 
    </t>
    <t>
    A spoken language tag for a video stream in conjunction with an audio stream with the same language might indicate a request for supplemental video to see the speaker.
    </t>
    </section>

    </section>

    <section title="IANA Considerations">
    <t>
    IANA is kindly requested to add two entries to the 'att-field (media level only)' table of the SDP parameters registry:
    </t>
      
    <texttable anchor="att-field-registry" title="'att-field (media level only)' entries">
        <ttcol align='center'>Type</ttcol>
        <ttcol align='center'>Name</ttcol>
        <ttcol align='center'>Reference</ttcol>

        <c>att-field (media level only)</c>
        <c>humintlang-send</c>
        <c>(this document)</c>

        <c>att-field (media level only)</c>
        <c>humintlang-recv</c>
        <c>(this document)</c>
    </texttable>
    </section>

    <section title="Security Considerations">
      <t>
      The Security Considerations of RFC 5646 <xref target="RFC5646"/> apply here (as a use of that RFC).  In addition, if the 'humintlang-send' or 'humintlang-recv' values are altered or deleted en route, the session could fail or languages incomprehensible to the caller could be selected; however, this is also a risk if any SDP parameters are modified en route.
      </t>
    </section>

    <section title="Changes from Previous Versions">

    <section title="Changes from draft-ietf-slim-...-01 to draft-ietf-slim-...-02">
        <t>
        <list style="symbols">
            <t>Deleted most of <xref target="existing"/> and replaced with a very short summary</t>
            <t>Replaced "wishes to" with "is willing to" in <xref target="new"/></t>
            <t>Reworded description of attribute usage to clarify when to set both, only one, or neither</t>
            <t>Deleted all uses of "IMS"</t>
            <t>Other editorial changes for clarity</t>
        </list>
        </t>
     </section>

    <section title="Changes from draft-ietf-slim-...-00 to draft-ietf-slim-...-01">
        <t>
        <list style="symbols">
            <t>Editorial changes to wording in Section 5.</t>
        </list>
        </t>
     </section>

    <section title="Changes from draft-gellens-slim-...-03 to draft-ietf-slim-...-00">
        <t>
        <list style="symbols">
            <t>Updated title to reflect WG adoption</t>
        </list>
        </t>
     </section>

    <section title="Changes from draft-gellens-slim-...-02 to draft-gellens-slim-...-03">
        <t>
        <list style="symbols">
            <t>Removed Use Cases section, per face-to-face discussion at IETF 93</t>
            <t>Removed discussion of routing, per face-to-face discussion at IETF 93</t>
        </list>
        </t>
     </section>
      <section title="Changes from draft-gellens-slim-...-01 to draft-gellens-slim-...-02">
        <t>
        <list style="symbols">
            <t>Updated NENA usage mention</t>
            <t>Removed background text reference to draft-saintandre-sip-xmpp-chat-04 since that draft expired</t>
        </list>
        </t>
     </section>

      <section title="Changes from draft-gellens-slim-...-00 to draft-gellens-slim-...-01">
        <t>
        <list style="symbols">
            <t>Revision to keep draft from expiring
            </t>
        </list>
        </t>
     </section>

      <section title="Changes from draft-gellens-mmusic-...-02 to draft-gellens-slim-...-00">
        <t>
        <list style="symbols">
            <t>Changed name from -mmusic- to -slim- to reflect proposed WG name
            </t>
            <t>As a result of the face-to-face discussion in Toronto, the SDP vs SIP issue was resolved by going back to SDP, taking out the SIP hint, and converting what had been a set of alternate proposals for various ways of doing it within SIP into an informative annex section which includes background on why SDP is the proposal
            </t>
            <t>Added mention that enabling a mutually comprehensible language is a general problem of which this document addresses the real-time side, with reference to <xref target="draft-tomkinson-multilangcontent" /> which addresses the non-real-time side.
            </t>
        </list>
        </t>
     </section>

      <section title="Changes from draft-gellens-mmusic-...-01 to -02">
        <t>
        <list style="symbols">
            <t>Added clarifying text on leaving attributes unset for media not primarily intended for human language communication (e.g., background audio or video).
            </t>
            <t>Added new section "Alternative Proposal: Caller-prefs" discussing use of SIP-level Caller-prefs instead of SDP-level.</t>
        </list>
        </t>
     </section>
     
      <section title="Changes from draft-gellens-mmusic-...-00 to -01">
        <t>
        <list style="symbols">
            <t>Relaxed language on setting -send and -receive to same values; added text on leaving on empty to indicate asymmetric usage.
            </t>
            <t>Added text that clients on behalf of end users are expected to set the attributes on outgoing calls and ignore on incoming calls while systems on behalf of call centers and PSAPs are expected to take the attributes into account when processing incoming calls.
            </t>
        </list>
        </t>
     </section>

      <section title="Changes from draft-gellens-...-02 to draft-gellens-mmusic-...-00">
        <t>
        <list style="symbols">
            <t>Updated text to refer to RFC 5646 rather than the IANA language subtags registry directly.</t>
            <t>Moved discussion of existing 'lang' attribute out of "Proposed Solution" section and into own section now that it is not part of proposal.</t>
            <t>Updated text about existing 'lang' attribute.</t>
            <t>Added example use cases.</t>
            <t>Replaced proposed single 'humintlang' attribute with 'humintlang-send' and 'humintlang-recv' per Harald's request/information that it was a misuse of SDP to use the same attribute for sending and receiving.</t>
            <t>Added section describing usage being advisory vs required and text in attribute section.</t>
            <t>Added section on SIP "hint" header (not yet nailed down between new and existing header).</t>
            <t>Added text discussing usage in policy-based routing function or use of SIP header "hint" if unable to do so.</t>
            <t>Added SHOULD that the value of the parameters stick to the largest granularity of language tags.</t>
            <t>Added text to Introduction to be try and be more clear about purpose of document and problem being solved.</t>
            <t>Many wording improvements and clarifications throughout the document.</t>
            <t>Filled in Security Considerations.</t>
            <t>Filled in IANA Considerations.</t>
            <t>Added to Acknowledgments those who participated in the Orlando ad-hoc discussion as well as those who participated in email discussion and side one-on-one discussions.</t>
        </list>
        </t>
      </section>
      
      <section title="Changes from draft-gellens-...-01 to -02">
        <t>
        <list style="symbols">
            <t>Updated text for (possible) new attribute "humintlang" to reference RFC 5646.</t>
            <t>Added clarifying text for (possible) re-use of existing 'lang' attribute saying that the registration would be updated to reflect different semantics for multiple values for interactive versus non-interactive media.</t>
            <t>Added clarifying text for (possible) new attribute "humintlang" to attempt to better describe the role of language tags in media in an offer and an answer.</t>
        </list>
        </t>
      </section>

      <section title="Changes from draft-gellens-...-00 to -01">
        <t>
        <?rfc compact="yes" ?>
        <?rfc subcompact="yes" ?>
        <list style="symbols">
            <t>Changed name of (possible) new attribute from "humlang" to "humintlang".</t>
            <t>Added discussion of silly state (language not appropriate for media type).</t>
            <t>Added Voice Carry Over example.</t>
            <t>Added mention of multilingual people and multiple languages.</t>
            <t>Minor text clarifications.</t>
        </list>
        </t>
      </section>

    </section>

    <section title="Contributors">
    <t>Gunnar Hellstrom deserves special mention for his reviews, assistance, and especially for contributing the core text in <xref target="CallerPrefs"/>.
    </t>
    </section>
    
    <section title="Acknowledgments">
      <t>Many thanks to Bernard Aboba, Harald Alvestrand, Flemming Andreasen, Francois Audet, Eric Burger, Keith Drage, Doug Ewell, Christian Groves, Andrew Hutton, Hadriel Kaplan, Ari Keranen, John Klensin, Paul Kyzivat, John Levine, Alexey Melnikov, James Polk, Pete Resnick, Peter Saint-Andre, and Dale Worley for reviews, corrections, suggestions, and participating in in-person and email discussions.</t>
    </section>

  </middle>

  <!--  *****BACK MATTER ***** -->

  <back>
    <!-- References split into informative and normative -->

    <!-- There are 2 ways to insert reference entries from the citation libraries:
     1. define an ENTITY at the top, and use "ampersand character"RFC2629; here (as shown)
     2. simply use a PI "less than character"?rfc include="reference.RFC.2119.xml"?> here
        (for I-Ds: include="reference.I-D.narten-iana-considerations-rfc2434bis.xml")

     Both are cited textually in the same manner: by using xref elements.
     If you use the PI option, xml2rfc will, by default, try to find included files in the same
     directory as the including file. You can also define the XML_LIBRARY environment variable
     with a value containing a set of directories to search.  These can be either in the local
     filing system or remote ones accessed by http (http://domain/dir/... ).-->

    <references title="Normative References">
      <?rfc include="reference.RFC.2119" ?>
      <?rfc include="reference.RFC.4566"?>
      <?rfc include="reference.RFC.5646"?>
      <?rfc include="reference.RFC.3840"?>
      <?rfc include="reference.RFC.3841"?>
    </references>
    <references title="Informative References">
      <?rfc include="reference.RFC.3066"?>
      <?rfc include="reference.I-D.saintandre-sip-xmpp-chat"?>
      <?rfc include="reference.I-D.iab-privacy-considerations"?>
      <!-- <?rfc include="reference.I-D.tomkinson-multilangcontent"?> -->
      
      <reference anchor="draft-tomkinson-multilangcontent">
          <front>
          <title>Multiple Language Content Type</title>
          <author initials="N" surname="Tomkinson" fullname="Nik Tomkinson">
          <organization/>
          </author>
          <author initials="N" surname="Borenstein" fullname="Nathaniel Borenstein">
          <organization/>
          </author>
          <date month="April" day="16" year="2014"/>
          <abstract>
          <t>
          This document defines an addition to the Multipurpose Internet Mail Extensions (MIME) standard to make it possible to send one message that contains multiple language versions of the same information. The translations would be identified by a language code and selected by the email client based on a user's language settings or locale.
          </t>
          </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-tomkinson-multilangcontent"/>
          <format type="TXT" target="http://www.ietf.org/internet-drafts/draft-tomkinson-multilangcontent"/>
      </reference>

    </references>


    <section anchor="CallerPrefs" title="Historic Alternative Proposal: Caller-prefs">
    <t>The decision to base the proposal at the media negotiation level, and specifically to use SDP, came after significant debate and discussion.  It is possible to meet the objectives using a variety of mechanisms, but none is perfect.  Using SDP means dealing with the complexity of SDP, and leaves out real-time session protocols that do not use SDP.  The major alternative proposal was to use SIP.  Using SIP leaves out non-SIP session protocols but, more fundamentally, would operate at a different layer than the media negotiation.  This results in a more fragile solution: the media modality and language would be negotiated using SIP, while the specific media formats (which inherently include the modality) would be negotiated at a different level (typically SDP, especially in the emergency calling cases), making mismatches easier (such as where the media modality negotiated in SIP does not match what was negotiated using SDP).
    </t>
    <t>
    An alternative proposal was to use the SIP-level Caller Preferences mechanism from <xref target="RFC3840">RFC 3840</xref> and <xref target="RFC3841">RFC 3841</xref>.
    </t>
    <t>
    The Caller-prefs mechanism includes a priority system; this would allow different combinations of media and languages to be assigned different priorities. The evaluation and decisions on what to do with the call can be done either by proxies along the call path, or by the addressed UA. Evaluation of alternatives for routing is described in <xref target="RFC3841">RFC 3841</xref>.
    </t>

        <section title="Use of Caller Preferences Without Additions">
        <t>
        The following would be possible without adding any new registered tags:
        </t>
        <t>
        Potential callers and recipients MAY, per <xref target="RFC3840">RFC 3840</xref>, include in the Contact field of their SIP registrations media and language tags reflecting the joint capabilities of the UA and the human user.
        </t>
        <t>
        The most relevant media capability tags are "video", "text" and "audio".
        Each tag represents a capability to use the media in two-way communication.
        </t>
        <t>
        Language capabilities are declared with the "language" tag, whose value is a comma-separated list of the languages that can be used in the call.
        </t>
        <t>
        This is an example of how it is used in a SIP REGISTER:
        </t>
        <t>
            <list>
            <t>
                <list style="hanging" hangIndent="12">
                    <t hangText="REGISTER">user@example.net
                    </t>
                    
                    <t hangText="Contact:">&lt;sip:user1@example.net&gt;
                     audio; video; text; language=&quot;en,es,ase&quot;
                    </t>
                </list>
            </t>
            </list>
        </t>
        <t>
        Including this information in SIP REGISTER allows proxies to act on the information.  For the problem set addressed by this document, it is not anticipated that proxies will do so using registration data.  Further, there are classes of devices (such as cellular mobile phones) that are not anticipated to include this information in their registrations.  Hence, use in registration is OPTIONAL.
        </t>
        <t>
        In a call, a list of acceptable media and language combinations is declared, and a priority assigned to each combination.
        </t>
        <t>
        This is done with the Accept-Contact header field, which describes different combinations of media and languages and assigns each a priority for completing the call with the SIP URI represented by that Contact.  The priority of each set is expressed as a so-called "q-value", which ranges from 1 (most preferred) down to 0 (least preferred).
        </t>
        <t>
        Using the Accept-Contact header field in INVITE requests and responses allows these capabilities to be expressed and used during call set-up.  Clients SHOULD include this information in INVITE requests and responses.
        </t>
        <t>
        Example:
        </t>
        <t>
            <list>
            <t>
                <list style="hanging" hangIndent="19">
                    <t hangText="Accept-Contact:"> *; text; language=&quot;en&quot;; q=0.2
                    </t>
                    <t hangText="Accept-Contact:"> *; video; language=&quot;ase&quot;; q=0.8
                    </t>
                </list>
            </t>
            </list>
        </t>
        
        <t>
        This example shows that the caller's highest preference is to use video with American Sign Language (language code "ase").
        As a fallback, it is acceptable to have the call connected with only English text used for human communication. Other media may of course be connected as well, without any expectation that they will be usable by the caller for interactive communication (though they may still be helpful to the caller).
        </t>
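The q-value ranking described above can be sketched in code. This is an illustrative sketch only, not part of any proposal: the parser is deliberately simplified (one feature set per Accept-Contact value, no escaping) and is not a conforming RFC 3841 implementation.

```python
def parse_accept_contact(header: str) -> dict:
    """Parse a simplified Accept-Contact value into its parts."""
    parts = [p.strip() for p in header.split(";")]
    entry = {"uri": parts[0], "media": [], "language": None, "q": 1.0}
    for p in parts[1:]:
        if p.startswith("language="):
            entry["language"] = p.split("=", 1)[1].strip('"')
        elif p.startswith("q="):
            entry["q"] = float(p.split("=", 1)[1])
        else:
            entry["media"].append(p)
    return entry

def rank_alternatives(headers: list) -> list:
    """Return alternatives sorted most-preferred (highest q) first."""
    return sorted((parse_accept_contact(h) for h in headers),
                  key=lambda e: e["q"], reverse=True)

alts = rank_alternatives([
    '*; text; language="en"; q=0.2',
    '*; video; language="ase"; q=0.8',
])
# The video/ASL alternative (q=0.8) ranks ahead of English text (q=0.2).
```

Running this on the two example header values above yields the video/ASL set first, matching the preference order the example describes.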
        <t>
        This system satisfies all the needs described in the previous sections, except that language specifications make no distinction between spoken and written language, and that the need for directionality in the specification cannot be fulfilled.
        </t>
        <t>
        To some degree, the lack of distinction between speech and text in language tags can be compensated for by specifying only the important medium in the Accept-Contact header field.
        </t>
        <t>
        Thus, a user who wants to use English mainly for text would specify:
        </t>
        <t>
            <list>
            <t>
                <list style="hanging" hangIndent="19">
                    <t hangText="Accept-Contact:"> *;text;language="en";q=1.0</t>
                </list>
            </t>
            </list>
        </t>
        <t>
        A user who wants to use English mainly for speech, but who accepts it for text, would specify:
        </t>
        <t>
            <list>
            <t>
                <list style="hanging" hangIndent="19">
                    <t hangText="Accept-Contact:">*;audio;language="en";q=0.8</t>
                    <t hangText="Accept-Contact:">*;text;language="en";q=0.2</t>
                </list>
            </t>
            </list>
        </t>
        
        <t>
        However, a user who would like to talk but receive text back has no way to express this with the existing specification.
        </t>

        </section>

        <section title="Additional Caller Preferences for Asymmetric Needs">
        <t>
        There are two ways to specify asymmetric preferences: either new language tags in the style of the humintlang parameters described above for SDP could be registered, or additional media tags describing the asymmetry could be registered.
        </t>
        
            <section title="Caller Preferences for Asymmetric Modality Needs">
            <t>
            The following new media tags would be defined:
            </t>
            <t>
                <list>
                    <t>speech-receive</t>
                    <t>speech-send</t>
                    <t>text-receive</t>
                    <t>text-send</t>
                    <t>sign-receive</t>
                    <t>sign-send</t>
                </list>
            </t>
            <t>
            A user who prefers to talk and get text in return in English would register the following (if including this information in registration data):
            </t>
            <t>
                <list>
                <t>
                    <list style="hanging" hangIndent="12">
                        <t hangText="REGISTER">user@example.net</t>
                        <t hangText="Contact:">&lt;sip:user1@example.net&gt;                          audio;text;speech-send;text-receive;language="en"</t>
                    </list>
                </t>
                </list>
            </t>
            
            <t>
            At call time, a user who prefers to talk and get text in return in English would set the Accept-Contact header field to:
            </t>
            <t>
                <list>
                <t>
                    <list style="hanging" hangIndent="19">
                        <t hangText="Accept-Contact:">*; audio; text; speech-receive; text-send; language="en";q=0.8</t>
                        <t hangText="Accept-Contact:">*; text; language="en"; q=0.2</t>
                    </list>
                </t>
                </list>
            </t>
            <t>
            Note that the directions specified here are as viewed from the callee side to match what the callee has registered.
            </t>
            <t>
            A bridge that invokes a relay service specifically arranged for captioned telephony would register the following to support calling users:
            </t>
            <t>
                <list>
                <t>
                    <list style="hanging" hangIndent="12">
                        <t hangText="REGISTER">ct@ctrelay.net</t>
                        <t hangText="Contact:">&lt;sip:ct1@ctreley.net&gt;                          audio; text; speech-receive; text-send; language="en"</t>
                    </list>
                </t>
                </list>
            </t>
            <t>
            A bridge that invokes a relay service specifically arranged for captioned telephony would register the following to support called users:
            </t>
            <t>
                <list>
                <t>
                    <list style="hanging" hangIndent="12">
                        <t hangText="REGISTER">ct@ctrelay.net</t>
                        <t hangText="Contact:">&lt;sip:ct2@ctreley.net&gt;                          audio; text; speech-send; text-receive; language="en"</t>
                    </list>
                </t>
                </list>
            </t>
            <t>
            At call time, these alternatives are included in the list of possible outcomes of the call routing by the SIP proxies, and the proper relay service is invoked.
            </t>
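The routing step described above can be sketched as a simple capability match: the proxy selects the registered contact whose feature tags cover everything the caller asked for. This is a hypothetical illustration only; the tag names follow the media tags proposed in this section, the registration data is invented for the example, and the matching logic is a drastic simplification of RFC 3841 preference evaluation.

```python
# Hypothetical registration data for the captioned-telephony relay
# bridge in the examples above (tags stored as simple sets).
REGISTRATIONS = {
    "sip:ct1@ctrelay.net": {"audio", "text", "speech-receive",
                            "text-send", "language=en"},
    "sip:ct2@ctrelay.net": {"audio", "text", "speech-send",
                            "text-receive", "language=en"},
}

def select_contact(required_tags):
    """Return the first registered contact supporting all required tags."""
    for uri, tags in REGISTRATIONS.items():
        if required_tags <= tags:  # subset test: every required tag present
            return uri
    return None

# A caller who talks and reads text asks (directions viewed from the
# callee side, as noted above) for speech-receive and text-send:
chosen = select_contact({"speech-receive", "text-send", "language=en"})
# → "sip:ct1@ctrelay.net"
```

A caller with the opposite asymmetry (speech-send, text-receive) would instead be routed to the second contact, which is the point of registering both directions separately.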

            </section>
            
            <section title="Caller Preferences for Asymmetric Language Tags">
            <t>
            An alternative is to register new language tags for the purpose of asymmetric language usage.
            </t>
            <t>
            Instead of using "language=", six new language tags would be registered:
            </t>
            <t>
                <list>
                    <t>humintlang-text-recv</t>
                    <t>humintlang-text-send</t>
                    <t>humintlang-speech-recv</t>
                    <t>humintlang-speech-send</t>
                    <t>humintlang-sign-recv</t>
                    <t>humintlang-sign-send</t>
                </list>
            </t>
            <t>
            These language tags would be used instead of the regular bidirectional language tags, and users with bidirectional capabilities SHOULD specify values for both directions. Services specifically arranged for supporting users with asymmetric needs SHOULD specify only the asymmetry they support.
            </t>

            </section>

        </section>

    </section>




  </back>
</rfc>
