<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
<?rfc strict="yes"?>
<?rfc toc="yes"?>
<?rfc tocdepth="4"?>
<?rfc symrefs="yes"?>
<?rfc sortrefs="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="no"?>
<rfc category="std" docName="draft-ietf-softwire-mesh-multicast-14"
     ipr="trust200902">
	<front>
		<title abbrev="softwire mesh multicast">Softwire Mesh Multicast</title>

		<author initials='M.' surname='Xu' fullname='Mingwei Xu'>
			<organization abbrev='Tsinghua University'>Tsinghua University</organization>
			<address>
				<postal>
	        <street>Department of Computer Science, Tsinghua University</street>
	        <city>Beijing</city>
	        <code>100084</code>
	        <country>P.R. China</country>
	    	</postal>
	    	<phone>+86-10-6278-5822</phone>
	    	<email>xmw@cernet.edu.cn</email>
	    </address>
		</author>
		<author initials='Y.' surname='Cui' fullname='Yong Cui'>
			<organization abbrev='Tsinghua University'>Tsinghua University</organization>
			<address>
				<postal>
	        <street>Department of Computer Science, Tsinghua University</street>
	        <city>Beijing</city>
	        <code>100084</code>
	        <country>P.R. China</country>
	    	</postal>
	    	<phone>+86-10-6278-5822</phone>
	    	<email>cuiyong@tsinghua.edu.cn</email>
	    </address>
		</author>
		<author initials='J.' surname='Wu' fullname='Jianping Wu'>
			<organization abbrev='Tsinghua University'>Tsinghua University</organization>
			<address>
				<postal>
	        <street>Department of Computer Science, Tsinghua University</street>
	        <city>Beijing</city>
	        <code>100084</code>
	        <country>P.R. China</country>
	    	</postal>
	    	<phone>+86-10-6278-5983</phone>
	    	<email>jianping@cernet.edu.cn</email>
	    </address>
		</author>
		<author initials='S.' surname='Yang' fullname='Shu Yang'>
			<organization abbrev='Tsinghua University'>Tsinghua University</organization>
			<address>
				<postal>
	        <street>Graduate School at Shenzhen</street>
	        <city>Shenzhen</city>
	        <code>518055</code>
	        <country>P.R. China</country>
	    	</postal>
	    	<phone>+86-10-6278-5822</phone>
	    	<email>yangshu@csnet1.cs.tsinghua.edu.cn</email>
	    </address>
		</author>
		<author initials='C.' surname='Metz' fullname='Chris Metz'>
			<organization abbrev='Cisco Systems'>Cisco Systems</organization>
			<address>
				<postal>
	        <street>170 West Tasman Drive</street>
	        <city>San Jose, CA</city>
	        <code>95134</code>
	        <country>USA</country>
	    	</postal>
	    	<phone>+1-408-525-3275</phone>
	    	<email>chmetz@cisco.com</email>
	    </address>
		</author>
		<author initials='G.' surname='Shepherd' fullname='Greg Shepherd'>
			<organization abbrev='Cisco Systems'>Cisco Systems</organization>
			<address>
				<postal>
	        <street>170 West Tasman Drive</street>
	        <city>San Jose, CA</city>
	        <code>95134</code>
	        <country>USA</country>
	    	</postal>
	    	<phone>+1-541-912-9758</phone>
	    	<email>shep@cisco.com</email>
	    	<!-- gjshep@gmail.com -->
	    </address>
		</author>

		<date day="" month="" year="" />
		<area>Internet</area>
		<workgroup>Softwire WG</workgroup>
		<keyword>multicast</keyword>
		<keyword>mesh</keyword>
		<keyword>SSM</keyword>
		<keyword>ASM</keyword>

		<abstract>
			<t>The Internet needs to support IPv4 and IPv6 packets. Both address
				families and their related protocol suites support multicast of the
				single-source and any-source varieties. During IPv6 transition,
				there will be scenarios where a backbone network running one IP
				address family internally (referred to as internal IP or I-IP) will
				provide transit services to attached client networks running another IP
				address family (referred to as external IP or E-IP). It is expected that
				the I-IP backbone will offer unicast and multicast transit services to
				the client E-IP networks.</t>

			<t>Softwire Mesh is a solution providing E-IP unicast and
				multicast support across an I-IP backbone. This document describes the
				mechanism for supporting Internet-style multicast across a set of
				E-IP and I-IP networks supporting softwire mesh.</t>

		</abstract>
	</front>

	<middle>
		<section title="Introduction">
			<t>The Internet needs to support IPv4 and IPv6 packets. Both address
				families and their related protocol suites support multicast of the
				single-source and any-source varieties. During IPv6 transition,
				there will be scenarios where a backbone network running one IP
				address family internally (referred to as internal IP or I-IP) will
				provide transit services to attached client networks running another IP
				address family (referred to as external IP or E-IP).</t>

			<t>One solution is to leverage the multicast functions
				inherent in the I-IP backbone to efficiently forward
				client E-IP multicast packets inside an I-IP core tree,
				which is rooted at one or more ingress AFBR nodes and branches out
        to one or more egress AFBR leaf nodes.</t>

			<t>
				<xref target="RFC4925"></xref> outlines the requirements for the
				softwires mesh scenario and includes support for multicast traffic. It
        is likely that client E-IP multicast sources and receivers will reside in
				different client E-IP networks connected to an I-IP backbone network.
				This requires the client E-IP source-rooted or shared tree to
				traverse the I-IP backbone network.</t>

			<t>One method of accomplishing this is to re-use the multicast VPN approach
				outlined in <xref target="RFC6513"></xref>. MVPN-like schemes can
				support the softwire mesh scenario and achieve a "many-to-one" mapping
				between the E-IP client multicast trees and the transit core multicast
				trees. The advantage of this approach is that the number of trees in the
				I-IP backbone network scales less than linearly with the number of E-IP
				client trees. Corporate enterprise networks, and by extension multicast
				VPNs, have been known to run applications that create too many
				(S,G) states. Aggregation at the edge contains the (S,G) states for
        customers' VPNs, and these need to be maintained by the network operator.
				The disadvantage of this approach is the possibility of inefficient
        bandwidth and resource utilization when multicast packets are delivered
        to a receiving AFBR with no attached E-IP receivers.</t>

			<t>Internet-style multicast is somewhat different in that the trees
				are source-rooted and relatively sparse. The need for multicast
				aggregation at the edge (where many customer multicast trees are mapped
				into one or more backbone multicast trees) does not exist and to date
				has not been identified. Thus, the need for a basic or closer alignment
				of E-IP and I-IP multicast procedures emerges.</t>

      <t><xref target="RFC5565"></xref> describes the "Softwire Mesh Framework".
      This document provides a more detailed description of how one-to-one
      mapping schemes (<xref target="RFC5565"></xref>, Section 11.1) for
      IPv6 over IPv4 and IPv4 over IPv6 can be achieved.</t>

			<section title="Requirements Language">
			  <t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
			    "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
			    document are to be interpreted as described in
          <xref target="RFC2119"/>.</t>
      </section>
</section>

<section title="Terminology">
	<t>Figure 1 shows an example of how a softwire mesh network can support
    multicast traffic. A multicast source S is located in one E-IP
		client network, while candidate E-IP group receivers are located in the
		same or different E-IP client networks that all share a common I-IP
		transit network. When E-IP sources and receivers are not local to each
		other, they can only communicate with each other through the I-IP core.
		There may be several E-IP sources for a single multicast group residing in
		different client E-IP networks. In the case of shared trees, the E-IP
		sources, receivers and RPs might be located in different client E-IP
		networks. In the simplest case, a single operator manages the resources of
    the I-IP core, although the inter-operator case is also possible and is
    therefore not precluded.</t>

	<figure title="Figure 1: Softwire Mesh Multicast Framework">
		<artwork>
			<![CDATA[
              ._._._._.            ._._._._.
             |         |          |         |   --------
             |  E-IP   |          |  E-IP   |--|Source S|
             | network |          | network |   --------
              ._._._._.            ._._._._.
                 |                    |
                AFBR             upstream AFBR
                 |                    |
               __+____________________+__
              /   :   :           :   :  \
             |    :      :      :     :   |  E-IP Multicast
             |    : I-IP transit core :   |  packets are forwarded
             |    :     :       :     :   |  across the I-IP
             |    :   :            :  :   |  transit core
              \_._._._._._._._._._._._._./
                  +                   +
             downstream AFBR    downstream AFBR
                  |                   |
               ._._._._            ._._._._
  --------    |        |          |        |   --------
 |Receiver|-- |  E-IP  |          |  E-IP  |--|Receiver|
  --------    |network |          |network |   --------
               ._._._._            ._._._._
			]]>
		</artwork>
	</figure>


	<t>Terminology used in this document:</t>

	<t>o Address Family Border Router (AFBR) - A router
		interconnecting two or more networks using different IP address
		families. In the context of softwire mesh multicast, the AFBR runs E-IP
		and I-IP control planes to maintain E-IP and I-IP multicast states
		respectively and performs the appropriate encapsulation/decapsulation
		of client E-IP multicast packets for transport across the I-IP core. An
		AFBR will act as a source and/or receiver in an I-IP multicast
		tree.</t>

	<t>o Upstream AFBR: The AFBR that is located on the upper reaches of
		a multicast data flow.</t>

	<t>o Downstream AFBR: The AFBR that is located on the lower reaches
		of a multicast data flow.</t>

	<t>o I-IP (Internal IP): This refers to the IP address family (i.e., either
		IPv4 or IPv6) that is supported by the core (or backbone)
		network. An I-IPv6 core network runs IPv6, and an I-IPv4
  	core network runs IPv4.</t>

	<t>o E-IP (External IP): This refers to the IP address family (i.e., either
    IPv4 or IPv6) that is supported by the client network(s) attached to the
    I-IP transit core. An E-IPv6 client network runs IPv6 and an E-IPv4 client
		network runs IPv4.</t>

	<t>o I-IP core tree: A distribution tree rooted at one or more AFBR source
    nodes and branched out to one or more AFBR leaf nodes. An I-IP core tree is
    built using standard IP or MPLS multicast signaling protocols operating
    exclusively inside the I-IP core network. An I-IP core tree is used to
    forward E-IP multicast packets belonging to E-IP trees across the I-IP core.
    Another name for an I-IP core tree is multicast or multipoint softwire.</t>

	<t>o E-IP client tree: A distribution tree
		rooted at one or more hosts or routers located inside a client E-IP
		network and branched out to one or more leaf nodes located in the same
		or different client E-IP networks.</t>

	<t>o uPrefix64: The /96 unicast IPv6 prefix for constructing an
		IPv4-embedded IPv6 source address in the IPv6-over-IPv4 scenario.</t>

	<t>o uPrefix46: The /96 unicast IPv6 prefix for constructing an
		IPv4-embedded IPv6 source address in the IPv4-over-IPv6 scenario.</t>

	<t>o mPrefix46: The /96 multicast IPv6 prefix for constructing an
		IPv4-embedded IPv6 multicast address in the IPv4-over-IPv6 scenario.</t>

	<t>o Inter-AFBR signaling: A mechanism used by downstream AFBRs to send
		PIM messages to the upstream AFBR.</t>

</section>

<section title="Scenarios of Interest">

	<t>This section describes the two different scenarios to which softwire
		mesh multicast is applicable.</t>

	<section title="IPv4-over-IPv6">
		<figure title="Figure 2: IPv4-over-IPv6 Scenario">
			<artwork>
				<![CDATA[
                ._._._._.            ._._._._.
               |  IPv4   |          |  IPv4   |   --------
               | Client  |          | Client  |--|Source S|
               | network |          | network |   --------
                ._._._._.            ._._._._.
                   |                    |
                  AFBR             upstream AFBR
                   |                    |
                 __+____________________+__
                /   :   :           :   :  \
               |    :      :      :     :   |
               |    : IPv6 transit core :   |
               |    :     :       :     :   |
               |    :   :            :  :   |
                \_._._._._._._._._._._._._./
                    +                   +
               downstream AFBR     downstream AFBR
                    |                   |
                 ._._._._            ._._._._
    --------    |  IPv4  |          |  IPv4  |   --------
   |Receiver|-- | Client |          | Client |--|Receiver|
    --------    | network|          | network|   --------
                 ._._._._            ._._._._
				]]>
			</artwork>
		</figure>

		<t>In Figure 2, the E-IP client networks run IPv4 and the I-IP core
			runs IPv6.</t>

		<t>Because of the much larger IPv6 group address space, the client E-IPv4
      tree can be mapped to a specific I-IPv6 core tree. This simplifies
      operations on the AFBR because it becomes possible to algorithmically
      map an IPv4 group/source address to an IPv6 group/source address and
      vice-versa. </t>

		<t>The IPv4-over-IPv6 scenario is an emerging requirement as network
			operators build out native IPv6 backbone networks. These networks 
	support native IPv6 services and applications
    but in many cases, support for legacy IPv4 unicast and multicast services
    will also need to be accommodated.</t>

	</section>

	<section title="IPv6-over-IPv4 ">
		<figure title="Figure 3: IPv6-over-IPv4 Scenario">
			<artwork>
				<![CDATA[
                 ._._._._.            ._._._._.
                |  IPv6   |          |  IPv6   |   --------
                | Client  |          | Client  |--|Source S|
                | network |          | network |   --------
                 ._._._._.            ._._._._.
                    |                    |
                   AFBR             upstream AFBR
                    |                    |
                  __+____________________+__
                 /   :   :           :   :  \
                |    :      :      :     :   |
                |    : IPv4 transit core :   |
                |    :     :       :     :   |
                |    :   :            :  :   |
                 \_._._._._._._._._._._._._./
                     +                   +
                downstream AFBR    downstream AFBR
                     |                   |
                  ._._._._            ._._._._
     --------    |  IPv6  |          |  IPv6  |   --------
    |Receiver|-- | Client |          | Client |--|Receiver|
     --------    | network|          | network|   --------
                  ._._._._            ._._._._
				]]>
			</artwork>
		</figure>

		<t>In Figure 3, the E-IP Client Networks run IPv6 while the I-IP
			core runs IPv4. </t>

		<t>IPv6 multicast group addresses are longer than IPv4 multicast group
			addresses so it is not possible to perform an algorithmic IPv6 to
			IPv4 address mapping without the risk of multiple IPv6 group
			addresses mapped to the same IPv4 address, resulting in unnecessary
			bandwidth and resource consumption. Therefore, additional efforts will
			be required to ensure that client E-IPv6 multicast packets can be
			injected into the correct I-IPv4 multicast trees
			at the AFBRs. This clear mismatch in IPv6 and IPv4 group address
			lengths means that it will not be possible to perform a one-to-one
			mapping between IPv6 and IPv4 group addresses unless the IPv6 group
			address is scoped, such as applying a "Well-Known"
			prefix or an ISP-defined prefix.</t>

		<t>As mentioned earlier, this scenario is common in the MVPN environment.
			As native IPv6 deployments and multicast applications emerge from the
			outer reaches of the greater public IPv4 Internet, it is envisaged
			that the IPv6 over IPv4 softwire mesh multicast scenario will be a
			necessary feature supported by network operators. </t>
	</section>

</section>

<section title="IPv4-over-IPv6 Mechanism">

	<section title="Mechanism Overview">
		<t>Routers in the client E-IPv4 networks have routes to all other
			client E-IPv4 networks. Through PIM messages, 
			E-IPv4 hosts and routers have discovered or learnt of
			(S,G) or (*,G) IPv4 addresses. Any I-IPv6 multicast state instantiated
			in the core is referred to as (S',G') or (*,G') and is kept
			separate from E-IPv4 multicast state.</t>

   	<t>Suppose a downstream AFBR receives an E-IPv4 PIM Join/Prune
			message from the E-IPv4 network for either an (S,G) tree or a (*,G)
			tree. The AFBR can translate the E-IPv4 PIM message into an
			I-IPv6 PIM message with the latter being directed towards the I-IP IPv6
			address of the upstream AFBR. When the I-IPv6 PIM message arrives at
			the upstream AFBR, it MUST be translated back into an
			E-IPv4 PIM message. The result of these actions is the construction
			of E-IPv4 trees and a corresponding I-IP tree in the I-IP network.
		  An example of the packet format and translation is provided
      in Section 8.</t>

		<t>In this case, it is incumbent upon the AFBR routers to perform PIM
			message conversions in the control plane and IP group
			address conversions or mappings in the data plane. The AFBRs perform an
      algorithmic, one-to-one mapping of IPv4-to-IPv6.
    </t>
	</section>

	<section title="Group Address Mapping">

		<t>For the IPv4-over-IPv6 scenario, a simple algorithmic mapping between
  		IPv4 multicast group addresses and IPv6 group addresses is performed.
 		  Figure 4 shows the address format:</t>

	  	<figure title="Figure 4: IPv4-Embedded IPv6 Multicast Address Format">
				<artwork>
					<![CDATA[
  +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
  | 0-------------32--40--48--56--64--72--80--88--96-----------127|
  +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
  |                    mPrefix46                  |group  address |
  +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
					]]>
				</artwork>
			</figure>

		<t>An IPv6 multicast prefix (mPrefix46) is assigned to each AFBR. AFBRs
      will prepend the prefix to an IPv4 multicast group address when
      translating it to an IPv6 multicast group address.</t>

  		<t>The mPrefix46 for SSM mode is also defined in Section 4.1 of
 		  <xref target="RFC7371"></xref>.
		</t>

		<t>With this scheme, each IPv4 multicast address can be mapped into an
   		IPv6 multicast address (with the assigned prefix), and each IPv6
   		multicast address with the assigned prefix can be mapped into an IPv4
   		multicast address.</t>
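		<t>As an illustrative sketch (not part of the protocol specification),
			this mapping reduces to plain bit operations; the mPrefix46 value
			below is hypothetical and would in practice be assigned by the
			operator:</t>
		<figure><artwork><![CDATA[
```python
import ipaddress

# Hypothetical operator-assigned /96 multicast prefix (mPrefix46).
MPREFIX46 = ipaddress.IPv6Network("ff3e:0:8000::/96")

def v4_group_to_v6(group_v4: str) -> ipaddress.IPv6Address:
    """Prepend mPrefix46 to an IPv4 group address (low-order 32 bits)."""
    g4 = ipaddress.IPv4Address(group_v4)
    return ipaddress.IPv6Address(int(MPREFIX46.network_address) | int(g4))

def v6_group_to_v4(group_v6: str) -> ipaddress.IPv4Address:
    """Reverse mapping: extract the embedded IPv4 group address."""
    g6 = ipaddress.IPv6Address(group_v6)
    assert g6 in MPREFIX46, "not an mPrefix46-mapped group"
    return ipaddress.IPv4Address(int(g6) & 0xFFFFFFFF)
```
		]]></artwork></figure>
		<t>Because the mapping is purely algorithmic, the AFBR needs no
			per-group state; both directions are constant-time bit
			operations.</t>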
	</section>

	<section title="Source Address Mapping">
		<t>There are two kinds of multicast: ASM and SSM. Considering that
			the I-IP network and the E-IP network may support different kinds of multicast,
			the source address translation rules needed to support all possible
      scenarios may become very complex. But since SSM can be implemented with
      a strict subset of the PIM-SM protocol mechanisms
			<xref target="RFC7761"></xref>, we can treat the I-IP core as SSM-only
			to make it as simple as possible. There then remain only two scenarios
			to be discussed in detail:</t>
		<t>
		<list style="symbols">
			<t>E-IP network supports SSM <vspace blankLines='1'/>
				One possible way to make sure that the translated I-IPv6 PIM message
        reaches the upstream AFBR is to set S' to a virtual IPv6 address that leads
        to the upstream AFBR. Figure 5 shows the recommended address
				format based on <xref target="RFC6052"></xref>:<vspace blankLines='1'/>

			<figure title="Figure 5: IPv4-Embedded IPv6 Virtual Source Address Format">
				<artwork>
				<![CDATA[
   +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
   | 0-------------32--40--48--56--64--72--80--88--96-----------127|
   +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
   |     prefix    |v4(32)         | u | suffix    |source address |
   +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
   |<------------------uPrefix46------------------>|
				]]>
				</artwork>
			</figure>

			<vspace blankLines='1'/>

			In this address format, 
			
			<list style ="symbols">
			<t>The "prefix" field contains a "Well-Known"
			prefix or an ISP-defined prefix. An existing "Well-Known" prefix is
			64:ff9b, which is defined in <xref target="RFC6052"></xref>;</t>
		        <t>The "v4" field is the IP address of one of upstream AFBR's  E-IPv4
      			interfaces; </t>
			<t>The "u" field is defined in <xref target="RFC4291"></xref>, and
      			MUST be set to zero; </t>
			<t>The "suffix" field is reserved for future extensions
   			and SHOULD be set to zero; </t>
			<t>The "source address" field stores the original
   			S. </t>
			</list>
			We call the overall /96 prefix (the "prefix", "v4", "u", and "suffix"
			fields taken together) "uPrefix46".<vspace blankLines='1'/>
   		</t>


			<t>E-IP network supports ASM <vspace blankLines='1'/>
				The (S,G) source list entry and the (*,G) source list entry only differ
				in that the latter has both the WC and RPT bits of the
        Encoded-Source-Address set, while the former has both bits cleared (see
        Section 4.9.5.1 of <xref target="RFC7761"></xref>). So we can translate
        source list entries in (*,G) messages into source list entries in (S'G')
        messages by applying the format specified in Figure 5 and clearing both
        the WC and RPT bits at downstream AFBRs, and vice-versa for the
        reverse translation at upstream AFBRs.
				<vspace blankLines='1'/>
     	</t>

		</list>
		</t>
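		<t>A minimal sketch of the Figure 5 construction follows; the AFBR
			interface and source addresses are invented for illustration, and
			the "Well-Known" prefix 64:ff9b is assumed for the "prefix"
			field:</t>
		<figure><artwork><![CDATA[
```python
import ipaddress

WK_PREFIX = 0x0064FF9B  # "prefix" field: 64:ff9b (bits 0-31)

def build_virtual_source(afbr_v4: str, source_v4: str) -> ipaddress.IPv6Address:
    """Figure 5: prefix(32) | v4(32) | u(8) | suffix(24) | source(32)."""
    v4 = int(ipaddress.IPv4Address(afbr_v4))   # upstream AFBR's E-IPv4 interface
    s = int(ipaddress.IPv4Address(source_v4))  # original E-IPv4 source S
    # The "u" (bits 64-71) and "suffix" (bits 72-95) fields are zero.
    return ipaddress.IPv6Address((WK_PREFIX << 96) | (v4 << 64) | s)
```
		]]></artwork></figure>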
	</section>

	<section title="Routing Mechanism">
		<t>In the mesh multicast scenario, routing information is REQUIRED to be
      distributed among AFBRs to make sure that the PIM messages that a
      downstream AFBR propagates reach the right upstream AFBR.</t>

		<t>Every AFBR MUST know the /32 prefix in the "IPv4-Embedded IPv6 Virtual
			Source Address Format". To achieve this,
   		every AFBR should announce the address of one of its E-IPv4
   		interfaces in the "v4" field, along with
   		the corresponding uPrefix46. The announcement SHOULD be sent to the other AFBRs
		through MBGP. Since the IPv4 addresses of the upstream AFBRs' E-IPv4
   		interfaces differ from each other, every uPrefix46 that an AFBR announces
   		MUST be different and uniquely identifies that AFBR.
   		"uPrefix46" is an IPv6 prefix, and the distribution mechanism is the
			same as in the traditional mesh unicast scenario. But since the "v4"
			field is an E-IPv4 address, and BGP messages are NOT tunneled through
			softwires or any other mechanism specified in
			<xref target="RFC5565"></xref>, AFBRs MUST be able to transport and encode/decode
			BGP messages that are carried over I-IPv6 and whose NLRI and NH are of the
			E-IPv4 address family.</t>

   	<t>In this way, when a downstream AFBR receives an E-IPv4 PIM (S,G) message, it can translate
   		this message into (S',G') by looking up the IP address of the corresponding AFBR's E-IPv4 interface.
   		Since the uPrefix46 of S' is unique, and is known to every router
   		in the I-IPv6 network, the translated message will be forwarded to the
   		corresponding upstream AFBR, and the upstream AFBR can translate the message
   		back to (S,G).
   		When a downstream AFBR receives an E-IPv4 PIM (*,G) message, S' can be generated
			according to the format specified in Figure 5, with the "source
			address" field set to * (the IPv4 address of the RP). The translated
			message will be forwarded to the
   	  corresponding upstream AFBR. Since every PIM router within
			a PIM domain MUST be able to map a particular multicast group address
			to the same RP (see Section 4.7 of <xref target="RFC7761"></xref>),
			when the upstream AFBR checks the "source address" field of the message,
      it finds the IPv4 address of the RP and ascertains that this is
			originally a (*,G) message. The message is then translated back to a (*,G)
      message and processed.</t>
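		<t>The downstream translation described above can be sketched as
			follows; the prefix values and the MBGP-learned table are
			hypothetical examples, not mandated values:</t>
		<figure><artwork><![CDATA[
```python
import ipaddress

MPREFIX46 = int(ipaddress.IPv6Address("ff3e:0:8000::"))  # hypothetical
UPREFIX46_BY_AFBR = {  # learned via MBGP: upstream AFBR's "v4" -> /96 prefix
    "198.51.100.1": int(ipaddress.IPv6Address("64:ff9b:c633:6401::")),
}

def translate_join(s_v4: str, g_v4: str, upstream_afbr_v4: str):
    """Downstream AFBR: translate an E-IPv4 (S,G) join into I-IPv6 (S',G')."""
    up96 = UPREFIX46_BY_AFBR[upstream_afbr_v4]
    s6 = ipaddress.IPv6Address(up96 | int(ipaddress.IPv4Address(s_v4)))
    g6 = ipaddress.IPv6Address(MPREFIX46 | int(ipaddress.IPv4Address(g_v4)))
    return s6, g6
```
		]]></artwork></figure>
		<t>The upstream AFBR reverses both steps by masking out the low-order
			32 bits of S' and G'.</t>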
	</section>
</section>

<section title="IPv6-over-IPv4 Mechanism">

	<section title="Mechanism Overview">
		<t>Routers in the client E-IPv6 networks contain routes to all other
			client E-IPv6 networks. Through PIM messages, 
			E-IPv6 hosts and routers have discovered or learnt of
			(S,G) or (*,G) IPv6 addresses. Any I-IP multicast state instantiated
			in the core is referred to as (S',G') or (*,G') and is
			separated from E-IP multicast state.</t>

		<t>This particular scenario introduces unique challenges. Unlike the
			IPv4-over-IPv6 scenario, it is impossible to map all of the IPv6
			multicast address space into the IPv4 address space to address the
			one-to-one Softwire Multicast requirement. To coordinate with the
			"IPv4-over-IPv6" scenario and keep the solution as simple as possible,
			one possible solution to this problem is to limit the scope of the
			E-IPv6 source addresses for mapping, such as applying a "Well-Known"
			prefix or an ISP-defined prefix.</t>
	</section>

	<section title="Group Address Mapping">

		<t>To keep one-to-one group address mapping simple, the group address
			range of E-IP IPv6 can be reduced in a number
			of ways to limit the scope of addresses that need to be mapped into
			the I-IP IPv4 space.</t>

		<t>For example, the high order bits of the E-IPv6 address range will be
      fixed for mapping purposes.
			With this scheme, each IPv4 multicast address can be mapped into an
   		IPv6 multicast address (with the assigned prefix), and each IPv6
   		multicast address with the assigned prefix can be mapped into an IPv4
   		multicast address.</t>
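		<t>A sketch of such a scoped mapping follows, assuming a hypothetical
			fixed /96 group prefix; the concrete prefix is an operator choice
			and is not specified here:</t>
		<figure><artwork><![CDATA[
```python
import ipaddress

# Hypothetical scoped range: only E-IPv6 groups under this /96 are mapped,
# and only their low-order 32 bits vary.
GROUP_PREFIX6 = ipaddress.IPv6Network("ff3e::/96")

def v6_group_to_v4(g6_str: str) -> ipaddress.IPv4Address:
    g6 = ipaddress.IPv6Address(g6_str)
    assert g6 in GROUP_PREFIX6, "group outside the scoped mapping range"
    return ipaddress.IPv4Address(int(g6) & 0xFFFFFFFF)

def v4_group_to_v6(g4_str: str) -> ipaddress.IPv6Address:
    return ipaddress.IPv6Address(
        int(GROUP_PREFIX6.network_address) |
        int(ipaddress.IPv4Address(g4_str)))
```
		]]></artwork></figure>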
	</section>

	<section title="Source Address Mapping">
		<t>There are two kinds of multicast: ASM and SSM. Considering that
			the I-IP network and the E-IP network may support different kinds of multicast,
      the source address translation rules needed to support all possible
      scenarios may become very complex. But since SSM
			can be implemented with a strict subset of the PIM-SM protocol mechanisms
			<xref target="RFC7761"></xref>, we can treat the I-IP core as SSM-only
			to make it as simple as possible. There then remain only two scenarios
			to be discussed in detail:</t>
		<t>
		<list style="symbols">
			<t>E-IP network supports SSM<vspace blankLines='1'/>
				To make sure that the translated I-IPv4 PIM
      	message reaches the upstream AFBR, we need to set S' to an IPv4
      	address that leads to the upstream AFBR. But due to the non-"one-to-one"
      	mapping of E-IPv6 to I-IPv4 unicast addresses, the
				upstream AFBR is unable to remap the I-IPv4 source address to the
				original E-IPv6 source address without any constraints.
        <vspace blankLines='1'/>

				We apply a fixed IPv6 prefix and static mapping to solve this
				problem. A recommended source address format is defined in
				<xref target="RFC6052"></xref>. Figure 6 is a reminder of the
				format:<vspace blankLines='1'/>

		<figure title="Figure 6: IPv4-Embedded IPv6 Source Address Format">
			<artwork>
				<![CDATA[
   +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
   | 0-------------32--40--48--56--64--72--80--88--96-----------127|
   +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
   |                     uPrefix64                 |source address |
   +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
				]]>
			</artwork>
		</figure>
				<vspace blankLines='1'/>
				In this address format, the "uPrefix64" field starts with a "Well-Known"
				prefix or an ISP-defined prefix. An existing "Well-Known" prefix is
				64:ff9b/32, which is defined in <xref target="RFC6052"></xref>;
        The "source address" field is
				the corresponding I-IPv4 source address.<vspace blankLines='1'/>
			</t>

			<t>E-IP network supports ASM<vspace blankLines='1'/>
				The (S,G) source list entry and the (*,G) source list entry only differ
				in that the latter has both the WC and RPT bits of the Encoded-Source-Address
				set, while the former has both bits cleared (see Section 4.9.5.1 of <xref target="RFC7761"></xref>).
				So we can translate source list entries in (*,G) messages into source
				list entries in (S',G') messages by applying the format specified in
				Figure 6 and clearing both the WC and RPT bits at downstream AFBRs,
        and vice-versa for the reverse translation at upstream AFBRs.
				Here, the E-IPv6 address of the RP MUST follow the format specified in
        Figure 6. RP' is the upstream AFBR located between the RP and the
        downstream AFBR.
   			</t>

		</list>
		</t>
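		<t>A sketch of the Figure 6 embedding, assuming the Well-Known
			64:ff9b::/96 prefix as uPrefix64 (an ISP-defined /96 would work the
			same way):</t>
		<figure><artwork><![CDATA[
```python
import ipaddress

UPREFIX64 = ipaddress.IPv6Network("64:ff9b::/96")  # uPrefix64 (assumption)

def embed_source(i_ipv4: str) -> ipaddress.IPv6Address:
    """E-IPv6 source address carrying the I-IPv4 address in its low 32 bits."""
    return ipaddress.IPv6Address(
        int(UPREFIX64.network_address) | int(ipaddress.IPv4Address(i_ipv4)))

def strip_prefix(e_ipv6: str) -> ipaddress.IPv4Address:
    """Downstream AFBR: take off uPrefix64 to obtain the I-IPv4 S'."""
    s6 = ipaddress.IPv6Address(e_ipv6)
    assert s6 in UPREFIX64, "source address not under uPrefix64"
    return ipaddress.IPv4Address(int(s6) & 0xFFFFFFFF)
```
		]]></artwork></figure>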
	</section>

	<section title="Routing Mechanism">

  		<t>In the mesh multicast scenario, routing information is REQUIRED to be distributed
			among AFBRs to make sure that PIM messages that a downstream AFBR propagates reach
			the right upstream AFBR.</t>

		<t>To make it feasible, the /96 uPrefix64 MUST be known to every AFBR,
			every E-IPv6 address of sources that support mesh multicast MUST follow
			the format specified in Figure 6, and the corresponding upstream AFBR
			of this source MUST announce the I-IPv4 address in "source address" field of
			this source's IPv6 address to the I-IPv4 network. Since uPrefix64 is
			static and unique in the IPv6-over-IPv4 scenario, there
			is no need to distribute it using BGP. The distribution of the "source
			address" field of multicast source addresses is a pure I-IPv4 process,
			and no further specification is needed.</t>

		<t>In this way, when a downstream AFBR receives an
			(S,G) message, it can translate the message into (S',G') by simply taking off the prefix
			in S. Since S' is known to every router in the I-IPv4 network, the translated
			message will be forwarded to the corresponding upstream AFBR, and the
			upstream AFBR can translate the message back to (S,G) by appending the prefix to S'.
			When a downstream AFBR receives a
			(*,G) message, it can translate it into (S',G') by simply taking off the prefix
			in * (the E-IPv6 address of the RP). Since S' is known to every router
			in the I-IPv4 network, the translated message will be forwarded to RP'.
			And since every PIM router within a PIM domain MUST be able to map a
			particular multicast group address to the same RP (see Section 4.7
			of <xref target="RFC7761"></xref>), RP' knows that S' is the mapped
			I-IPv4 address of the RP, so RP' will translate the message back to (*,G)
			by appending the prefix to S' and propagate it towards the RP.</t>

	</section>

</section>

<section title="Control Plane Functions of AFBR">
		<t>AFBRs are responsible for the following functions:</t>

		<section title="E-IP (*,G) State Maintenance">
				<t>When an AFBR wishes to propagate a Join/Prune(*,G) message to an
					I-IP upstream router, the AFBR MUST translate
					Join/Prune(*,G) messages into Join/Prune(S',G') messages following
					the rules specified above, then send the latter.</t>
    </section>

		<section title="E-IP (S,G) State Maintenance">
				<t>When an AFBR wishes to propagate a Join/Prune(S,G) message to an
					I-IP upstream router, the AFBR MUST translate
					Join/Prune(S,G) messages into Join/Prune(S',G') messages following
					the rules specified above, then send the latter.</t>
   </section>

		<section title="I-IP (S',G') State Maintenance">
				<t>It is possible that the I-IP transit core runs another, non-transit
          I-IP PIM-SSM instance. Since the translated source address starts with
					the unique "Well-Known" prefix or an ISP-defined prefix
					that SHOULD NOT be used by any other service provider, mesh multicast will not
					influence non-transit PIM-SSM multicast at all. When an AFBR
					receives an I-IP (S',G') message, it MUST check S'. If S' starts with
					the unique prefix, the message is actually a translated
					E-IP (S,G) or (*,G) message, and the AFBR MUST translate this
					message back to an E-IP PIM message and process it.</t>

    </section>



		<section title="E-IP (S,G,rpt) State Maintenance">
				<t>When an AFBR wishes to propagate a Join/Prune(S,G,rpt) message to an
					I-IP upstream router, the AFBR MUST operate as specified in Section
          6.5 and Section 6.6.</t>
    </section>

		<section title="Inter-AFBR Signaling">
				<t>Assume that a downstream AFBR has joined an RPT of (*,G) and an SPT
          of (S,G), and decides to perform an SPT switchover. According to
          <xref target="RFC7761"></xref>, it SHOULD propagate a Prune(S,G,rpt)
          message along with the periodic Join(*,G) message upstream towards the RP.
          However, routers in the I-IP transit core do not process
					(S,G,rpt) messages, since the I-IP transit core is treated as SSM-only.
					As a result, the downstream AFBR is unable to prune S from this RPT, so
					it will receive two copies of the same (S,G) data. In order to solve
					this problem, we introduce a new mechanism by which downstream AFBRs can ask
					upstream AFBRs to prune any given S from an RPT.</t>

				<t>When a downstream AFBR wishes to propagate an (S,G,rpt) message upstream,
					it SHOULD encapsulate the (S,G,rpt) message, then
					send the encapsulated unicast message to the corresponding upstream AFBR,
					which we call "RP'".</t>

				<t>When RP' receives this encapsulated message, it SHOULD decapsulate the
          message as in the unicast scenario and retrieve the original (S,G,rpt) message.
					The incoming interface of this message may be different from the outgoing
          interface that propagates multicast data to the
					corresponding downstream AFBR, and there may be other downstream AFBRs that
					need to receive multicast data of (S,G) from this incoming interface,
					so RP' SHOULD NOT simply process this message as specified in
					<xref target="RFC7761"></xref> on the incoming interface.</t>

				<t>To solve this problem as simply as possible, we introduce an
          "interface agent" to process all the encapsulated (S,G,rpt) messages
          the upstream AFBR receives, and to prune S from the RPT of group G only when
          no downstream AFBR remains subscribed to receive multicast data of (S,G)
					along the RPT. In this way, we ensure that downstream AFBRs will not
          miss any multicast data that they need, at the cost of
          SPT-switched-over downstream AFBRs receiving duplicated
					multicast data of (S,G) along the RPT as long as
					at least one downstream AFBR has not yet sent Prune(S,G,rpt)
					messages to the upstream AFBR. The following diagram
					shows an example of how an "interface agent" MAY be implemented:</t>

			<figure title="Figure 7: Interface Agent Implementation Example">
				<artwork>
				<![CDATA[

       +----------------------------------------+
       |                                        |
       |       +-----------+----------+         |
       |       |  PIM-SM   |    UDP   |         |
       |       +-----------+----------+         |
       |          ^                |            |
       |          |                |            |
       |          |                v            |
       |       +----------------------+         |
       |       |       I/F Agent      |         |
       |       +----------------------+         |
       |   PIM    ^                | multicast  |
       | messages |                |   data     |
       |          |  +-------------+---+        |
       |       +--+--|-----------+     |        |
       |       |     v           |     v        |
       |     +--------- +     +----------+      |
       |     | I-IP I/F |     | I-IP I/F |      |
       |     +----------+     +----------+      |
       |        ^     |          ^     |        |
       |        |     |          |     |        |
       +--------|-----|----------|-----|--------+
                |     v          |     v

				]]>
				</artwork>
			</figure>

				<t>Figure 7 shows an example of an interface agent implementation using UDP
				  encapsulation. The interface agent has two responsibilities. In the
					control plane, it SHOULD act as a real interface that has joined (*,G),
					representing all the I-IP interfaces that are
					outgoing interfaces of the (*,G) state machine, and process the (S,G,rpt)
					messages received from all the I-IP interfaces. The interface agent
					maintains a downstream (S,G,rpt) state machine for every downstream AFBR,
					and submits Prune(S,G,rpt) messages to the PIM-SM module only when
					every (S,G,rpt) state machine is in the Prune(P) or PruneTmp(P') state,
					which means that no downstream AFBR is subscribed to receive multicast data of (S,G)
					along the RPT of G. Once an (S,G,rpt) state machine changes to the NoInfo(NI) state,
					which means that the corresponding downstream AFBR has switched back to receiving
					multicast data of (S,G) along the RPT, the interface agent
					SHOULD send a Join(S,G,rpt) to the PIM-SM module immediately. In the data plane,
					upon receiving a multicast data packet, the interface agent SHOULD
					first encapsulate it, then propagate the encapsulated packet
					out of every I-IP interface.</t>
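				<t>The control-plane aggregation rule described above can be
					sketched as follows. This is a hypothetical, non-normative
					illustration: it tracks one (S,G,rpt) state per downstream
					AFBR and hands the PIM-SM module a Prune only when every
					state machine is in the Prune(P) or PruneTmp(P') state, and
					a Join as soon as one of them returns to NoInfo(NI).</t>
				<figure><artwork><![CDATA[
```python
# State names follow RFC 7761, Section 4.5.4; the class and method
# names are illustrative, not part of this specification.
PRUNED = {"Prune(P)", "PruneTmp(P')"}

class InterfaceAgent:
    def __init__(self):
        # downstream[(S, G)][afbr] -> that AFBR's (S,G,rpt) state
        self.downstream = {}

    def update(self, s, g, afbr, state):
        """Record a downstream state change; return the message (if any)
        to submit to the local PIM-SM module."""
        states = self.downstream.setdefault((s, g), {})
        was_all_pruned = bool(states) and all(
            st in PRUNED for st in states.values())
        states[afbr] = state
        now_all_pruned = all(st in PRUNED for st in states.values())
        if now_all_pruned and not was_all_pruned:
            # no downstream AFBR still wants (S,G) along the RPT
            return ("Prune", s, g, "rpt")
        if state == "NoInfo(NI)" and was_all_pruned:
            # a downstream AFBR rejoined the RPT for S
            return ("Join", s, g, "rpt")
        return None
```
]]></artwork></figure>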


				<t>NOTICE: It is possible that an E-IP neighbor of RP' has joined the RPT of G,
					so the per-interface state machine for receiving E-IP Join/Prune(S,G,rpt)
					messages SHOULD remain active.</t>

        </section>

			<section title="SPT Switchover">
				<t>After a new AFBR expresses its interest in receiving traffic destined for
					a multicast group, it will initially receive all the data from the RPT.
					At this time, every downstream AFBR will receive multicast data from any
					source from this RPT, regardless of whether it has switched over to
          an SPT of some source(s) or not.</t>

    
				<t>To minimize this redundancy, it is recommended that every AFBR's
					SwitchToSptDesired(S,G) function employ the "switch on first packet"
					policy. In this way, the delay in switching over to the SPT is kept as small
					as possible, and once every AFBR has performed the SPT
					switchover for every S of group G, no data will be forwarded along the
					RPT of G, and thus no more redundancy will be produced.</t>
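				<t>The "switch on first packet" policy amounts to a trivial
					predicate; the sketch below is illustrative only, with
					hypothetical names:</t>
				<figure><artwork><![CDATA[
```python
class SptSwitchPolicy:
    """SwitchToSptDesired(S,G) under the "switch on first packet"
    policy: an SPT switchover is desired as soon as any data packet
    of (S,G) has been seen on the RPT. Illustrative sketch only."""

    def __init__(self):
        self.seen = set()  # (S, G) pairs with at least one packet seen

    def on_rpt_packet(self, s, g):
        self.seen.add((s, g))

    def switch_to_spt_desired(self, s, g):
        return (s, g) in self.seen
```
]]></artwork></figure>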
        </section>

			<section title="Other PIM Message Types">
				<t>Apart from Join and Prune, other message types exist, including
					Register, Register-Stop, Hello, and Assert. Register and Register-Stop
					messages are sent by unicast, while Hello and Assert messages are
					used only between directly connected routers to negotiate with each other.
					It is not necessary to translate these messages for forwarding; thus, the
          processing of these messages is out of scope for this document.</t>
			</section>

			<section title="Other PIM State Maintenance">
				<t>Apart from the states mentioned above, other states exist, including
					the (*,*,RP) and I-IP (*,G') states. Since we treat the I-IP core as SSM-only,
					the maintenance of these states is out of scope for this document.</t>
			</section>

</section>

<section title="Data Plane Functions of the AFBR">

		<section title="Process and Forward Multicast Data">
			<t>On receiving multicast data from upstream routers, the AFBR checks its
				forwarding table to find the IP address of each outgoing interface. If there
				is at least one outgoing interface whose IP address family differs
				from that of the incoming interface, the AFBR MUST encapsulate/decapsulate the
				packet and forward it via such outgoing interface(s), then forward the data
				via the other outgoing interfaces without encapsulation/decapsulation.</t>
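			<t>The forwarding rule above reduces to a per-interface
				address-family check; the following sketch is illustrative
				only (the interface representation and names are
				hypothetical):</t>
			<figure><artwork><![CDATA[
```python
def forwarding_actions(in_family, out_interfaces):
    """out_interfaces: list of (name, address_family) pairs.
    Returns the action to take on each outgoing interface."""
    actions = []
    for name, family in out_interfaces:
        if family != in_family:
            # crossing between E-IP and I-IP: encapsulate or decapsulate
            actions.append((name, "encap/decap"))
        else:
            actions.append((name, "plain-forward"))
    return actions
```
]]></artwork></figure>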

			<t>When a downstream AFBR that has already switched over to the SPT of S
				receives an encapsulated multicast data packet of (S,G) along the RPT,
				it SHOULD silently drop this packet.</t>
		</section>

		<section title="Selecting a Tunneling Technology">
			<t>Choosing tunneling technology depends on the policies configured
				on AFBRs. It is REQUIRED that all AFBRs use the same technology,
				otherwise some AFBRs SHALL not be able to decapsulate encapsulated packets
				from other AFBRs that use a different tunneling technology.</t>
		</section>

		<section title="TTL">
			<t>The processing of TTL depends on the tunneling technology
				and is out of scope for this document.</t>
		</section>

		<section title="Fragmentation">
			<t>The encapsulation performed by an upstream AFBR increases the size of
				packets. As a result, the outgoing I-IP link MTU may not accommodate
				the larger packet size. Since it is not always possible for core operators to increase
				the MTU of every link, fragmentation after encapsulation and reassembly of encapsulated packets
				MUST be supported by AFBRs <xref target="RFC5565"></xref>.</t>
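			<t>Whether fragmentation is needed reduces to comparing the
				encapsulated size against the link MTU; the sketch below is
				illustrative only, with hypothetical names and values:</t>
			<figure><artwork><![CDATA[
```python
def needs_fragmentation(payload_len, encap_overhead, link_mtu):
    """True if the packet, once encapsulated, exceeds the outgoing
    I-IP link MTU and must be fragmented after encapsulation."""
    return payload_len + encap_overhead > link_mtu
```
]]></artwork></figure>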

		</section>

</section>

<section title = "Packet Format and Translation">
  <t>Because the PIM-SM specification is independent of the underlying unicast
    routing protocol, the packet formats in Section 4.9 of <xref target="RFC7761"></xref>
  remain the same, except that the group address and source address MUST be
  translated when traversing an AFBR.</t>

  <t>For example, Figure 8 shows the Register-Stop message format in the IPv4 and
    IPv6 address families.</t>
               <figure title="Figure 8: Register-Stop Message Format">
                        <artwork>
                                <![CDATA[
    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |PIM Ver| Type  |   Reserved    |           Checksum            |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |             IPv4 Group Address (Encoded-Group format)         |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |            IPv4 Source Address (Encoded-Unicast format)       |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                 (1). IPv4 Register-Stop Message Format

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |PIM Ver| Type  |   Reserved    |           Checksum            |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |             IPv6 Group Address (Encoded-Group format)         |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |            IPv6 Source Address (Encoded-Unicast format)       |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                 (2). IPv6 Register-Stop Message Format
                                ]]>
                        </artwork>
                </figure>

<t>In Figure 8, the semantics of fields "PIM Ver", "Type", "Reserved", and "Checksum"
  remain the same. </t>
<t>IPv4 Group Address (Encoded-Group format): The encoded-group format of the
  IPv4 group address described in Sections 4.2 and 5.2.</t>
<t>IPv4 Source Address (Encoded-Unicast format): The encoded-unicast format of
  the IPv4 source address described in Sections 4.3 and 5.3.</t>
<t>IPv6 Group Address (Encoded-Group format): The encoded-group format of the
  IPv6 group address described in Sections 4.2 and 5.2.</t>
<t>IPv6 Source Address (Encoded-Unicast format): The encoded-unicast format of
  the IPv6 source address described in Sections 4.3 and 5.3.</t>

</section>

<section title = "Softwire Mesh Multicast Encapsulation">
  <t>Softwire mesh multicast encapsulation does not require the use of any one
    particular encapsulation mechanism. Rather, it must accommodate a variety of
    different encapsulation mechanisms, and allow the use of encapsulation
    mechanisms mentioned in <xref target="RFC4925" />. Additionally, all of the
    AFBRs attached to the I-IP network MUST implement the same encapsulation
    mechanism.</t>

</section>

<section title="Security Considerations">
    <t>The security concerns raised in <xref target="RFC4925"/> and <xref target="RFC7761"/> are applicable here. In
      addition, the additional workload associated with some schemes could be exploited by
      an attacker to mount a DDoS attack. Compared with
      <xref target="RFC4925"/>, the security concerns SHOULD be considered more
      carefully: an attacker could potentially set up many multicast trees in
      the edge networks, causing an excessive number of multicast states in the core network.</t>
</section>

<section title="IANA Considerations">
	<t>This document includes no request to IANA. </t>

</section>
</middle>

<back>
	<references title="Normative References">

		<?rfc include="reference.RFC.4291" ?>

		<?rfc include="reference.RFC.2119" ?>

		<?rfc include="reference.RFC.4301" ?>

		<?rfc include="reference.RFC.7761" ?>

		<?rfc include="reference.RFC.4925" ?>

		<?rfc include="reference.RFC.5565" ?>

		<?rfc include="reference.RFC.6052" ?>

		<?rfc include="reference.RFC.6513" ?>
	</references>

	<references title="Informative References">


		<?rfc include="reference.RFC.7371" ?>
	</references>

	<section title="Acknowledgements">
		<t>Wenlong Chen, Xuan Chen, Alain Durand, Yiu Lee, Jacni Qin and Stig Venaas
		provided useful input into this document.</t>
	</section>

</back>
</rfc>

