<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
<?rfc toc="yes"?>
<?rfc tocompact="yes"?>
<?rfc tocdepth="3"?>
<?rfc tocindent="yes"?>
<?rfc symrefs="yes"?>
<?rfc sortrefs="yes"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="no"?>
<rfc category="std" docName="draft-ietf-pim-sr-p2mp-policy-21"
     ipr="trust200902">
  <front>
    <title abbrev="SR P2MP Policy">Segment Routing Point-to-Multipoint
    Policy</title>

    <author fullname="Rishabh Parekh (editor)" initials="R."
            surname="Parekh, Ed.">
      <organization>Arrcus</organization>

      <address>
        <postal>
          <street/>

          <city>San Jose</city>

          <region/>

          <code/>

          <country>US</country>
        </postal>

        <phone/>

        <facsimile/>

        <email>rishabh@arrcus.com</email>

        <uri/>
      </address>
    </author>

    <author fullname="Daniel Voyer (editor)" initials="D."
            surname="Voyer, Ed.">
      <organization>Cisco Systems, Inc.</organization>

      <address>
        <postal>
          <street/>

          <city>Montreal</city>

          <region/>

          <code/>

          <country>CA</country>
        </postal>

        <phone/>

        <facsimile/>

        <email>davoyer@cisco.com</email>

        <uri/>
      </address>
    </author>

    <author fullname="Clarence Filsfils" initials="C." surname="Filsfils">
      <organization>Cisco Systems, Inc.</organization>

      <address>
        <postal>
          <street/>

          <city>Brussels</city>

          <region/>

          <code/>

          <country>BE</country>
        </postal>

        <phone/>

        <facsimile/>

        <email>cfilsfil@cisco.com</email>

        <uri/>
      </address>
    </author>

    <author fullname="Hooman Bidgoli" initials="H." surname="Bidgoli">
      <organization>Nokia</organization>

      <address>
        <postal>
          <street/>

          <city>Ottawa</city>

          <region/>

          <code/>

          <country>CA</country>
        </postal>

        <phone/>

        <facsimile/>

        <email>hooman.bidgoli@nokia.com</email>

        <uri/>
      </address>
    </author>

    <author fullname="Zhaohui Zhang" initials="Z." surname="Zhang">
      <organization>Juniper Networks</organization>

      <address>
        <email>zzhang@juniper.net</email>
      </address>
    </author>

    <date day="04" month="September" year="2025"/>

    <abstract>
      <t>Point-to-Multipoint (P2MP) Policy enables creation of P2MP trees for
      efficient multi-point packet delivery in a Segment Routing (SR) domain.
      This document specifies the architecture, signaling, and procedures for
      SR P2MP Policies with Segment Routing over MPLS (SR-MPLS) and Segment
      Routing over IPv6 (SRv6). It defines the SR P2MP Policy construct,
      candidate paths (CP) of an SR P2MP Policy and the instantiation of the
      P2MP tree instances of a candidate path using Replication segments.
      Additionally, it describes the required extensions for a controller to
      support P2MP path computation and provisioning. This document updates
      RFC 9524.</t>
    </abstract>

    <note title="Requirements Language">
      <t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
      "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
      "OPTIONAL" in this document are to be interpreted as described in BCP 14
      <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when,
      they appear in all capitals, as shown here.</t>
    </note>
  </front>

  <middle>
    <section title="Introduction">
      <t>RFC 9524 defines a Replication segment which enables an SR node to
      replicate traffic to multiple downstream nodes in an SR domain <xref
      target="RFC8402"/>. A P2MP service can be realized by a single
      Replication segment spanning from the ingress node to the egress nodes
      of the service. This effectively achieves ingress replication which is
      inefficient since the traffic of the P2MP service may traverse the same
      set of nodes and links in the SR domain on its path from the ingress
      node to the egress nodes.</t>

      <t>Multi-point service delivery can be efficiently realized with a
      P2MP tree in a Segment Routing domain. A P2MP tree spans from a Root
      node to a set of Leaf nodes via intermediate Replication nodes. It
      consists of a Replication segment at the Root node, stitched to one or
      more Replication segments at Leaf nodes and intermediate Replication
      nodes. A Bud node <xref target="RFC9524"/> is a node that is both a
      Replication node and a Leaf node. Any mention of "Leaf node(s)" in this
      document should be considered as referring to "Leaf or Bud node(s)".</t>

      <t>An SR P2MP Policy defines the Root and Leaf nodes of a P2MP tree. It
      has one or more candidate paths (CP) provisioned with optional
      constraints and/or optimization objectives.</t>

      <t>A controller computes P2MP tree instances of the candidate paths
      using the constraints and objectives specified in the candidate path.
      The controller then instantiates a P2MP tree instance in the SR domain
      by signaling Replication segments to the Root, Replication and Leaf
      nodes. A Path Computation Element (PCE) <xref target="RFC4655"/> is one
      example of such a controller. In other cases, a P2MP tree instance can
      be installed using NETCONF/YANG or Command Line Interface(CLI) on the
      Root, Replication and the Leaf nodes.</t>

      <t>The Replication segments of a P2MP tree instance can be instantiated
      for SR-MPLS <xref target="RFC8660"/> and SRv6 <xref target="RFC8986"/>
      data planes, enabling efficient packet replication within an SR
      domain.</t>

      <t>This document updates the Replication-ID portion of a Replication
      segment identifier specified in Section 2 of <xref target="RFC9524"/>.</t>

      <section title="Terminology">
        <t>This section defines terms used frequently in this document. Refer
        to the Terminology section of <xref target="RFC9524"/> for the
        definition of a Replication segment and other terms associated with
        it, and for the definitions of Root, Leaf and Bud nodes.</t>

        <t>SR P2MP Policy: An SR P2MP Policy is a framework to construct P2MP
        trees in an SR domain by specifying a Root node and a set of Leaf
        nodes.</t>

        <t>Tree-ID: An identifier of an SR P2MP Policy in the context of the
        Root node.</t>

        <t>Candidate path: A candidate path (CP) of an SR P2MP Policy defines
        topological or resource constraints and optimization objectives that
        are used to compute and construct P2MP tree instances.</t>

        <t>P2MP tree instance: A P2MP tree instance (PTI) of a candidate path
        is constructed by stitching Replication segments between Root and Leaf
        nodes of an SR P2MP Policy. Its topology is determined by constraints
        and optimization objective of the candidate path.</t>

        <t>Instance-ID: An identifier of a P2MP tree instance in the context
        of the SR P2MP Policy.</t>

        <t>Tree-SID: The Replication-SID of the Replication segment at the
        Root node of a P2MP tree instance.</t>
      </section>
    </section>

    <section title="SR P2MP Policy">
      <t>An SR P2MP Policy is used to instantiate P2MP trees between a Root
      node and a set of Leaf nodes in an SR domain. Note, multiple SR P2MP
      Policies can have an identical Root node and an identical set of Leaf
      nodes. An SR P2MP Policy has one or more candidate paths <xref
      target="RFC9256"/>.</t>

      <section title="SR P2MP Policy identification">
        <t>An SR P2MP Policy is uniquely identified by the tuple &lt;Root,
        Tree-ID&gt;, where:</t>

        <t><list style="symbols">
            <t>Root: The IP address of the Root node of P2MP trees
            instantiated by the SR P2MP Policy.</t>

            <t>Tree-ID: A 32-bit unsigned integer that uniquely identifies the
            SR P2MP Policy in the context of the Root node.</t>
          </list></t>
      </section>

      <section title="Components of an SR P2MP Policy">
        <t>An SR P2MP Policy consists of the following elements:</t>

        <t><list style="symbols">
            <t>Leaf nodes: A set of nodes that terminate the P2MP trees of the
            SR P2MP Policy.</t>

            <t>Candidate paths: A set of possible paths that define
            constraints and optimization objectives for P2MP tree instances of
            the SR P2MP Policy.</t>
          </list></t>

        <t>An SR P2MP Policy and its CPs are provisioned on a controller (see
        Section <xref format="counter" target="Controller"/>) or the Root node
        or both depending upon the provisioning model. After provisioning, the
        Policy and its CPs are instantiated on the Root node or the controller
        by using a signalling protocol.</t>
      </section>

      <section anchor="Candidate_Path"
               title="Candidate Paths and P2MP Tree instances">
        <t>An SR P2MP Policy has one or more CPs. The tuple
        &lt;Protocol-Origin, Originator, Discriminator&gt;, as specified in
        Section 2.6 of <xref target="RFC9256"/>, uniquely identifies a
        candidate path in the context of an SR P2MP Policy. The semantics of
        the Protocol-Origin, Originator and Discriminator fields of the
        identifier are the same as in Sections 2.3, 2.4 and 2.5 of <xref
        target="RFC9256"/> respectively.</t>

        <t>The Root node of the SR P2MP Policy selects the active candidate
        path based on the tie breaking rules defined in Section 2.9 of <xref
        target="RFC9256"/>.</t>

        <t>A CP may include topological and/or resource constraints and
        optimization objectives which influence the computation of the PTIs of
        the CP.</t>

        <t>A candidate path has zero or more PTIs. A candidate path does not
        have a PTI when the controller cannot compute a P2MP tree from the
        network topology based on the constraints and/or optimization
        objectives of the CP. A candidate path can have more than one PTI,
        e.g., during the Make-Before-Break procedure (see <xref
        target="Tree_Compute"/>) used to handle a network state change.
        However, one and only one PTI MUST be the active instance of the CP.
        If more than one PTI of a CP is active at the same time, and that CP
        is the active CP of the SR P2MP Policy, then duplicate traffic may be
        delivered to the Leaf nodes.</t>

        <t>A PTI is identified by an Instance-ID. This is an unsigned 16-bit
        number which is unique in the context of the SR P2MP Policy of the
        candidate path.</t>

        <t>PTIs are instantiated using Replication segments. Section 2 of
        <xref target="RFC9524"/> specifies the Replication-ID of the
        Replication segment identifier tuple as a variable length field that
        can be modified as required based on the use of a Replication segment.
        However, length is an imprecise indicator of the actual structure of
        the Replication-ID. This document updates the Replication-ID of a
        Replication segment identifier of RFC 9524 to be the tuple: &lt;Root,
        Tree-ID, Instance-ID, Node-ID&gt;, where &lt;Root, Tree-ID&gt;
        identifies the SR P2MP Policy and Instance-ID identifies the PTI
        within that SR P2MP Policy. This results in the Replication segments
        used to instantiate a PTI being identified by the tuple: &lt;Root,
        Tree-ID, Instance-ID, Node-ID&gt;. In the simplest case, the
        Replication-ID of a Replication segment is a 32-bit number as per
        Section 2 of RFC 9524. For this use case, the Root MUST be zero
        (0.0.0.0 for IPv4 and :: for IPv6), the Instance-ID MUST be zero, and
        the 32-bit number maps to the Tree-ID, effectively making the
        Replication segment identifier &lt;[0.0.0.0 or ::], Tree-ID, 0,
        Node-ID&gt;.</t>
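        <t>As a non-normative illustration, the identifier structure above can
        be sketched as follows (the Python class and field names are
        assumptions made for the sketch, not defined by this document):</t>

        <figure><artwork><![CDATA[
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReplicationSegmentID:
    """Replication segment identifier:
    <Root, Tree-ID, Instance-ID, Node-ID>."""
    root: str         # Root address; "0.0.0.0" or "::" when unused
    tree_id: int      # 32-bit unsigned; identifies the SR P2MP Policy
    instance_id: int  # 16-bit unsigned; identifies the PTI
    node_id: str      # node holding this Replication segment

def simple_32bit_id(replication_id, node_id):
    """Simplest case (Section 2 of RFC 9524): a bare 32-bit
    Replication-ID maps to the Tree-ID; the Root and the
    Instance-ID MUST be zero."""
    assert 0 <= replication_id < 2**32
    return ReplicationSegmentID("0.0.0.0", replication_id, 0, node_id)
```
]]></artwork></figure>

        <t>For example, a bare 32-bit Replication-ID of 100 on node N1 yields
        the identifier &lt;0.0.0.0, 100, 0, N1&gt;.</t>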

        <t>PTIs may have different tree topologies due to possibly differing
        constraints and optimization objectives of the CPs in an SR P2MP
        policy and across different Policies. Even within a given CP, two PTIs
        of that CP, say during Make-Before-Break procedure, are likely to have
        different tree topologies due to a change in the network state. Since
        the PTIs may have different tree topologies, their replication states
        also differ at various nodes in the SR domain. Therefore each PTI has
        its own Replication segment and a unique Replication-SID at a given
        node in the SR domain.</t>

        <t>A controller designates an active instance of a CP at the Root node
        of an SR P2MP Policy by signalling this state through the protocol
        used to instantiate the Replication segment of the instance.</t>

        <t>This document focuses on the use of a controller to compute and
        instantiate PTIs of SR P2MP Policy CPs. It is also feasible to
        provision an explicit CP in an SR P2MP Policy with a static tree
        topology using NETCONF/YANG or CLI. Note, a static tree topology will
        not adapt to any changes in the network state of an SR domain. The
        explicit CPs may be provisioned on the controller or the Root node.
        When an explicit CP is provisioned on the controller, the controller
        bypasses the compute stage and directly instantiates the PTIs in the
        SR domain. When an explicit CP is provisioned on the Root node, the
        Root node instantiates the PTIs in the SR domain. The exact procedures
        for provisioning an explicit CP and the signalling from the Root node
        to instantiate the PTIs are outside the scope of this document.</t>
      </section>
    </section>

    <section anchor="Steering" title="Steering traffic into an SR P2MP Policy">
      <t>The Replication-SID of the Replication segment at the Root node is
      referred to as the Tree-SID of a PTI. It is RECOMMENDED that the
      Tree-SID is also used as the Replication-SID for the Replication
      segments at the intermediate Replication nodes and the Leaf nodes of the
      PTI as it simplifies operations and troubleshooting. However, the
      Replication-SIDs of the Replication segments at the intermediate
      Replication nodes and the Leaf nodes MAY differ from the Tree-SID. For
      SRv6, the Replication-SID is the FUNCT portion of the SRv6 SID <xref
      target="RFC8986"/> <xref target="RFC9524"/>. Note, even if the Tree-SID
      is the Replication-SID of all the Replication segments of a PTI, the LOC
      portion of the SRv6 SID <xref target="RFC8986"/> differs for the Root
      node, the intermediate Replication nodes and the Leaf nodes of the
      PTI.</t>

      <t>An SR P2MP Policy has a Binding SID (BSID). The BSID is used to steer
      traffic into an SR Policy, as described below, when the Root node is not
      the ingress node of the SR domain where the traffic arrives. The packets
      are steered from the ingress node to the Root node using a segment list
      with the BSID as the last segment in the list. In this case, the BSID of
      an SR P2MP Policy SHOULD remain constant throughout the lifetime of the
      Policy so that the steering of traffic to the Root node remains
      unchanged. The BSID of an SR P2MP Policy MAY be the
      Tree-SID of the active P2MP instance of the active CP of the Policy. In
      this case, the BSID of an SR P2MP Policy changes when the active CP or
      the active PTI of the SR P2MP Policy changes. Note, the BSID is not
      required to steer traffic into an SR P2MP Policy when the Root node of
      an SR P2MP Policy is also the ingress node of the SR domain where the
      traffic arrives.</t>

      <t>The Root node can steer an incoming packet into an SR P2MP Policy
      using one of the following methods:</t>

      <t><list style="symbols">
          <t>Local policy-based forwarding: The Root node maps the incoming
          packet to the active PTI of the active CP of an SR P2MP Policy based
          on a local forwarding policy, and the packet is replicated with the
          Replication-SIDs of the downstream nodes. The
          procedures to map an incoming packet to an SR P2MP Policy are out of
          scope of this document. It is RECOMMENDED that an implementation
          provide a mechanism to examine the result of application of the
          local forwarding policy i.e. provide information about the traffic
          mapped to an SR P2MP Policy and the active CP and active PTI of the
          Policy.</t>

          <t>Tree-SID based forwarding: The Binding SID, which may be the
          Tree-SID of the active PTI, in an incoming packet is used to map the
          packet to the active PTI. The Binding SID in the incoming packet is
          replaced with the Tree-SID of the active PTI of the active CP and
          the packet is replicated with the Replication-SIDs of the downstream
          nodes.</t>
        </list></t>

      <t>For local policy-based forwarding with SR-MPLS, the Root node SHOULD
      set the TTL in the encapsulating MPLS header so that the replicated
      packet can reach the furthest Leaf node. The Root node MAY copy the TTL
      of the encapsulating MPLS header from the payload. In this case, the TTL
      may not be sufficient for the replicated packet to reach the furthest
      Leaf node. For SRv6, Section 2.2 of <xref target="RFC9524"/> provides
      guidance on setting the IPv6 Hop Limit of the encapsulating IPv6
      header.</t>
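      <t>The two steering methods and the TTL recommendation above can be
      sketched as follows (a non-normative illustration; the packet and policy
      structures are assumptions made for the sketch):</t>

      <figure><artwork><![CDATA[
```python
from dataclasses import dataclass

MAX_TTL = 255  # large enough to reach the furthest Leaf node

@dataclass
class Policy:
    bsid: int      # Binding SID of the SR P2MP Policy
    tree_sid: int  # Tree-SID of the active PTI of the active CP

def steer_by_bsid(label_stack, policies_by_bsid):
    """Tree-SID based forwarding: the BSID at the top of the
    incoming label stack selects the Policy and is replaced by
    the Tree-SID of the active PTI."""
    policy = policies_by_bsid[label_stack[0]]
    return [policy.tree_sid] + label_stack[1:]

def steer_by_local_policy(policy, copy_ttl_from_payload=False,
                          payload_ttl=64):
    """Local policy-based forwarding: impose an encapsulating
    MPLS header carrying the Tree-SID.  The TTL SHOULD be set so
    the packet reaches the furthest Leaf node; copying it from
    the payload MAY leave it too small."""
    ttl = payload_ttl if copy_ttl_from_payload else MAX_TTL
    return [policy.tree_sid], ttl
```
]]></artwork></figure>

      <t>In either case, the Root node then replicates the steered packet with
      the Replication-SIDs of its downstream nodes.</t>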
    </section>

    <section anchor="P2MP_Tree" title="P2MP tree instance">
      <t>A P2MP tree instance within an SR domain establishes a forwarding
      structure that connects a Root node to a set of Leaf nodes via a series
      of intermediate Replication nodes. The tree consists of:</t>

      <t><list style="symbols">
          <t>A Replication segment at the Root node.</t>

          <t>Zero or more Replication segments at intermediate Replication
          nodes.</t>

          <t>Replication segments at the Leaf nodes.</t>
        </list></t>

      <section title="Replication segments at Leaf Nodes">
        <t>A specific service is identified by a service context in a packet.
        A PTI is usually associated with one and only one multi-point service.
        On a Leaf node of such a multi-point service, the transport
        identifier, which is the Tree-SID or the Replication-SID of the
        Replication segment at the Leaf node, is also associated with the
        service context. This is because it is not always feasible to separate
        the transport and service contexts while replicating efficiently in
        the core, since a) multi-point services may have differing sets of
        end-points, and b) a downstream-allocated service context cannot be
        encoded in packets replicated in the core.</t>

        <t>A PTI can be associated with one or more multi-point services on
        the Root and Leaf nodes. In SR-MPLS deployments, if it is known a
        priori that multi-point services mapped to an SR-MPLS PTI can be
        uniquely identified with their service label, a controller MAY opt not
        to instantiate Replication segments at Leaf nodes. In such cases,
        Replication nodes upstream of the Leaf nodes can remove the Tree-SID
        from the packet before forwarding it. A multi-point service context
        allocated from an upstream assigned label or Domain-wide Common Block
        (DCB), as specified in <xref target="RFC9573"/>, is an example of a
        globally unique context that facilitates this optimization.</t>

        <t>In SRv6 deployments, Replication segments of a PTI MUST be
        instantiated on the Leaf nodes of the tree, since PHP-like behavior is
        not feasible because the Tree-SID is carried in the IPv6 Destination
        Address field of the outer IPv6 header. If two or more multi-point
        services are mapped to one SRv6 PTI, an SRv6 SID representing the
        service context is assigned by the Root node or assigned from the DCB.
        This SRv6 SID MUST be encoded as the last segment in the Segment List
        of the Segment Routing Header <xref target="RFC8754"/> by the Root
        node to derive the packet processing context (PPC) for the service at
        a Leaf node, as described in Section 2.2 of <xref
        target="RFC9524"/>.</t>
      </section>

      <section title="Shared Replication segments">
        <t>A Replication segment MAY be shared across different PTIs. One
        simple use of a shared Replication segment is for local protection on
        a Replication node. A shared Replication segment can protect
        Replication segments of different PTIs against an adjacency or path
        failure to the common downstream node of these Replication
        segments.</t>

        <t>A shared Replication segment MUST be identified using a Root set to
        zero (0.0.0.0 for IPv4 and :: for IPv6), Instance-ID set to zero and a
        Tree-ID that is unique within the context of the node where the
        Replication segment is instantiated. The Root is zero because a shared
        Replication segment is not associated with a particular SR P2MP Policy
        or a PTI. Note, the shared Replication segment identifier conforms
        with the updated Replication-ID definition in <xref
        target="Candidate_Path"/>.</t>

        <t>It is possible for different PTIs to share a P2MP tree at a
        Replication node. This allows a common sub-tree to be shared across
        PTIs whose tree topologies are identical in some portion of an SR
        domain. The procedures to share a P2MP tree across PTIs are outside
        the scope of this document.</t>
      </section>

      <section title="Packet forwarding in P2MP tree instance">
        <t>When a packet is steered into a PTI, the Replication segment at the
        Root node performs packet replication and forwards copies to
        downstream nodes.</t>

        <t><list style="symbols">
            <t>Each replicated packet carries the Replication-SID of the
            Replication segment at the downstream node.</t>

            <t>A downstream node can be either: <list style="symbols">
                <t>A Leaf node, in which case the replication process
                terminates.</t>

                <t>An intermediate Replication node, which further replicates
                the packet through its associated Replication segments until
                it reaches all Leaf nodes.</t>
              </list></t>
          </list></t>

        <t>A Replication node and a downstream node can be non-adjacent. In
        this case the replicated packet has to traverse a path to reach the
        downstream node. For SR-MPLS, this is achieved by inserting one or
        more SIDs before the downstream Replication SID. For SRv6, the LOC
        <xref target="RFC8986"/> of downstream Replication-SID can guide the
        packet to the downstream node or an optional segment list may be used
        to steer the replicated packet on a specific path to the downstream
        node. For details of SRv6 replication to non-adjacent downstream node
        and IPv6 Hop Limit considerations, refer to Section 2.2 of <xref
        target="RFC9524"/>.</t>
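        <t>The replication behavior described above can be sketched with a
        short non-normative walk of a tree instance (the node names and the
        branch structure are assumptions made for the sketch):</t>

        <figure><artwork><![CDATA[
```python
def replicate(node, branches_of, leaves, delivered):
    """Walk a P2MP tree instance from 'node'.  branches_of maps a
    node to its replication branches (downstream node, SIDs needed
    to reach a non-adjacent downstream node).  A node present in
    'leaves' delivers the packet; a Bud node both delivers and
    replicates."""
    if node in leaves:
        delivered.append(node)
    for downstream, prefix_sids in branches_of.get(node, ()):
        # Each copy carries any prefix SIDs inserted before the
        # downstream Replication-SID (SR-MPLS), then is processed
        # by the downstream node's Replication segment.
        replicate(downstream, branches_of, leaves, delivered)
```
]]></artwork></figure>

        <t>For example, a Root node R replicating to an intermediate
        Replication node M, which in turn replicates to Leaf nodes L1 and L2,
        delivers the packet to both Leaf nodes.</t>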
      </section>
    </section>

    <section anchor="Controller"
             title="Using a controller to build a P2MP Tree">
      <t>A controller is instantiated or provisioned with an SR P2MP Policy
      and its candidate paths to compute and instantiate PTIs in an SR domain.
      The
      procedures for provisioning or instantiation of these constructs on a
      controller are outside the scope of this document.</t>

      <section title="SR P2MP Policy on a controller">
        <t>An SR P2MP Policy is provisioned on a controller by an entity which
        can be an operator, a network node or a machine, by specifying the
        addresses of the Root, the set of Leaf nodes and the candidate paths.
        In this case, the Policy and its CPs are instantiated on the Root node
        using a signalling protocol. An SR P2MP Policy, its Leaf nodes and the
        CPs may also be provisioned on the Root node and then instantiated on
        the controller using a signalling protocol. The procedures and
        mechanisms for provisioning and instantiating an SR P2MP Policy and
        its CPs on a controller or a Root node are outside the scope of this
        document.</t>

        <t>The possible set of constraints and optimization objectives of a CP
        are described in Section 3 of <xref
        target="I-D.filsfils-spring-sr-policy-considerations"/>. Other
        constraints and optimization objectives MAY be used for P2MP tree
        computation.</t>
      </section>

      <section title="Controller Functions">
        <t>A controller performs the following functions in general:</t>

        <t><list style="symbols">
            <t>Topology Discovery: A controller discovers network topology
            across Interior Gateway Protocol (IGP) areas, levels or Autonomous
            Systems (ASes).</t>

            <t>Capability Exchange: A controller discovers a node's capability
            to participate in SR P2MP and advertises its own capability to
            support SR P2MP.</t>
          </list></t>
      </section>

      <section anchor="Tree_Compute" title="P2MP Tree Compute">
        <t>A controller computes one or more PTIs for CPs of an SR P2MP
        Policy. A CP may not have any PTI if a controller cannot compute a
        P2MP tree for it.</t>

        <t>A controller MUST compute a P2MP tree such that there are no loops
        in the tree at steady state as required by <xref
        target="RFC9524"/>.</t>

        <t>A controller SHOULD modify a PTI of a candidate path on detecting a
        change in the network topology, if the change affects the tree
        instance, or when a better path can be found based on the new network
        state. Alternatively, the controller MAY decide to implement a
        Make-Before-Break approach to minimize traffic loss. The controller
        can do this by creating a new PTI, activating the new instance once it
        is instantiated in the network, and then removing the old PTI.</t>
      </section>

      <section title="SID Management">
        <t>The controller assigns the Replication-SIDs for the Replication
        segments of the PTI.</t>

        <t>The Replication-SIDs of a PTI of a CP of an SR P2MP Policy can be
        either dynamically assigned by the controller or statically assigned
        by the entity provisioning the SR P2MP Policy.</t>

        <t>For SR-MPLS, a Replication-SID may be assigned from the SR Local
        Block (SRLB) or the SR Global Block (SRGB) <xref target="RFC8402"/>.
        It is RECOMMENDED to assign a Replication-SID from the SRLB since
        Replication segments are local to each node of the PTI. It is NOT
        RECOMMENDED to allocate a Replication-SID from the SRGB since this
        block is globally significant in the SR domain and it may get depleted
        if a significant number of PTIs are instantiated in the SR domain.</t>

        <t><xref target="Steering"/> recommends the Tree-SID to be used as the
        Replication-SIDs for all the Replication segments of a PTI. It may not
        be feasible to allocate the same Tree-SID value for all the
        Replication segments if the blocks used for allocation are not
        identical on all the nodes of the PTI, or if the particular Tree-SID
        value in the block is assigned to some other SID on some node.</t>

        <t>A BSID is also assigned for the SR P2MP Policy. The controller MAY
        decide to not assign a BSID and allow the Root node of the SR P2MP
        Policy to assign the BSID. It is RECOMMENDED to assign the BSID of an
        SR P2MP Policy from the SRLB for SR-MPLS.</t>

        <t>The controller MAY be provisioned with a reserved block or multiple
        reserved blocks for assigning Replication-SIDs and/or the BSIDs for SR
        P2MP Policies. A single block may be reserved for the whole SR domain,
        or dedicated blocks can be reserved for each node or a group of nodes
        in the SR domain. These blocks MAY overlap with the SRGB, the SRLB or
        both. The procedures for provisioning these reserved blocks and the
        procedures for deconflicting assignments from these reserved blocks
        with overlapping SRLB or SRGB blocks are outside the scope of this
        document.</t>

        <t>A controller may not be aware of all the assignments of SIDs from
        the SRGB or the SRLB of the SR domain. If reserved blocks are not
        used, the assignment of Replication-SIDs or BSIDs of SR P2MP Policies
        from these blocks may conflict with other SIDs.</t>
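        <t>A minimal sketch of Replication-SID assignment from a reserved
        block, including the conflict case above (non-normative; the
        allocation strategy shown is an assumption, not a procedure defined by
        this document):</t>

        <figure><artwork><![CDATA[
```python
def assign_replication_sid(sids_in_use, reserved_block,
                           preferred=None):
    """Assign one node's Replication-SID from a reserved label
    block.  'preferred' lets the controller try the same Tree-SID
    value on every node of the PTI; on a conflict with a SID
    already in use, fall back to the first free value, and signal
    depletion with None."""
    if (preferred is not None and preferred in reserved_block
            and preferred not in sids_in_use):
        return preferred
    for sid in reserved_block:
        if sid not in sids_in_use:
            return sid
    return None  # block depleted; raise an alert
```
]]></artwork></figure>

        <t>A conflict on one node thus forces that node onto a different SID
        value, which is why the same Tree-SID value cannot always be used on
        every node of the PTI.</t>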
      </section>

      <section title="Instantiating P2MP tree instance on nodes">
        <t>After computing P2MP trees, the controller instantiates the
        Replication segments that compose the PTIs in the SR domain using
        signalling protocols such as PCEP <xref
        target="I-D.ietf-pce-sr-p2mp-policy"/>, BGP <xref
        target="I-D.ietf-idr-sr-p2mp-policy"/> or other mechanisms such as
        NETCONF/YANG <xref target="I-D.hb-spring-sr-p2mp-policy-yang"/>, etc.
        The procedures for the instantiation of the Replication segments in an
        SR domain are outside the scope of this document.</t>

        <t>A node SHOULD report a successful instantiation of a Replication
        segment. The exact procedure for reporting this is outside the scope
        of this document.</t>

        <t>The instantiation of a Replication segment on a node may fail,
        e.g., when the Replication-SID conflicts with another SID on the node.
        The node SHOULD report this, preferably with a reason for the failure,
        using a signalling protocol. The exact procedure for reporting this
        failure is outside the scope of this document.</t>

        <t>If the instantiation of a Replication segment on a node fails, the
        controller SHOULD attempt to re-instantiate the Replication segment.
        There SHOULD be an upper bound on the number of attempts. If the
        instantiation of Replication segment ultimately fails after the
        allowed number of attempts, the controller SHOULD generate an alert
        via mechanisms like syslog. These alerts SHOULD be rate-limited to
        protect the logging facility in case Replication segment instantiation
        fails on multiple nodes. The controller MAY decide to tear down the
        PTI if the instantiations of some of the Replication segments of the
        instance fail. The controller is RECOMMENDED to tear down the PTI if
        the instantiation of the Replication segment on the Root node fails.
        The controller can employ different strategies to retry instantiating
        a PTI after a failure. These are out of scope of this document.</t>

        <t>A PTI should be instantiated within a reasonable time, especially
        if it is the active PTI of an SR P2MP Policy. One approach is for the
        controller to instantiate the Replication segments in batches. For
        example, the controller instantiates the Replication segments of the
        Leaf nodes and the intermediate Replication nodes first. If all of
        these Replication segments are successfully instantiated, the
        controller next proceeds to instantiate the Replication segment at the
        Root node. If the Replication segment instantiation at the Root node
        succeeds, the controller can immediately activate the instance if it
        needs to carry traffic of the SR P2MP Policy. A controller can adopt a
        similar approach when instantiating the new PTI for Make-Before-Break
        procedure.</t>
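        <t>The batched ordering above (Leaf and intermediate Replication
        segments first, the Root's last, then activation) can be sketched as
        follows; instantiate and activate are hypothetical callbacks, not part
        of any specified API:</t>

        <figure>
          <artwork><![CDATA[
```python
def instantiate_pti(root, other_nodes, instantiate, activate):
    """Instantiate a PTI in batches: Leaf and intermediate Replication
    segments first, the Root's Replication segment last, then activate."""
    # First batch: every Replication segment except the Root's.
    if not all(instantiate(node) for node in other_nodes):
        return False
    # Only when all downstream segments are in place, program the Root.
    if not instantiate(root):
        return False
    # Root succeeded: the PTI can immediately start carrying traffic.
    activate()
    return True
```
]]></artwork>
        </figure>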
      </section>

      <section title="Protection">
        <section title="Local Protection">
          <t>A network link, node, or replication branch on a PTI can be
          protected using SR Policies <xref target="RFC9256"/>. The backup SR
          Policies are associated with replication branches of a Replication
          segment and are programmed in the data plane in order to minimize
          traffic loss when the protected link or node fails. The segment list
          of the backup SR Policy is imposed on the downstream Replication SID
          of a replication branch to steer the traffic onto the backup
          path.</t>
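          <t>As a simple illustration of the imposition described above, with
          label stacks modeled as Python lists (top of stack first); the
          function name and SID names are illustrative only:</t>

          <figure>
            <artwork><![CDATA[
```python
def impose_backup(labels, backup_segment_list):
    """On failure of the protected link/node, prepend the backup SR
    Policy's segment list on top of the downstream Replication SID so
    the replicated copy is steered over the backup path."""
    # labels[0] is the downstream Replication SID of the branch.
    return list(backup_segment_list) + list(labels)
```
]]></artwork>
          </figure>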

          <t>It is also possible to use node-local Loop-Free Alternate <xref
          target="RFC5286"/> or TI-LFA <xref
          target="I-D.ietf-rtgwg-segment-routing-ti-lfa"/> protection and
          Micro-Loop <xref target="RFC5715"/> or SR Micro-Loop <xref
          target="I-D.bashandy-rtgwg-segment-routing-uloop"/> prevention
          mechanisms to protect the links and nodes of a PTI.</t>
        </section>

        <section title="Path Protection">
          <t>A controller can create a disjoint backup tree instance to
          provide end-to-end tree protection if the topology permits. This
          can be achieved by having a backup CP with constraints and/or
          optimization objectives that ensure its PTIs are disjoint from the
          PTIs of the primary/active CP.</t>
        </section>
      </section>
    </section>

    <section anchor="IANA" title="IANA Considerations">
      <t>This document makes no request of IANA.</t>
    </section>

    <section anchor="Security" title="Security Considerations">
      <t>This document describes how a PTI can be created in an SR domain by
      stitching Replication segments together. Some security considerations
      for Replication segments outlined in <xref target="RFC9524"/> are also
      applicable to this document. A brief summary follows.</t>

      <t>An SR domain needs protection from outside attackers as described in
      <xref target="RFC8402"/>, <xref target="RFC8754"/>, and <xref
      target="RFC8986"/>.</t>

      <t>Failure to protect the SR-MPLS domain by correctly provisioning MPLS
      support per interface allows attackers outside the domain to send
      packets to receivers of the Multi-point services that use the SR P2MP
      Policies provisioned within the domain.</t>

      <t>Failure to protect the SRv6 domain with inbound Infrastructure Access
      Control Lists (IACLs) on external interfaces, combined with failure to
      implement BCP 38 <xref target="RFC2827"/> or to apply IACLs on nodes
      provisioning SIDs, allows attackers outside the SR domain to send
      packets to the receivers of Multi-point services that use the SR P2MP
      Policies provisioned within the domain.</t>

      <t>Incorrect provisioning of Replication segments by a controller that
      computes an SR PTI can result in a chain of Replication segments forming
      a loop. In this case, replicated packets can create a storm until the
      MPLS TTL (for SR-MPLS) or IPv6 Hop Limit (for SRv6) decrements to
      zero.</t>

      <t>The control plane protocols (e.g., PCEP and BGP) used to
      instantiate Replication segments of an SR PTI can leverage their own
      security mechanisms, such as encryption, authentication, and
      filtering.</t>

      <t>For SRv6, <xref target="RFC9524"/> describes an exception for ICMPv6
      Parameter Problem (code 2) error messages. If an attacker is able to
      inject a packet into a Multi-point service with the source address of a
      node and with an extension header carrying an unknown option type marked
      as mandatory, then a large number of ICMPv6 Parameter Problem messages
      can cause a denial-of-service attack on the source node.</t>
    </section>

    <section anchor="Acknowledgements" title="Acknowledgements">
      <t>The authors would like to acknowledge Siva Sivabalan, Mike Koldychev
      and Vishnu Pavan Beeram for their valuable inputs.</t>
    </section>

    <section title="Contributors">
      <t>Clayton Hassen <vspace blankLines="0"/> Bell Canada <vspace
      blankLines="0"/> Vancouver <vspace blankLines="0"/> Canada</t>

      <t>Email: clayton.hassen@bell.ca</t>

      <t>Kurtis Gillis <vspace blankLines="0"/> Bell Canada <vspace
      blankLines="0"/> Halifax <vspace blankLines="0"/> Canada</t>

      <t>Email: kurtis.gillis@bell.ca</t>

      <t>Arvind Venkateswaran <vspace blankLines="0"/> Cisco Systems, Inc.
      <vspace blankLines="0"/> San Jose <vspace blankLines="0"/> US</t>

      <t>Email: arvvenka@cisco.com</t>

      <t>Zafar Ali <vspace blankLines="0"/> Cisco Systems, Inc. <vspace
      blankLines="0"/> US</t>

      <t>Email: zali@cisco.com</t>

      <t>Swadesh Agrawal <vspace blankLines="0"/> Cisco Systems, Inc. <vspace
      blankLines="0"/> San Jose <vspace blankLines="0"/> US</t>

      <t>Email: swaagraw@cisco.com</t>

      <t>Jayant Kotalwar <vspace blankLines="0"/> Nokia <vspace
      blankLines="0"/> Mountain View <vspace blankLines="0"/> US</t>

      <t>Email: jayant.kotalwar@nokia.com</t>

      <t>Tanmoy Kundu <vspace blankLines="0"/> Nokia <vspace blankLines="0"/>
      Mountain View <vspace blankLines="0"/> US</t>

      <t>Email: tanmoy.kundu@nokia.com</t>

      <t>Andrew Stone <vspace blankLines="0"/> Nokia <vspace blankLines="0"/>
      Ottawa <vspace blankLines="0"/> Canada</t>

      <t>Email: andrew.stone@nokia.com</t>

      <t>Tarek Saad <vspace blankLines="0"/> Juniper Networks <vspace
      blankLines="0"/> Canada</t>

      <t>Email: tsaad@juniper.net</t>

      <t>Kamran Raza <vspace blankLines="0"/> Cisco Systems, Inc. <vspace
      blankLines="0"/> Canada</t>

      <t>Email: skraza@cisco.com</t>

      <t>Anuj Budhiraja <vspace blankLines="0"/> Cisco Systems, Inc. <vspace
      blankLines="0"/> US</t>

      <t>Email: abudhira@cisco.com</t>

      <t>Mankamana Mishra <vspace blankLines="0"/> Cisco Systems, Inc. <vspace
      blankLines="0"/> US</t>

      <t>Email: mankamis@cisco.com</t>
    </section>
  </middle>

  <back>
    <references title="Normative References">
      <?rfc include="reference.RFC.2119"?>

      <?rfc include="reference.RFC.8174"?>

      <?rfc include='reference.RFC.8402'?>

      <?rfc include='reference.RFC.9524'?>

      <?rfc include='reference.RFC.9256'?>
    </references>

    <references title="Informative References">
      <?rfc include='reference.RFC.4655'?>

      <?rfc include='reference.RFC.5286'?>

      <?rfc include='reference.RFC.5715'?>

      <?rfc include='reference.RFC.9573'?>

      <?rfc include='reference.RFC.8660'?>

      <?rfc include='reference.RFC.8986'?>

      <?rfc include='reference.RFC.8754'?>

      <?rfc include='reference.RFC.2827'?>

      <?rfc include='reference.I-D.ietf-pce-sr-p2mp-policy'?>

      <?rfc include='reference.I-D.ietf-idr-sr-p2mp-policy'?>

      <?rfc include='reference.I-D.hb-spring-sr-p2mp-policy-yang'?>

      <?rfc include='reference.I-D.filsfils-spring-sr-policy-considerations'?>

      <?rfc include='reference.I-D.ietf-rtgwg-segment-routing-ti-lfa'?>

      <?rfc include='reference.I-D.bashandy-rtgwg-segment-routing-uloop'?>
    </references>

    <section title="Illustration of SR P2MP Policy and P2MP Tree">
      <t>Consider the following topology:</t>

      <figure align="center" title="SR Topology">
        <artwork name="SR_Topology" type="ascii-art"><![CDATA[
                               R3------R6
                  Controller--/         \
                      R1----R2----R5-----R7
                              \         / 
                               +--R4---+  
           ]]></artwork>
      </figure>

      <t>In these examples, the Node-SID of a node Rn is N-SIDn, and the
      Adjacency-SID from node Rm to node Rn is A-SIDmn. The interface between
      Rm and Rn is Lmn.</t>

      <t>For SRv6, the reader is expected to be familiar with SRv6 Network
      Programming <xref target="RFC8986"/> to follow the examples.</t>

      <t><list style="symbols">
          <t>2001:db8::/32 is an IPv6 block allocated by an RIR to the
          operator</t>

          <t>2001:db8:0::/48 is dedicated to the internal address space</t>

          <t>2001:db8:cccc::/48 is dedicated to the internal SRv6 SID
          space</t>

          <t>We assume a location expressed in 64 bits and a function
          expressed in 16 bits</t>

          <t>node k has a classic IPv6 loopback address 2001:db8::k/128 which
          is advertised in the IGP</t>

          <t>node k has 2001:db8:cccc:k::/64 for its local SID space. Its SIDs
          will be explicitly assigned from that block</t>

          <t>node k advertises 2001:db8:cccc:k::/64 in its IGP</t>

          <t>Function :1:: (function 1, for short) represents the End function
          with Penultimate Segment Pop (PSP) support</t>

          <t>Function :Cn:: (function Cn, for short) represents the End.X
          function to node n</t>

          <t>Function :C1n:: (function C1n, for short) represents the End.X
          function to node n with Ultimate Segment Decapsulation (USD)</t>
        </list></t>

      <t>Each node k has: <list style="symbols">
          <t>An explicit SID instantiation 2001:db8:cccc:k:1::/128 bound to an
          End function with additional support for PSP</t>

          <t>An explicit SID instantiation 2001:db8:cccc:k:Cj::/128 bound to
          an End.X function to neighbor J with additional support for PSP</t>

          <t>An explicit SID instantiation 2001:db8:cccc:k:C1j::/128 bound to
          an End.X function to neighbor J with additional support for USD</t>
        </list></t>

      <t>Assume a controller is provisioned with the following SR P2MP Policy
      at Root R1 with Tree-ID T-ID:</t>

      <figure>
        <artwork><![CDATA[SR P2MP Policy <R1,T-ID>:
 Leaf nodes: {R2, R6, R7}
 candidate-path 1:
   Optimize: IGP metric
   Tree-SID: T-SID1
]]></artwork>
      </figure>

      <t>The controller is responsible for computing a PTI of the candidate
      path. In this example, we assume one active PTI with Instance-ID I-ID1.
      Assume the controller instantiates PTIs by signalling Replication
      segments; i.e., the Replication-ID of these Replication segments is
      &lt;Root, Tree-ID, Instance-ID&gt;. All Replication segments use the
      Tree-SID T-SID1 as the Replication-SID. For SRv6, assume the
      Replication-SID at node k, bound to an End.Replicate function, is
      2001:db8:cccc:k:fa::/128.</t>
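      <t>The identification scheme above (all Replication segments of a PTI
      sharing the Replication-ID &lt;Root, Tree-ID, Instance-ID&gt;, with the
      Tree-SID as Replication-SID) can be modeled with a small data
      structure. This is an illustrative sketch only, not a specified
      encoding:</t>

      <figure>
        <artwork><![CDATA[
```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ReplicationID:
    root: str         # Root node of the SR P2MP Policy, e.g. "R1"
    tree_id: str      # Tree-ID of the SR P2MP Policy
    instance_id: str  # Instance-ID of the PTI

@dataclass
class ReplicationSegment:
    rid: ReplicationID    # shared by all segments of the PTI
    node: str             # node where this segment is instantiated
    replication_sid: str  # Tree-SID, shared by all segments of the PTI
    branches: dict = field(default_factory=dict)  # downstream -> steering

rid = ReplicationID("R1", "T-ID", "I-ID1")
seg_r1 = ReplicationSegment(rid, "R1", "T-SID1", {"R2": "T-SID1->L12"})
seg_r2 = ReplicationSegment(rid, "R2", "T-SID1",
                            {"R2": "<Leaf>",
                             "R6": "N-SID6,T-SID1",
                             "R7": "N-SID7,T-SID1"})
```
]]></artwork>
      </figure>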

      <section title="P2MP Tree with non-adjacent Replication Segments">
        <t>Assume the controller computes a PTI with Root node R1,
        Intermediate and Leaf node R2, and Leaf nodes R6 and R7. The
        controller instantiates the PTI by stitching Replication segments
        at R1, R2, R6 and R7. The Replication segment at R1 replicates to R2.
        The Replication segment at R2 replicates to R6 and R7. Note that nodes
        R3, R4 and R5 do not have any Replication segment state for the
        tree.</t>

        <section title="SR-MPLS">
          <t>The Replication segment state at nodes R1, R2, R6 and R7 is shown
          below.</t>

          <t>Replication segment at R1:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R1>:
 Replication-SID: T-SID1
 Replication State:
   R2: <T-SID1->L12>
]]></artwork>
          </figure>

          <t>Replication to R2 steers a packet directly to the node on
          interface L12.</t>

          <t>Replication segment at R2:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R2>:
 Replication-SID: T-SID1
 Replication State:
   R2: <Leaf>
   R6: <N-SID6, T-SID1>
   R7: <N-SID7, T-SID1>]]></artwork>
          </figure>

          <t>R2 is a Bud node. It performs the role of a Leaf as well as that
          of a transit node replicating to R6 and R7. Replication to R6, using
          N-SID6, steers a packet via the IGP shortest path to that node.
          Replication to R7, using N-SID7, steers a packet via the IGP
          shortest path to R7, through either R5 or R4 based on ECMP
          hashing.</t>

          <t>Replication segment at R6:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R6>:
 Replication-SID: T-SID1
 Replication State:
   R6: <Leaf>]]></artwork>
          </figure>

          <t>Replication segment at R7:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R7>:
 Replication-SID: T-SID1
 Replication State:
   R7: <Leaf>]]></artwork>
          </figure>

          <t>When a packet is steered into the active instance of candidate
          path 1 of the SR P2MP Policy at R1:</t>

          <t><list style="symbols">
              <t>Since R1 is directly connected to R2, R1 performs PUSH
              operation with just &lt;T-SID1&gt; label for the replicated copy
              and sends it to R2 on interface L12.</t>

              <t>R2, as Leaf, performs NEXT operation, pops T-SID1 label and
              delivers the payload. For replication to R6, R2 performs a PUSH
              operation of N-SID6, to send &lt;N-SID6,T-SID1&gt; label stack
              to R3. R3 is the penultimate hop for N-SID6; it performs
              penultimate hop popping, which corresponds to the NEXT operation
              and the packet is then sent to R6 with &lt;T-SID1&gt; in the
              label stack. For replication to R7, R2 performs a PUSH operation
              of N-SID7, to send &lt;N-SID7,T-SID1&gt; label stack to R4, one
              of IGP ECMP nexthops towards R7. R4 is the penultimate hop for
              N-SID7; it performs penultimate hop popping, which corresponds
              to the NEXT operation and the packet is then sent to R7 with
              &lt;T-SID1&gt; in the label stack.</t>

              <t>R6, as Leaf, performs NEXT operation, pops T-SID1 label and
              delivers the payload.</t>

              <t>R7, as Leaf, performs NEXT operation, pops T-SID1 label and
              delivers the payload.</t>
            </list></t>
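          <t>The SR-MPLS walkthrough above can be condensed into a small
          simulation. The table mirrors the Replication segment state shown
          earlier; label operations are abstracted, with None marking the
          local Leaf role and penultimate hop popping folded into the steering
          step. This is an illustration only:</t>

          <figure>
            <artwork><![CDATA[
```python
# Per-node replication state for the non-adjacent tree: for each
# downstream node, the labels pushed on top of the Replication SID
# T-SID1 (None marks the local Leaf role).
REPL_STATE = {
    "R1": {"R2": []},          # directly connected: just <T-SID1> on L12
    "R2": {"R2": None,         # Bud node: local Leaf ...
           "R6": ["N-SID6"],   # ... plus IGP shortest path via R3
           "R7": ["N-SID7"]},  # ... and via R4 or R5 (ECMP)
    "R6": {"R6": None},
    "R7": {"R7": None},
}

def forward(node, delivered):
    """Replicate a packet carrying <T-SID1> at 'node', recording the
    Leaf nodes that pop T-SID1 and deliver the payload."""
    for downstream, push in REPL_STATE[node].items():
        if push is None:
            delivered.append(downstream)  # NEXT: pop T-SID1, deliver
        else:
            # PUSH 'push' over <T-SID1>; the penultimate hop pops it,
            # so the packet reaches 'downstream' with <T-SID1> on top.
            forward(downstream, delivered)
    return delivered
```
]]></artwork>
          </figure>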
        </section>

        <section title="SRv6">
          <t>For SRv6, the replicated packet from R2 to R7 has to traverse R4
          using an SR Policy, Policy27. The Policy has one SID in its segment
          list: the End.X function with USD from R4 to R7. The Replication
          segment state at nodes R1, R2, R6 and R7 is shown below.</t>

          <figure>
            <artwork><![CDATA[Policy27: <2001:db8:cccc:4:C17::>
]]></artwork>
          </figure>

          <t>Replication segment at R1:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R1>:
 Replication-SID: 2001:db8:cccc:1:fa::
 Replication State:
   R2: <2001:db8:cccc:2:fa::->L12>
]]></artwork>
          </figure>

          <t>Replication to R2 steers a packet directly to the node on
          interface L12.</t>

          <t>Replication segment at R2:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R2>:
 Replication-SID: 2001:db8:cccc:2:fa::
 Replication State:
   R2: <Leaf>
   R6: <2001:db8:cccc:6:fa::>
   R7: <2001:db8:cccc:7:fa:: -> Policy27>]]></artwork>
          </figure>

          <t>R2 is a Bud node. It performs the role of a Leaf as well as that
          of a transit node replicating to R6 and R7. Replication to R6 steers
          a packet via the IGP shortest path to that node. Replication to R7,
          via an SR Policy, first encapsulates the packet using H.Encaps and
          then steers the outer packet to R4. End.X with USD on R4
          decapsulates the outer header and sends the original inner packet to
          R7.</t>

          <t>Replication segment at R6:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R6>:
 Replication-SID: 2001:db8:cccc:6:fa::
 Replication State:
   R6: <Leaf>]]></artwork>
          </figure>

          <t>Replication segment at R7:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R7>:
 Replication-SID: 2001:db8:cccc:7:fa::
 Replication State:
   R7: <Leaf>]]></artwork>
          </figure>

          <t>When a packet (A,B2) is steered into the active instance of
          candidate path 1 of SR P2MP Policy at R1 using H.Encaps.Replicate
          behavior:</t>

          <t><list style="symbols">
              <t>Since R1 is directly connected to R2, R1 sends replicated
              copy (2001:db8::1, 2001:db8:cccc:2:fa::) (A,B2) to R2 on
              interface L12.</t>

              <t>R2, as Leaf, removes the outer IPv6 header and delivers the
              payload. R2, as a Bud node, also replicates the packet.</t>

              <t><list style="symbols">
                  <t>For replication to R6, R2 sends (2001:db8::1,
                   2001:db8:cccc:6:fa::) (A,B2) to R3. R3 forwards the packet
                   to R6 using the 2001:db8:cccc:6::/64 route.</t>

                  <t>For replication to R7 using Policy27, R2 encapsulates and
                  sends (2001:db8::2, 2001:db8:cccc:4:C17::) (2001:db8::1,
                  2001:db8:cccc:7:fa::) (A,B2) to R4. R4 performs End.X USD
                  behavior, decapsulates outer IPv6 header and sends
                  (2001:db8::1, 2001:db8:cccc:7:fa::) (A,B2) to R7.</t>
                </list></t>

              <t>R6, as Leaf, removes outer IPv6 header and delivers the
              payload.</t>

              <t>R7, as Leaf, removes outer IPv6 header and delivers the
              payload.</t>
            </list></t>
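          <t>The SRv6 walkthrough above can be sketched as nested
          encapsulations, with an IPv6 header modeled as a (source,
          destination) pair over an inner packet. H.Encaps.Replicate, the
          End.Replicate SIDs, and End.X with USD are modeled abstractly here,
          not per the bit-level procedures of <xref target="RFC8986"/>:</t>

          <figure>
            <artwork><![CDATA[
```python
PAYLOAD = ("A", "B2")

def h_encaps(src, dst, inner):
    """Model IPv6-in-IPv6 encapsulation as an outer (src, dst) header
    over the inner packet."""
    return (src, dst, inner)

def end_x_usd(pkt):
    """Model End.X with USD: decapsulate the outer IPv6 header and
    forward the inner packet over the adjacency."""
    _src, _dst, inner = pkt
    return inner

# R1 replicates toward R2's End.Replicate SID (H.Encaps.Replicate).
to_r2 = h_encaps("2001:db8::1", "2001:db8:cccc:2:fa::", PAYLOAD)

# R2 (Bud node): deliver locally, replicate to R6, and replicate to R7
# through Policy27 (H.Encaps with R4's End.X USD SID).
to_r6 = h_encaps("2001:db8::1", "2001:db8:cccc:6:fa::", PAYLOAD)
for_r7 = h_encaps("2001:db8::1", "2001:db8:cccc:7:fa::", PAYLOAD)
to_r4 = h_encaps("2001:db8::2", "2001:db8:cccc:4:C17::", for_r7)
```
]]></artwork>
          </figure>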
        </section>
      </section>

      <section title="P2MP Tree with adjacent Replication Segments">
        <t>Assume the controller computes a PTI with Root node R1,
        Intermediate and Leaf node R2, Intermediate nodes R3 and R5, and Leaf
        nodes R6 and R7. The controller instantiates the PTI by stitching
        Replication segments at R1, R2, R3, R5, R6 and R7. The Replication
        segment at R1 replicates to R2. The Replication segment at R2
        replicates to R3 and R5. The Replication segment at R3 replicates to
        R6. The Replication segment at R5 replicates to R7. Note that node R4
        does not have any Replication segment state for the tree.</t>

        <section title="SR-MPLS">
          <t>The Replication segment state at nodes R1, R2, R3, R5, R6 and R7
          is shown below.</t>

          <t>Replication segment at R1:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R1>:
 Replication-SID: T-SID1
 Replication State:
   R2: <T-SID1->L12>
]]></artwork>
          </figure>

          <t>Replication to R2 steers a packet directly to the node on
          interface L12.</t>

          <t>Replication segment at R2:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R2>:
 Replication-SID: T-SID1
 Replication State:
   R2: <Leaf>
   R3: <T-SID1->L23>
   R5: <T-SID1->L25>]]></artwork>
          </figure>

          <t>R2 is a Bud node. It performs the role of a Leaf as well as that
          of a transit node replicating to R3 and R5. Replication to R3 steers
          a packet directly to the node on L23. Replication to R5 steers a
          packet directly to the node on L25.</t>

          <t>Replication segment at R3:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R3>:
 Replication-SID: T-SID1
 Replication State:
   R6: <T-SID1->L36>
]]></artwork>
          </figure>

          <t>Replication to R6 steers a packet directly to the node on
          L36.</t>

          <t>Replication segment at R5:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R5>:
 Replication-SID: T-SID1
 Replication State:
   R7: <T-SID1->L57>
]]></artwork>
          </figure>

          <t>Replication to R7 steers a packet directly to the node on
          L57.</t>

          <t>Replication segment at R6:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R6>:
 Replication-SID: T-SID1
 Replication State:
   R6: <Leaf>]]></artwork>
          </figure>

          <t>Replication segment at R7:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R7>:
 Replication-SID: T-SID1
 Replication State:
   R7: <Leaf>]]></artwork>
          </figure>

          <t>When a packet is steered into the SR P2MP Policy at R1:</t>

          <t><list style="symbols">
              <t>Since R1 is directly connected to R2, R1 performs PUSH
              operation with just &lt;T-SID1&gt; label for the replicated copy
              and sends it to R2 on interface L12.</t>

              <t>R2, as Leaf, performs NEXT operation, pops T-SID1 label and
              delivers the payload. It also performs PUSH operation on T-SID1
              for replication to R3 and R5. For replication to R3, R2 sends
              &lt;T-SID1&gt; label stack to R3 on interface L23. For
              replication to R5, R2 sends &lt;T-SID1&gt; label stack to R5 on
              interface L25.</t>

              <t>R3 performs NEXT operation on T-SID1 and performs a PUSH
              operation for replication to R6 and sends &lt;T-SID1&gt; label
              stack to R6 on interface L36.</t>

              <t>R5 performs NEXT operation on T-SID1 and performs a PUSH
              operation for replication to R7 and sends &lt;T-SID1&gt; label
              stack to R7 on interface L57.</t>

              <t>R6, as Leaf, performs NEXT operation, pops T-SID1 label and
              delivers the payload.</t>

              <t>R7, as Leaf, performs NEXT operation, pops T-SID1 label and
              delivers the payload.</t>
            </list></t>
        </section>

        <section title="SRv6">
          <t>The Replication segment state at nodes R1, R2, R3, R5, R6 and R7
          is shown below.</t>

          <t>Replication segment at R1:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R1>:
 Replication-SID: 2001:db8:cccc:1:fa::
 Replication State:
   R2: <2001:db8:cccc:2:fa::->L12>
]]></artwork>
          </figure>

          <t>Replication to R2 steers a packet directly to the node on
          interface L12.</t>

          <t>Replication segment at R2:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R2>:
 Replication-SID: 2001:db8:cccc:2:fa::
 Replication State:
   R2: <Leaf>
   R3: <2001:db8:cccc:3:fa::->L23>
   R5: <2001:db8:cccc:5:fa::->L25>]]></artwork>
          </figure>

          <t>R2 is a Bud node. It performs the role of a Leaf as well as that
          of a transit node replicating to R3 and R5. Replication to R3 steers
          a packet directly to the node on L23. Replication to R5 steers a
          packet directly to the node on L25.</t>

          <t>Replication segment at R3:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R3>:
 Replication-SID: 2001:db8:cccc:3:fa::
 Replication State:
   R6: <2001:db8:cccc:6:fa::->L36>
]]></artwork>
          </figure>

          <t>Replication to R6 steers a packet directly to the node on
          L36.</t>

          <t>Replication segment at R5:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R5>:
 Replication-SID: 2001:db8:cccc:5:fa::
 Replication State:
   R7: <2001:db8:cccc:7:fa::->L57>
]]></artwork>
          </figure>

          <t>Replication to R7 steers a packet directly to the node on
          L57.</t>

          <t>Replication segment at R6:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R6>:
 Replication-SID: 2001:db8:cccc:6:fa::
 Replication State:
   R6: <Leaf>]]></artwork>
          </figure>

          <t>Replication segment at R7:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R7>:
 Replication-SID: 2001:db8:cccc:7:fa::
 Replication State:
   R7: <Leaf>]]></artwork>
          </figure>

          <t>When a packet (A,B2) is steered into the active instance of
          candidate path 1 of SR P2MP Policy at R1 using H.Encaps.Replicate
          behavior:</t>

          <t><list style="symbols">
              <t>Since R1 is directly connected to R2, R1 sends replicated
              copy (2001:db8::1, 2001:db8:cccc:2:fa::) (A,B2) to R2 on
              interface L12.</t>

              <t>R2, as Leaf, removes the outer IPv6 header and delivers the
              payload. R2, as a Bud node, also replicates the packet. For
              replication to R3, R2 sends (2001:db8::1, 2001:db8:cccc:3:fa::)
              (A,B2) to R3 on interface L23. For replication to R5, R2 sends
              (2001:db8::1, 2001:db8:cccc:5:fa::) (A,B2) to R5 on interface
              L25.</t>

              <t>R3 replicates and sends (2001:db8::1, 2001:db8:cccc:6:fa::)
              (A,B2) to R6 on interface L36.</t>

              <t>R5 replicates and sends (2001:db8::1, 2001:db8:cccc:7:fa::)
              (A,B2) to R7 on interface L57.</t>

              <t>R6, as Leaf, removes outer IPv6 header and delivers the
              payload.</t>

              <t>R7, as Leaf, removes outer IPv6 header and delivers the
              payload.</t>
            </list></t>
        </section>
      </section>
    </section>
  </back>
</rfc>
