<?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
  <!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.20 (Ruby 3.3.5) -->


<!DOCTYPE rfc  [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">

<!ENTITY RFC9000 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9000.xml">
<!ENTITY I-D.ietf-moq-transport SYSTEM "https://bib.ietf.org/public/rfc/bibxml3/reference.I-D.ietf-moq-transport.xml">
<!ENTITY RFC9438 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9438.xml">
<!ENTITY I-D.ietf-ccwg-bbr SYSTEM "https://bib.ietf.org/public/rfc/bibxml3/reference.I-D.ietf-ccwg-bbr.xml">
<!ENTITY RFC6817 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.6817.xml">
<!ENTITY RFC6582 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.6582.xml">
<!ENTITY RFC3649 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.3649.xml">
<!ENTITY I-D.irtf-iccrg-ledbat-plus-plus SYSTEM "https://bib.ietf.org/public/rfc/bibxml3/reference.I-D.irtf-iccrg-ledbat-plus-plus.xml">
<!ENTITY RFC9330 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9330.xml">
<!ENTITY RFC9331 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9331.xml">
<!ENTITY I-D.briscoe-iccrg-prague-congestion-control SYSTEM "https://bib.ietf.org/public/rfc/bibxml3/reference.I-D.briscoe-iccrg-prague-congestion-control.xml">
]>


<rfc ipr="trust200902" docName="draft-huitema-ccwg-c4-design-02" category="info" consensus="true" submissionType="IETF">
  <front>
    <title abbrev="C4 Design">Design of Christian's Congestion Control Code (C4)</title>

    <author initials="C." surname="Huitema" fullname="Christian Huitema">
      <organization>Private Octopus Inc.</organization>
      <address>
        <email>huitema@huitema.net</email>
      </address>
    </author>
    <author initials="S." surname="Nandakumar" fullname="Suhas Nandakumar">
      <organization>Cisco</organization>
      <address>
        <email>snandaku@cisco.com</email>
      </address>
    </author>
    <author initials="C." surname="Jennings" fullname="Cullen Jennings">
      <organization>Cisco</organization>
      <address>
        <email>fluffy@iii.ca</email>
      </address>
    </author>

    <date year="2026" month="February" day="26"/>

    <area>Web and Internet Transport</area>
    
    <keyword>C4</keyword> <keyword>Congestion Control</keyword> <keyword>Realtime Communication</keyword> <keyword>Media over QUIC</keyword>

    <abstract>


<?line 108?>

<t>Christian's Congestion Control Code is a new congestion control
algorithm designed to support Real-Time applications such as
Media over QUIC. It is designed to drive towards low delays,
with good support for the "application limited" behavior
frequently found when using variable rate encoding, and
with fast reaction to congestion to avoid the "priority
inversion" happening when congestion control overestimates
the available capacity. It pays special attention to the
high jitter conditions encountered in Wi-Fi networks.
The design emphasizes simplicity and
avoids making too many assumptions about the "model" of
the network. The main control variables are the estimate
of the data rate and of the maximum path delay in the
absence of queues.</t>



    </abstract>



  </front>

  <middle>


<?line 125?>

<section anchor="introduction"><name>Introduction</name>

<t>Christian's Congestion Control Code (C4) is a new congestion control
algorithm designed to support Real-Time multimedia applications, specifically
multimedia applications using QUIC <xref target="RFC9000"/> and the Media
over QUIC transport <xref target="I-D.ietf-moq-transport"/>. These applications
require low delays, and often exhibit a variable data rate as they
alternate between high bandwidth requirements when sending reference
frames and lower bandwidth requirements when sending differential
frames. We translate that into three main goals:</t>

<t><list style="symbols">
  <t>Drive towards low delays (see <xref target="react-to-delays"/>),</t>
  <t>Support "application limited" behavior (see <xref target="limited"/>),</t>
  <t>React quickly to changing network conditions (see <xref target="congestion"/>).</t>
</list></t>

<t>The design of C4 is inspired by our experience using different
congestion control algorithms for QUIC,
notably Cubic <xref target="RFC9438"/>, Hystart <xref target="HyStart"/>, and BBR <xref target="I-D.ietf-ccwg-bbr"/>,
as well as the study
of delay-oriented algorithms such as TCP Vegas <xref target="TCP-Vegas"/>
and LEDBAT <xref target="RFC6817"/>. In addition, we wanted to keep the algorithm
simple and easy to implement.</t>

<t>C4 assumes that the transport stack is
capable of signaling to the congestion algorithm events such
as acknowledgements, RTT measurements, ECN signals, or the detection
of packet losses. It also assumes that the congestion algorithm
controls the transport stack by setting the congestion window
(CWND) and the pacing rate.</t>

<t>C4 tracks the state of the network by keeping a small set of
variables, the main ones being 
the "nominal rate", the "nominal max RTT",
and the current state of the algorithm. The details on using and
tracking the min RTT are discussed in <xref target="react-to-delays"/>.</t>

<t>The nominal rate is the pacing rate corresponding to the most recent
estimate of the bandwidth available to the connection.
The nominal max RTT is the best estimate of the maximum RTT
that can occur on the network in the absence of queues. When we
do not observe delay jitter, this coincides with the min RTT.
In the presence of jitter, it should be the sum of the
min RTT and the maximum jitter. C4 will compute a pacing
rate as the nominal rate multiplied by a coefficient that
depends on the state of the protocol, and set the CWND for
the path to the product of that pacing rate and the max RTT.
The design of these mechanisms is
discussed in <xref target="congestion"/>.</t>
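<t>As a rough sketch (not the normative algorithm), the two control
variables translate into transport settings as follows; the
coefficient values are illustrative, drawn from the discussion in
later sections:</t>

<figure><artwork><![CDATA[
# Illustrative state coefficients: 1 when "cruising", 1.25 when
# "pushing", 6.25% slower when in "recovery" (see later sections).
STATE_ALPHA = {"cruising": 1.0, "pushing": 1.25, "recovery": 0.9375}

def control_outputs(nominal_rate, nominal_max_rtt, state):
    """Return (pacing_rate, cwnd): rate in bytes/s, CWND in bytes."""
    pacing_rate = STATE_ALPHA[state] * nominal_rate
    cwnd = pacing_rate * nominal_max_rtt
    return pacing_rate, cwnd
]]></artwork></figure>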

</section>
<section anchor="react-to-delays"><name>Studying the reaction to delays</name>

<t>The current design of C4 is the result of a series of experiments.
Our initial design was to monitor delays and react to
delay increases in much the same way as delay-based
congestion control algorithms like TCP Vegas or LEDBAT:</t>

<t><list style="symbols">
  <t>monitor the current RTT and the min RTT</t>
  <t>if the current RTT sample exceeds the min RTT by more than a preset
margin, treat that as a congestion signal.</t>
</list></t>

<t>The "preset margin" is set by default to 10 ms in TCP Vegas and LEDBAT.
That was adequate when these algorithms were designed, but it can be
considered excessive in high speed low latency networks.
For the initial C4 design, we set it to the lower of 1/8th of the min RTT and 25 ms.</t>
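<t>That initial delay test can be sketched as follows (a simplified
illustration; times are in seconds):</t>

<figure><artwork><![CDATA[
def delay_congestion_signal(rtt_sample, min_rtt):
    """Return True if the RTT sample should be treated as congestion."""
    # Margin: the lower of 1/8th of the min RTT and 25 ms.
    margin = min(min_rtt / 8, 0.025)
    return rtt_sample > min_rtt + margin
]]></artwork></figure>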

<t>The min RTT itself is measured over time. The detection of congestion by comparing
delays to min RTT plus margin works well, except in two conditions:</t>

<t><list style="symbols">
  <t>if the C4 connection is competing with another connection that
does not react to delay variations, such as a connection using Cubic,</t>
  <t>if the network exhibits a lot of latency jitter, as happens on
some Wi-Fi networks.</t>
</list></t>

<t>We also know that if several connections using delay-based algorithms
compete, the competition is only fair if they all have the same
estimate of the min RTT. We handle that by using a "periodic slow down"
mechanism.</t>

<section anchor="vegas-struggle"><name>Managing Competition with Loss Based Algorithms</name>

<t>Competition between Cubic and a delay based algorithm leads to Cubic
consuming all the bandwidth and the delay based connection starving.
This phenomenon force TCP Vegas to only be deployed in controlled
environments, in which it does not have to compete with
TCP Reno <xref target="RFC6582"/> or Cubic.</t>

<t>We handled this competition issue by using a simple detection algorithm.
If C4 detected competition with a loss based algorithm, it switched
to a "pig war" mode and stopped reacting to changes in delays -- it would
instead only react to packet losses and ECN signals. In that mode,
we used another algorithm to detect when the competition had ceased,
and switched back to the delay responsive mode.</t>

<t>In our initial deployments, we detected competition when delay based
congestion notifications led to CWND and rate
reductions for more than 3
consecutive RTTs. The assumption was that if the competing flow reacted
to delay variations, it would have reacted to the delay increases within
3 RTTs. However, that simple test caused many "false positive"
detections.</t>

<t>We refined this test to start the pig war
if we observed 4 consecutive delay-based rate reductions
and the nominal CWND was less than half the max nominal CWND
observed since the last "initial" phase, or if we observed
at least 5 reductions and the nominal CWND is less than 4/5th of
the max nominal CWND.</t>
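<t>A minimal sketch of that refined test (the variable names are ours):</t>

<figure><artwork><![CDATA[
def enter_pig_war(consecutive_reductions, nominal_cwnd, max_cwnd):
    """Decide whether to enter the "pig war" mode.

    max_cwnd is the max nominal CWND observed since the last
    "initial" phase.
    """
    if consecutive_reductions >= 4 and nominal_cwnd < max_cwnd / 2:
        return True
    if consecutive_reductions >= 5 and nominal_cwnd < 4 * max_cwnd / 5:
        return True
    return False
]]></artwork></figure>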

<t>We validated this test by comparing the
ratio <spanx style="verb">CWND/MAX_CWND</spanx> for "valid" decisions, when we are simulating
a competition scenario, and "spurious" decisions, when the
"more than 3 consecutive reductions" test fires but we are
not simulating any competition:</t>

<texttable>
      <ttcol align='left'>Ratio CWND/Max</ttcol>
      <ttcol align='left'>valid</ttcol>
      <ttcol align='left'>spurious</ttcol>
      <c>Average</c>
      <c>30%</c>
      <c>75%</c>
      <c>Max</c>
      <c>49%</c>
      <c>100%</c>
      <c>Top 25%</c>
      <c>37%</c>
      <c>91%</c>
      <c>Median</c>
      <c>35%</c>
      <c>83%</c>
      <c>Bottom 25%</c>
      <c>20%</c>
      <c>52%</c>
      <c>Min</c>
      <c>12%</c>
      <c>25%</c>
      <c>&lt;50%</c>
      <c>100%</c>
      <c>20%</c>
</texttable>

<t>Note that this validation was based on simulations, and that we cannot
claim that our simulations perfectly reflect the real world. We will
discuss in <xref target="simplify"/> how these imperfections led us to change
our overall design.</t>

<t>Our initial algorithm for exiting competition was simple: C4 would exit the
"pig war" mode if the available bandwidth increased.</t>

</section>
<section anchor="handling-chaotic-delays"><name>Handling Chaotic Delays</name>

<t>Some Wi-Fi networks exhibit spikes in latency. These spikes are
probably what caused the delay jitter discussed in
<xref target="Cubic-QUIC-Blog"/>. We discussed them in more detail in
<xref target="Wi-Fi-Suspension-Blog"/>. We are not sure about the
mechanism behind these spikes, but we have noticed that they
mostly happen when several adjacent Wi-Fi networks are configured
to use the same frequencies and channels. In these configurations,
we expect the hidden node problem to result in some collisions.
The Wi-Fi layer 2 retransmission algorithm takes care of these
losses, but apparently uses an exponential backoff algorithm
to space retransmissions in case of repeated collisions.
When repeated collisions occur, the exponential backoff mechanism
can cause large delays. The Wi-Fi layer 2 algorithm will also
try to maintain delivery order, and subsequent packets will
be queued behind the packet that caused the collisions.</t>

<t>In our initial design, we detected the advent of such "chaotic delay jitter" by computing
a running estimate of the max RTT. We measured the max RTT observed
in each round trip, to obtain the "era max RTT". We then computed
an exponentially averaged "nominal max RTT":</t>

<figure><artwork><![CDATA[
nominal_max_rtt = (7 * nominal_max_rtt + era_max_rtt) / 8;
]]></artwork></figure>

<t>If the nominal max RTT was more than twice the min RTT, we set the
"chaotic jitter" condition. When that condition was set, we stopped
considering excess delay as an indication of congestion,
and we changed
the way we computed the "current CWND" used for the controlled
path. Instead of simply setting it to "nominal CWND", we set it
to a larger value:</t>

<figure><artwork><![CDATA[
target_cwnd = alpha*nominal_cwnd +
              (max_bytes_acked - nominal_cwnd) / 2;
]]></artwork></figure>
<t>In this formula, <spanx style="verb">alpha</spanx> is the amplification coefficient corresponding
to the current state, for example 1 if "cruising" or 1.25
if "pushing" (see <xref target="congestion"/>), and <spanx style="verb">max_bytes_acked</spanx> is the largest
amount of bytes in flight that was successfully acknowledged since
the last initial phase.</t>

<t>The increased <spanx style="verb">target_cwnd</spanx> enabled C4 to keep sending data through
most jitter events. There is of course a risk that this increased
value will cause congestion. We limit that risk by only counting half
of the excess of <spanx style="verb">max_bytes_acked</spanx> over the nominal CWND, and by setting a
conservative pacing rate:</t>

<figure><artwork><![CDATA[
target_rate = alpha*nominal_rate;
]]></artwork></figure>
<t>Using the pacing rate that way prevents the larger window from
causing big spikes in traffic.</t>
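<t>Both adjustments can be combined into a short sketch (an
illustrative rendering, not the normative code):</t>

<figure><artwork><![CDATA[
def chaotic_jitter_targets(alpha, nominal_cwnd, max_bytes_acked,
                           nominal_rate):
    """Return (target_cwnd, target_rate) in the chaotic jitter mode."""
    target_cwnd = (alpha * nominal_cwnd
                   + (max_bytes_acked - nominal_cwnd) / 2)
    target_rate = alpha * nominal_rate
    return target_cwnd, target_rate
]]></artwork></figure>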

<t>The network conditions can evolve over time. C4 will keep monitoring
the nominal max RTT, and will reset the "chaotic jitter" condition
if nominal max RTT decreases below a threshold of 1.5 times the
min RTT.</t>

</section>
<section anchor="slowdown"><name>Monitor min RTT</name>

<t>Delay based algorithms rely on a correct estimate of the
min RTT. They will naturally discover a reduction in the min
RTT, but detecting an increase in the min RTT is difficult.
There are known failure modes when multiple delay based
algorithms compete, in particular the "latecomer advantage".</t>

<t>In our initial design, the connections ensured that their min RTT was valid by
occasionally entering a "slowdown" period, during which they set
CWND to half the nominal value. This is similar to
the "Probe RTT" mechanism implemented in BBR, or the
"initial and periodic slowdown" proposed as extension
to LEDBAT in <xref target="I-D.irtf-iccrg-ledbat-plus-plus"/>. In our
implementation, the slowdown occurs if more than 5
seconds have elapsed since the previous slowdown, or
since the last time the min RTT was set.</t>

<t>The measurement of min RTT in the period
that follows the slowdown is considered a "clean"
measurement. If two consecutive slowdown periods were
followed by clean measurements larger than the current
min RTT, we detect an RTT change and reset the
connection. If the measurement results in the same
value as the previous min RTT, C4 continues normal
operation.</t>

<t>Some applications exhibit periods of natural slowdown. This
is the case, for example, for multimedia applications when
they only send differentially encoded frames. Natural
slowdown was detected if an application sent less than
half the nominal CWND during a period, and more than 4 seconds
had elapsed since the previous slowdown or the previous
min RTT update. The measurement that follows a natural
slowdown was also considered a clean measurement.</t>
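<t>The two slowdown triggers of that initial design can be sketched
as follows (times in seconds; the variable names are ours):</t>

<figure><artwork><![CDATA[
def slowdown_kind(now, last_event, bytes_sent_in_period, nominal_cwnd):
    """Classify the current period.

    last_event is the time of the previous slowdown or of the
    previous min RTT update.
    """
    elapsed = now - last_event
    if elapsed > 5.0:
        return "forced"      # enter a slowdown period
    if bytes_sent_in_period < nominal_cwnd / 2 and elapsed > 4.0:
        return "natural"     # application-limited natural slowdown
    return None
]]></artwork></figure>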

<t>A slowdown period corresponds to a reduction in offered
traffic. If multiple connections are competing for the same
bottleneck, each of these connections may experience cleaner
RTT measurements, leading to equalization of the min RTT
observed by these connections.</t>

</section>
</section>
<section anchor="simplify"><name>Simplifying the initial design</name>

<t>After extensive testing of our initial design, we felt we had
drifted away from our initial "simplicity" tenet. The algorithms
used to detect "pig war" and "chaotic jitter" were difficult
to tune, and despite our efforts they resulted in many
false positives or false negatives. The "slowdown" algorithm
made C4 less friendly to "real time" applications that
prefer using stable estimated rates. These algorithms
interacted with each other in ways that were sometimes
hard to predict.</t>

<section anchor="chaotic-jitter-and-rate-control"><name>Chaotic jitter and rate control</name>

<t>As we observed the chaotic jitter behavior, we came to the
conclusion that controlling only the CWND did not work well.
We had a dilemma: either use a small CWND to guarantee that
RTTs remain small, or use a large CWND so that transmission
would not stall during peaks in jitter. But if we use a large
CWND, we need some form of pacing to prevent senders from
sending a large number of packets too quickly. We then
realized that if we have to set a pacing rate anyway, we can simplify
the algorithm.</t>

<t>Suppose that we compute a pacing rate that matches the network
capacity, just as BBR does. Then, to a first approximation,
setting the CWND too high does not matter much.
The number of bytes in flight will be limited by the product
of the pacing rate and the actual RTT. We are thus free to
set the CWND to a large value.</t>

</section>
<section anchor="monitoring-the-nominal-max-rtt"><name>Monitoring the nominal max RTT</name>

<t>The observations on chaotic jitter led to the idea of monitoring
the maximum RTT. There is some difficulty here, because the
observed RTT has three components:</t>

<t><list style="symbols">
  <t>The minimum RTT in the absence of jitter</t>
  <t>The jitter caused by access networks such as Wi-Fi</t>
  <t>The delays caused by queues in the network</t>
</list></t>

<t>We cannot merely use the maximum value of the observed RTT,
because of the queuing delay component. In pushing periods, we
are going to use a data rate slightly higher than the measured
value. This will create a bit of queuing, pushing the queuing
delay component ever higher -- and eventually resulting in
"buffer bloat".</t>

<t>To avoid that, we can have recurring periods in which the
endpoint deliberately sends data slower than the
rate estimate. This would enable a "clean" measurement
of the Max RTT.</t>

<t>However, tests showed that only measuring
the Max RTT during recovery periods is not reactive enough.
For example, if the underlying RTT changes, we would need to wait
up to 6 RTTs before registering the change. In practice, we can
measure the Max RTT in both the "recovery" and "cruising"
periods, i.e., all the periods in which data is sent at most
at the "nominal data rate".</t>

<t>If we are dealing with jitter, the clean Max RTT measurements
will include whatever jitter was happening at the time of the
measurement. It is not sufficient to measure the Max RTT once;
we must keep the maximum value of a long enough series of measurements
to capture the maximum jitter that the network can cause. But
we are also aware that jitter conditions change over time, so
we have to make sure that if the jitter diminishes, the
Max RTT also diminishes.</t>

<t>We solved that by measuring the Max RTT during the "recovery"
periods that follow every "push". These periods occur about every 6 RTTs,
giving us reasonably frequent measurements. During these periods, we
try to ensure clean measurements by
setting the pacing rate a bit lower than the nominal rate -- 6.25%
slower in our initial trials. We apply the following algorithm:</t>

<t><list style="symbols">
  <t>compute the <spanx style="verb">max_rtt_sample</spanx> as the maximum RTT observed for
packets sent during the recovery period.</t>
  <t>if the <spanx style="verb">max_rtt_sample</spanx> is more than <spanx style="verb">max_jitter</spanx> above
<spanx style="verb">running_min_rtt</spanx>, reset it to <spanx style="verb">running_min_rtt + max_jitter</spanx>
(by default, <spanx style="verb">max_jitter</spanx> is set to 250 ms).</t>
  <t>if <spanx style="verb">max_rtt_sample</spanx> is larger than <spanx style="verb">nominal_max_rtt</spanx>, set
<spanx style="verb">nominal_max_rtt</spanx> to that value.</t>
  <t>else, set <spanx style="verb">nominal_max_rtt</spanx> to:</t>
</list></t>

<figure><artwork><![CDATA[
   nominal_max_rtt = gamma*max_rtt_sample + 
                     (1-gamma)*nominal_max_rtt
]]></artwork></figure>

<t>The <spanx style="verb">gamma</spanx> coefficient is set to <spanx style="verb">1/8</spanx> in our initial trials.</t>
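<t>Putting these steps together, the update can be sketched as follows
(an illustrative rendering with the default parameters; times in
seconds):</t>

<figure><artwork><![CDATA[
def update_nominal_max_rtt(nominal_max_rtt, max_rtt_sample,
                           running_min_rtt,
                           max_jitter=0.250, gamma=1.0 / 8):
    """Update the nominal max RTT after a recovery period."""
    # Cap the sample at running_min_rtt + max_jitter.
    sample = min(max_rtt_sample, running_min_rtt + max_jitter)
    if sample > nominal_max_rtt:
        return sample
    # Otherwise, exponentially smooth towards the sample.
    return gamma * sample + (1 - gamma) * nominal_max_rtt
]]></artwork></figure>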

<section anchor="preventing-runaway-max-rtt"><name>Preventing Runaway Max RTT</name>

<t>Computing the Max RTT the way we do bears the risk of a "runaway increase"
of the Max RTT:</t>

<t><list style="symbols">
  <t>C4 notices high jitter, increases the Nominal Max RTT accordingly, and sets the CWND to the
product of the increased Nominal Max RTT and the Nominal Rate.</t>
  <t>If the Nominal Rate is above the actual link rate, C4 will fill the pipe and create a queue.</t>
  <t>On the next measurement, C4 finds that the max RTT has increased because of the queue,
interprets that as "more jitter", increases the Max RTT, and fills the queue some more.</t>
  <t>Repeat until the queue becomes so large that packets are dropped and cause a
congestion event.</t>
</list></t>

<t>Our proposed algorithm limits the Max RTT to at most <spanx style="verb">running_min_rtt + max_jitter</spanx>,
but that is still risky. If congestion causes queues, the running measurements of <spanx style="verb">min RTT</spanx>
will increase, causing the algorithm to allow for corresponding increases in <spanx style="verb">max RTT</spanx>.
This would not happen as fast as without the capping to <spanx style="verb">running_min_rtt + max_jitter</spanx>,
but it would still increase.</t>

</section>
<section anchor="initial-phase-and-max-rtt"><name>Initial Phase and Max RTT</name>

<t>During the initial phase, the nominal max RTT and the running min RTT are
set to the first RTT value that is measured. This is not great in the presence
of high jitter, which causes C4 to exit the Initial phase early, leaving
the nominal rate way too low. If C4 is competing on the Wi-Fi link
against another connection, it might remain stalled at this low data rate.</t>

<t>We considered updating the Max RTT during the Initial phase, but that
prevents any detection of delay based congestion. The Initial phase
would continue until path buffers are full, a classic case of buffer
bloat. Instead, we adopted a simple workaround:</t>

<t><list style="symbols">
  <t>Maintain a flag "initial_after_jitter", initialized to 0.</t>
  <t>Get a measure of the max RTT after exit from initial.</t>
  <t>If C4 detects a "high jitter" condition and the
"initial_after_jitter" flag is still 0, set the
flag to 1 and re-enter the "initial" state.</t>
</list></t>

<t>Empirically, we detect high jitter in that case if the "running min RTT"
is less than 2/5th of the "nominal max RTT".</t>
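<t>The workaround can be sketched as follows (an illustrative
rendering; the function name is ours):</t>

<figure><artwork><![CDATA[
def should_reenter_initial(running_min_rtt, nominal_max_rtt,
                           initial_after_jitter):
    """After the first exit from the initial phase, re-enter it once
    if high jitter is detected (min RTT below 2/5th of max RTT)."""
    high_jitter = running_min_rtt < 2.0 * nominal_max_rtt / 5.0
    return high_jitter and not initial_after_jitter
]]></artwork></figure>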

</section>
</section>
<section anchor="monitor-rate"><name>Monitoring the nominal rate</name>

<t>The nominal rate is measured on each acknowledgement by dividing
the number of bytes acknowledged since the packet was sent
by the RTT measured with the acknowledgement of the packet,
protecting against delay jitter as explained in
<xref target="rate-measurement"/>, without additional filtering
as discussed in <xref target="not-filtering"/>.</t>

<t>We only use the measurements to increase the nominal rate,
replacing the current value if we observe a greater filtered measurement.
This is a deliberate choice, as decreases in the measurements are ambiguous.
They can result from the application being rate limited, or from
measurement noise. Following those decreases causes the rate to drift
down randomly over time, which can be detrimental for rate limited applications.
If the network conditions have changed, the rate will
be reduced when congestion signals are received, as explained
in <xref target="congestion"/>.</t>

<section anchor="rate-measurement"><name>Rate measurement</name>

<t>The rate measurement protects against underestimation of the
delay by observing that the
delivery rate cannot be larger than the rate at which the
packets were sent, and thus keeping the lower of the estimated
receive rate and the send rate.</t>

<t>The algorithm uses four input variables:</t>

<t><list style="symbols">
  <t><spanx style="verb">current_time</spanx>: the time when the acknowledgement is received.</t>
  <t><spanx style="verb">send_time</spanx>: the time at which the highest acknowledged
packet was sent.</t>
  <t><spanx style="verb">bytes_acknowledged</spanx>: the number of bytes acknowledged
 by the receiver between <spanx style="verb">send_time</spanx> and <spanx style="verb">current_time</spanx>.</t>
  <t><spanx style="verb">first_sent</spanx>: the time at which the packet containing
the first acknowledged bytes was sent.</t>
</list></t>

<t>The computation goes as follows:</t>

<figure><artwork><![CDATA[
ack_delay = current_time - send_time
send_delay = send_time - first_sent
measured_rate = bytes_acknowledged /
                max(ack_delay, send_delay)
]]></artwork></figure>

<t>This is in line with the specification of rate measurement
in <xref target="I-D.ietf-ccwg-bbr"/>.</t>

<t>We use the data rate measurement to update the
nominal rate, but only if not congested (see <xref target="congestion-bounce"/>):</t>

<figure><artwork><![CDATA[
if measured_rate > nominal_rate and not congested:
    nominal_rate = measured_rate
]]></artwork></figure>

</section>
<section anchor="congestion-bounce"><name>Avoiding Congestion Bounce</name>

<t>In our early experiments, we observed a "congestion bounce"
that happened as follows:</t>

<t><list style="symbols">
  <t>congestion is detected, the nominal rate is reduced, and
C4 enters recovery.</t>
  <t>packets sent at the data rate that caused the congestion
continue to be acknowledged during recovery.</t>
  <t>if enough packets are acknowledged, they will cause
a rate measurement close to the previous nominal rate.</t>
  <t>if C4 accepts this new nominal rate, the flow will
bounce back to the previous transmission rate, erasing
the effects of the congestion signal.</t>
</list></t>

<t>Since we do not want that to happen, we specify that the
nominal rate cannot be updated during congested periods,
defined as:</t>

<t><list style="symbols">
  <t>C4 is in "recovery" state,</t>
  <t>The recovery state was entered following a congestion signal,
or a congestion signal was received since the beginning
of the recovery era.</t>
</list></t>
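<t>The resulting guard on nominal rate updates can be sketched as
follows (an illustrative rendering, with our own variable names):</t>

<figure><artwork><![CDATA[
def maybe_update_nominal_rate(nominal_rate, measured_rate,
                              in_recovery, congestion_in_era):
    """The nominal rate only moves up, and never while congested."""
    congested = in_recovery and congestion_in_era
    if measured_rate > nominal_rate and not congested:
        return measured_rate
    return nominal_rate
]]></artwork></figure>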

</section>
<section anchor="not-filtering"><name>Not filtering the measurements</name>

<t>There is some noise in the measurements of the data rate, and we
protect against that noise by retaining the maximum of the
<spanx style="verb">ack_delay</spanx> and the <spanx style="verb">send_delay</spanx>. During early experiments,
we considered smoothing the measurements to eliminate that
noise.</t>

<t>The best filter that we could define operated by
smoothing the inverse of the data rate, the "time per byte sent".
This works better because the data rate measurements are the
quotient of the number of bytes received by the delay.
The number of bytes received is
easy to ascertain, but the measurements of the delays are very noisy.
Instead of trying to average the data rates, we can average
their inverses, i.e., the quotients of the delay by the
bytes received, the times per byte. Then we can obtain
smoothed data rates as the inverses of these times per byte,
effectively computing a harmonic average of measurements
over time. We could for example
compute an exponentially weighted moving average
of the time per byte, and use the inverse of that
as a filtered measurement of the data rate.</t>
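<t>Such a filter could be sketched as follows (an illustrative
rendering; the 1/8 weight is our own choice for the example, not a
value from the design):</t>

<figure><artwork><![CDATA[
def harmonic_ewma(smoothed_time_per_byte, rate_sample, weight=1.0 / 8):
    """Smooth the inverse of the rate, then invert the result.

    Returns (new smoothed time per byte, filtered rate).
    """
    sample_tpb = 1.0 / rate_sample
    smoothed = (smoothed_time_per_byte
                + weight * (sample_tpb - smoothed_time_per_byte))
    return smoothed, 1.0 / smoothed
]]></artwork></figure>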

<t>We do not specify any such filter in C4 because, while
filtering would reduce the noise, it would also delay
any observation, resulting in a somewhat sluggish
response to changes in network conditions. Experience
shows that the precaution of using the max of the
ack delay and the send delay as a divider is sufficient
for stable operation, and does not cause the response
delays that filtering would.</t>

</section>
</section>
<section anchor="early-congestion-modification"><name>Explicit Congestion Notification</name>

<t>We want C4 to handle Explicit Congestion Notification (ECN) in a manner
compatible with the L4S design. For that, we monitor
the evolving ratio of CE marks that the L4S specification
designates as <spanx style="verb">alpha</spanx>
(we use <spanx style="verb">ecn_alpha</spanx> here to avoid confusion),
and we detect congestion if the ratio grows over a threshold.</t>

<t>We did not find a recommended algorithm for computing <spanx style="verb">ecn_alpha</spanx>
in either <xref target="RFC9330"/> or <xref target="RFC9331"/>, but we found some
concrete suggestions in <xref target="I-D.briscoe-iccrg-prague-congestion-control"/>.
That draft, now obsolete, suggests updating the ratio once per
RTT, as the exponential weighted average of the fraction of
CE marks per packet:</t>

<figure><artwork><![CDATA[
frac = nb_CE / (nb_CE + nb_ECT1)
ecn_alpha += (frac - ecn_alpha)/16
]]></artwork></figure>

<t>This kind of averaging introduces a reaction delay. The draft suggests mitigating that
delay by preempting the averaging if the fraction is large:</t>

<figure><artwork><![CDATA[
if frac > 0.5:
    ecn_alpha = frac
]]></artwork></figure>

<t>We followed that design, but decided to update the coefficient after
each acknowledgement, instead of after each RTT. This is in line with
our implementation of "delayed acknowledgements" in QUIC, which
results in a small number of acknowledgements per RTT.</t>
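<t>The resulting update, applied on each acknowledgement, can be
sketched as follows (an illustrative rendering; <spanx style="verb">nb_ce</spanx> and
<spanx style="verb">nb_ect1</spanx> are the counts of CE-marked and ECT(1) packets newly
reported by the acknowledgement):</t>

<figure><artwork><![CDATA[
def update_ecn_alpha(ecn_alpha, nb_ce, nb_ect1):
    """Update the smoothed CE mark ratio from new mark counts."""
    total = nb_ce + nb_ect1
    if total == 0:
        return ecn_alpha
    frac = nb_ce / total
    if frac > 0.5:
        return frac     # preempt the averaging on a large fraction
    return ecn_alpha + (frac - ecn_alpha) / 16
]]></artwork></figure>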

<t>The reaction of C4 to an excess of CE marks is similar to the
reaction to excess delays or to packet losses, see <xref target="congestion"/>.</t>

</section>
</section>
<section anchor="competition-with-other-algorithms"><name>Competition with other algorithms</name>

<t>We saw in <xref target="vegas-struggle"/> that delay based algorithms required
a special "escape mode" when facing competition from algorithms
like Cubic. Relying on pacing rate and max RTT instead of CWND
and min RTT makes this problem much simpler. The measured max RTT
will naturally increase as algorithms like Cubic cause buffer
bloat and increased queues. Instead of being shut down,
C4 will just keep increasing its max RTT and thus its running
CWND, automatically matching the other algorithm's values.</t>

<t>We verified that behavior in a number of simulations. We also
verified that when the competition ceases, C4 will progressively
drop its nominal max RTT, returning to situations with very low
queuing delays.</t>

<section anchor="no-need-for-slowdowns"><name>No need for slowdowns</name>

<t>The fairness of delay based algorithms depends on all competing
flows having similar estimates of the min RTT. As discussed
in <xref target="slowdown"/>, this ends up creating variants of the
<spanx style="verb">latecomer advantage</spanx> issue, requiring a periodic slowdown
mechanism to ensure that all competing flows have a chance to
update their min RTT estimate.</t>

<t>This problem is caused by the default algorithm of setting the
min RTT to the minimum of all RTT sample values since the beginning
of the connection. Flows that started more recently compute
that minimum over a shorter period, during which the queues built
by older flows may never drain, and thus discover a larger
min RTT than the older flows. This problem does not exist with the
max RTT, because all competing flows see the same max RTT
value. The slowdown mechanism is thus not necessary.</t>

<t>Removing the need for a slowdown mechanism allows for a
simpler protocol, better suited to real time communications.</t>

</section>
</section>
<section anchor="congestion"><name>React quickly to changing network conditions</name>

<t>Our focus is on maintaining low delays, and thus on reacting
quickly to changes in network conditions. We can detect some of these
changes by monitoring the RTT and the data rate, but
experience with early versions of BBR showed that
completely ignoring packet losses can lead to very unfair
competition with Cubic. The L4S effort promotes the use
of ECN feedback by network elements (see <xref target="RFC9331"/>),
which may well end up detecting congestion and queues
more precisely than the monitoring of end-to-end delays.
C4 thus detects changing network conditions by monitoring
three congestion control signals:</t>

<t><list style="numbers" type="1">
  <t>Excessive increase of measured RTT (above the nominal Max RTT),</t>
  <t>Excessive rate of packet losses (but not mere Probe Time Out, see <xref target="no-pto"/>),</t>
  <t>Excessive rate of ECN/CE marks</t>
</list></t>

<t>If any of these signals is detected, C4 enters a "recovery"
state. On entering recovery, C4 reduces the <spanx style="verb">nominal_rate</spanx>
by a factor "beta":</t>

<figure><artwork><![CDATA[
    # on congestion detected:
    nominal_rate = (1-beta)*nominal_rate
]]></artwork></figure>
<t>The coefficient <spanx style="verb">beta</spanx> differs depending on the nature of the congestion
signal. For packet losses, it is set to <spanx style="verb">1/4</spanx>, similar to the
value used in Cubic. For delay based signals, it is proportional to the
difference between the measured RTT and the target RTT divided by
the acceptable margin, capped to <spanx style="verb">1/4</spanx>. If the signal
is an ECN/CE rate, we may
use a proportional reduction coefficient in line with
<xref target="RFC9331"/>, again capped to <spanx style="verb">1/4</spanx>.</t>

<t>During the recovery period, the pacing rate is set to a fraction
of the "nominal rate", and the target CWND to the same fraction of
the "nominal rate" multiplied by the "nominal max RTT".
The recovery period ends when the first packet
sent after entering recovery is acknowledged. Congestion
signals are processed when entering recovery; further signals
are ignored until the end of recovery.</t>
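<t>The response described above can be sketched as follows. This is a
minimal illustration rather than the normative algorithm: only the 1/4 cap
and the loss-signal value come from the text, while the delay and ECN
scaling and the recovery fraction of 3/4 are assumptions.</t>

<figure><artwork><![CDATA[
# Sketch of the congestion response. Only the 1/4 cap and the
# loss-signal value come from the text; the delay and ECN scaling
# and the recovery fraction are illustrative assumptions.

def beta_for_signal(signal, measured_rtt=0.0, target_rtt=0.0,
                    margin=1.0, ce_ratio=0.0):
    """Rate reduction factor for a congestion signal, capped to 1/4."""
    cap = 1.0 / 4.0
    if signal == "loss":
        return cap
    if signal == "delay":
        # proportional to the RTT excess over the acceptable margin
        excess = max(0.0, (measured_rtt - target_rtt) / margin)
        return min(cap, excess * cap)
    if signal == "ecn":
        # proportional to the observed CE mark ratio (assumed scaling)
        return min(cap, ce_ratio * cap)
    return 0.0

def enter_recovery(nominal_rate, nominal_max_rtt, beta, fraction=0.75):
    """Reduce the nominal rate, then derive the CWND used during
    recovery as a fraction of nominal_rate * nominal_max_rtt."""
    nominal_rate = (1.0 - beta) * nominal_rate
    recovery_cwnd = fraction * nominal_rate * nominal_max_rtt
    return nominal_rate, recovery_cwnd
]]></artwork></figure>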

<t>Network conditions may change for the better or for the worse. Worsening
is detected through congestion signals, but increases in capacity can only
be detected by trying to send more data and checking whether the network accepts it.
Different algorithms have approached this in two ways: pursuing regular increases of
CWND until congestion finally occurs, like the "congestion
avoidance" phase of TCP RENO; or periodically probing the network
by sending at a higher rate, like the Probe Bandwidth mechanism of
BBR. C4 adopts the periodic probing approach, in particular
because it is a better fit for variable rate multimedia applications
(see details in <xref target="limited"/>).</t>

<section anchor="no-pto"><name>Do not react to Probe Time Out</name>

<t>QUIC normally detects losses by observing gaps in the sequence of acknowledged
packets. That is a robust signal. QUIC will also inject "probe time out"
packets if the PTO timer elapses before the last sent packet has been acknowledged.
This is not a robust congestion signal, because delay jitter may also cause
PTO timeouts. When testing in "high jitter" conditions, we realized that we should
not change the state of C4 for losses detected solely based on timers, and
only react to those losses that are detected by gaps in acknowledgements.</t>

</section>
<section anchor="rate-update"><name>Update the Nominal Rate after Pushing</name>

<t>C4 configures the transport with a larger rate and CWND
than the nominal values during "pushing" periods.
The peer will acknowledge the data sent during these periods in
the round trip that follows.</t>

<t>When we receive an ACK for a newly acknowledged packet,
we update the nominal rate as explained in <xref target="monitor-rate"/>.</t>

<t>This strategy is effectively a form of "make before break".
The pushing
only increases the rate by a fraction of the nominal values,
and only lasts for one round trip. That limited increase is not
expected to increase the size of queues by more than a small
fraction of the bandwidth*delay product. It might cause a
slight increase of the measured RTT for a short period, or
perhaps cause some ECN signaling, but it should not cause packet
losses -- unless competing connections have caused large queues.
If there was no extra
capacity available, C4 does not increase the nominal CWND and
the connection continues with the previous value.</t>

</section>
</section>
<section anchor="fairness"><name>Driving for fairness</name>

<t>Many protocols enforce fairness by tuning their behavior so
that large flows become less aggressive than smaller ones, either
by trying less hard to increase their bandwidth or by reacting
more to congestion events. We considered adopting a similar
strategy for C4.</t>

<t>The aggressiveness of C4 is driven by several considerations:</t>

<t><list style="symbols">
  <t>the frequency of the "pushing" periods,</t>
  <t>the coefficient <spanx style="verb">alpha</spanx> used during pushing,</t>
  <t>the coefficient <spanx style="verb">beta</spanx> used during response to congestion events,</t>
  <t>the delay threshold above a nominal value to detect congestion,</t>
  <t>the ratio of packet losses considered excessive,</t>
  <t>the ratio of ECN marks considered excessive.</t>
</list></t>

<t>We clearly want to have some or all of these parameters depend
on how much resource the flow is using.
There are known limits to these strategies. For example,
consider TCP Reno, in which the growth rate of CWND during the
"congestion avoidance" phase is inversely proportional to its size.
This drives very good long term fairness, but in practice
it prevents TCP Reno from operating well on high speed or
high delay connections, as discussed in the "problem description"
section of <xref target="RFC3649"/>. In that RFC, Sally Floyd proposed
using a growth rate inversely proportional to the
logarithm of the CWND, which would not be so drastic.</t>

<t>In the initial design, we proposed making the frequency of the
pushing periods inversely proportional to the logarithm of the
CWND, but that gets in tension with our estimation of
the max RTT, which requires frequent "recovery" periods.
We would not want the Max RTT estimate to work less well for
high speed connections! We solved the tension in favor of
reliable max RTT estimates, and fixed to 4 the number
of Cruising periods between Recovery and Pushing. The whole
cycle takes about 6 RTT.</t>

<t>We also reduced the default rate increase during Pushing to
6.25%, which means that the default cycle is more or less on
par with the aggressiveness of RENO when
operating at low bandwidth (lower than 34 Mbps).</t>

<section anchor="absence-of-constraints-is-unfair"><name>Absence of constraints is unfair</name>

<t>Once we fixed the push frequency and the default increase rate, we were
left with responses that were mostly proportional to the amount
of resource used by a connection. Such a design makes resource sharing
very dependent on initial conditions. We saw simulations where,
after some initial period, one of two competing connections on
a 20 Mbps path might settle at a 15 Mbps rate and the other at 5 Mbps.
Both connections would react to a congestion event by dropping
their bandwidth by 25%, to 11.25 or 3.75 Mbps. And then, once the condition
eased, both would increase their data rate by the same amount. If
everything went well, the two connections would share the bandwidth
without exceeding it, and the situation would be very stable --
but also very much unfair.</t>

<t>We also had some simulations in which a first connection would
grab all the available bandwidth, and a latecomer connection
would struggle to get any bandwidth at all. The analysis
showed that the second connection was
exiting the initial phase early, after encountering either
excess delay or excess packet loss. The first
connection was saturating the path, any additional traffic
caused queuing or losses, and the second connection had
no chance to grow.</t>

<t>This "second comer shut down" effect happened particularly often
on high jitter links. The established connections had tuned their
timers or congestion window to account for the high jitter. The
second connection was basing its timers on its first
measurements, taken before any of the big jitter events had occurred.
This caused an imbalance between the first connection, which
expected large RTT variations, and the second, which did not
expect them yet.</t>

<t>These shutdown effects happened in simulations with the first
connection using either Cubic, BBR, or C4. We had to design a response,
and we first turned to making the response to excess delay or
packet loss a function of the data rate of the flow.</t>

</section>
<section anchor="sensitivity-curve"><name>Introducing a sensitivity curve</name>

<t>In our second design, we attempted to fix the unfairness and
shutdowns effect by introducing a sensitivity curve,
computing a "sensitivity" as a function of the flow data
rate. Our first implementation is simple:</t>

<t><list style="symbols">
  <t>set sensitivity to 0 if the data rate is lower than 50,000 B/s,</t>
  <t>interpolate linearly between 0 and 0.92 for rates
between 50,000 and 1,000,000 B/s,</t>
  <t>interpolate linearly between 0.92 and 1 for rates
between 1,000,000 and 10,000,000 B/s,</t>
  <t>set sensitivity to 1 if the data rate is higher than
10,000,000 B/s.</t>
</list></t>
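<t>The curve above can be written as a short piece of code. This is a
direct transcription of the four rules; the function name is ours.</t>

<figure><artwork><![CDATA[
def sensitivity(data_rate):
    """Piecewise linear sensitivity as a function of data rate in B/s."""
    if data_rate <= 50000:
        return 0.0
    if data_rate <= 1000000:
        # linear interpolation between 0 and 0.92
        return 0.92 * (data_rate - 50000) / (1000000 - 50000)
    if data_rate <= 10000000:
        # linear interpolation between 0.92 and 1
        return 0.92 + 0.08 * (data_rate - 1000000) / (10000000 - 1000000)
    return 1.0
]]></artwork></figure>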

<t>The sensitivity index is then used to set the value of delay and
loss thresholds. For the delay threshold, the rule is:</t>

<figure><artwork><![CDATA[
    delay_fraction = 1/16 + (1 - sensitivity)*3/16
    delay_threshold = min(25ms, delay_fraction*nominal_max_rtt)
]]></artwork></figure>

<t>For the loss threshold, the rule is:</t>

<figure><artwork><![CDATA[
loss_threshold = 0.02 + 0.50 * (1-sensitivity);
]]></artwork></figure>

<t>For the CE mark threshold, the rule is:</t>

<figure><artwork><![CDATA[
ce_mark_threshold = 1/32 + 1/32 * (1-sensitivity);
]]></artwork></figure>

<t>This very simple change allowed us to stabilize the results. In our
competition tests, we see resources shared almost equitably between
C4 connections, and reasonably between C4 and Cubic or C4 and BBR.
We do not observe the shutdown effects that we saw before.</t>

<t>There is no doubt that the current curve will have to be refined. We have
a couple of tests in our test suite with total capacity higher than
20 Mbps, and for those tests the dependency on initial conditions remains.
We will revisit the definition of the curve, probably to have the sensitivity
follow the logarithm of the data rate.</t>

</section>
<section anchor="cascade"><name>Cascade of Increases</name>

<t>We sometimes encounter networks in which the available bandwidth changes rapidly.
For example, when a competing connection stops, the available capacity may double.
With low Earth orbit satellite constellations (LEO), it appears
that ground stations constantly check availability of nearby satellites, and
switch to a different satellite every 10 or 15 seconds depending on the
constellation (see <xref target="ICCRG-LEO"/>), with the bandwidth jumping from 10Mbps to
65Mbps.</t>

<t>Because we aim for fairness with RENO or Cubic, the cycle of recovery, cruising
and pushing will only result in slow increases, maybe 6.25% after 6 RTT.
This means we would only double the bandwidth after about 68 RTT, or increase
from 10 to 65 Mbps after 185 RTT -- by which time the LEO station might
have connected to a different orbiting satellite. To go faster, we implement
a "cascade": if the previous push at 6.25% was successful, the next
pushing will use 25% (see <xref target="variable-pushing"/>), or an intermediate
value if the observed ratio of ECN marks is greater than 0. If three successive pushes
all result in increases of the
nominal rate, C4 will reenter the "startup" mode, during which each RTT
can result in a 100% increase of rate and CWND.</t>
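<t>The cascade decision can be sketched as follows; keeping the push
outcomes as a list of booleans is an illustrative assumption.</t>

<figure><artwork><![CDATA[
def state_after_recovery(push_successes):
    """After recovery ends, reenter startup if the last three pushes
    all increased the nominal rate; otherwise resume cruising."""
    if len(push_successes) >= 3 and all(push_successes[-3:]):
        return "startup"
    return "cruising"
]]></artwork></figure>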

</section>
</section>
<section anchor="limited"><name>Supporting Application Limited Connections</name>

<t>C4 is specially designed to support multimedia applications,
which very often operate in application limited mode.
After testing and simulating such applications,
we incorporated a number of features.</t>

<t>The first feature is the design decision to only lower the nominal
rate if congestion is detected. This is in contrast with the BBR design,
in which the estimate of bottleneck bandwidth is also lowered
if the bandwidth measured after a "probe bandwidth" attempt is
lower than the current estimate while the connection was not
"application limited". We found that detection of the application
limited state was somewhat error prone. Occasional errors end up
with a spurious reduction of the estimate of the bottleneck bandwidth.
These errors can accumulate over time, causing the bandwidth
estimate to "drift down", and the multimedia experience to suffer.
Our strategy of only reducing the nominal values in
reaction to congestion notifications greatly reduces that risk.</t>

<t>The second feature is the "make before break" nature of the rate
updates discussed in <xref target="rate-update"/>. This reduces the risk
of using rates that are too large and would cause queues or losses,
and thus makes C4 a good choice for multimedia applications.</t>

<t>C4 adds two more features to handle multimedia
applications well: coordinated pushing (see <xref target="coordinated-pushing"/>),
and variable pushing rate (see <xref target="variable-pushing"/>).</t>

<section anchor="coordinated-pushing"><name>Coordinated Pushing</name>

<t>As stated in <xref target="fairness"/>, the connection will remain in the "cruising"
state for a specified interval, and then move to "pushing". This works well
when the connection is almost saturating the network path, but not so
well for a media application that uses little bandwidth most of the
time, and only needs more bandwidth when it is refreshing the state
of the media encoders and sending new "reference" frames. In that case,
pushing will only be effective if the pushing interval
coincides with the sending of these reference frames. If pushing
happens during an application limited period, there will be no data to
push with and thus no chance of increasing the nominal rate and CWND.
If the reference frames are sent outside of a pushing interval, the
rate and CWND will be kept at the nominal value.</t>

<t>To address that issue, one could imagine sending "filler" traffic during
the pushing periods. We tried that in simulations, and the drawback became
obvious. The filler traffic would sometimes cause queues and packet
losses, which degrade the quality of the multimedia experience.
We could reduce this risk of packet losses by sending redundant traffic,
for example creating the additional traffic using a forward error
correction (FEC) algorithm, so that individual packet losses are
immediately corrected. However, this is complicated, and FEC does
not always protect against long batches of losses.</t>

<t>C4 uses a simpler solution. When the time has come to enter pushing, it
checks whether the connection is "application limited", which is
simply defined as testing whether the application sent less than a "nominal CWND"
worth of data during the previous interval. If it is, C4 will remain
in the cruising state until the application finally sends more data, and
will only enter the pushing state when the last period was
not application limited.</t>
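<t>The check can be sketched as follows; the function and parameter
names are ours, and the 4 RTT cruising minimum comes from the cycle
described in <xref target="fairness"/>.</t>

<figure><artwork><![CDATA[
def may_enter_pushing(bytes_sent_last_period, nominal_cwnd, cruise_rtts,
                      min_cruise_rtts=4):
    """Enter pushing only after cruising long enough, and only if the
    application sent a nominal CWND worth of data last period."""
    app_limited = bytes_sent_last_period < nominal_cwnd
    return cruise_rtts >= min_cruise_rtts and not app_limited
]]></artwork></figure>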

</section>
<section anchor="variable-pushing"><name>Variable Pushing Rate</name>

<t>C4 tests for available bandwidth at regular pushing intervals
(see <xref target="fairness"/>), during which the rate and CWND are set 25% higher
than the nominal values. This mimics what BBR
does, but may be less than ideal for real time applications.
When in the pushing state, the application is allowed to send
more data than the nominal CWND, which causes temporary queues
and degrades the experience somewhat. On the other hand, not pushing
at all would not be a good option, because the connection could
end up stuck using only a fraction of the available
capacity. We thus have to find a compromise between operating at
low capacity and risking building queues.</t>

<t>We manage that compromise by adopting a variable pushing rate:</t>

<t><list style="symbols">
  <t>If pushing at 25% did not result in a significant increase of
the nominal rate, the next pushing will happen at 6.25%.</t>
  <t>If pushing at 6.25% did result in some increase of the nominal CWND,
the next pushing will happen at 25%; otherwise it will
remain at 6.25%.</t>
</list></t>

<t>If the observed ratio of ECN-CE marks is greater than zero, we will
use it to modulate the amount of pushing. We leave the pushing rate
at 6.25% if the previous pushing attempt was not successful, but
otherwise we pick a value intermediate between 25% (if 0 ECN marks)
and 6.25% (if the ratio of ECN marks approaches the threshold).</t>
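<t>The selection of the next push fraction can be sketched as below;
the linear blend between 25% and 6.25% as the CE mark ratio
approaches the threshold is an assumed interpolation.</t>

<figure><artwork><![CDATA[
def next_push_fraction(prev_success, ce_ratio=0.0, ce_threshold=1.0/16.0):
    """Pick the push fraction for the next pushing period."""
    if not prev_success:
        return 0.0625
    # successful push: use 25%, reduced toward 6.25% as the ratio
    # of CE marks approaches the threshold (assumed linear blend)
    blend = min(1.0, ce_ratio / ce_threshold)
    return 0.25 - (0.25 - 0.0625) * blend
]]></artwork></figure>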

<t>As explained in <xref target="cascade"/>, if three consecutive pushing attempts
result in significant increases, C4 detects that the underlying network
conditions have changed, and will reenter the startup state.</t>

<t>The "significant increase" mentioned above is a matter of debate.
Even if capacity is available,
increasing the send rate by 25% does not always result in a 25%
increase of the acknowledged rate. Delay jitter, for example,
may result in lower measurements. We initially computed the threshold
for detecting a "significant" increase as 1/2 of the increase in
the sending rate, but multiple simulations showed that this was too high
and caused lower performance. We now set that threshold to 1/4 of the
increase in the sending rate.</t>
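<t>The resulting success test can be sketched as follows; the function
and parameter names are ours.</t>

<figure><artwork><![CDATA[
def push_was_significant(nominal_rate, push_fraction, observed_rate):
    """A push succeeds if the acknowledged rate grew by at least
    one quarter of the attempted rate increase."""
    threshold = nominal_rate * push_fraction / 4.0
    return observed_rate - nominal_rate >= threshold
]]></artwork></figure>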

</section>
<section anchor="pushing-rate-and-cascades"><name>Pushing rate and Cascades</name>

<t>The choice of a 25% push rate was motivated by discussions of
the BBR design. Pushing has two parallel functions: discovering the available
capacity, if any; and pushing back against other connections
in case of competition. Consider for example competition with Cubic.
The Cubic connection will only back off if it observes packet losses,
which typically happen when the bottleneck buffers are full. Pushing
at a high rate increases the chance of building queues,
overfilling the buffers, causing losses, and thus causing Cubic to back off.
Pushing at a lower rate like 6.25% would not have that effect, and C4
would keep using a lower share of the network. This is why we always
push at 25% in the "pig war" mode.</t>

<t>The computation of the interval between pushes is tied to the need to
compete nicely, and follows the general idea that
the average growth rate should mimic that of RENO or Cubic in the
same circumstances. If we picked a lower push rate, such as 6.25% or
maybe 12.5%, we might be able to use shorter intervals. This could be
a nice compromise: in normal operation, push frequently, but at a
low rate. This would not create large queues or disturb competing
connections, but it would let C4 discover capacity more quickly. Then,
we could use the "cascade" algorithm to push at a higher rate,
and then maybe switch to startup mode if a lot of capacity is
available. This is something that we intend to test, but have not
implemented yet.</t>

</section>
</section>
<section anchor="state-machine"><name>State Machine</name>

<t>The state machine for C4 has the following states:</t>

<t><list style="symbols">
  <t>"startup": the initial state, during which the CWND is
set to twice the "nominal_CWND". The connection
exits startup if the "nominal_cwnd" does not
increase for 3 consecutive round trips. When the
connection exits startup, it enters "recovery".</t>
  <t>"recovery": the connection enters that state after
"startup", "pushing", or a congestion detection in
the "cruising" state. It remains in that state for
at least one round trip, until the first packet sent
in "recovery" is acknowledged. Once that happens,
the connection goes back
to "startup" if the last 3 pushing attempts have resulted
in increases of the "nominal rate", or enters "cruising"
otherwise.</t>
  <t>"cruising": the connection is sending using the
"nominal_rate" and "nominal_max_rtt" values. If congestion is detected,
the connection exits cruising and enters
"recovery" after lowering the value of
"nominal_cwnd".
Otherwise, the connection will
remain in the "cruising" state for at least 4 RTT, and until
the connection is not "app limited". At that
point, it enters "pushing".</t>
  <t>"pushing": the connection uses a rate and CWND 25%
larger than "nominal_rate" and "nominal_CWND".
It remains in that state
for one round trip, i.e., until the first packet
sent while "pushing" is acknowledged. At that point,
it enters the "recovery" state.</t>
</list></t>

<t>These transitions are summarized in the following state
diagram.</t>

<figure><artwork><![CDATA[
                    Start
                      |
                      v
                      +<-----------------------+
                      |                        |
                      v                        |
                 +----------+                  |
                 | Startup  |                  |
                 +----|-----+                  |
                      |                        |
                      v                        |
                 +------------+                |
  +--+---------->|  Recovery  |                |
  ^  ^           +----|---|---+                |
  |  |                |   |     Rapid Increase |
  |  |                |   +------------------->+
  |  |                |
  |  |                v
  |  |           +----------+
  |  |           | Cruising |
  |  |           +-|--|-----+
  |  | Congestion  |  |
  |  +-------------+  |
  |                   |
  |                   v
  |              +----------+
  |              | Pushing  |
  |              +----|-----+
  |                   |
  +<------------------+

]]></artwork></figure>
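<t>The diagram can equivalently be expressed as a transition table. The
event names below are ours; the transitions follow the diagram above.</t>

<figure><artwork><![CDATA[
# Illustrative transition table for the C4 state machine.
TRANSITIONS = {
    ("startup", "no_growth_3_rtt"): "recovery",
    ("recovery", "first_packet_acked"): "cruising",
    ("recovery", "rapid_increase"): "startup",
    ("cruising", "congestion"): "recovery",
    ("cruising", "cruise_done"): "pushing",
    ("pushing", "first_packet_acked"): "recovery",
}

def step(state, event):
    """Return the next state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
]]></artwork></figure>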

</section>
<section anchor="security-considerations"><name>Security Considerations</name>

<t>We do not believe that C4 introduces new security issues. There may
be some, such as what happens if applications can be fooled into going
too fast and overwhelming the network, or into going too slow and
underserving the application. Discuss!</t>

</section>
<section anchor="iana-considerations"><name>IANA Considerations</name>

<t>This document has no IANA actions.</t>

</section>


  </middle>

  <back>




    <references title='Informative References' anchor="sec-informative-references">

&RFC9000;
&I-D.ietf-moq-transport;
&RFC9438;
&I-D.ietf-ccwg-bbr;
&RFC6817;
&RFC6582;
&RFC3649;
<reference anchor="TCP-Vegas" target="https://ieeexplore.ieee.org/document/464716">
  <front>
    <title>TCP Vegas: end to end congestion avoidance on a global Internet</title>
    <author initials="L. S." surname="Brakmo">
      <organization></organization>
    </author>
    <author initials="L. L." surname="Peterson">
      <organization></organization>
    </author>
    <date year="1995" month="October"/>
  </front>
  <seriesInfo name="IEEE Journal on Selected Areas in Communications ( Volume: 13, Issue: 8, October 1995)" value=""/>
</reference>
<reference anchor="HyStart" target="https://doi.org/10.1016/j.comnet.2011.01.014">
  <front>
    <title>Taming the elephants: New TCP slow start</title>
    <author initials="S." surname="Ha">
      <organization></organization>
    </author>
    <author initials="I." surname="Rhee">
      <organization></organization>
    </author>
    <date year="2011" month="June"/>
  </front>
  <seriesInfo name="Computer Networks vol. 55, no. 9, pp. 2092-2110" value=""/>
</reference>
<reference anchor="Cubic-QUIC-Blog" target="https://www.privateoctopus.com/2019/11/11/implementing-cubic-congestion-control-in-quic/">
  <front>
    <title>Implementing Cubic congestion control in Quic</title>
    <author initials="C." surname="Huitema">
      <organization></organization>
    </author>
    <date year="2019" month="November"/>
  </front>
  <seriesInfo name="Christian Huitema's blog" value=""/>
</reference>
<reference anchor="Wi-Fi-Suspension-Blog" target="https://www.privateoctopus.com/2023/05/18/the-weird-case-of-wifi-latency-spikes.html">
  <front>
    <title>The weird case of the wifi latency spikes</title>
    <author initials="C." surname="Huitema">
      <organization></organization>
    </author>
    <date year="2023" month="May"/>
  </front>
  <seriesInfo name="Christian Huitema's blog" value=""/>
</reference>
&I-D.irtf-iccrg-ledbat-plus-plus;
&RFC9330;
&RFC9331;
&I-D.briscoe-iccrg-prague-congestion-control;
<reference anchor="ICCRG-LEO" target="https://datatracker.ietf.org/meeting/122/materials/slides-122-iccrg-mind-the-misleading-effects-of-leo-mobility-on-end-to-end-congestion-control-00">
  <front>
    <title>Mind the Misleading Effects of LEO Mobility on End-to-End Congestion Control</title>
    <author initials="Z." surname="Lai">
      <organization></organization>
    </author>
    <author initials="Z." surname="Li">
      <organization></organization>
    </author>
    <author initials="Q." surname="Wu">
      <organization></organization>
    </author>
    <author initials="H." surname="Li">
      <organization></organization>
    </author>
    <author initials="Q." surname="Zhang">
      <organization></organization>
    </author>
    <date year="2025" month="March"/>
  </front>
  <seriesInfo name="Slides presented at ICCRG meeting during IETF 122" value=""/>
</reference>


    </references>




<section numbered="false" anchor="acknowledgments"><name>Acknowledgments</name>

<t>TODO acknowledge.</t>

</section>


  </back>

<!-- ##markdown-source:
H4sIAAAAAAAAA7V9eZMbV3Ln/+9T1IIxYVIC0AdJHRzLYbJFeegVSZmkzQ3H
xrILQAFdIlCFqaMhDEV/9s38ZeY7CtWcmQ0vY0bdDVS9I1/e15vNZq4ru23x
JJv8WLTlpsrqdXZ105RtV+bVP7TZVV1tCvqjrvjXrqm39HNVZPevHj2YuHyx
aIpbevnqUSbvT9wy74pN3RyfZGW1rp1b1csq39EMqyZfd7ObvuyKXT5bLg+b
2fLRbIXXZueXru0Xu7JtaaruuKfnXzx/95Or+t2iaJ64FY36xC3rqi2qtm+f
ZF3TF67cN/it7S7Pz7+nMfKmyGk574tFller7EXVFU1VdNm7Jq/afd10E/ex
OB7qZvXEZbPs6hH+e7JH/vRNkW+7clfQZ7tdX5W0L3qCv3lZrMo8q2+LJvu3
f39x5Vzedzd1wyO6LKNt0/Ku5tmfZKf8kQDAwzX+qm42T7JfmvKWNpi9Xnb1
vm9p3cs5f0nPlNsnmcLsn/XnnHYUz/V2nr2i3eYf+13ehOne9jd5O/iGZsur
8i/YCi2obJd1NE9bycP/vOQv5st6N9jSvxZVVVabNtpTv90WVfLFl+dYb/v1
+vjPZVnOl7lzVd3s6MlbOlxGF/8HvfDmp6vvz8/P8fuL2Y/zsujWs13951ln
h+kfe/Twu/QxYBchpz3xzXcX3/rfH393ab8//ObR908c//Hu6pfZfxSbvMVX
WZc3m6Ij0Hfdvn1ydlYWRfHbfls3xZx/ndMmzwiz+11RdWePvnn07cU38p5S
Ew2XyXBZQYjY1fixDJiW39blKq+WRcZ/ZJttvci3HmEnGAxIn00eXgAxiA6y
i++/fyzftUVTEukQzOiJF8+fP8/+te6bisag8d4W22LZFavsKdFDS8eXInGb
3c/+o972fIAXD6fZi7bt6dfvpsk8D2Qij9z8byY/FCN+nhPqPWvyj7t6/Ouf
59kvBe2oJcLhr/50fEuA7XSwIYxXdQm4XpzPL84vvjn7lTGQgDG/PL+4mJ/z
/x7pmwblfEdYl3U3RUY73t/kVUfzvioOfJxZu60PWcsTyk4Mnv/aV0XGY+rH
MSgJTvueVkyDdMQmPrbZbb2dZ48fT7OqnmffT7P9fk4vf385u7y4OJcBEhAR
gzAoePL8U27fpV+9mGdvbooCsLnqF+Vyxgxl9mxbb8bR8HA4zPfCLGrhFQyj
M9rM92cXF/y/crffFoyVBJjZEmMGrONfmb/Nymr2575cniU4+yJ6VZYTI6y+
ysj0b/RqgqGv6tvsgtCH1zGCnieMjwTLgvZ4imAGnyEP5Y/fl7Ofytnbvt2T
CODN/D+A6fLh2fnjs4vvzghlZoeibFazZd4Ws3o9O5TrcralN6rlcdbuy49F
O7/pdtuUrAnT8FrGr7GwZNzjVzN9NZNXE/C8zI8KnsuH/9/AA97XEO8rl8tm
M9sWq0XezfbbvsV/hMsxs3z48PxJ+P3CM85Fw6y60Pf3Tb7pixHckeevrt78
y+zn56/H4U/7zolLLz8WDfgxyHpXFIxZZxeXl2fE5gkC+bY9a7clqQAz+lAn
Jopezfh4SBnYFvmK8bhYr4mftXxM26ImIbAot2V3nNGqCn66xo8RPD8/T47v
Zcm8mE7spR87ey5j81nSfrKXOjaz0ecyNv0Y0REGJ9wsb7KLb3HGYxz6LbaZ
7ZuCFBjmzHknQMwULNmqb/gHaz0ZQeOvHP5/zrOf83L885GP/22eve9PP/7T
nU//J/HSjXOz2SzLFy2fZefc36IYli1Js4o48CnrcPmWFMOyu9llovUVEIxt
v2dhDoVr9o41rny/33pR1fYE2bx1A61rnr3oeLJ4pBWRfEG/HPJm1WbM/VfF
Nj+2U3egWbNNXa/8bKRrABMm0WTZttwRQa0m2aK4yW/LunHrpvhzTye2PdIb
PSHC4YbUnb7lo7rNCYUX2yJrWHcj4q8ZoaasecqE67ztMpLASwxOC4xgQn9B
BZBFELNiyBxJCaINMnebZDe0soK1KplzhBUzMPgjpqbW8UD5LSlZWNMy3+dL
GhFw2hMQiDEVSyI5wryOebysgV5yN+XmJvu17Fjs0dCrUgDPG+pZHyHgEtMH
+6WTFbE4d8wKBfik2pHobcu/EIK3LH9KnhhgwBbbbJd/hJyua/q1oq9I4djt
RQ9a1H0nQNgR/mwnRIjYis40z3gi0h3Dtg3uhGlNIcJfgeCUITP/kVNhK0A/
3OW/lbt+R8CgowFi8LYYAIThBTSxdUaH3RPjF8zflavVlqTzPdbLmnrV4yD/
NjpgA+m/hRh2PcwQRv6YLqZynmv6e7s9ujueUkxlgsk+fVKN+vNngAV8kN9w
nqgyr1rT0+M69+fPOJE2pVLHZFLSaUREp7AnZMuK327KRdkRMDzNREfU8lKO
BBDWffkTIOSCXj+UKzorHZs1k1ZogY4LvLsp1oSefHTrhsyR1vGUtATajrz+
xbdX5Rqvd0wVMsDcvS8ECCzOaV3EpsuKzuWhoOCmJplFonSW/XgHr8nut0VB
0APZs/SQjz9/fjClt97q8X6Z69gY+oW++4ZHzFht+0jciLkJ82jeiJJKTLw6
QsA7GoSwOiJaNvQfMYYSy9+XTOQLEnp9Q4e1Z8nFUBXs8XByIzzIo3ELlspY
NCWrrqNDPqoSKYhHNtrnz1MyAaCR04dqDPCHfGzPnr2Jkc4sOPraEYYciu1W
MYVU+n51ZFIHaGc1LxYiNSxFhUbmzTAa2lt4nz8DT35+/uOzp+9kdWwfMma/
IIa0EhhOac7skGNkAvbHothjdj+LA68TFkNWFo7Ea98Ea4Iu+FzRCh7xy4G+
CArLjwR+x4yaCYL2w+eSb4VT4vHYXgybK26By7xHBg2NU9UH0vc2guTT7M27
d6RW5G3f2CfPr17p6KToiOBbkWEm/Ixm3rOu1hEaty0RAYsMerI+Xf/Ygpyi
Qju6Q0Kqtug6M9OiAQ6kjNUHd//q/asfH3iOxFKLKZvIT2AIPdIOnolS2bkh
PU3AZ8Mv5Vm7I27IE7IU8XJiqvyfCLiuaDuLgp+GlJlUNSmcRP884UQe9J+R
xGBYTqbOVrfsGyaEdCUeEiKrCLAkhFtWIIV+WBBiFwYEGh1nxOJrRUp3T1CH
jB1hG0q08TKZaAegIrjSwgjuwtkUfXY19I8lk66JSFt0YK9BZwhYVwlqzJO5
FRw2/YKGzIbDmpCl5xywZklmTb0kuDE84nMT2Zudyt7sPTPpQ+FWNU1NJ0lP
NLeFimzRU/igaBnLuqyWUKzB6iPYzt0LGV80bpnA3iVJ1N7U/ZZYnugPhOW6
AefPRk/cNiTvzplnHkrCsaU4Cgjn5BhcJMvS04JoJm4vHDanN8mcIRWJ8YhB
5FYF6Xmr1gCU4Na+qbt6WW+FRzJi86dMMsxunaABb722x1lLkbfzLkERmlw3
JABKZUEHmb4rWKiULXEZYk0D3IyFyZy1orfMiA2pYz1XReGne0N0FmQ2KhpK
IhmmJXjxZ7laUPy7yCQws7l7TchUViXEtg5xYMCTclnTx8TfdH4GGVZA3xGU
t6LyLdknVsArtmMxAZiT8KdBWDP9K0JuS9Z9JFZoMpEjUAps/phVJMgkyEVP
luuTh2gNLE6K35ZFkTzOJ7eroerm7CsESndksO3I7C5JUHW0I8ElRsA8ZrLC
9JWJTORNfW/CIOc/afhVsc4Z7ATDi/NsB+CETQZpyVhDszC48xUpVoxX0KkE
fSI4kQ5WeM12mi1IyS+FGywK+PGJbFnp4O22LetSNCUUP9JrC2hx3qUSjI6f
FLh2/IQ3MgekNW+m7IwWWA9sgUkXZ98RiRiHigj88vGuVdjYx2XXFts1g0Yl
6EpsTtauPX8X7sgjRqAmMDJTyNmMd4qBjJM6MDthFPKZeBZZp5kCAPsO7PBQ
RyocEErRhLYZmHIGxrfbi9sAjI+UfuKUN2K/2WPgLVm2qgnXmY8aJSgjhXQ0
U0LVpTx+X4QXVLhpWIqxb1Xp+Z1tDSjbaRmXpfHEhGXORgtpa6KxoRXJ+jZU
DdZhVN8mRYhUnCbfnq5GNL5F3ibqnhNwFFMVX4CNgaqu2HbPy0Z3QDRO7JsU
7cJT/olwNCGS0eqI5lZbNQXoiFWkEy0RQyKDfyluZtJkqonz7JPZ473sZV7l
UNCvoiXhwH4mRSt7hm08DTTz6d4t09us7Zp+s9kWxC7jNxcEtYJoTbRqxt9c
z3IAkIx9W0A+PApy6+Ep560PpL+ypnigCOysrN/Sm0z4BMw90TodY0XfkPhZ
xpyQZgOoFzzWflsfRWwoCyX11BXVbdnUlaqkTAY3JeEdUazHUTmWWs+wALAc
z/GG5lRV/fF3l2TBEiPA5uYZcEhOaWVqQYwBbV/EB6dae6DioL65F2vhKJ1E
T5bDY8uhIQ/BLQoFPbC8oV2yY4ewoyTSzJtJxk4Nkd1dTcSgAkmVNFhwIoqU
YcxmPNiBlRNHhllHBylg9dSbqOoYOVLuYcEAVXneqTuwDcdrVfYQUAR8gPfp
+XeyXQ4dLllOrkT5le3RzkmlV/4qGCNaJxg4T0mITyuoEwnNyKCHfijugC6v
IULBWAjT0uHmEMM2oDbrQJDw7PghPi3eGVihQVo+lJjxsuewotA08/DIAVW2
nu0MgQCYq2sRx+NipmnnJEgbPRuAE5SNRUHrKtxDWcKfSDTdihZLMytGdiyt
ljnOC16yyZpOlHS6ui159RPncVYZZ1Osy8qQHq+zBwnmNZRBwUFHOyO4qxq9
yiBLPEhijgo10UOy9VaPqbMAOct+sqlage9NvvWKf/Kc8/MR3S2F127ZHzpR
xJhk7DQknl2DM0crdAQTOmZ69nG0mmx0NWW8mEdnjyHn3diCBGS3ZF+zzz4G
Wiy4YQM0fMTZNb919vLp//rAv1wDsSZ4f0JQW5atoMFBrBUYc3SSPYlBVgDy
BJNassFo/FrU+Em77+mPvj0diKefROibnFUAxkRWvi4bxi3SrWQB7HiJFpEx
FkXLIJXiDbYmOyMA/S4AoZ+2JHZ8xv9+H/ykf+4pS+dNQW89PP8D/ffbx39w
Mtqj7/nvi/PzP7h39Z70K/7z4bf83+8v/qA+fPoEn3/38A/uWd119U4fvMRo
jy/pQeKINM4lPqXR//HxuQ0szzn3qjYHHU5SDxbMJDcWDQVYoAEQCwrlABep
ogQtt9zm5U4+ZK4VPZ6RhOfQEJjvmgPqZudsWYHbrqAesDFohpKYSeICXx9J
SN1Ap2Ef204H8zyM+DJTK1G7SgHH09fQe8ysIZSNbZ3iN9agI6wK3Jy3LGwk
WKh4HPiUCiNldMHsD8qA51eiv/yJhSr0l5ucmPAy+1G4oHt7qsp5H6/EPxkU
qhCav1i/YDQlM3UBB+FBXATgeYFtaiwiNj/dp0+DADm7697H/hN6fwejrm6C
EwZvjkaO9X0mW1ANqfohGBE0OXbKatTQ72FqJAfOz/JpWay8k+zo2O9CmxPt
13zOotHmq19z9scMocfLIFJflxs2OZxihjdNNQy1LFXk8+Kqwst7Xpq9rbjO
op+tZsXam3K1KliWruAkoGOHCqDWNkEN2vmSFDVhR+IekEXSkdBpXNLDcO5p
klasS+R8sEveg3kSnOgnAimCQ95IEK0XpYWXVlfqeodSUa/XkUuR5RgpOsVw
TtWSWK/U+HtT7Itc1ImwdniQRr4RT5SYCcMV8AL8qTs2VIGXtPtmo4jZivKQ
QiUiQqY6NmZc18AbzB7HLhfljvg3fVg3K5hGrFH1JO4QW1SVrhVeQvozHGGr
CPNM6esG5BLv+VTv8naxV7lA9yt2H8PdzFbfZKm0HZPexGRir7Ks6ZHfNebu
84aSt5ejL4JMJyiQhnSTNYiidk25n8JkWABAcLsSgXiXK0bsJO4JRxtpBQnW
EC7lIoZWpw5bEnT/9V//5fTjD/Txh6brsh+y+99mX2XDj7/OaBz760F2ln33
R7zO5kCscNiemNkGEd0dSlVv1Gr0rgjwXgOvAdYb+OrnlCO1D4WRF52MIRaD
95bgAOAu0cOCa4YOfGWRpMQhIYr7weTLCloR+7gOhQeqAN68UKwWTMRgsPB4
ZL6xn5H5jRolaxE4wb0vnpdJrHNNIreMGEagpoaldV/oKUnyyIflgVb7A9EP
qYVf2RHhw6990pT8u89HtTiSBvSByWKVzbL4eT7BSzlBMMcSkSkW69PsGsNf
m7Mxh6Q26MWO2cSf7swxHnv/g9eEYVX8Jt67Cxavk2XTl2xuTli5vZhfPmYd
fLLv2xt8OBacE6ZwPdiaXyng1nYu33FMnqGPp5gTrrfl5kZ5A9CnXxKOrHuQ
SIgNqSLuvCJujAJquHrBTAGglUTHcp2R9rpg85rjMRoL81FUDuR2N0TXmxtI
PhPfEqYCy2wQswB29g37CbOmbD9G2puf1wEz1MUO/hugBJ6AmKi8iTE4aFlt
zbxncwQ7lGFoxhSi1wJmdYUb5uZiJDa3SD6NneYpisI+GqIofyjY9u+t+cJj
t7uey5HdthK48+fZaAyM3dOy2QVpakGBItHHCGkhoNMoL0up4rbe0qIjJ6Up
gDgmdUkDi095mYADT4trWPjBXTyL0XjIDcmC8TYue8KADEV7U2/BJC7mj7Gs
No6uqHtMveXmIf10j11p7En77NyPo46tptgeJWEWBLo8CUD5GRjtjrKzKu/6
BhKDlUUAKg+2lAWi6EUHiLDGopY2TCiPnP7JEAfj0Hi5JBUK+lIj6iSTXMUu
xy3rlKxza+KBRoISZ5uLfObejUkTkcbU8ci5piltJcq348WvbnPSKzbF5G6p
n8bxOJnHZLMoqWUAuxlPRBWO9KOc9QlAi6PqjXo77WQmmfg9p5avJj48+FU5
KAGjnFiE9wsYuoAg+VSY2mGslNhbLWHYX0gnhX9mEpSwEE0XV+KzZ2+mGr12
5kUA/iauWF1lU+9r4A5bJp3o/czHNeoPS+2vpExqQgCB1/mV5JIaAO6hk4lS
2TLjD1rBY0dWe81RPdgIdN77NvGFMDdge9sPw1tzA18J6g/ioIWqBxazCDF+
Rn9/oBr5BFQkCrsmEV4f2nTd8JT6QAwd8pIMU3ix/bAEgLXFJbwbwg8gM0is
x8kUEuTEQEkKgjE8UZmCLHWx0qQOyVy2IWqLBvFMn4pi05mqZzEUxJ5pDQRw
7oso0MisB7ufV2IrROs9I2uzy7eupo3lEv8WUzdJqTJD13ZPkFcOE0IBguhO
hTdslVhL4MO6K62LOYUDPUGusZhNMpVAmEtiKivLWcpeyfTOHwyjiVf6CS85
bBglHHEKanCbuRNaBRErfeee4PkgAoI/yhTB6fXV34LflndiX/hIe79nh5zm
+UVnmSBubjBON4nQUYLEJ6hHZ/h0iLKRegcvzEAc1IA2kjYgfxnTPO+OuapY
7RaKM50ZWLeou25b0JMfp2L4+Bh7PMCOxECUb4XFF407TeCxbGUuJPlzT+z6
L17jjwPL3ukqGk46m8Ts1TtlqsogkE4y2NxXBLc19Dhhn7fioeb3aNI7LM11
sVXHyMqtmnKNtCxWftZNvUtemoRMUXZmcpmHeOZDVE/MXB+nCG4s+FCHSooE
nE0eQ2Xvq0LQlla4L1lH4PS2NZ2TaGFH5RciXtjn7lKXO+OsfFIVGyiHsshI
HAafxS5fIVILwlrzia4kT28ClyFT+yTlJIjQ7pHEqPpr28EjZ0qNuORbn3EZ
YFOybJaIAyJTgmKI8XBgDbFncXSyW5p4GHQwotUGIKVJSV52ooddJZD0MRWf
qeqetkkAASwtfccyF6fiWd1ZLhHz6yWJUotHC08zo9JwUPgNaSDsiIOOy6Hx
uRNE4ihnSdJ3lz/JihJb7GFDSLKXaRybPm84W080biahlo4XSV94EKqDvCgu
HbzY1qoSRV4mJ2EdeAU7eGOFFe6L/CNEi+UCPes7DV1E40IFAhwqzmOAX43t
z0zS7JSI1RgAey+aFvThzKSyFQZjzzxEnEGtGaDz7Kk4h5ArxShW/sUUPFnU
qvbxVBafeWyW6EFVmZG7pI+HUKhDqmpr9ot3GaTDaKwx59BnG+cHOMs/n2a/
9qTJIG2GUzw51At8rqDmrsumhYOwqX9jlDftysVJg3rEteSH+GDxLgfuIauc
TPF5JmYSKkbHLGQYA4vC8m3NCtScKcsdH0mZIjojlusdXZJ13vOhMbbVLknL
Cm4O1XljW8d2NDCiRJ0TAlO+Xg1JzAc+wbVXRQ4tIrXuouy7yO4GCnrOeMz4
czJzCjE5GdietFnw3EBV4q3xkcPfxskoX4mAJv6tM4xk8MlS9VGrKBB3Jae+
LeG78j5v85/An6ovqYM3vCRJgTaXYReH8iSAQ3ISRqG5yw0E3gHQ3USsizU+
ZzvXL2kGn1oStgzVXz02puox0XCJcbaplYx5mJDJ3gLP2PNPP2Nl11yjLraC
xMfBqVtMVKxRag4kikhs6rBCN1ghO1gam4rrdDgPmZlKDyVRRBscc5WbLHrW
abisLO/YbHwXyk/yzjMDsAtvTZmC6zM1GFWIRe1p98K5Wtl83sHDvWCtmc+i
lRR8276kRppMs92Dx4pfKdgesdZjBPnSEhZdCJvTYC3ncB6M40GuyMtGDC/N
QSHcuylg+h/DtqK0KJb1RcUuLMkxU0V9asGynrn0FkpTME0kn0GFRSG6yiEv
O9fv+ddvJHcPgX+aZVO2ak6r/KQRBMlYjpdLz5LN/oo3z2dAGqUkK05sK6YJ
mbfReTwt58V86rN9Tg4Sp4bsPzpIJIuwa1G9P8abPF7DzbC2GPeqkBR1qB0h
G1eVV7/gWH11QPWSNYFVgZgfUFcZxMEnikH2aaI8W77m0kns0c4Oru1DGm2d
jQGNdA/JgdmxCPIZ/CcsgtN62LcOBIiyTmNk5GydfN/ZFGlacKB076Sz+BG0
BKewk6T6gwiQvBspulKj13vzpsS9nUUaEVL6WGS60ZC04gOmzJ7bG064ZMAZ
HDCt/7LVlKmW/YZKPouIeLIR4knRzvAsNtHAjY7i5J6YwupNZCSAS3RVnvtG
WPGm5OQyjoWzh62uEBS2mrsEh+bZj34pYWRwZI22iYtrzPewOCbqRCzhhfGm
/CpN3ybO+s2ccxCUq5Wpw61DGavoBXsOh/AAAhJJuFONChLUdCh+5loDTh8k
7/faPBSRDA9yi3O9TQcE0UbnMmBsc5pH0eJkijIOXOFbwZxrPpvbwl1rnO8D
7Z9fvJ6q60ViO8Ovs6+zaAx3PyQST9PRNdGYhrh8fL5rH+gax9YX+4muB5E6
Wg77GE8+Fq2IUFEVrq+yYsuZRTzn2MPq1c+yk1DgD9kmJ0Pjq3RhtM9BCMoi
URczPP/gq8FAEj9kheYaD1wnwaUAjuuLs++u70Ap1hvvZb+IoQDJ01ewpV+a
0nhlEVpPrlGIj7T/RZE3mlXPoRLiaBM6QTHIzak9YTGrryPlmCxYyWhos6gw
dBrlsr1S8vDsZbmsGzZctkeBuenBzIOytCYhDjGdjFOFz95wUt+MHS+vBpUv
wNRYLSdp9FFtGot8rEuTfOVeXQBe0YJCOaexXxvH/i3hNBhlXVbG3mKP/00e
haqyUz2yJ3HDNdQEMDLwOh2BXpKULnVVxLCM986rbsNQorfzi3NU/3E2Ayki
XbmNnqE18FPE38XksKoPcAqI60bSTgEDLJd7UUTplUAvzTQKXvOQSszGUpvI
BLZwRGf4KxyBVO1e/XiM8h3CTISJR3jU4jqLHHkhouuLNmEZBwkjRzxPXF3X
XqkAJKcYw1hikuaaQzyxcy4tkkoqQa71iK810Tm4ADSLh+O87JLPpdbIipZJ
J9irLfC3wMLnjAowbAlK6y+UAfzCEVmcmCf1IPzSuO10zJz0qZIeiKHgzCnn
gZiC8c1fiCpkJ2XmSgjXMCA2qDLh0JSWVDHnSFiEKJd6mBIqtiQ0vzesOiPG
xMyCRPXtMDIJOmcOxXY9nRxwReqDgqtVS6U0E4cYgMs3OedLjxRCIFN3Bw+A
+YPYqyM9GBB9htfetF3JE418yvBPf0EvepGeh6G88/FezsJMikYGufY+vP1u
OJp6onx8QqgfBV9i0AmNc6x/Ct933rbctUVzo+QZB6PPZ27AzMhX9R6+WUs+
ZqU1R24OFJWXlrmUZ+ttvvFJux9ydgl/iBgZPhbPU52ds+z9F/iaTB9PE4Wy
XF3KdCRwCesA/F6cfM8O/0mEW1EM2pCbuNj4qmTFnuGcT30yTiZfcZGThpVm
iHGKeuvzkpHeQWjwfLcvG6mvj8NTcbuE0hJ4EB5eq56cUt3ERUnKXXapScqp
seXznr7kLQJpfLqnTp8Z//l5vDw01C5p2tWgSBg1X6R7rzz1Dbxmp4kjpjpz
HpqEIckqUgdZZPCtQinmcM7gYKMxppwE6oPsSrxJ+icit/ttjhR35HHy7maR
QODScWPGVradQ/qLpc3V0YMaRuIOM/89yhjfF5ZAUsS+GhE4XNJt0f/hSUxd
U9Dylt6k1+wg4aVJSjs3+IL+0ejiOMU/DlEZn80jRwrZgjXcAgjmxVWLUZAM
VuVuUW76upekzSNsT03sBI3hLKLwn5Q/Ywp1hcI3Dhd0PHZVl6gG/8lbMwRq
DikJg2+IhOqdX1pkszoTBJVUA3VSuslHQ/PEEyfxkLlPuDvNdIH9q4lsqiBA
TmjGJMJ3Eus8KX4UHsll0OUtvxzjlRstbCUKfIPK3QgYn+6dYJ+QnnUB8CqH
4rX49MVzpL6vELBTXx4nLwFDBLioBdZMUQR+zMe5KE7i52K9dpFnzmeRIuQD
TRZeaquO57fEhFVC9FEmp8AJLVMQxiw0EKTZBmGHOP61mCxkgISGLJAd10oI
HxgXrp8Ed44vNPKMYafWkB0Oi4Frnvfk3Xin4vNkWR8xKTY0UuaEwXzml39Q
R/0Sw2NjTzmbrqzxxXfR8iRjL9ktTwmt6gOv4M4N6EpZrhMaMqfKIoUs4b2y
urAnKaCG4ScIteFwCLIQmUrVtKUhPgiK/ZDFC8xmmV8/wk3+Kf8xPRJ2YPxg
Zblvp/DMzk5sYxJm9/0KplmY54HZxcLtuEaAqDBIDN/SxkilGZChC3k7aZMQ
YeTGw4NLPkkoqDXXAOSSsHJobRADyHDrjIvQ9k6SNWcLUpOWxefPDwTWpXcU
KpT+KYtzA4ElyZDSzit55od0CAET86Gn7KSXElLP1p5hfuJIp2vy+WBQsOOy
+WkSwmWHe1S5jJcnkiok9o5kTnmc+ipmq2VILpmeSEUhZ3Bj6YSVsVYHNav1
riomzcSdpXZ2OLfTTHebXqxX0Yc79nGk9DLw9qunSV27sWEcv4VtHKOsU5pk
BIGWWwRErdWCJrjEu9fpuPXLksu6WzEwuAVUim+gdrY7IMEyPYKkvtJPkFRA
yOukILSBbRShhV0Kq1D+/xYqnLiFEGXPLcMGyXp85JKrDQo8etdHQiaROBJC
8uAO5GKOWZJkUpyYi1SwXkNx9ELyqDXy592Y0v+COV6h3ccid+rp5tjjUjdj
32AMEy2RErsoNmWlXFdB5icnwKoG8KrugiZ5qhl+upeqks6l8VYoTz5jdODE
SHBdc3ALU4i9NowzkHEWHM9TWZF4iVWduPbs9trL7+vAd6+99/yUL7hDYu22
u5oM6NEtI4ONFbfK7GFSWLA+FUsLqUncakRE0wbYgjVkkMw6iDWXziSN74oR
6MBOgmTasxw+dqLdTLynhmPJJJ4lD8UHtceFgO8Z5/7c110ZmSVDhcAjjmoC
AOR8NMXAP1q2zjpBkSVeNJ25A05yNf2IsiDgHoPyyI1rfIVF1xzVuaS1LunG
Wh+41a/ZlONGBwJLCwOKt1B22yaT695cuo2p11paD3JJ2bDppGpHT5C5gF+R
RTHS42yHw02d8CyabhtVGmWIQTds4C79ltNAXOuiTPf3hl9xcqXzmSrDgqFD
wV4gtr1qaNwGNQVJgmNClYZLyXYI69ErY8yUO8FfUUyU6xp3ZY8Qsh+UWLhB
86OQkkGK4rZwgflofj4LVZW3ZSvhYl9yJufpeOAojWSaZAEgM4WZE6ou222/
2ZTtjdMK/iI0JOD1nNpg8+y5z1R0HHuPHOQkq2jpprUFRyy7NZRFsWTTsqXY
wAiVTOKQYGi0UWzX8dFqWpzPy9WsPksECjRve/H9VxCgDIBkZBEXy3Mwwkir
elmvvOqJM4OIFC+m9gA5eedV1Jogg7dsx3WZDVCQPuVFe+X250dvra43kz42
mnmh/hz4YVDNoeZ5WaM10nPuGvMxAjYPlGjKToY1+tMiJ3dfE9Oui2X1QQuf
IKN871GuGUVy3gNfLKZOrljdW5vBSQvaNHzuWkPhCz0UyTWDj4MnSKklKOw4
vy2OJ4gf3ug9WhoqBCW9TxoGPnx4Ln0+7M8L9vdo5a2Q/YaMKEZopBk23DCk
JaSWdbchx/9v7GzMVgTaG+GmAm74fWBiqreFVHth5Db1B+sxsWqxl8TdqbHA
uMDUc56IqUEFbHLzCjt/0MyCRFNVa46fIhOhWnygZ86y+/LL1/zB86t3Fw+c
h2L29Q/ZfTw+y/yHD84uvokMr4+l9ESVpShrQIyO0Sf08RKBJ1lZDJAAgR0x
hE2kA3hZQmyg4LYaFoYJMwy2a4HeJ96EwqL/KTufPxbzKGzpB3wnG3hvsXVL
XLDcYyna4V5wq9TOSwKv8BO7MYcou7K92FUnNT+mmXSn9irK9dOyEARYAQs+
6EFXxgkap3N3TPEDuKhUwVJZg14xfBs4IUlQoit7tFEOBVmH9LqYZSSFNuDC
cZe2uJZU2kIOusuw6T4sVEQS+Uk/o0F3mVYSTPKD0OCgq9FnO7qRCq/W+rWu
XO47Fk+KdpnvpZRqIm6ktfhd424IcLdFS0DKqfYIelNI8lZdpdkfXNLgM6z8
+aN/Cb7TuNkOBe4w5ax4Hq3jxPfXJLULfkQ3KD7zjmTULaQN5bTbvtQARlEb
rDBEnK1DYqQiijO3vWH85yoiZzHwX33Ok74vNbrtIEhI9iV/qDELTV4mUV6z
vxKxD0nvNZoenPQ/tOLv1o40RPAklXxWkfWzBY4H7I6abEjiDNfMp6+ONiVC
Q6I2hPnpLDaNdI/bHh1HurGXkxpHkgx9U6km3ZZdr/n3QF2o3sRRnKZeWqk/
tIRXtWT2QQnRnP9WaJBbilVKcONduKK+jrl2jET00q1R0nKD2KcnUN/I+6QJ
2dMoiiEOMF8n+VkbYWKefi9ZDqX1Jg/qvrvm+sFB+eC1NMeaKsnF1T5RLV3U
CiMkWUleQ7wp8WZ4N/0SidERG/ZRZs4+exdTUhmn+4ppIs0IAyQZZySBy1cN
WYtTzUdmnrndxn0UBS3HzP7MFP64mOynrVdo0UGp0GInaZ/qjZRCnGR+WlGE
OH2QhXZUKQXKiupNxX8fls9+fNKcOCLEM6uQMZh41bb4rWw7ETgenc1MOIV/
C4YN3ZprMIwR+azjqHYvqrJsZbE8HYGDUDpnv5l7U6iZJCEZpYJ8bIhcirTw
vbZFbqLWpWqct32pHbJ8QQyvP7qXBsLl7+pzHTtBP0sSy7pe9q10/vMdOPjt
YWtybNq6sbnhfBJqG7OCJPXcFGU4e3zXE3sVHTuTGG6clhE5N0hvcVEBmDcW
xE+jdwHw+Fw7ESU8w8JgxZQFy6aSedLWcLxIdBqiLYHJ9RXzLHfS1U5l5Ds1
LaRAKhNs3NVem2O3KC2Eu82tCy6Vle7OviHkVlUVdZh7nf2BjwdCZ0cnb7b8
iF2FMuu4sXRlgs6BAtm2JHsXqZWWTx9gy81h/RUgnnebhBAqVJPmC2iUnJd7
OHbZggYTSWO9YEs4dC218GcIA+Cw74dEtSpNdCOAXMZDNFq+nh7ffVZprcAh
k/poXAjwuu9MMavq2b6rAeKHYyPSWZ2ZMogcbvgHzCVj4dHEnx+c9Xmc8Ss5
EZwz56vC7Uu8I94JMXyu49DGtUPTY9LXuNR/Qqwgn4T8y+xeVidXW9hCRmMk
9y9m/P6DpPcChpKgmNfzr/mxay2bbVUQR3lDUMq8ERbFFtRfDvt8oAqXg7zN
R5yNmirXEvnvNdlAyeon60es+kE6HjLuGk1c0GGs2pfjARp17GL9MmYl0plC
UpLgPoFrVYKsHICA38T6BHOymjBgrN9XUMuuOU2F6EtxxheJ7fKjk/K2ZK2h
YDbJbI0tpMRuh1P7ZAVJYtsgkXlqm/OdHhPVHUFu7SoTmdHiLk66ug86cDOI
R/Ju3p2uQNQqr4pKdFbQwrXenjylCGRyROGleeQzcnFSAkGUSbbQa2VOBvpj
tu4bKNz6FqqPwO85Mc3nghZi0Yegl3t1yuO41lj9e1aprIKZkz/0E3qJaxbe
8w/RlSLukGmbl5EcC+3w7FMq4SX27WDldWQLeY82vH/Sqw01ROhqVkiTfIJG
d6N5WcatLaBWdnP3o9XDx4YUdM9VTdjHDQu4BPZJtu+btheIbtBLI6yQPS6M
WALFaEfrUtpfSFuHqRhosYMZGBYxDX+Vn7a05KNAz9rnr17/kYFrOjWG3YOT
Rztzi6NvpsNatZV0CQVidn5aJMAz36svaF+0E9IO0PcFeX3ikfVVXPQehuYa
y3x5M+gu4qvhhB/lhhPrUu5ISi85uqNrgYPMDz33krtLxJL6sU6bUKcCDeE0
yDLncBGNdGLYWuKkCcUkYWaT70OnB+mPJ9ZTkssh9MrqTd79Axxc9YJNY2P1
mC740cvqVxSby/qkDKnvJj67Rj1Zv7x7jS9R0YLmB9bjFV+je0cbershc1xC
p5xLHPMGF+fY+tWdhjm92p8kyTFRSxMEhK2jZdlFCla0z6HX8YxKCSOlFcQc
CsYVCWgpqmwDgLbbCThLnvBDD8azCHaXbo+h/yavRjreubSPseSS6etiTzZR
qzo6aTvfoTNMEOrfg20ZFw0oT/5Fyyc1bUsM0c+4UMS3WRxeWGItniXTyjuI
4A06qQ2Ke2SEzl4a/RZpsi/Q4YkxK+wgGACDOp42rtODBA+t8rKo0KqAt12D
cZa4RYt7evU/1UCrisOw9ZflXB6K2DWaxPYH6ZZEwkmeqVxEgqxa/nsDIRcH
8XJf5T5BkZpSw4IO/KOKVwWTIEKSV2nV1qeiPOkf1EqYAu8ziYnJySw/wEoo
3ScYhuZNoDAnPTFFB0mWwPeXhVtIhncuwEHrhovznVP/91dClVrrgjJFSTm3
mgupDU7MhROlTu3rG0ZG04G48KtobpgUZCgYnKHlN4qFtbRALzUJITHVVZTI
ZjMSdkhEDl6DuBmJOG/EFSPVJHYbmuiJjWRlVOw5JiTwJf6hlSxMAe+7GM2c
NWXOpQ4Yn9MTXeXiE2B8HT1u3Sq11Yp3wH26Z78Sgb9kE8dcD+wYkz71/mHW
QXpLoyhD5wquswSVyc7Fm6LVNYBZvjFfo+AEMKIA9hEDlchVpOHgHWu3EUOi
bKKGu3UjqR3qhBCMS+4JtC56aU0C5Lw47NQGcZ4uGTZXjyxp06/afJWSiIOr
Eiu5ncnftYDRRaIjZ0dCNiJZj163HjK7qT4Z2wHWZxG4ZO0z5L3Rx8Vei59O
otJDcNgYQnSh35yY3HnKNKIWMnFzzK/i+N3Q+B67oOTkFSZCibOMPa51JFvx
5Ei+lTbkEJ9RAweet8ZJJct3uCxZ7VXHbfjrg0QaaId136gnE67WUi/3G3ae
8xVbtZn5ghilpHKH2nbfVzSzCxaiSxl4Go718gV6JvAjiQcbKvbZDFTgiYTL
kDQhOm9i5fL6mN+q8gNkbMVJhUs6UY5NoNh5ujX7wlfLO+7AZRU2/oYIaTMk
aQJsR7CvqU7ulSF+Kl1EtJmCZ3+SZx9XCwi2m0e2aJckXfjRCfd2MzEAI5ev
EdduceAh9NE0ewv99adtfVyBb0pxHZO5XUURA/huYDGst/Um965wXpc2mcFZ
hVq1BSoBVw2JRrSN1DuwRho1+Uo/u5pzhNTdoP3Fl9eYDdeosSRfArgptDGb
NOPTwGHvIx8aBre0EWkKhw1qSLANleFREqFXud4XESQ0vzEUbPk+kdykge1J
MGggyNpwQjAkwon/kcV18oVfOveUIZTnYJZrim2pXpZ0JnUzr8vfRNt4JHIQ
UTB2pF5p2wYPXnP2vDE3Ar+uiqy4Zw/E5Qq3PC7ZCkVMUmrqv9HYsF2mY+UQ
cTRFsUwFkdKxqcld7VDkbhAntaSKMk5sDJnZqsj56koIlYqsoyaq+zmROWwJ
S2O7QJs5yu4jWXg/qsJ/+Ch7udi3ajs+DR1mmGcRPyuZ6pkDikvbvdbsVgW2
6poRRodrbmQnHhDez4UmhttirXaAyZ+4j5b2dR9DfmnV5OCEUU7t294kcaa3
zM212ZqFlYvwUnsj90fh/EUOIKus8kQ8CEVwkD2+tODAwsCJEQQ548tFTaGs
RPk81HeogezRyC7PcQBSayiaLEfgtoW4KC4ey9dJwYiGhvnaDv5yzpc73CRD
C316EzA/keyoS+OKZa1LS5Ql+g4oyvV7j1l+Ppx/q1OFXli1hftC01q5xkb6
p8gKBvpYyBS1xsAcPZMjZSepQ88KyVU98CrBNsAQpDWmKbEw9vgIi9Q8cFal
Jpe7SSR+GhLhLCSty1toQqimvc1mqB0GZeNzaASC+hHRc6s0nHiMDl6g5+q+
HKzWbZp84RvFjNwJMdWLpqL+s2EIZ8XMktmBNmxc/Vkd4xumECnW7n5EMseW
CNfFzXvEgcMHliwvbx0Xio7WPVsFsXlg9apqpDiLKp50Sofag78jPU+WBLC4
dF5CgK5v8qhjiADiGJcZandIx2lvYm5ZBoF3i0RHfLI9bpFY1SFYDm3AjOyJ
f54B7rM7JmpxhyKN4MhjjyVftuxM37G+YWX1UbdaAKHQIGZg+K3QLHEl9ODg
s0FGUESfvlU1+j1wXzrzGUezYR43epbsEgpml01R6d9yCmnPS/UghJgV2mMn
3cWxdDhqvSdNDVju27xb5Nt8GEYZkoHlY3nHgBiAkq8QbntKD9LkpKY8unDT
xi47Wn9e1r3p5BArtxINf3BllbJtE54n6CjKomZGym18CAaLlSc31GmbTAiV
3Isun9MpW+YcGNFEIp0vNrMGFOMiUmH20VeJ7yMwTctl3AJ/74Xb0tVAZaWp
I7u9O3It2i1XLkWfzfBZqFxS5Il0VW7zt9urz4YkvESjK2/SszfBAG0+Kebk
5ZeXMXVR3jlTnP9+IvnIwx2vrU8AGprNMyQbALSDPMDSLuOBFc0hw3hyrpNn
N3IAoLQgMN3n8Tn9e3bW0qscS0PQghuK1Fsr3xVkPgdOns+/v1RHPTvIuJRI
v398PqWB8NAF/zaVYed/dVweEW+Nj+sHk4fOB2OPbPfiZLtRgzwaORlDq2qj
EYjxFL/pRQhVZl1hremi7+Llk8vh7wpegdbSrU/8BdZpBDptFJTGYx+8s++H
7OLs4pvs6+z+hdRP2soefPWQU2vDK8ET8QOnCN3n+0Wng+GGrYK0JtKWmK59
bIX8RDLT+fz8klZ3Pn98nn3FQfJ4iX9Mh9dsgL9zhouzhzwDftw1g3Bf0Vuk
JNoaeGu+rlx4xSKo5ECDsR9OgvV91uPcFGnyh/tDvF6cxQp2vkUPGjYO5bp5
xVCXXJeqvDvqL+bv0Xwk/n3kXoKd2kX086hmw0r4wfyH7NzHSkgLF3E1jwrB
SLyv6n5hhXY3oUeA8EGoi9bZbeHv81OmflvgErleG4ajXkRgov2icAUckqpU
fNRcYu+9sjGNXZ6zmqwGKVABlYwYTQhDLI3lcdzU0M4pYmRLNcotGa/eOsQb
gVMKf838NV/m+OpSytae8afeg7h6hhsU5+2SmyvTNy98/PbTvaV8/Fm72WmP
46AMhk6jiVtr7OIzy9tq8n252h4HTSARnM9HzSVc0qMti8LA/gw4PMcosKWd
vOcz4u0+J52NXb+4LY07Zm5L6bbc8u+qENz/+fnrB8gSYY0hb1rxTm8k0NF2
+hjeyiVBkaPntggisQ5qEzN69vLaPIIETu/zhBHmu8tHq5EufRfnuMDmsfV6
P8mkccmqLfXrxdXVm3+Z0QZwr41XbQK4f+13aEkAd90FcBMuiMdiN7pnVghF
QC13qbcfw8GhUHt9CDgH30SUBTHNrDEm9CBzZgF9NRTpLz+DQzXqB6W/TfkA
iTDhG1Fb4xu7PV3arFRtaAKKUeW4BxuWV9VZ8534tuqQjeAUEGgbqqa1vHLx
3WOoorMZazSKxHYrBEHYMEFsdCchHMFOEZPx8QLlkHNs50z6OhkeNVpbFdK9
2+syjovEhcQmTyzu7WMyDE72P3UKnHDzEF89pHXhxW+dS+DOh8pPK6JYcsFM
HwK+sGO8Et0EmQadJVaVg2a+I454OhNrtAJd6lxznLibsa4ON/xg8S6Xa28U
CeLUEOB2WrFtKYU0VOgZhFThfi83Kw7uRLH6ERf1Y0EiPG6xjAOBSahZuvRz
8+0GZ/U06tzyswY1ryLj7dM9S7VAdJt1TymbQPKE3L8OwStD3nnxhCZpyjV1
bEhasS4WHS3CIqty3a/cEWBJBrjYLjJrOBlj5NXBxPBL1qSJSm1wXCqwLpCq
Z7e0i7qtH9rdWGr42FWq/iZq06l98FEaEac9YqL0pqTQB2mfueZeYxB0Lxer
xCUCJb6CKFz7EF+rqRdVYEGcwz+IHocosPIJCTtET0zMCOIS40G3UtMp/DJQ
Pmo+sNgCZ0t1MnIectveWkLoUpoTdSuDaAsvOTvEUKfva0qLpqmR+F2xeeTv
EpLPW836dZpnYTfORqmEg8Y03uwfAepcLWwdGkXQy2UP1Ev658Z9AYNDLg4H
THBThThYgqEf0UmUoA1CYn46R665D7rylRgiU9TgPM1d4LSOuPwqQsL0imsN
+FlCrd50NjfDCObxgARGUi4GSa5IlZXcj5OmWHGKzGelgjihl6d3vq5X6rx9
xg7a9Ml1BextkIZ13h1WtJE3zPm0e/F8s8ItAT9pdSV3d4/zpzm4W77itpyH
WgIQxhyiEt3wtkuu2mCP7RMCORqVgsuYXPL9XfxXsTTCkn0OnL0DPnK3DFOt
NZospCONzYMbNkBOeiA+n+Hz8CYvk0HoY8gpXaH5t9Cj5pBIeTDG67gafOsR
my/IFYvDR/J9S3YWoQwpF5Vf+ZnBxWBxDbyjlqEpXlJLW0ffaomuoR3g4EAF
gdBJihSRLtHGMYkKYSFin/XDdSgafgrPY7WldpJas9lqSwNMrNxHiRmXJzVy
j67lXXKDlgkuYikQxLablV5ovb+47drpiBa5KEIalNeS9CkDPqnJJOHKVZzd
YlP7BAA/fTy7pUzpAvytTOMi2SdOS7KO3nfBZiguaayhjGmem5Fi8EHTSqJS
wZiBDRSUF9Y0JV2w5mSzotl3nFkgvdWHwJDe5MmQfqkfi71vBpTeHIcrC8DZ
Mm1TitI1jmZJXQmx8w2nnhtcJ9xOl/Mc1U+voHPx+VjsGJfNNr74sBy9Nhxl
0Acpeyn4jhtXL6ALWxwB6UA2m0ZGvFWasERNZg+5Wd6fXGwatnR5Nlzz1Hnf
96g8mjvfAsP3h2Aq0HbPaVJLlGfMD1crxMhlvVMXZzj7MkLI/pOQR2YJDPTO
gVOcIISdXskIQ/Cn51cPQnL21N+zw5fV3pYr7tucLo5b05Y7VfhRbofBWCsL
1z6oeobaJ8Z97TGV0WxIO0Oyar7F9UfDbjrIKVnoTTUEG5lXxIrcSm3lvBzx
7yVa+yLqDMIpvEgHQxkk62mW0cS320rvKJjgcfZ6yj9HlS87e9Ls9Ebd0DrJ
K9bxmIOr3NDNK7l11xE7lh6joPuoV60334wYsUWwztjCYenCOq4JF9X1QsFB
vATLl5cLQXxGv/gZAp8MVlNMgapEmrhB3rQWX3D8D+d5CjMRsP9hYtmk6xvp
j3oikXHG4uyCNBpxALGSpSUCQ36l2e2xVH5wev1lyiOtVIj7vZK5y0A5TSHW
wmkRvjva2bLF5RhsaaDsovbJnexHWhThzj5cAKTtNH0tZaouIUW4rFJIT09O
D1JdGypIUYYLRRmjWc+DPs9slpDh1thVPU4uXAMf840wTH02S2Fu3dclZ4CV
tym0BpN3Wl6cpDaprljvJWwX93pKEkk5a10LDNuuJ4IUdgUsPE0w9tjg81n1
7vG+9b5Z7WjCfKchSLQhohjns7BhFlx/cDoTH+bvFn25Bd+1An64LHd5lVvL
9njoY5zeOap7olF/0A4MzawBS+xsYHsVtkWVJB9r57jTpnRoh59oOdYAXT09
JzOL/4fnjhxqknmS5joHJHrxamoL+MJ0SPYAghxKKVDRZnmq/voVmUIy6hma
xd0wEufQX4qm9l2UnNbAcHi0XokV2fnEHghTy8TiW6gLc2fHx+I8NMa8ZQIu
seLVGk88ZlwKHHbLOXol+3Otn3DkD/PYB1cazXUeXGAPQH+yivtJ657EUWbV
QFYEYcEetl2enlQCmKf9s16IJJeDhRthB9trXYQJI/gnssY6ffvgSHTJkr9F
7q4OwNG11UGwqDfON/B+BxfdyAL4nqmKhy0sZxilT3qhHGKJCwzxnJOk2WFk
VM3P+Wx3N1CXfcdcTVUKqfCqk8SEyYg7pJCkZkOCzD9G1T7TuAxt6lgohBHF
K5TclfTeJ82E/gWr9Lyh9oUy7Bhak6RjycXZ5fAGD6tT8Uql76TqL0sNanQW
dQxj7Lfr/Jy/mGKlWyCOypUkbJJgC5zTLMFe4IlFJTmufPbIzMThHd3xmkRb
+CW23SGnBak14KweCFgsfHSwlBrzcHEZ/K02LjTviTo4XfALzv0kuEPvUCOp
m8yCrc8laJ+EphDj0mcqF/Ye/yiJV9u2FqtTeoOaNju85KCFuqaoFEVRUXoq
Wd6Jgj/eAwCQ0F40A5eD2Lq8hHq95iWWPjTaDkql1ZPcHfda8qgs3Wt5sT9v
cI2BB6HzpZBp5qreqOwt1oFonaI9INti3ucnMwRHYJqa1bf+C9k4h2J1m3P3
SxBzuaKnti//aFGh+KaQW5Xm4hCQKa4eaZYcevGY5SRjSaqgiUfhecELfbg5
Rj3+mIHAfpeLKyBlLEfdLsYVl/xJi2hPt6LQevGhMRz2IZai/8k68LvG4ulv
oguk2lXWiVXOYEMnyIUjuIsSXhJBaGlwFue2a10SVFwBkOUEWwhPt+KQeLks
m2W/47DmUr0gQRgqizDinPpLJOUwyAqViN3F5RzZzIUmr7L6uJAMRRRRcY1V
0QQdX6G+1PxLl2PbkVr2BO1AUJUa9yGM04w7hhIyNel/UAaFhQ8ulNGbiOIK
K4YDsYWubxZRm6Akh8Fuj2Fk2BZoTOgZSQg3s+bu74d9x5esOt8sz3RlH9RL
r8nBTk7qj13wGgKuIW5swpZxDiyLzgYHG8lK59lbwGq4RNRBJ8kTfAiVoB/Z
aLJV0BLHK3w4kpBSMuvuZW9hNL7MuStVoW5xfLSTj7QESm8yja9jw2NS3ORj
d0+UOCTnQc2kE/NOrTrSPu0CnUOpmcZmfH+A8S3eoChHNsN1J60HmF0TYm8t
D9Vq4lUF3B+lsoy38TBRtEKZoy/vxb0mEbdOJkMCgfbzCDUTnKYV/noytKD0
eeuFZDW1fOGKwWwanMfTk+7LIXpUVmikHZzUqppxeaQo8a2/QsW7r/kVvp6e
XQG+tJO3PI0cEHEvBrmKJBs0lz7pwfBaMsN9i/PWbJBo62imz+yfv6qj+K4e
GvwTDwf6ruqmdpe4rCSJJqf9KAAxO5Tgv8+CrYMD8t+cHJDc3Amp52NbfDpx
WxS9G3SQbDaxHlwv7gqCjgBFMMr7gnDPLFbPc0Z3kSJ2CQZtstdy8+LFAd3n
9Mlr2+1ojCOYeUmUI/FEeTR5ZD1RThevhfXseYvCnU/1jqYsw2W2CZn4qAgf
gv0xdgYmy1PPD+v1WXJdx5fORVgGvXAXRfClRSc1ztbZeZwewKPYPkIcONRr
nlCEQkFhwHjbBeIvTjq1+6xmlMyrWQaff78jqxLNA1QlGXBcR2brpsn5WnHL
sRz+e8uUNn7LYvb7HZ/f3vH51/84G//39V0TjH/8hZn/jhe+jub/m174XYBB
omJsYXfN8PvfMYNOc8e//+ZNjyyKX6Anoof+iVbj695Ol8Yv/B/8L53hd/3/
6Ay/jw2U2YdvONXP5xN+8YVkM7bgr+964Y7Pb08/jzHj9NvfQ4XgyJhf875/
T9+NOlPjA/k8Xf3X/vOTf3d9fnv6+enKU5iZ3TQ2ZoytX1jJGBF/bfwDWiAp
RQ3rmWbhai+aKHd3UWzLwowyTo6yRscI+LY2AGKJpFC9blTL7ayieurMvjgE
xQHabpxcoBddret6KyzQ3w8vV0ZyLTchNtm/290gaA5dwD+OPET0nWdXWPJ8
NCH3P4IH4n8wGF48ffX0BARSVl2TGcXhWGk+I0+K85vjXjPOKWRVhwZ56uWC
9Nj/9ERSsIrVD5M1mUfFhC+4eP3j61iC0Bjy7/8CpaVs9Ee8AAA=

-->

</rfc>

