<?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
  <!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.20 (Ruby 3.3.5) -->


<!DOCTYPE rfc  [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">

<!ENTITY RFC9000 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9000.xml">
<!ENTITY I-D.ietf-moq-transport SYSTEM "https://bib.ietf.org/public/rfc/bibxml3/reference.I-D.ietf-moq-transport.xml">
<!ENTITY RFC9438 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9438.xml">
<!ENTITY I-D.ietf-ccwg-bbr SYSTEM "https://bib.ietf.org/public/rfc/bibxml3/reference.I-D.ietf-ccwg-bbr.xml">
<!ENTITY RFC6817 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.6817.xml">
<!ENTITY RFC6582 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.6582.xml">
<!ENTITY RFC3649 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.3649.xml">
<!ENTITY I-D.ietf-quic-ack-frequency SYSTEM "https://bib.ietf.org/public/rfc/bibxml3/reference.I-D.ietf-quic-ack-frequency.xml">
<!ENTITY I-D.irtf-iccrg-ledbat-plus-plus SYSTEM "https://bib.ietf.org/public/rfc/bibxml3/reference.I-D.irtf-iccrg-ledbat-plus-plus.xml">
<!ENTITY RFC9331 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9331.xml">
]>


<rfc ipr="trust200902" docName="draft-huitema-ccwg-c4-design-00" category="info" consensus="true" submissionType="IETF">
  <front>
    <title abbrev="C4 Design">Design of Christian's Congestion Control Code (C4)</title>

    <author initials="C." surname="Huitema" fullname="Christian Huitema">
      <organization>Private Octopus Inc.</organization>
      <address>
        <email>huitema@huitema.net</email>
      </address>
    </author>
    <author initials="S." surname="Nandakumar" fullname="Suhas Nandakumar">
      <organization>Cisco</organization>
      <address>
        <email>snandaku@cisco.com</email>
      </address>
    </author>
    <author initials="C." surname="Jennings" fullname="Cullen Jennings">
      <organization>Cisco</organization>
      <address>
        <email>fluffy@iii.ca</email>
      </address>
    </author>

    <date year="2025" month="October" day="19"/>

    <area>Web and Internet Transport</area>
    
    <keyword>C4</keyword> <keyword>Congestion Control</keyword> <keyword>Realtime Communication</keyword> <keyword>Media over QUIC</keyword>

    <abstract>


<?line 108?>

<t>Christian's Congestion Control Code is a new congestion control
algorithm designed to support Real-Time applications such as
Media over QUIC. It is designed to drive towards low delays,
with good support for the "application limited" behavior
frequently found when using variable rate encoding, and
with fast reaction to congestion to avoid the "priority
inversion" happening when congestion control overestimates
the available capacity. It pays special attention to the
high jitter conditions encountered in Wi-Fi networks.
The design emphasizes simplicity and
avoids making too many assumptions about the "model" of
the network. The main control variables are the estimate
of the data rate and of the maximum path delay in the
absence of queues.</t>



    </abstract>



  </front>

  <middle>


<?line 125?>

<section anchor="introduction"><name>Introduction</name>

<t>Christian's Congestion Control Code (C4) is a new congestion control
algorithm designed to support Real-Time multimedia applications, specifically
multimedia applications using QUIC <xref target="RFC9000"/> and the Media
over QUIC transport <xref target="I-D.ietf-moq-transport"/>. These applications
require low delays, and often exhibit a variable data rate as they
alternate between high bandwidth requirements when sending reference frames
and lower bandwidth requirements when sending differential frames.
We translate that into three main goals:</t>

<t><list style="symbols">
  <t>Drive towards low delays (see <xref target="react-to-delays"/>),</t>
  <t>Support "application limited" behavior (see <xref target="limited"/>),</t>
  <t>React quickly to changing network conditions (see <xref target="congestion"/>).</t>
</list></t>

<t>The design of C4 is inspired by our experience using different
congestion control algorithms for QUIC,
notably Cubic <xref target="RFC9438"/>, Hystart <xref target="HyStart"/>, and BBR <xref target="I-D.ietf-ccwg-bbr"/>,
as well as the study
of delay-oriented algorithms such as TCP Vegas <xref target="TCP-Vegas"/>
and LEDBAT <xref target="RFC6817"/>. In addition, we wanted to keep the algorithm
simple and easy to implement.</t>

<t>C4 assumes that the transport stack is
capable of signaling events to the congestion algorithm, such
as acknowledgements, RTT measurements, ECN signals, or the detection
of packet losses. It also assumes that the congestion algorithm
controls the transport stack by setting the congestion window
(CWND) and the pacing rate.</t>

<t>C4 tracks the state of the network by keeping a small set of
variables, the main ones being 
the "nominal rate", the "nominal max RTT",
and the current state of the algorithm. The details on using and
tracking the min RTT are discussed in <xref target="react-to-delays"/>.</t>

<t>The nominal rate is the pacing rate corresponding to the most recent
estimate of the bandwidth available to the connection.
The nominal max RTT is the best estimate of the maximum RTT
that can occur on the network in the absence of queues. When we
do not observe delay jitter, this coincides with the min RTT.
In the presence of jitter, it should be the sum of the
min RTT and the maximum jitter. C4 will compute a pacing
rate as the nominal rate multiplied by a coefficient that
depends on the state of the protocol, and set the CWND for
the path to the product of that pacing rate and the nominal max RTT.
The design of these mechanisms is
discussed in <xref target="congestion"/>.</t>
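<t>As a minimal sketch of how these control variables combine (the
function and variable names here are illustrative, not part of any
specification; the coefficient values are examples discussed later):</t>

<figure><artwork><![CDATA[
```python
def compute_controls(nominal_rate, nominal_max_rtt, alpha):
    """Derive the pacing rate and CWND from the nominal rate and the
    nominal max RTT. `alpha` is the state dependent coefficient,
    e.g. 1.0 when cruising and 1.25 when pushing."""
    pacing_rate = alpha * nominal_rate    # bytes per second
    cwnd = pacing_rate * nominal_max_rtt  # bytes
    return pacing_rate, cwnd

# Example: 2.5 MB/s nominal rate, 40 ms nominal max RTT, cruising.
rate, cwnd = compute_controls(2_500_000, 0.040, 1.0)
```
]]></artwork></figure>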

</section>
<section anchor="react-to-delays"><name>Studying the reaction to delays</name>

<t>The current design of C4 is the result of a series of experiments.
Our initial design was to monitor delays and react to
delay increases in much the same way as
congestion control algorithms like TCP Vegas or LEDBAT:</t>

<t><list style="symbols">
  <t>monitor the current RTT and the min RTT</t>
  <t>if the current RTT sample exceeds the min RTT by more than a preset
margin, treat that as a congestion signal.</t>
</list></t>

<t>The "preset margin" is set by default to 10 ms in TCP Vegas and LEDBAT.
That was adequate when these algorithms were designed, but it can be
considered excessive in high speed low latency networks.
For the initial C4 design, we set it to the lower of 1/8th of the min RTT and 25 ms.</t>
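<t>This margin computation can be sketched as follows (a hypothetical
helper with illustrative names, times in microseconds):</t>

<figure><artwork><![CDATA[
```python
def delay_margin(min_rtt):
    """Congestion detection margin of the initial C4 design: the lower
    of 1/8th of the min RTT and 25 ms (times in microseconds)."""
    return min(min_rtt / 8, 25_000)

# 40 ms min RTT -> 5 ms margin; 400 ms min RTT -> capped at 25 ms.
```
]]></artwork></figure>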

<t>The min RTT itself is measured over time. The detection of congestion by comparing
delays to min RTT plus margin works well, except in two conditions:</t>

<t><list style="symbols">
  <t>if the C4 connection is competing with another connection that
does not react to delay variations, such as a connection using Cubic,</t>
  <t>if the network exhibits a lot of latency jitter, as happens on
some Wi-Fi networks.</t>
</list></t>

<t>We also know that if several connections using delay-based algorithms
compete, the competition is only fair if they all have the same
estimate of the min RTT. We handle that by using a "periodic slow down"
mechanism.</t>

<section anchor="vegas-struggle"><name>Managing Competition with Loss Based Algorithms</name>

<t>Competition between Cubic and a delay based algorithm leads to Cubic
consuming all the bandwidth and the delay based connection starving.
This phenomenon forces TCP Vegas to be deployed only in controlled
environments, in which it does not have to compete with
TCP Reno <xref target="RFC6582"/> or Cubic.</t>

<t>We handled this competition issue by using a simple detection algorithm.
If C4 detected competition with a loss based algorithm, it switched
to a "pig war" mode and stopped reacting to changes in delays -- it would
instead only react to packet losses and ECN signals. In that mode,
we used another algorithm to detect when the competition had ceased,
and switched back to the delay responsive mode.</t>

<t>In our initial deployments, we detected competition when delay based
congestion notifications led to CWND and rate
reductions for more than 3
consecutive RTTs. The assumption is that if the competing traffic reacted to delay
variations, it would have reacted to the delay increases within
3 RTTs. However, that simple test caused many "false positive"
detections.</t>

<t>We refined this test to start the pig war
if we observed 4 consecutive delay-based rate reductions
and the nominal CWND was less than half the max nominal CWND
observed since the last "initial" phase, or if we observed
at least 5 reductions and the nominal CWND is less than 4/5th of
the max nominal CWND.</t>
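<t>The refined detection test can be expressed as a simple predicate
(a sketch; the function and variable names are illustrative):</t>

<figure><artwork><![CDATA[
```python
def detect_pig_war(consecutive_reductions, nominal_cwnd,
                   max_nominal_cwnd):
    """Refined competition test: enter "pig war" mode if delay-based
    reductions keep shrinking the CWND well below its observed max."""
    if consecutive_reductions >= 4 and \
            nominal_cwnd < max_nominal_cwnd / 2:
        return True
    if consecutive_reductions >= 5 and \
            nominal_cwnd < 4 * max_nominal_cwnd / 5:
        return True
    return False
```
]]></artwork></figure>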

<t>We validated this test by comparing the
ratio <spanx style="verb">CWND/MAX_CWND</spanx> for "valid" decisions, when we are simulating
a competition scenario, and "spurious" decisions, when the
"more than 3 consecutive reductions" test fires but we are
not simulating any competition:</t>

<texttable>
      <ttcol align='left'>Ratio CWND/Max</ttcol>
      <ttcol align='left'>valid</ttcol>
      <ttcol align='left'>spurious</ttcol>
      <c>Average</c>
      <c>30%</c>
      <c>75%</c>
      <c>Max</c>
      <c>49%</c>
      <c>100%</c>
      <c>Top 25%</c>
      <c>37%</c>
      <c>91%</c>
      <c>Median</c>
      <c>35%</c>
      <c>83%</c>
      <c>Bottom 25%</c>
      <c>20%</c>
      <c>52%</c>
      <c>Min</c>
      <c>12%</c>
      <c>25%</c>
      <c>&lt;50%</c>
      <c>100%</c>
      <c>20%</c>
</texttable>

<t>Note that this validation was based on simulations, and that we cannot
claim that our simulations perfectly reflect the real world. We will
discuss in <xref target="simplify"/> how these imperfections led us to change
our overall design.</t>

<t>Our initial algorithm for exiting competition mode was simple: C4 would exit the
"pig war" mode if the available bandwidth increased.</t>

</section>
<section anchor="handling-chaotic-delays"><name>Handling Chaotic Delays</name>

<t>Some Wi-Fi networks exhibit spikes in latency. These spikes are
probably what caused the delay jitter discussed in
<xref target="Cubic-QUIC-Blog"/>. We discussed them in more detail in
<xref target="Wi-Fi-Suspension-Blog"/>. We are not sure about the
mechanism behind these spikes, but we have noticed that they
mostly happen when several adjacent Wi-Fi networks are configured
to use the same frequencies and channels. In these configurations,
we expect the hidden node problem to result in some collisions.
The Wi-Fi layer 2 retransmission algorithm takes care of these
losses, but apparently uses an exponential back off algorithm
to space retransmission delays in case of repeated collisions.
When repeated collisions occur, the exponential backoff mechanism
can cause large delays. The Wi-Fi layer 2 algorithm will also
try to maintain delivery order, and subsequent packets will
be queued behind the packet that caused the collisions.</t>

<t>In our initial design, we detected the advent of such "chaotic delay jitter" by computing
a running estimate of the max RTT. We measured the max RTT observed
in each round trip, to obtain the "era max RTT". We then computed
an exponentially averaged "nominal max RTT":</t>

<figure><artwork><![CDATA[
nominal_max_rtt = (7 * nominal_max_rtt + era_max_rtt) / 8;
]]></artwork></figure>

<t>If the nominal max RTT was more than twice the min RTT, we set the
"chaotic jitter" condition. When that condition was set, we stopped
considering excess delay as an indication of congestion,
and we changed
the way we computed the "current CWND" used for the controlled
path. Instead of simply setting it to "nominal CWND", we set it
to a larger value:</t>

<figure><artwork><![CDATA[
target_cwnd = alpha*nominal_cwnd +
              (max_bytes_acked - nominal_cwnd) / 2;
]]></artwork></figure>
<t>In this formula, <spanx style="verb">alpha</spanx> is the amplification coefficient corresponding
to the current state, such as for example 1 if "cruising" or 1.25
if "pushing" (see <xref target="congestion"/>), and <spanx style="verb">max_bytes_acked</spanx> is the largest
number of bytes in flight that was successfully acknowledged since
the last initial phase.</t>

<t>The increased <spanx style="verb">target_cwnd</spanx> enabled C4 to keep sending data through
most jitter events. There is of course a risk that this increased
value will cause congestion. We limit that risk by only using half
the value of <spanx style="verb">max_bytes_acked</spanx>, and by setting a
conservative pacing rate:

<figure><artwork><![CDATA[
target_rate = alpha*nominal_rate;
]]></artwork></figure>
<t>Using the pacing rate that way prevents the larger window from
causing big spikes in traffic.</t>

<t>The network conditions can evolve over time. C4 will keep monitoring
the nominal max RTT, and will reset the "chaotic jitter" condition
if nominal max RTT decreases below a threshold of 1.5 times the
min RTT.</t>
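<t>The setting and resetting of the "chaotic jitter" condition form a
simple hysteresis, which can be sketched as follows (illustrative
names; the 2x and 1.5x thresholds are the ones quoted above):</t>

<figure><artwork><![CDATA[
```python
def update_chaotic_jitter(flag, nominal_max_rtt, min_rtt):
    """Hysteresis on the "chaotic jitter" condition: set when the
    nominal max RTT exceeds twice the min RTT, reset when it drops
    below 1.5 times the min RTT."""
    if nominal_max_rtt > 2 * min_rtt:
        return True
    if nominal_max_rtt < 1.5 * min_rtt:
        return False
    return flag  # between the thresholds, keep the previous state
```
]]></artwork></figure>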

</section>
<section anchor="slowdown"><name>Monitoring the min RTT</name>

<t>Delay based algorithms rely on a correct estimate of the
min RTT. They will naturally discover a reduction in the min
RTT, but detecting an increase in the min RTT is difficult.
There are known failure modes when multiple delay based
algorithms compete, in particular the "late comer advantage".</t>

<t>In our initial design, the connections ensured that their min RTT is valid by
occasionally entering a "slowdown" period, during which they set
CWND to half the nominal value. This is similar to
the "Probe RTT" mechanism implemented in BBR, or the
"initial and periodic slowdown" proposed as extension
to LEDBAT in <xref target="I-D.irtf-iccrg-ledbat-plus-plus"/>. In our
implementation, the slowdown occurs if more than 5
seconds have elapsed since the previous slowdown, or
since the last time the min RTT was set.</t>

<t>The measurement of min RTT in the period
that follows the slowdown is considered a "clean"
measurement. If two consecutive slowdown periods were
followed by clean measurements larger than the current
min RTT, we detect an RTT change and reset the
connection. If the measurement results in the same
value as the previous min RTT, C4 continues normal
operation.</t>

<t>Some applications exhibit periods of natural slowdown. This
is the case, for example, for multimedia applications when
they only send differentially encoded frames. Natural
slowdown was detected if an application sent less than
half the nominal CWND during a period, and more than 4 seconds
had elapsed since the previous slowdown or the previous
min RTT update. The measurement that follows a natural
slowdown was also considered a clean measurement.</t>
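<t>The two slowdown triggers of the initial design can be sketched as
follows (a sketch with illustrative names, times in seconds; we assume
the elapsed time is counted from the more recent of the previous
slowdown and the previous min RTT update):</t>

<figure><artwork><![CDATA[
```python
def should_slow_down(now, last_slowdown, last_min_rtt_update,
                     bytes_sent_in_period, nominal_cwnd):
    """Triggers for a min RTT re-measurement in the initial design."""
    elapsed = now - max(last_slowdown, last_min_rtt_update)
    # Forced slowdown: more than 5 seconds since the previous
    # slowdown or since the min RTT was last set.
    if elapsed > 5.0:
        return True
    # Natural slowdown: the application sent less than half the
    # nominal CWND, and more than 4 seconds have elapsed.
    if bytes_sent_in_period < nominal_cwnd / 2 and elapsed > 4.0:
        return True
    return False
```
]]></artwork></figure>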

<t>A slowdown period corresponds to a reduction in offered
traffic. If multiple connections are competing for the same
bottleneck, each of these connections may experience cleaner
RTT measurements, leading to equalization of the min RTT
observed by these connections.</t>

</section>
</section>
<section anchor="simplify"><name>Simplifying the initial design</name>

<t>After extensive testing of our initial design, we felt we had
drifted away from our initial "simplicity" tenet. The algorithms
used to detect "pig war" and "chaotic jitter" were difficult
to tune, and despite our efforts they resulted in many
false positives and false negatives. The "slowdown" algorithm
made C4 less friendly to "real time" applications that
prefer using stable estimated rates. These algorithms
interacted with each other in ways that were sometimes
hard to predict.</t>

<section anchor="chaotic-jitter-and-rate-control"><name>Chaotic jitter and rate control</name>

<t>As we observed the chaotic jitter behavior, we came to the
conclusion that only controlling the CWND did not work well.
We had a dilemma: either use a small CWND to guarantee that
RTTs remain small, or use a large CWND so that transmission
would not stall during peaks in jitter. But if we use a large
CWND, we need some form of pacing to prevent senders from
sending a large amount of packets too quickly. And then we
realized that if we do have to set a pacing rate, we can simplify
the algorithm.</t>

<t>Suppose that we compute a pacing rate that matches the network
capacity, just like BBR does. Then, to a first approximation,
setting the CWND too high does not matter much.
The number of bytes in flight will be limited by the product
of the pacing rate by the actual RTT. We are thus free to
set the CWND to a large value.</t>

</section>
<section anchor="monitoring-the-nominal-max-rtt"><name>Monitoring the nominal max RTT</name>

<t>The observation on chaotic jitter leads to the idea of monitoring
the maximum RTT. There is some difficulty here, because the
observed RTT has three components:</t>

<t><list style="symbols">
  <t>The minimum RTT in the absence of jitter</t>
  <t>The jitter caused by access networks such as Wi-Fi</t>
  <t>The delays caused by queues in the network</t>
</list></t>

<t>We cannot merely use the maximum value of the observed RTT,
because of the queuing delay component. In pushing periods, we
are going to use a data rate slightly higher than the measured
value. This will create a bit of queuing, pushing the queuing
delay component ever higher -- and eventually resulting in
"buffer bloat".</t>

<t>To avoid that, we can schedule periodic intervals in which the
endpoint sends data at a rate deliberately slower than the
rate estimate. This enables us to get a "clean" measurement
of the Max RTT.</t>

<t>If we are dealing with jitter, the clean Max RTT measurements
will include whatever jitter was happening at the time of the
measurement. It is not sufficient to measure the Max RTT once;
we must keep the maximum value of a long enough series of measurements
to capture the maximum jitter that the network can cause. But
we are also aware that jitter conditions change over time, so
we have to make sure that if the jitter diminished, the
Max RTT also diminishes.</t>

<t>We solved that by measuring the Max RTT during the "recovery"
periods that follow every "push". These periods occur about every 6 RTT,
giving us reasonably frequent measurements. During these periods, we
try to ensure clean measurements by
setting the pacing rate a bit lower than the nominal rate -- 6.25%
slower in our initial trials. We apply the following algorithm:</t>

<t><list style="symbols">
  <t>compute the <spanx style="verb">max_rtt_sample</spanx> as the maximum RTT observed for
packets sent during the recovery period.</t>
  <t>if the <spanx style="verb">max_rtt_sample</spanx> is more than <spanx style="verb">max_jitter</spanx> above
<spanx style="verb">running_min_rtt</spanx>, reset it to <spanx style="verb">running_min_rtt + max_jitter</spanx>
(by default, <spanx style="verb">max_jitter</spanx> is set to 250ms).</t>
  <t>if <spanx style="verb">max_rtt_sample</spanx> is larger than <spanx style="verb">nominal_max_rtt</spanx>, set
<spanx style="verb">nominal_max_rtt</spanx> to that value.</t>
  <t>else, set <spanx style="verb">nominal_max_rtt</spanx> to
<spanx style="verb">nominal_max_rtt = gamma*max_rtt_sample + (1-gamma)*nominal_max_rtt</spanx>.
The <spanx style="verb">gamma</spanx> coefficient is set to <spanx style="verb">1/8</spanx> in our initial trials.</t>
</list></t>
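<t>The three steps above can be combined into a single update function
(a sketch with illustrative names, times in seconds; the default
values are the ones quoted in the text):</t>

<figure><artwork><![CDATA[
```python
def update_nominal_max_rtt(nominal_max_rtt, max_rtt_sample,
                           running_min_rtt,
                           max_jitter=0.250, gamma=0.125):
    """Update of the nominal max RTT after a recovery period."""
    # Cap the sample at running_min_rtt + max_jitter.
    capped = min(max_rtt_sample, running_min_rtt + max_jitter)
    if capped > nominal_max_rtt:
        return capped  # increase immediately
    # Otherwise decay toward the sample with coefficient gamma.
    return gamma * capped + (1 - gamma) * nominal_max_rtt
```
]]></artwork></figure>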

<section anchor="preventing-runaway-max-rtt"><name>Preventing Runaway Max RTT</name>

<t>Computing Max RTT the way we do bears the risk of "run away increase"
of Max RTT:</t>

<t><list style="symbols">
  <t>C4 notices high jitter, increases the Nominal Max RTT accordingly, and sets CWND to the
product of the increased Nominal Max RTT and the Nominal Rate.</t>
  <t>If the Nominal Rate is above the actual link rate, C4 will fill the pipe and create a queue.</t>
  <t>On the next measurement, C4 finds that the max RTT has increased because of the queue,
interprets that as "more jitter", increases the Max RTT, and fills the queue some more.</t>
  <t>Repeat until the queue becomes so large that packets are dropped and cause a
congestion event.</t>
</list></t>

<t>Our proposed algorithm limits the Max RTT to at most <spanx style="verb">running_min_rtt + max_jitter</spanx>,
but that is still risky. If congestion causes queues, the running measurements of <spanx style="verb">min RTT</spanx>
will increase, causing the algorithm to allow for corresponding increases in <spanx style="verb">max RTT</spanx>.
This would not happen as fast as without the capping to <spanx style="verb">running_min_rtt + max_jitter</spanx>,
but it would still increase.</t>

</section>
<section anchor="initial-phase-and-max-rtt"><name>Initial Phase and Max RTT</name>

<t>During the initial phase, the nominal max RTT and the running min RTT are
set to the first RTT value that is measured. This is not great in the presence
of high jitter, which causes C4 to exit the Initial phase early, leaving
the nominal rate way too low. If C4 is competing on the Wi-Fi link
against another connection, it might remain stalled at this low data rate.</t>

<t>We considered updating the Max RTT during the Initial phase, but that
prevents any detection of delay based congestion. The Initial phase
would continue until path buffers are full, a classic case of buffer
bloat. Instead, we adopted a simple workaround:</t>

<t><list style="symbols">
  <t>Maintain a flag "initial_after_jitter", initialized to 0.</t>
  <t>Get a measure of the max RTT after exit from initial.</t>
  <t>If C4 detects a "high jitter" condition and the
"initial_after_jitter" flag is still 0, set the
flag to 1 and re-enter the "initial" state.</t>
</list></t>

<t>Empirically, we detect high jitter in that case if the "running min RTT"
is less than 2/5th of the "nominal max RTT".</t>
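<t>The workaround can be sketched as follows (illustrative names; the
function returns the new flag value and whether to re-enter the
Initial state):</t>

<figure><artwork><![CDATA[
```python
def maybe_reenter_initial(initial_after_jitter, running_min_rtt,
                          nominal_max_rtt):
    """If high jitter is detected after exit from Initial and the
    flag is still clear, set the flag and re-enter Initial."""
    high_jitter = running_min_rtt < 2 * nominal_max_rtt / 5
    if high_jitter and initial_after_jitter == 0:
        return 1, True
    return initial_after_jitter, False
```
]]></artwork></figure>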

</section>
</section>
<section anchor="monitor-rate"><name>Monitoring the nominal rate</name>

<t>The nominal rate is measured on each acknowledgement by dividing
the number of bytes acknowledged since the packet was sent
by the RTT measured with the acknowledgement of the packet. However,
that measurement is noisy, because delay jitter can cause
underestimation of the RTT, resulting in overestimation of the
rate. We tested two ways of measuring the data rate: a simple
way, similar to what is specified in BBR, and a more involved
way, using a harmonic filter.</t>

<t>We only use the measurements to increase the nominal rate,
replacing the current value if we observe a greater filtered measurement.
This is a deliberate choice, as decreases in measurements are ambiguous.
They can result from the application being rate limited, or from
measurement noise. Following those decreases would cause the rate to
drift randomly downwards over time, which can be detrimental for rate limited applications.
If the network conditions have changed, the rate will
be reduced if congestion signals are received, as explained
in <xref target="congestion"/>.</t>

<section anchor="simple-rate-measurement"><name>Simple rate measurement</name>

<t>The simple algorithm protects against underestimation of the
delay by observing that
delivery rates cannot be larger than the rate at which the
packets were sent, thus keeping the lower of the estimated
receive rate and the send rate.</t>

<t>The simple version of the algorithm would be:</t>

<figure><artwork><![CDATA[
measured_rate = bytes_acknowledged / rtt_sample
if measured_rate > send_rate:
    measured_rate = send_rate
if measured_rate > nominal_rate:
    nominal_rate = measured_rate
]]></artwork></figure>

<t>This algorithm works reasonably well if there is not too
much delay jitter. However, 
our first trials show that the rate estimate is often a little
above the actual data rate. For example, if we take the
simple case of a C4 connection simulated over a clean fixed
20Mbps path, we see the data rate measurements after
the initial phase vary between 15.9 and 22.1 Mbps, with
a median of 20.5 Mbps. Since the nominal rate is taken as the
maximum over an interval, we often see it well above the
nominal 20 Mbps. This translates into queues and delays.</t>

<t>If we go from a nice fixed rate path to a simulated "bad Wi-Fi"
path, the problem worsens. The data rate measurements after
the initial phase vary between 200kbps and 27.7 Mbps, with
a median of 11.9 Mbps -- with maximum and median well above
the simulated data rate of 10 Mbps. The nominal rate
can be more than double the actual rate, which creates huge
queues and delays.</t>

</section>
<section anchor="harmonic-filter"><name>Harmonic filter</name>

<t>We assumed initially that simple precautions like limiting to
the send rate would be sufficient. They are not, in part
because of the "send quantum" effect, which allows for
peaks of data rate above the nominal rate.</t>

<t>The data rate measurements are the quotient of the number of
bytes received by the delay. The number of bytes received is
easy to ascertain, but the measurements of the delay are very noisy.
Instead of trying to average the data rates, we can average
their inverse, i.e., the quotients of the delay by the
bytes received, the times per byte. Then we can obtain
smoothed data rates as the inverse of these times per byte,
effectively computing a harmonic average of measurements
over time.</t>

<t>We compute an exponentially weighted moving average
of the time per byte by computing the recursive function:</t>

<figure><artwork><![CDATA[
tpb = rtt_sample/nb_bytes_acknowledged
smoothed_tpb = (1-kappa)*smoothed_tpb + kappa*tpb
]]></artwork></figure>

<t>We then derive the smoothed rate measurement:</t>

<figure><artwork><![CDATA[
smoothed_rate_measurement = 1/smoothed_tpb
]]></artwork></figure>

<t>And we update the nominal rate:</t>

<figure><artwork><![CDATA[
if smoothed_rate_measurement > nominal_rate:
    nominal_rate = smoothed_rate_measurement
]]></artwork></figure>

<t>These computations depend on the coefficient <spanx style="verb">kappa</spanx>. We empirically
chose <spanx style="verb">kappa = 1/4</spanx>. However, even using a relatively large coefficient
still bears the risk of smoothing too much.</t>
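<t>The three steps above can be combined into a single filter (a
sketch; the class name and method name are illustrative):</t>

<figure><artwork><![CDATA[
```python
class HarmonicRateFilter:
    """Harmonic (time-per-byte) smoothing of rate measurements,
    with kappa = 1/4 as in the initial trials."""

    def __init__(self, kappa=0.25):
        self.kappa = kappa
        self.smoothed_tpb = None  # seconds per byte
        self.nominal_rate = 0.0   # bytes per second

    def on_ack(self, nb_bytes_acknowledged, rtt_sample):
        tpb = rtt_sample / nb_bytes_acknowledged
        if self.smoothed_tpb is None:
            self.smoothed_tpb = tpb
        else:
            self.smoothed_tpb = ((1 - self.kappa) * self.smoothed_tpb
                                 + self.kappa * tpb)
        smoothed_rate = 1.0 / self.smoothed_tpb
        # Only ever increase the nominal rate; decreases are left
        # to congestion signals.
        if smoothed_rate > self.nominal_rate:
            self.nominal_rate = smoothed_rate
        return self.nominal_rate

# Example: 100 kB acknowledged with a 40 ms RTT sample.
f = HarmonicRateFilter()
measured = f.on_ack(100_000, 0.040)  # about 2.5e6 bytes/s
```
]]></artwork></figure>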

</section>
<section anchor="alternative-to-smoothing"><name>Alternative to smoothing</name>

<t>Smoothing a control input like the measured data rate involves an
inherent tradeoff. While we reduce the noise, we also delay
any observation. That is very obvious during the initial phase. The
data rate is expected to double or more every RTT, and we are also
expected to receive a few ACKs per RTT. If we smooth with a
coefficient 1/4 and we receive 4 acknowledgements per RTT, the
smoothed value will end up being about 68% of the actual value.
Instead of doubling every RTT, the growth rate will be much
slower.</t>
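<t>The 68% figure can be checked numerically (a small sketch of the
scenario above: the true value grows by a factor of 2^(1/4) per
acknowledgement, i.e., doubles every RTT with 4 ACKs per RTT, and is
smoothed with kappa = 1/4):</t>

<figure><artwork><![CDATA[
```python
kappa = 0.25
growth = 2 ** 0.25   # per-ACK growth: doubling every 4 ACKs
value = 1.0
smoothed = 1.0
for _ in range(100):  # let the ratio converge
    value *= growth
    smoothed = (1 - kappa) * smoothed + kappa * value
ratio = smoothed / value  # converges to about 0.68
```
]]></artwork></figure>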

<t>In the short term, we solved the issue by using the "classic"
computation during the Initial phase, and only using smoothing
in the later phase if we detect high jitter, i.e., if
the "running min RTT" is smaller than 3/4th of the "nominal max RTT".</t>

<t>This is clearly not the final solution. C4 tests for available
bandwidth by sending data a little bit faster during "pushing"
periods. If we smooth the data during this period, we will not
be able to reliably assess whether the rate did grow. On the other
hand, if we do not smooth, we could be making decisions based
on spurious events.</t>

<t>We have to find a way out of this dilemma. It probably requires
detecting the "high jitter" conditions, and during these
conditions trying larger pushes, maybe 25% instead of 6.25%,
because those larger pushes are more likely to get an unambiguous
response, such as triggering a congestion event if the increase
in sending rate was excessive. But congestion signals are
themselves harder to detect in case of large jitter, for example
because it is hard to distinguish delay increases due to
queues from those due to jitter.</t>

<t>Or, we may want to test another form of signal altogether. For example,
if the pacing rate is set correctly, we should find very few cases
when the CWND is all used before an ACK is received. We could monitor
that event, and use it to either increase or decrease the pacing rate.</t>

<t>This clearly requires further testing.</t>

</section>
</section>
</section>
<section anchor="competition-with-other-algorithms"><name>Competition with other algorithms</name>

<t>We saw in <xref target="vegas-struggle"/> that delay based algorithms required
a special "escape mode" when facing competition from algorithms
like Cubic. Relying on pacing rate and max RTT instead of CWND
and min RTT makes this problem much simpler. The measured max RTT
will naturally increase as algorithms like Cubic cause buffer
bloat and increased queues. Instead of being shut down,
C4 will just keep increasing its max RTT and thus its running
CWND, automatically matching the other algorithm's values.</t>

<t>We verified that behavior in a number of simulations. We also
verified that when the competition ceases, C4 will progressively
drop its nominal max RTT, returning to situations with very low
queuing delays.</t>

<section anchor="no-need-for-slowdowns"><name>No need for slowdowns</name>

<t>The fairness of delay based algorithms depends on all competing
flows having similar estimates of the min RTT. As discussed
in <xref target="slowdown"/>, this ends up creating variants of the
<spanx style="verb">latecomer advantage</spanx> issue, requiring a periodic slowdown
mechanism to ensure that all competing flows have a chance to
update the RTT value.</t>

<t>This problem is caused by the default algorithm of setting the
min RTT to the minimum of all RTT sample values since the beginning
of the connection. Flows that started earlier compute
that minimum over a longer period, and thus discover a smaller
min RTT than more recent flows. This problem does not exist with the
max RTT, because all competing flows see the same max RTT
value. The slowdown mechanism is thus not necessary.</t>

<t>Removing the need for a slowdown mechanism allows for a
simpler protocol, better suited to real time communications.</t>

</section>
</section>
<section anchor="congestion"><name>React quickly to changing network conditions</name>

<t>Our focus is on maintaining low delays, and thus reacting
quickly to changes in network conditions. We can detect some of these
changes by monitoring the RTT and the data rate, but
experience with the early version of BBR showed that
completely ignoring packet losses can lead to very unfair
competition with Cubic. The L4S effort is promoting the use
of ECN feedback by network elements (see <xref target="RFC9331"/>),
which could well end up detecting congestion and queues
more precisely than the monitoring of end-to-end delays.
C4 will thus detect changing network conditions by monitoring
three congestion control signals:</t>

<t><list style="numbers" type="1">
  <t>Excessive increase of measured RTT (above the nominal Max RTT),</t>
  <t>Excessive rate of packet losses (but not mere Probe Time Out, see <xref target="no-pto"/>),</t>
  <t>Excessive rate of ECN/CE marks</t>
</list></t>

<t>If any of these signals is detected, C4 enters a "recovery"
state. On entering recovery, C4 reduces the <spanx style="verb">nominal_rate</spanx>
by a factor "beta":</t>

<figure><artwork><![CDATA[
    // on congestion detected:
    nominal_rate = (1-beta)*nominal_rate
]]></artwork></figure>
<t>The coefficient <spanx style="verb">beta</spanx> differs depending on the nature of the congestion
signal. For packet losses, it is set to <spanx style="verb">1/4</spanx>, similar to the
value used in Cubic. For delay-based signals, it is proportional to the
difference between the measured RTT and the target RTT divided by
the acceptable margin, capped to <spanx style="verb">1/4</spanx>. If the signal
is an ECN/CE rate, we may
use a proportional reduction coefficient in line with
<xref target="RFC9331"/>, again capped to <spanx style="verb">1/4</spanx>.</t>
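<t>As a concrete illustration, the reduction rule can be sketched as
a small function. This is a sketch only: the function name, the
parameters, and the "half of the CE mark fraction" rule for ECN are
illustrative assumptions, not the normative C4 computation.</t>

<figure><artwork><![CDATA[
# Sketch of the per-signal "beta" reduction (illustrative names).

BETA_CAP = 0.25  # every signal is capped to 1/4, like Cubic

def compute_beta(signal, rtt=None, target_rtt=None, margin=None,
                 ce_fraction=None):
    if signal == "loss":
        # packet losses: fixed reduction, similar to Cubic
        return BETA_CAP
    if signal == "delay":
        # proportional to the RTT excess over the target,
        # relative to the acceptable margin, capped to 1/4
        return min(BETA_CAP, (rtt - target_rtt) / margin)
    if signal == "ecn":
        # a proportional response in the spirit of RFC 9331;
        # the "ce_fraction / 2" rule is an assumption
        return min(BETA_CAP, ce_fraction / 2)
    return 0.0

# on congestion detected:
# nominal_rate = (1 - compute_beta(...)) * nominal_rate
]]></artwork></figure>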

<t>During the recovery period, target CWND and pacing rate are set
to a fraction of the "nominal rate" multiplied by the
"nominal max RTT".
The recovery period ends when the first packet
sent after entering recovery is acknowledged. Congestion
signals are processed when entering recovery; further signals
are ignored until the end of recovery.</t>

<t>Network conditions may change for the better or for the worse. Worsening 
is detected through congestion signals, but increases can only be detected
by trying to send more data and checking whether the network accepts it.
Different algorithms have approached this in two ways: pursuing regular increases of
CWND until congestion finally occurs, as in the "congestion
avoidance" phase of TCP RENO; or periodically probing the network
by sending at a higher rate, as in the Probe Bandwidth mechanism of
BBR. C4 adopts the periodic probing approach, in particular
because it is a better fit for variable rate multimedia applications
(see details in <xref target="limited"/>).</t>

<section anchor="no-pto"><name>Do not react to Probe Time Out</name>

<t>QUIC normally detects losses by observing gaps in the sequence of acknowledged
packets. That's a robust signal. QUIC will also inject "probe time out"
packets if the PTO timer elapses before the last sent packet has been acknowledged.
This is not a robust congestion signal, because delay jitter may also cause
PTO timeouts. When testing in "high jitter" conditions, we realized that we should
not change the state of C4 for losses detected solely by timers, and
only react to those losses that are detected by gaps in acknowledgements.</t>
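<t>The loss filter can be sketched as follows; the function and
parameter names are illustrative, not taken from an actual QUIC stack:</t>

<figure><artwork><![CDATA[
# Sketch: only gap-detected losses feed the congestion signal;
# timer-based (PTO) losses are ignored.

def is_congestion_loss(lost_packet_number, largest_acked,
                       detected_by_timer):
    if detected_by_timer:
        # a PTO expiry may just be delay jitter: not a robust signal
        return False
    # a gap: a later packet was acknowledged while this one was not
    return lost_packet_number < largest_acked
]]></artwork></figure>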

</section>
<section anchor="rate-update"><name>Update the Nominal Rate after Pushing</name>

<t>C4 configures the transport with a larger rate and CWND
than the nominal values during "pushing" periods.
The peer will acknowledge the data sent during these periods in
the round trip that follows.</t>

<t>When we receive an ACK for a newly acknowledged packet,
we update the nominal rate as explained in <xref target="monitor-rate"/>.</t>

<t>This strategy is effectively a form of "make before break".
The pushing
only increases the rate by a fraction of the nominal values,
and only lasts for one round trip. That limited increase is not
expected to increase the size of queues by more than a small
fraction of the bandwidth*delay product. It might cause a
slight increase of the measured RTT for a short period, or
perhaps cause some ECN signalling, but it should not cause packet
losses -- unless competing connections have caused large queues.
If there was no extra
capacity available, C4 does not increase the nominal CWND and
the connection continues with the previous value.</t>
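<t>The principle of the update can be sketched in a few lines; this
is a simplified illustration with illustrative names, not the exact
computation of <xref target="monitor-rate"/>:</t>

<figure><artwork><![CDATA[
# Sketch of the "make before break" rate update: the nominal rate
# is only raised when the acknowledgements show that the network
# actually delivered more (illustrative names).

def update_nominal_rate(nominal_rate, delivered_bytes, interval):
    measured_rate = delivered_bytes / interval
    if measured_rate > nominal_rate:
        return measured_rate  # extra capacity confirmed by ACKs
    return nominal_rate       # otherwise keep the previous value
]]></artwork></figure>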

</section>
</section>
<section anchor="fairness"><name>Driving for fairness</name>

<t>Many protocols enforce fairness by tuning their behavior so
that large flows become less aggressive than smaller ones, either
by trying less hard to increase their bandwidth or by reacting
more to congestion events. We considered adopting a similar
strategy for C4.</t>

<t>The aggressiveness of C4 is driven by several considerations:</t>

<t><list style="symbols">
  <t>the frequency of the "pushing" periods,</t>
  <t>the coefficient <spanx style="verb">alpha</spanx> used during pushing,</t>
  <t>the coefficient <spanx style="verb">beta</spanx> used during response to congestion events,</t>
  <t>the delay threshold above a nominal value to detect congestion,</t>
  <t>the ratio of packet losses considered excessive,</t>
  <t>the ratio of ECN marks considered excessive.</t>
</list></t>

<t>We clearly want to have some or all of these parameters depend
on how much resource the flow is using.
There are known limits to these strategies. For example,
consider TCP Reno, in which the growth rate of CWND during the
"congestion avoidance" phase is inversely proportional to its size.
This drives very good long term fairness, but in practice
it prevents TCP Reno from operating well on high speed or
high delay connections, as discussed in the "problem description"
section of <xref target="RFC3649"/>. In that RFC, Sally Floyd proposed
using a growth rate inversely proportional to the
logarithm of the CWND, which would not be so drastic.</t>

<t>In the initial design, we proposed making the frequency of the
pushing periods inversely proportional to the logarithm of the
CWND, but that is in tension with our estimation of
the max RTT, which requires frequent "recovery" periods.
We would not want the Max RTT estimate to work less well for
high speed connections! We solved the tension in favor of
reliable max RTT estimates, and fixed to 4 the number
of Cruising periods between Recovery and Pushing. The whole
cycle takes about 6 RTT.</t>

<t>We also reduced the default rate increase during Pushing to
6.25%, which means that the default cycle is more or less on
par with the aggressiveness of RENO when
operating at low bandwidth (lower than 34 Mbps).</t>
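<t>The 34 Mbps figure can be checked with a short computation,
assuming 1500-byte packets and a typical wide-area RTT of about 34 ms
(both values are assumptions made for the sake of the example):</t>

<figure><artwork><![CDATA[
# RENO grows CWND by 1 MSS per RTT; the default C4 cycle grows the
# rate by 6.25% every ~6 RTT, i.e. about 1.04% per RTT. The two
# match when 1/CWND equals that per-RTT fraction.
MSS = 1500                    # bytes, an assumption
per_rtt_growth = 0.0625 / 6   # fractional growth per RTT

cwnd_packets = 1 / per_rtt_growth    # 96 packets
cwnd_bytes = cwnd_packets * MSS      # 144,000 bytes

# at a ~34 ms RTT this corresponds to roughly 34 Mbps; above that
# rate, C4's relative growth outpaces RENO's absolute growth
rtt = 0.034
rate_mbps = cwnd_bytes * 8 / rtt / 1e6
print(round(rate_mbps))  # -> 34
]]></artwork></figure>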

<section anchor="absence-of-constraints-is-unfair"><name>Absence of constraints is unfair</name>

<t>Once we fixed the push frequency and the default increase rate, we were
left with responses that were mostly proportional to the amount
of resource used by a connection. Such a design makes the resource sharing
very dependent on initial conditions. We saw simulations where
after some initial period, one of two competing connections on
a 20 Mbps path might settle at a 15 Mbps rate and the other at 5 Mbps.
Both connections would react to a congestion event by dropping
their bandwidth by 25%, to 11.25 or 3.75 Mbps. And then once the condition
eased, both would increase their data rate by the same amount. If
everything went well the two connections would share the bandwidth
without exceeding it, and the situation would be very stable --
but also very unfair.</t>

<t>We also had some simulations in which a first connection would
grab all the available bandwidth, and a latecomer connection
would struggle to get any bandwidth at all. The analysis 
showed that the second connection was
exiting the initial phase early, after encountering either
excess delay or excess packet loss. The first
connection was saturating the path, so any additional traffic
caused queuing or losses, and the second connection had
no chance to grow.</t>

<t>This "second comer shut down" effect happened particularly often
on high jitter links. The established connections had tuned their
timers or congestion window to account for the high jitter. The
second connection was basing its timers on its first
measurements, before any of the big jitter events had occurred.
This caused an imbalance between the first connection, which
expected large RTT variations, and the second, which did not
expect them yet.</t>

<t>These shutdown effects happened in simulations with the first
connection using either Cubic, BBR or C4. We had to design a response,
and we first turned to making the response to excess delay or
packet loss a function of the data rate of the flow.</t>

</section>
<section anchor="introducing-a-sensitivity-curve"><name>Introducing a sensitivity curve</name>

<t>In our second design, we attempted to fix the unfairness and
shutdown effects by introducing a sensitivity curve,
computing a "sensitivity" as a function of the flow data
rate. Our first implementation is simple:</t>

<t><list style="symbols">
  <t>set sensitivity to 0 if the data rate is lower than 50,000 B/s,</t>
  <t>linear interpolation between 0 and 0.92 for values
between 50,000 and 1,000,000 B/s,</t>
  <t>linear interpolation between 0.92 and 1 for values
between 1,000,000 and 10,000,000 B/s,</t>
  <t>set sensitivity to 1 if the data rate is higher than
10,000,000 B/s.</t>
</list></t>

<t>The sensitivity index is then used to set the value of delay and
loss thresholds. For the delay threshold, the rule is:</t>

<figure><artwork><![CDATA[
    delay_fraction = 1/16 + (1 - sensitivity)*3/16
    delay_threshold = min(25ms, delay_fraction*nominal_max_rtt)
]]></artwork></figure>

<t>For the loss threshold, the rule is:</t>

<figure><artwork><![CDATA[
loss_threshold = 0.02 + 0.50 * (1-sensitivity);
]]></artwork></figure>
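<t>A direct transcription of the curve and the two thresholds, with
rates in bytes per second and delays in seconds (function names are
illustrative, the formulas match the rules above):</t>

<figure><artwork><![CDATA[
def sensitivity(rate):
    if rate <= 50_000:
        return 0.0
    if rate <= 1_000_000:
        # linear from 0 to 0.92 between 50,000 and 1,000,000 B/s
        return 0.92 * (rate - 50_000) / 950_000
    if rate <= 10_000_000:
        # linear from 0.92 to 1 between 1,000,000 and 10,000,000 B/s
        return 0.92 + 0.08 * (rate - 1_000_000) / 9_000_000
    return 1.0

def delay_threshold(rate, nominal_max_rtt):
    delay_fraction = 1/16 + (1 - sensitivity(rate)) * 3/16
    return min(0.025, delay_fraction * nominal_max_rtt)

def loss_threshold(rate):
    return 0.02 + 0.50 * (1 - sensitivity(rate))
]]></artwork></figure>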

<t>This very simple change allowed us to stabilize the results. In our
competition tests we see resources shared almost equitably between
C4 connections, and reasonably between C4 and Cubic or C4 and BBR.
We no longer observe the shutdown effects that we saw before.</t>

<t>There is no doubt that the current curve will have to be refined. We have
a couple of tests in our test suite with total capacity higher than
20 Mbps, and for those tests the dependency on initial conditions remains.
We will revisit the definition of the curve, probably to have the sensitivity
follow the logarithm of the data rate.</t>

</section>
<section anchor="cascade"><name>Cascade of Increases</name>

<t>We sometimes encounter networks in which the available bandwidth changes rapidly.
For example, when a competing connection stops, the available capacity may double.
With low Earth orbit satellite constellations (LEO), it appears
that ground stations constantly check availability of nearby satellites, and
switch to a different satellite every 10 or 15 seconds depending on the
constellation (see <xref target="ICCRG-LEO"/>), with the bandwidth jumping from 10Mbps to
65Mbps.</t>

<t>Because we aim for fairness with RENO or Cubic, the cycle of recovery, cruising
and pushing will only result in slow increases, maybe 6.25% after 6 RTT.
This means we would only double the bandwidth after about 68 RTT, or increase
from 10 to 65 Mbps after 185 RTT -- by which time the LEO station might
have connected to a different orbiting satellite. To go faster, we implement
a "cascade": if the previous pushing at 6.25% was successful, the next
pushing will use 25% (see <xref target="variable-pushing"/>). If three successive pushings
all result in increases of the
nominal rate, C4 will reenter the "startup" mode, during which each RTT
can result in a 100% increase of rate and CWND.</t>
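<t>The round-trip counts quoted above follow from compound growth;
a short computation (assuming the default 6.25% gain per ~6 RTT
cycle) reproduces them:</t>

<figure><artwork><![CDATA[
import math

cycle_rtt = 6     # recovery + cruising + pushing, about 6 RTT
growth = 1.0625   # 6.25% rate increase per successful cycle

# RTTs needed to double, and to grow from 10 to 65 Mbps
double_rtt = cycle_rtt * math.log(2) / math.log(growth)
leo_rtt = cycle_rtt * math.log(65 / 10) / math.log(growth)
print(round(double_rtt), round(leo_rtt))  # -> 69 185
]]></artwork></figure>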

</section>
</section>
<section anchor="limited"><name>Supporting Application Limited Connections</name>

<t>C4 is specially designed to support multimedia applications,
which very often operate in application limited mode.
After testing and simulations with application limited traffic,
we incorporated a number of features.</t>

<t>The first feature is the design decision to only lower the nominal
rate if congestion is detected. This is in contrast with the BBR design,
in which the estimate of bottleneck bandwidth is also lowered
if the bandwidth measured after a "probe bandwidth" attempt is
lower than the current estimate while the connection was not
"application limited". We found that detection of the application
limited state was somewhat error prone. Occasional errors end up
with a spurious reduction of the estimate of the bottleneck bandwidth.
These errors can accumulate over time, causing the bandwidth
estimate to "drift down", and the multimedia experience to suffer.
Our strategy of only reducing the nominal values in
reaction to congestion notifications greatly reduces that risk.</t>

<t>The second feature is the "make before break" nature of the rate
updates discussed in <xref target="rate-update"/>. This reduces the risk
of using rates that are too large and would cause queues or losses,
and thus makes C4 a good choice for multimedia applications.</t>

<t>C4 adds two more features to handle multimedia
applications well: coordinated pushing (see <xref target="coordinated-pushing"/>),
and variable pushing rate (see <xref target="variable-pushing"/>).</t>

<section anchor="coordinated-pushing"><name>Coordinated Pushing</name>

<t>As stated in <xref target="fairness"/>, the connection will remain in "cruising"
state for a specified interval, and then move to "pushing". This works well
when the connection is almost saturating the network path, but not so
well for a media application that uses little bandwidth most of the
time, and only needs more bandwidth when it is refreshing the state
of the media encoders and sending new "reference" frames. If that
happens, pushing will only be effective if the pushing interval
coincides with the sending of these reference frames. If pushing
happens during an application limited period, there will be no data to
push with and thus no chance of increasing the nominal rate and CWND.
If the reference frames are sent outside of a pushing interval, the
rate and CWND will be kept at the nominal value.</t>

<t>To address that issue, one could imagine sending "filler" traffic during
the pushing periods. We tried that in simulations, and the drawback became
obvious. The filler traffic would sometimes cause queues and packet
losses, which degrade the quality of the multimedia experience.
We could reduce this risk of packet losses by sending redundant traffic,
for example creating the additional traffic using a forward error
correction (FEC) algorithm, so that individual packet losses are
immediately corrected. However, this is complicated, and FEC does
not always protect against long batches of losses.</t>

<t>C4 uses a simpler solution. When the time has come to enter pushing, it
will check whether the connection is "application limited", which is
simply defined as testing whether the application sent a "nominal CWND"
worth of data during the previous interval. If it is application limited,
C4 will remain in the cruising state until the application finally sends
more data, and will only enter the pushing state when the last period was
not application limited.</t>
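<t>The check itself is simple enough to sketch in a few lines
(illustrative names; byte-count accounting is an assumption):</t>

<figure><artwork><![CDATA[
# Pushing is deferred while the application is "app limited",
# i.e. it sent less than a nominal CWND worth of data during
# the previous interval.

def next_state_after_cruising(bytes_sent_last_interval, nominal_cwnd):
    app_limited = bytes_sent_last_interval < nominal_cwnd
    if app_limited:
        return "cruising"  # wait until the application sends more
    return "pushing"       # last period was not application limited
]]></artwork></figure>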

</section>
<section anchor="variable-pushing"><name>Variable Pushing Rate</name>

<t>C4 tests for available bandwidth at regular pushing intervals
(see <xref target="fairness"/>), during which the rate and CWND are set at 25% more
than the nominal values. This mimics what BBR
is doing, but may be less than ideal for real time applications.
When in pushing state, the application is allowed to send
more data than the nominal CWND, which causes temporary queues
and degrades the experience somewhat. On the other hand, not pushing
at all would not be a good option, because the connection could
end up stuck using only a fraction of the available
capacity. We thus have to find a compromise between operating at
low capacity and risking building queues.</t>

<t>We manage that compromise by adopting a variable pushing rate:</t>

<t><list style="symbols">
  <t>If pushing at 25% did not result in a significant increase of
the nominal rate, the next pushing will happen at 6.25%</t>
  <t>If pushing at 6.25% did result in some increase of the nominal CWND,
the next pushing will happen at 25%, otherwise it will
remain at 6.25%</t>
</list></t>

<t>As explained in <xref target="cascade"/>, if three consecutive pushing attempts
result in significant increases, C4 detects that the underlying network
conditions have changed, and will reenter the startup state.</t>

<t>The "significant increase" mentioned above is a matter of debate.
Even if capacity is available,
increasing the send rate by 25% does not always result in a 25%
increase of the acknowledged rate. Delay jitter, for example,
may result in lower measurements. We initially computed the threshold
for detecting a "significant" increase as 1/2 of the increase in
the sending rate, but multiple simulations showed that this was too high
and caused lower performance. We now set that threshold to 1/4 of the
increase in the sending rate.</t>
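<t>Putting the two rules and the "significant increase" test together,
the choice of the next pushing rate can be sketched as follows
(illustrative names; rates in arbitrary units):</t>

<figure><artwork><![CDATA[
SIGNIFICANT = 0.25  # fraction of the attempted rate increase

def next_push_rate(push_rate, rate_before, rate_after):
    # the push attempted an increase of push_rate * rate_before;
    # it is "significant" if at least 1/4 of that was realized
    gained = rate_after - rate_before
    significant = gained >= SIGNIFICANT * push_rate * rate_before
    if push_rate == 0.25 and not significant:
        return 0.0625   # back off to gentle pushing
    if push_rate == 0.0625 and significant:
        return 0.25     # capacity seems available, push harder
    return push_rate
]]></artwork></figure>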

</section>
<section anchor="pushing-rate-and-cascades"><name>Pushing rate and Cascades</name>

<t>The choice of a 25% push rate was motivated by discussions of
BBR design. Pushing has two parallel functions: discover the available
capacity, if any; and also, push back against other connections
in case of competition. Consider for example competition with Cubic.
The Cubic connection will only back off if it observes packet losses,
which typically happen when the bottleneck buffers are full. Pushing
at a high rate increases the chance of building queues,
overfilling the buffers, causing losses, and thus causing Cubic to back off.
Pushing at a lower rate like 6.25% would not have that effect, and C4
would keep using a lower share of the network. This is why we will always
push at 25% in the "pig war" mode.</t>

<t>The computation of the interval between pushes is tied to the need to
compete nicely, and follows the general idea that
the average growth rate should mimic that of RENO or Cubic in the
same circumstances. If we pick a lower push rate, such as 6.25% or
maybe 12.5%, we might be able to use shorter intervals. This could be
a nice compromise: in normal operation, push frequently, but at a
low rate. This would not create large queues or disturb competing
connections, but it would let C4 discover capacity more quickly. Then,
we could use the "cascade" algorithm to push at a higher rate,
and then maybe switch to startup mode if a lot of capacity is
available. This is something that we intend to test, but have not
implemented yet.</t>

</section>
</section>
<section anchor="state-machine"><name>State Machine</name>

<t>The state machine for C4 has the following states:</t>

<t><list style="symbols">
  <t>"startup": the initial state, during which the CWND is
set to twice the "nominal_CWND". The connection
exits startup if the "nominal_cwnd" does not
increase for 3 consecutive round trips. When the
connection exits startup, it enters "recovery".</t>
  <t>"recovery": the connection enters that state after
"startup", "pushing", or a congestion detection in
a "cruising" state. It remains in that state for
at least one roundtrip, until the first packet sent
in "recovery" is acknowledged. Once that happens,
the connection goes back
to "startup" if the last 3 pushing attempts have resulted
in increases of "nominal rate", or enters "cruising"
otherwise.</t>
  <t>"cruising": the connection is sending using the
"nominal_rate" and "nominal_max_rtt" value. If congestion is detected,
the connection exits cruising and enters
"recovery" after lowering the value of
"nominal_cwnd".
Otherwise, the connection will
remain in the "cruising" state for at least 4 RTT, and
until the connection is not "app limited". At that
point, it enters "pushing".</t>
  <t>"pushing": the connection is using a rate and CWND 25%
larger than "nominal_rate" and "nominal_CWND".
It remains in that state
for one round trip, i.e., until the first packet
sent while "pushing" is acknowledged. At that point,
it enters the "recovery" state.</t>
</list></t>

<t>These transitions are summarized in the following state
diagram.</t>

<figure><artwork><![CDATA[
                    Start
                      |
                      v
                      +<-----------------------+
                      |                        |
                      v                        |
                 +----------+                  |
                 | Startup  |                  |
                 +----|-----+                  |
                      |                        |
                      v                        |
                 +------------+                |
  +--+---------->|  Recovery  |                |
  ^  ^           +----|---|---+                |
  |  |                |   |     Rapid Increase |
  |  |                |   +------------------->+
  |  |                |
  |  |                v
  |  |           +----------+
  |  |           | Cruising |
  |  |           +-|--|-----+
  |  | Congestion  |  |
  |  +-------------+  |
  |                   |
  |                   v
  |              +----------+
  |              | Pushing  |
  |              +----|-----+
  |                   |
  +<------------------+

]]></artwork></figure>
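<t>The same transitions can be written as a small transition
function; the event names and the "pushes_succeeded" counter are
illustrative conveniences, not part of the C4 specification:</t>

<figure><artwork><![CDATA[
def transition(state, event, pushes_succeeded=0):
    if state == "startup" and event == "no_growth_3_rtt":
        return "recovery"
    if state == "recovery" and event == "first_packet_acked":
        # "rapid increase": re-enter startup after 3 good pushes
        return "startup" if pushes_succeeded >= 3 else "cruising"
    if state == "cruising":
        if event == "congestion":
            return "recovery"
        if event == "4_rtt_not_app_limited":
            return "pushing"
    if state == "pushing" and event == "first_packet_acked":
        return "recovery"
    return state
]]></artwork></figure>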

</section>
<section anchor="security-considerations"><name>Security Considerations</name>

<t>We do not believe that C4 introduces new security issues. There may be
some, such as what happens if applications can be fooled into going too
fast and overwhelming the network, or going too slow and starving the
application. Discuss!</t>

</section>
<section anchor="iana-considerations"><name>IANA Considerations</name>

<t>This document has no IANA actions.</t>

</section>


  </middle>

  <back>




    <references title='Informative References' anchor="sec-informative-references">

&RFC9000;
&I-D.ietf-moq-transport;
&RFC9438;
&I-D.ietf-ccwg-bbr;
&RFC6817;
&RFC6582;
&RFC3649;
<reference anchor="TCP-Vegas" target="https://ieeexplore.ieee.org/document/464716">
  <front>
    <title>TCP Vegas: end to end congestion avoidance on a global Internet</title>
    <author initials="L. S." surname="Brakmo">
      <organization></organization>
    </author>
    <author initials="L. L." surname="Peterson">
      <organization></organization>
    </author>
    <date year="1995" month="October"/>
  </front>
  <seriesInfo name="IEEE Journal on Selected Areas in Communications ( Volume: 13, Issue: 8, October 1995)" value=""/>
</reference>
<reference anchor="HyStart" target="https://doi.org/10.1016/j.comnet.2011.01.014">
  <front>
    <title>Taming the elephants: New TCP slow start</title>
    <author initials="S." surname="Ha">
      <organization></organization>
    </author>
    <author initials="I." surname="Rhee">
      <organization></organization>
    </author>
    <date year="2011" month="June"/>
  </front>
  <seriesInfo name="Computer Networks vol. 55, no. 9, pp. 2092-2110" value=""/>
</reference>
<reference anchor="Cubic-QUIC-Blog" target="https://www.privateoctopus.com/2019/11/11/implementing-cubic-congestion-control-in-quic/">
  <front>
    <title>Implementing Cubic congestion control in Quic</title>
    <author initials="C." surname="Huitema">
      <organization></organization>
    </author>
    <date year="2019" month="November"/>
  </front>
  <seriesInfo name="Christian Huitema's blog" value=""/>
</reference>
&I-D.ietf-quic-ack-frequency;
<reference anchor="Wi-Fi-Suspension-Blog" target="https://www.privateoctopus.com/2023/05/18/the-weird-case-of-wifi-latency-spikes.html">
  <front>
    <title>The weird case of the wifi latency spikes</title>
    <author initials="C." surname="Huitema">
      <organization></organization>
    </author>
    <date year="2023" month="May"/>
  </front>
  <seriesInfo name="Christian Huitema's blog" value=""/>
</reference>
&I-D.irtf-iccrg-ledbat-plus-plus;
&RFC9331;
<reference anchor="ICCRG-LEO" target="https://datatracker.ietf.org/meeting/122/materials/slides-122-iccrg-mind-the-misleading-effects-of-leo-mobility-on-end-to-end-congestion-control-00">
  <front>
    <title>Mind the Misleading Effects of LEO Mobility on End-to-End Congestion Control</title>
    <author initials="Z." surname="Lai">
      <organization></organization>
    </author>
    <author initials="Z." surname="Li">
      <organization></organization>
    </author>
    <author initials="Q." surname="Wu">
      <organization></organization>
    </author>
    <author initials="H." surname="Li">
      <organization></organization>
    </author>
    <author initials="Q." surname="Zhang">
      <organization></organization>
    </author>
    <date year="2025" month="March"/>
  </front>
  <seriesInfo name="Slides presented at ICCRG meeting during IETF 122" value=""/>
</reference>


    </references>




<section numbered="false" anchor="acknowledgments"><name>Acknowledgments</name>

<t>TODO acknowledge.</t>

</section>


  </back>

<!-- ##markdown-source:
H4sIAAAAAAAAA7V9+5MbR3Lm7/VXtMHYMCkB4AwfenC9DlMU5aVPEmVSti4c
F8dpAI2ZXgJobHdjRrMU/bdffl9mVlU3MNz1nY+xq5kBuuuRle9XzWaz0Nf9
pnpWTL6tuvpyVzTr4sVVW3d9Xe7+viteNLvLSv5odvi1b5uN/FxVxf0XTx5M
QrlYtNW1vPziSaHvT8Ky7KvLpr19VtS7dRPCqlnuyq3MsGrLdT+7OtR9tS1n
y+XN5Wz5ZLbia7Ozs9AdFtu662Sq/nYvz796+fN3YXfYLqr2WVjJqM/Cstl1
1a47dM+Kvj1Uod63/K3rH52dfX32KJRtVcpyfqkWRblbFa92fdXuqr74uS13
3b5p+0l4X93eNO3qWShmxYsn/O/RHvHpm6rc9PW2ks+228Ouln3JE/jmh2pV
l0VzXbXFv/7bqxchlIf+qmkxYigK2bYs78W8+KPuFB8pACJc86+a9vJZ8VNb
X8sGi9fLvtkfOln3co4v5Zl686wwmP2T/ZzLjvK53s6LH2W35fvDtmzTdG8P
V2U3+kZmK3f1X7gVWVDdLZtsnm6nD//TEl/Ml812tKV/qXa7enfZZXs6bDbV
bvDFp+dYbw7r9e0/1XU9X5Yh7Jp2K09ey+ECXeIf8sKb7158fXZ2xt9fzb6d
11W/nm2bP896P8z42JPHXw0fI3YJcvoTX3x1/mX8/elXj/z3x188+fpZwB8/
v/hp9u/VZdnxq6Iv28uqF9D3/b579vBhXVXVr/tN01Zz/DqXTT4UzD5sq13/
8MkXT748/0LfM2qS4QodrqgEEfuGP5YJ08rrpl6Vu2VV4I/ictMsyk1E2AkH
I9IXk8fnRAyhg+L866+f6ndd1dZCOgIzeeLVy5cvi39pDu1OxpDx3labatlX
q+K50EMnxzdE4q64X/x7szngAM8fT4tXXXeQX7+aDuZ5oBNF5Ma/mf4wjPh+
Lqj3TVu+3zanv/5+XvxUyY46IRx89cfbtwLY3gYbw3jV1ITr+dn8/Oz8i4d/
AgYKMOaPzs7P52f43xN706FcbgXriv6qKmTH+6ty18u8P1Y3OM6i2zQ3RYcJ
dScOz3857KoCY9rHOSgFTvuDrFgG6YVNvO+K62YzL54+nRa7Zl58PS32+7m8
/PWj2aPz8zMdYAAiYRAOhUiefyz9u+FXr+bFm6uqImxeHBb1cgaGMvtm01ye
RsObm5v5XplFo7wCMHoom/n64fk5/ldv95sKWCmAmS05ZsI6/Ar+Nqt3sz8f
6uXDAc6+yl7V5eQIa68Cmf5VXh1g6I/NdXEu6IN1nEDPI8YngmUhezxGMIfP
mIcOiBsrn5XL97N1W/35UO2Wt0rCv9Sz7+rZ20O3FxmB3f5fwPHR44dnTx+e
f/VQcGp2U9XtarYsu2rWrGc39bqebeQNmXDW7ev3VTe/6rebId0LKvK1Aq9B
mgI58Wphrxb66gB+P5S3Br9Hj///wq8V+NXLZXs521SrRdnP9ptDx/8oDMFN
Hz8+V2764sWbf559//L1aRjK2kthxcv3VctzIe1uqwro8/D80aOHwstlF+Wm
e9htapHzM/nQJheyXc0AYpH4m6pcAVmr9VqYVgdQb6pGOP2i3tT97UwOssLT
DX+cQOazs8ER/FCD4QrUf4hjFy91bJyH7Kf4wcYGr3ypY8uPE4rA6JTa5VVx
/iXP6RQbfsttFvu2Ei0F7LfsFYiFgaVYHVr8gGpTCDT+ygH+x7z4vqxPf37i
43+dF78cjj/+451P/4cwzMsQZrNZUS46nGUfwt+i/dWdiKydsNlj/hDKjWh/
dX+1LVS1qyj9usMeEpta1exnqFXlfr+J8qg7CGTLLoxUq3nxqsdk+UgrIdtK
frkp21VXgMWvqk15203DjcxaXDbNKs4mCgUxYZJNVmzqrRDFalIsqqvyum7a
YHyk39zKGwdBhJsr0WkOHY7quhQUXmyqooWCJgTcAKGmUC91wnXZ9YWI2SUH
lwVmMJG/KOd1EcJwAJlb0XRkg+BQk+JKVlZBddI5T/BbAAMfgZq6gIHKa9Gk
uKZluS+XMiLhtBcgCHOplkJygnk9GLmuQV4KV/XlVfGnuodsk6FXtQIeGzpA
6RDgCmcnC5WTVdk3D2BnCnzR30S+dvVfBME7CJkaExMM3GJXbMv3FMZNI7/u
5CvRKrZ7VXYWzaFXIGwFfzYTIURuxWaaF5hIFMS0bYe7YFpbqYQ3IARjquA/
eipQ9e3DbflrvT1sBRhyNEQMbAsAEAyvqG6tCznsgzBvxfxtvVptRATfg/LV
NqsDD/JvowNYQf8txLA90NYA8ud0MdXzXMvfm81tuOMpw1QQTPHhg6nNHz8S
LOSDeCNEoiqi/ixPn1asP37kiXRDKg0gk1pOIyM6g70gW1H9elUv6l6AEWkm
O6IOS7kVgEDBxSdEyIW8flOv5KxsbKgfndKCHBd5d1utBT1xdOtWbI4uYEpZ
gmxHX//k26t6zdd7UIUOMA+/VAoEiGRZl7Dpeifn8lhR8LIRmSXicFZ8ewev
Ke53VSXQI9lDeujHHz8+mMpbb+14P811fAz7wt59gxELaDjvhRuBm4BHYyNG
Kjnx2ggJ72QQweqMaGHNPwGGCsvf1yDyhQi9QyuHtYfkAlQVeyKcwgkeFNG4
I0sFFk3FdOvlkG9NU1TEE0Ps48ep6PlUu+VD0/jxIY7tm2/e5EjnZpp8HQRD
bqrNxjBF9PbD6hakTtDOGiyWIjUtxYRGEW0tGTqacR8/Ek++f/ntN89/1tXB
CARmvxKGtFIYTmXO4qbkyALs91W15+xxlkBepyxGTCkeSVSxBdYCXfK5qlM8
wsuJvgQKy/cC/gBGDYKQ/eBcyo1ySj6eG4Vpc9U1cRl7BGhknF1zIzrbpSL5
tHjz88+iVpTdofVPXr740UYXRUcF30qsL+VnMvMeulovaNx1QgQQGfJkc7z+
UwsKhgrdyR0KUnVV37stlg1wI8pYcxPuv/jlx28fRI4EqQXKFvJTGFKP9IMH
URo7d6SXCXA2eKksuq1wQ0wIKRLlxNT4vxBws5PtLCo8TSkz2TWicAr9Y8KJ
Phg/E4kBWE6mwVe3PLQghOFKIiRUVglgRQh3UCCVfiAIuQsHgozOM4L4WtXd
8iBQp4w9wTaMaPNlgmhHoBK4ysIE7srZDH22DfWPJUjXRaQvOrHXpDMkrNsp
aswHcxs4fPqFDFmMh3UhK88FYs1STJNmKXADPPJzU9lbHMve4hcw6ZsqrBqZ
Wk5SnmivKxPZqqfgoGQZy6beLalYk9VnsJ2HVzq+atw6gb8rkqi7ag4bYXmq
PwiW2wZCPBs7cd+QvjsHz7ypBceW6g0QnNNjCJksG54WRbNwe+Wwpbwp5oyo
SMAjgCisKtHzVp0DaIBb+7bpm2WzUR4JxManIBmw26BogK03/ji0FH277Aco
IpPbhhRAQ1nQU6ZvKwiVuhMuI6xphJu5MJlDK3oLRuxIneu5Jgo/3BujsyKz
U9FYEukwncALn5VmQeF3lUlkZvPwWpCp3tUU2zbEDQAvymUjHwt/s/kBMq5A
vhMob1TlW8LxVdH1tYWYIMxF+Msg0Ez/ipDbiIWeiRWZTOUIlQKfP2cVA2RS
5JIn6/XRQ7IGiJPq12VVDR7HyW0bqrolHIJE6V4Mtq2Y3bUIql52pLgEBCxz
JqtM35jIRN+09yYAOf6U4VfVugTYBYbnZ8WWwEmbTNISWCOzANzlShQr4BV1
KkWfDE6ig1VRs50WC1Hya+UGi4rOeiFbKB3YbtdBl5IpqfiJXltRi4tukWR0
fGfA9eMXvNE5KK2xmbp3WoAe2BGTzh9+JSTiHCoj8EdPt53Bxj+u+67arAEa
k6ArtTmhXUf+rtwRI2agFjCCKZQw44NhIHDSBoYjxSBfqPsQOs2UANj3ZIc3
TabCEaEMTWSbiSkXZHzbvboNyPhE6RdOeaX2mz9G3lIUq0ZwHXzUKcEYKaWj
mxKmLpX5+yq8qMJN01KcfZtKj3c2DaHsp+VcVsZTExacTRbSNUJjYysS+jZV
Degwpm+LIiQqTltujlejGt+i7AbqXlBwVFMTX4SNg6rZwXYv69Z2IDQu7FsU
7SpS/pFwdCFSyOqE5lYbMwXkiE2kCy0JQxKDf6m+ZNFkdpMQ2SfY473ih3JX
UkF/kS2JB/a9KFrFN9zG80QzH+5dg95mXd8eLi83lbDL/M2FQK0SWlOtGvhb
2lmOAFLAt0Xk46MktwPd4dj6SPoba8oHysAOZf1a3gThCzD3QutyjDv5RsTP
MueEMhtBvcBY+01zq2LDWKiop6HaXddtszOVFGRwVQveCcVGHNVjaewMKwIr
YI43Mqep6k+/eiQWrDACbm5eEIf0lFauFuQY0B2q/OBMa09UnNS38GqtHKXX
EMlyfGwlNeQxuFWhkAeWV7JLOHYEO2ohzbKdFHBqqOzuGyEGE0impNGCU1Fk
DGM2w2A3UE6CGGa9HKSCNVLvQFXnyJlyTwuGqIp5p+EGNhzWauwhoQj5APYZ
+fdgu4gPLiEnV6r86vZk56LSG39VjFGtkwwcUwriywqagYQGMtih31R3QBdr
yFAwF8KydLo51LBNqA0diBIejh/h0+qdoRWapOVjDQwvD4gdKk2Dh2cOqLqL
bGcMBMLcXIs8npAzTT8nRdrs2QScpGwsKllXFR7rEv4ooulatViZ2TCyh7Ra
ljwveskmazlR0emarsbqJyHirDHOtlrXO0d6vg4PEs1rKoOKg0F2JnA3NXpV
UJZEkOQclWpihGQXrR5XZwlyyH6xqTqF71W5iYr/4LkQ5xO6Wyqv3cAfOjHE
mBRwGgrPbsiZsxUGgYkcszz7NFtNcXI1db6YJw+fUs6HUwtSkF2LfQ2ffQ60
XHDTBmhxxMUF3nr4w/P/+Q6/XBCxJnx/IlBb1p2iwY1aKzTm5CQPIgahAJQD
TOrEBpPxG1XjJ93+IH8cuuOBMP0kQ9/BWSVgTHTl67oFbolupQuA4yVbRAEs
ypYhKsUbbk13JgD6TQEiP31JcHzm/34b/ZR/4Tmk82Ulbz0++53898unvws6
2pOv8ff52dnvws/NXvQr/Pn4S/z36/PfmQ9fPuHnXz3+Xfim6ftmaw8+4mhP
H8mDwhFlnEf8VEb/h6dnPrA+F8KPjTvoeJJ2sGQmpbNoKsAKDYJYUagkuEQV
FWiF5aast/ohuFb2eCESHqEhMt81ouZu52ygwG1WVA9gDLqhpGaSusDXtyKk
rqjTwMe2tcEiDxO+DGoVajcpEDB9Q73HzRpB2dzWqX6FBp1hVeLm2LKykWSh
8nHi01AYGaNLZn9SBiK/Uv3ljxCq1F+uSmHCy+Jb5YLh7bEqF328GsMEKEwh
dH+xfQE0FTN1QQfhjboIyPMS27RYRG5+hg8fRlFwuOt+yf0n8v6WRl3TJicM
3zwZ/bX3QbakGlH1UzAiaXJwylrUMO5h6iRHzg/5tKxW0Ul2G+B3kc2p9us+
Z9Voy9WfSvhjxtDDMoTU1/UlTI5gmBFNUw9n1ybysbhdFeU9luZvG65D9MNq
Nqy9qlerCrJ0RSeBHDtVALO2BWrUzpeiqCk7UveALlKORE7jkTxM555lYuW6
RImDXWIP7kkIqp8opAQOZatBtIMqLVhaszPXO5WKZr3OXIqQY6LoVOM5TUuC
Xmkx9LbaV6WqE2nt9CCd+EY9UWomjFeABcRTDzBUiZey+/bSELNT5WEIlYwI
QXUwZkLf0hsMj2NfqnIn/Fs+bNoVTSNoVAcRd4wtmkrXKS8R/ZmOsFWGea70
9SNyyfd8rHdFuziqXKT7FdzHdDfD6pssjbZz0pu4TDyYLGsPTOI65e6LhlK0
l7MvkkwXKIiGdFW0jKL2bb2f0mRYEEB0uwqBRJcrR+w17klHm2gFA6wRXCpV
DK2OHbYi6P7zP/8z2Mfv5ON3bd8Xfyjuf1l8Vow//ryQcfyvB8XD4qvf83WY
A7nC4XsCs00iur+pTb0xqzG6Ish7HbwO2Gjgm59Tj9Q/VEZe9TqGWgzRW8ID
oLvEDouuGTnwlUeSBg4JVdxvXL6sqBXBx3VTRaAq4N0LBbVgogaDh8cz8w1+
RvAbM0rWKnCSe189L5Nc55pkbhk1jEhNLaT1obJT0uSRd8sbWe0fhH5ELfzM
j4gffh4zo/TffRzV4lY0oHcgi1UxK/LncYKP9ATJHGtGpiDWp8UFh79wZ2NJ
Se3Qyx2zA396cMd47v1PXhPAqvpVvXfnEK+TZXuoYW5OoNyezx89hQ4+2R+6
K354KjinTOFitLW4UsKt60O5RUwe0OdT4ITrTX15ZbyB6HNYCo6sDySRFBsy
RTxERdwZBdVw84K5AiAryY7lohDtdQHzGvEYi4XFKCoCuf2V0PXlFSWfi28N
U5FltoxZEDsPLfyERVt37zPtLc4biBnmYif/TVAiT2BMVN/kGAha7jZu3sMc
4Q51GJlxCNELBbO5wh1zSzUS22tmmOZO8yGK0j4aoyg+VGz7t8594bnb3c7l
Fm5bDdzF82wtBgb3tG52IZpaUqBE9AEhPQR0HOWFlKqum40sOnNSugLIYzKX
NLH4mJcpOPi0uoaVH9zFs4DGY24oFky0ceEJIzJU3VWzIZM4nz/lsro8umLu
MfOWu4f0wz240uBJ+xjCtycdW221udWsWBLo8igAFWcA2t3qznZlf2gpMaAs
ElBlsqU8ECUvBkIEGotZ2jShInLGJ1McDKHxeikqFPWlVtVJkNwOLscNdEro
3JZ4YJGggbMtZD7z6MaUiURj6jFyaWlKG43ybbH41XUpesVlNblb6g/jeEjm
cdmsSmqdwO7Gk1BFEP2ohD5BaCGq3pq3009mUqjfc+r5aurDo18VQQka5cIi
ol/A0YUEiVMBtdNYqbm3RsOwP4lOSv/MJClhKZqursRvvnkzteh1cC8C8Xfg
irVVts2+Ie7AMulV7wcft6g/LbW/kvZoCQEC3hBXUmpqALmHTaZKZQfGn7SC
p0Gs9gZRPdoIct77buALATeAvR2HwdbCyFfCIoM8aGHqgccsUowf6B8P1CKf
hIpGYdciwpubbrhuekpjIEYOeSmGKb3YcVgBwNrjEtENEQfQGTTWE3QKDXJy
oEEKgjM8VZmSLA250mQOyVK3oWqLBfFcn8pi04WpZzkU1J7pHAR07qsosMhs
BHucV2MrQusHIGu7LTehkY2VGv9WU3eQUuWGru9eIG8cJoUCFNGDCW/aKrmW
gMO6K60LnCKQnijXIGYHmUokzKUwlZXnLBU/6vQhHgzQJCr9gpcIG2YJR0hB
TW6zcESrJGKj7zISPA4iIfiTwhBcXl/9LfjteSf+RYy0H/ZwyFmeX3aWA8Qt
HcbDTTJ0NEDiI9STM3w+RtlMvaMXZiQOGkKbSRuUv8C0yLtzrqpWu4fiXGcm
1i2avt9U8uT7qRo+McaeD7AVMZDlW3HxVRuOE3g8WxnVIn8+CLv+S9T488By
dLqqhjOcTWP25p1yVWUUSBcZ7O4rgduaepyyz2v1UOM9mfQOS3Ndbcwxsgqr
tl4zLQvKz7pttoOXJilTFM5M1HKoZz5F9dTMjXGK5MaiD3WspGjA2eUxVfbD
rlK0lRXua+gISG9byzmpFnZr/ELFC3zuYehyB87qJ7vqksqhLjITh8lnsS1X
jNSSsNY40ZXm6U3oMgS1T4achBHaPZMYTX/tenrkXKlRl3wXMy4TbGrIZo04
MDKlKMYYDwJrjD2roxNuaeFh1MGEVluCVCYVedmrHvZiAMkYU4mZquF5Nwgg
kKUN3/HMxal6VreeSwR+vRRR6vFo5WluVDoOKr8RDQSOOOq4CI3PgyISopy1
SN9t+ayoam7xQBtCk71c47g8lC2y9VTjBgl1crxM+uKDVB30RXXp8MWuMZUo
8zIFDevQK9jTG6uscF+V7ylaPBfom0NvoYtsXKpAhMMOeQz0q8H+LDTNzojY
jAGy96rtSB/BTSpfYTL23EOEDGrLAJ0Xz9U5xFwpoFj9F1fwdFGrJsZTIT7L
3Cyxg9oVTu6aPp5CoYGpqp3bL9FlMBzGYo0lQp9dnh8QPP98WvzpIJoM02aQ
4olQL/F5RzV3XbcdHYRt8ytQ3rWrkCcN2hE3mh8Sg8XbkrjHrHIxxeeFmkks
Cz1lIdMYWFSeb+tWoOVMee74iZQpoTNhudHRpVnnBxwasK0Jg7Ss5OYwnTe3
dXxHIyNK1TklMOPruzGJxcAnufaqKqlFDK27LPsus7uJgpEz3hb4XMycSk1O
ADuSNgTPFVUlbA1HTn8bklE+UwEt/NtmOJHBp0u1R72iQN2VSH1b0ncVfd7u
P6E/1V4yB296SZMCfS7HLoTyNIAjcpJGobvLHQTRAdBfZawLGl/wnduXMkNM
LUlbpupvHhtX9UA0qCMuLhsjYwyTMtk74hk8//IzV3bdNRpyK0h9HEjdAlFB
o7QcSBaR+NRphWG0QjhYWp8KdTrIQwZTOVBJVNFGx9wuTBYH6DQoDSt7mI0/
p/KTso/MgOwiWlOu4MZMDaCKsKi97F45V6ebL3t6uBfQmnEWnabg+/Y1NdJl
mu1ePUqdhcEuyZ/MAMlVH6fKHzxrEU5Zi7auKk2WpgBMeaGmRvkrA0UqEOg1
ZNKqYvSJQDRUvYkpS+TClrING8ydCwPLiPVHGjtKCZ2Nz5cvW8h5qdkYWzDD
mEt+hKxIMIGXdwdvWpb/mEMEeSPlvvcphgmqCeeiu8gjGZRXwWCn6d03ysrK
/kT5j5lf0a80FT4SPObF4Mb7qrCNpvSJGLoDo+iukPoHwDkcOG38srPknQ4e
LBNdyHXkZh35/dVV+kgUKrpxbifBcTQzFkgXt+punbjqFI01piJrnE+f+0KZ
wmWNNCegI3w9zY7hSa/+GuDQvPg2LiWNTN5gcR91tpyyghe3A8GWyxplAUPK
GSYSC41/MUc03OirHrp+ehZUqoTawzGPARQkmvplsp283KU5nrmw0Mc7zUC9
cFs5kyaJgyLr2LUR2pHZufixGEzmMo+hxdEUdR5C4beKORc4m+sqXFjE6Z3s
Hy9eTM0JoFGG8dfF50U2RrifUlqnw9Et5VWGePT0bNs9sDWeWl/usbgYxYxk
OfB2HX2s8llQ0UT/Z2IZI8cFc556+Bkdx0VxFJP6Q3FZisb72XBdss1RLMRD
IuczPv/gs9FAHB+C9YLfXwyCHAkYF+cPv7q4A6Ggv9wrflKFFUf95rCjTfeD
Ky8vPFIYiTULNYkWuqjK1rK74bIXfjaR81PD0J2rE3B6e52pr2JJaWS9K7IC
xWmWU/WjEUdkLksx66FAb24V4q6PgQMVw9z4PNRxNM4uffYGyWUzOAB+HFVg
EE9z9VBk0XvTrd0Dv64t33Jf780UjQKfis1cxn7t/PrXAZ/hKOt658wt9zxf
lVnIpDjWZw4ibFDLKwATQ6O3EeQlTS0ykzmHZb53rLpLQ6n+iBfnrEJDVL0Q
u6TeZM/IGvCUcHdVfb36gHyCwrrV9EfCgMtF44MszY/oZRkvyXubUlqhtHcD
iQBNu9cil0/zA1H5DuZPAsr3DHcIJt7Ss5Pn+5fMT1CdU3UJj3wP2DjjSupy
uYgqBSE55RjOEAfpliWFE5xEw2KdQUXChR3xhSXcJlPUskkQb4RruNSaFy+e
FY1gbzrp3wKLmLuowPAlGK2/MgbwEyKDPLFI6kn0DeOH01NmTUzZi0BMhU/B
OA+FFI1AfKGKkJ+Uq80pbABAXLLaASESK+0B5xiwCNVW7TA1ZOnJUHFvXHUh
jAnMQgT19ThCRjoHh4J9KSdHXNE6leTys5IdywgRBhDKyxJ5uycS8pkxuqUl
6n4JeBe0FwCjoPQeuzWh+YqZb5N+0k9oRa+G5+EoH2LcEdmAg+KFUc53DLP+
PB7NPCLRT67Uz8IjNSyUxhFzntIHW3YdWoRYjo4+E2h8xAwCGh3lqtnTR+hJ
sFBZS+aIUE35wTNoSjHiy8uYPPquhGvyXcbI+LF6QJriDJL3n2lTuDY+TFgp
SnNtypHQNWkD4L08CRyO50mGW1ks1JFbuNjpVemKI8M5m8akkEK/QrGNhTdm
jLWpchvzY5lmIGjwcruvW63zzsMkedl+7YkkDFOuTUseUt0kZMmyffHIkmX1
2XEGzSe9FiSND/fM+TDDnx9PlymmGhpL/xkVq7L2SDTvVaS+kffmOIHBFWfk
Q2k4TGwic9Rk5t4qlQSO50yOnvfwOnsytkbK8ggEGU7d3SZnySA/MeWI4VAP
cOSZkZt55hljym1xtaiOHqSRrFlP8p2iMd240frzU4gc4lkkmyBPTrOYquZV
1p33CMjip1o2QjWg3jF1YKVve62qCJoWJ7uEHgA3J/mQ5VhU44gbrfcYIB8j
yTS01X5jTs8sgUbZ/CDrG42uqBq1Ni+y4PMojouAMvM1iJHaiIbIWqOUh1AP
jC41d7eL+vLQHDSv8ZZHZ7mPJH+iSRYh0wphTmHeQrqP6aXNxwZ6wIz9LppZ
IpIRdVHZ0wq0m21cWmZMB5dROy2Y6bW6EY0IZJ584kHIYB5z0o6TQWiYW66X
6S4UYZZUyAiXhgOP6gOVfaNSuL7Gywyay8GhuiCcrP28Z5EkmyT3T5AReG18
VIBQyEqGSnifJhbzcCGlh0ih8GSFrOVPMhzinr9FdRRVVku6z/xVMbeSgRDq
1fTdes043lJz2gg2xl6CwSM1EmFwr7LwyHywUevaclQSbmrWwjOJnDt5LlHM
TEpc7mGRDD4k3Axf+Ueu4J0mJ8H0G48Yvz71bp60pK/nn8jbgxc0C5KEl28I
vtvMScLWDCpz1N+MsxGlKbC6NueYWeELs91V71MzExXZN8nSGbgMNXkMjURK
oQrEVcOR9ZX0puK7FGmfGpNBhrKGFfS4XDMpR6WVlv7vRZ8eTl7Xvwo+PDr7
YbHvqPZYXmM1ZMhDxkhlIBxpyii8vI21fOdP519rKeqj+XmB8ada9AbNZYW+
XrLKR2fzp/xuLkTnItD5LHIMZHc789oE99roBnZqBQrD5ZIVilg4LAD21HBA
eraszGZz8eBjM5TogzePvMZVmRXtTtnLRqm7LHbIiSXUFDBeo15mEJ4sypVq
zpOgMLVoDJPTBc0Ek63i9v8BwI/Ozt7j1AjjL+df3gXj83M5CJ7vbKaqgwOS
eQ/6XIIYp017SSvEUAmAQ3EYjN0nz9eqObDxQsJjC86pdKBAFM5+uKzCKbDf
Y5HGQFxrRS37dqwcLvQFpkIzMQhEPlkpCuJylDRqPYYBj4vcK3NxW2ad1U3E
XLWQxZMK7ddGRw/H+vOh3PWH7cT2VWpOB12JjKnCFkn9gCJp57CDq/hTuGDe
8D8fmr7OlLyoTwbVJ13GeWiPoLSTGqme8dG6C97fRQBbtb0bV0cZWHFEXRBF
FjVItKOIedN9e2umumWwD7lIF8Mx9jXOBOXLbA0Gljav5tPBbrvB5La30Y6n
MZrByiZuUgOxPp3m4odu28B0zZC6c4+wLSHlsgyHmwY9eJluk9UPDJRK3/Iw
qNGFlL9qdq9Fm8dJ/zcVLGgohw31A4eRAYDRGl/PoIrBPdSHlrks68NuaTVx
zPDdL0T6JcH7cLd4dyybI2ze6fP3z2fvUdzy4LPBF58X/PQz+V1FqNcyIInf
684dymNctgXFAfH9uxzP/lCcP8yn0ymea66/JlMdUY8NitL6O8f9G7SDO192
TUEzjgBxS3HR3iZJyc7dzxeE0gWNnirZuEGUeujQ+jW3++QiUx3gzYjV3G21
KQ3f1PGYTRDU8D72Pus2Ykc65A4oL31uLchqy5rw50J4G18ps3ansk/loBkv
yIVBbQUaV9o+S2TpqmrWa5R+1HB2VJ56DdXcDq3uNDkjVhMpUQd4b7IMAZCu
2nhaWLTQbLvVHf45UnoYrEyrwyzFSuWQl05rZCzlh6eYYcjfcgW5LNbVTfH8
xf9QVsDsA9UIFIJWPB/yw5dD9bF9mCdHrax8OEsJcZLJagSAXIe92Wsa1vvi
q99FNVxlqgVjMhbM/bKeJu0UL1y2zQ16xrnhRGmNPlsactN8a5LvFVpcCbZs
VRX0CGY1bjhA74o5xCYho41PuO7YOC8VNSQ0NA1sQzNZlR1L9TlyCbmYqLUe
4sgZRO8A8qLcfHr88Mmn/UFugEMlbje3qubTe4sHBQIHxUs4XEVz17qYWGUa
UpXp4nZYPOJKPQOgcG4jgKzAiTUzHugdIVYUnBGYdRcTVp2IUOa7gF6hLa6E
ZdQ0XCDM3RuGJhT1iuc/94gMHbgBnSWmKaOK8X5OrmLaFSRrcRnruS27HzaF
VVZ7QYz1q1AWgwCPAAB+ZuAuwc+6Aia8afNOL5a1ToZdSNUJPKvT/kkrd15l
seqQeQsQqjbrGTCG1rEtb2UjKMWuE6Uw4jzNtLumq4YvkjeQbYAVasoj8zmE
S++i4yVYo4isbkoMv8tLrzAYR4Lci+m+JeB+bPuorvkuNQ/SXLzTfg3g/7ar
hEDhI0EBZpZUmlWSqvxw6snStePma/Jbz6Fc1cyGPdRdaifqLqjVgYlhprOb
9APk9Au3h0N4rSmTyANGr0FGRKosfOCZg7ofYcF9I7C9gi2dm7mhPk5es+iu
FcqY79j6nhHtyPrAuQGBLsQ+IN5YAamPmobF5hU4UOHx+MZ1SopuJQFzBqsT
lSeo+GdwQwhGczejs5B9ujLH4bDzH7mNsxrHfNHaWg5imcjMaD5qrjNqddJp
jkl5o0UfoxY7H5X+T/bR6XzilZiK3j53UnXLcq91PROt61nryvPSfLWD0xKo
JVjDmjdCJBY8GiSAwM700qJEgGymwe8seLZltbVyOjOW6WlR664dJNLHEcOo
EiqeQtkV4+5m1t9dC9Ky0A1XmMLO3q4vE6sqhrsrVFChpCV4IPxPMe3J3teC
0W4UKRQmiQ9NVlkmrZiqDdyEVA4119RZ3+ik/75TUW88VvBbfd+aWOTNVRlN
SsZe1vFBc2eg5QxfPdkhh91xuhTrl7O4bJUbiQ6LcDf3clRw11ZyBjszALta
FBRlyERdkqSoGsHyAHNTv/ix0ZRisCZPQO/UKkZ/qx2E2SiqlzcZjk0GS2tf
yBBmWNMWv2IANIYQYlfpo45Yz7vUckFdw7Fo76N1ZeQ8opTRfVF7o+xkpYYL
KDCjWrYLVZymRnJ56UlW2JX1ZUh5VprckG+qwKaSQ3xJZpyZRjHUrF6FjJLq
PPdULWrtjJcgCZzRHK5YwuL9Ni05Fo7FzSZv6qdomQWwFtVlrQqZW655ZdN3
VqxV9trOp7LKG+3lGW3rykJWPq26LJFBCOGcle2QsrLiR5XfafnQ/5oNJCOx
wVx/DpOYcF39KiJPXWcRnV02HsO/iw5StrBwRhRTYLNCsqzkr9PFYjoBh6B0
2d4K+r+pzN7X4IdRQXlqiORdEpPDmGLWR3NRMXjXHepoxVh1Btaf3YRC4fJf
arr84V4WJtFMlnWzPHTahi62g8Db4z7Z3LS3Bgvj+dT5ejyjyl/4EVWbYaZO
bMHhr7J95CCQm+dmRJuQXq2QVSPFAKoK4SzEgUR++OuNQdKs2VTM/RU9RecZ
9inDItn2RrZEJnfYgWeFoxZrJiOBHt8/eWvVOoVi47aJSq+gHAgHrc/WFeo2
tdVw7E64MQPSSu3t7gp20jbfKrUWOnTNhkxadd7leOeCLpAC4TsVC109qpbc
nWCLTqXxPorIu11CKBXqSX0KjQbnFR6f6vxv6u2zEM7nxcushabrVin0w8O+
f+xUtQwSAcijfAh3Yg+P7z4cnp5tX2ixLrvTvz7000JhvGtm+74hiB+fGlHO
6uGLl2iB+b5jxIAeDfckusJep8pFylYmRjANIyX9amIEzLRYouxf8h31p6jP
5yJ3YV0EduAVfQ115xNhBaX3CoG36+HDohlctOArOekMu38+wwAPBp0AYr7l
skl+Ljx2YUWc7hLLsoeolcUMlTR9sOat1PQH5zE1UyRlbz65GMT/IWXVS3Kw
9r1GV995d1xTEIbjMe+u7VkA7sN47emyijGVgbcr5yXaJ0ETk5DVQTGqlUVL
dDmlCe5da5Gyphz4Qj18FtrWXQf1nRnSxJIlMZSCFlsN1prKNwf5reitv7Mu
khkTEJZ7yQslRisYpLeNkpmnvrnYd3CguzO4bD1O1m25zKLaox7jo37QAPEJ
b8vPxytQvSrqoho8VbQIzMS2nKYxSdCUy5zY8+zOipDH/wWioNnKLjk5Guj3
0fyyt1gLQ4aP9LSYEVrpnRv+moD1x2MmB4vXSgy8btYkc9PGTxgEFCHHWCCV
pYw9FNZ05ITZb/2GoznO6EZsTqqvM2coRmIYptLOYXRJscdWpS3bBRpqdGZ5
F4rPMFfm4Vuvzs4tKSqfq0awD+XzyOR5VuwPbXdQiF6ys0NaYbPW1gkKxWxH
9K6h6QWbDEzVQstrydXBmE403h5nDRZxFOyg+vLH178HcF2p5rB7svJsZyHz
zkGt9gIjpcDo7FYR8E306SX1S3Yi6gHdgMzuU9s+1hTJexwaFX/l8mrU62Lk
aikdJ9a13tgzvHLnjhr6QKGfOsANbtJQU+rbZtgSeSjRRJEzYRYCr0XRvgAb
T590qThIVLks96nvgHZrU/NpEEDypDM47/8eG5SJYRs7q+d0yfVf7/7E0mdd
n5YiHfpJzGoxt89PP7/ml6xqYSm+dxw1f3HXa7GGSRHkj2sGDfIGct4Q8kzb
uLojCrsjIw5ErSX5+C5ky/K2/l5CLoC62295o10WUz1rdFyxwaWxDQLae+Uj
V17www4msoiu2UBXi90gsRrtvxaGXXXNr6mvq0HZZo3T5KT9fMcxCkWof0vG
ZV46YDz5Jyvm+3APeDtTS/Qjr7eITf/G12d4w2F1tUYPEd1BR/VBeceG1GfK
feaUJvuK/YaAWWkHyQIY1fJkJVM121FkjduKrNgKKBPswoYUD1JHoVpou+pm
3IhKsVC7E5+OVw4y0JSEB9mmH91DiKvJ+uqSQi4PPpfRczphoZpRw0IO/L2J
VwOTIsIghdFrf49F+aCbTacd1vg+SExtTrD8BCsL03kuX2olRAobRNMGS8Bt
WulOjPENAAzchPHiYoTlf32mVGkVL4wiaOK5V15operAXjhS6szAZqDLdSBm
bLRXIAUdihZnakC9Ye2qVRiYr5kky4dNWTEqm81E2jEfOfkN8t4Y6r5RZ4z6
5v1yrleebHZDPoYOFW0ZK85TzInGQPRenMxSdW0uDF0wMdU9u1kkdjSJZd28
BKq2zh/RBffhnv8qFP4DjBx3PsA1pm3T48NQQg47I7o6NVJAsSXJTHeu/hQr
siHMykv3NipSeCwPF9xMzdeeqTh8xyMXOSTqNuv/2iBvIrkhFOUG19Z5U7dh
aQIFvbrszAgJkTABmxdPLFsyrdq9lVpRwZv7dhoZjK3/OXrplyF8plqv3+sZ
lesxt5vak4NMA2v7R1zybg763snH1WDLn/bo1Ulw+BhKdan9mRrd5ZBrZMGn
vFfjZ8556ubY/D51X8bRK6BCGtcnH7e0GguoeKyJNKZeo5YuvGiPi05WbnlB
rxmsCGUiRZOxBtlhc2jNl0lna213zY0bocXCrcYNfUWMWtOmsxCWr7rwfv/Z
HQHj2LzFRTKBFSa512akA0/0njMmL6nSOzBzsT4wXNN+iIyWV8E7I1mTjVh/
pFs3MGQo0MqyCmgI5YU28cIC7Xqj3aRgSMDb1AyuORGGqk0trLY/sj/Nac8v
4FFsd59s1S1FvODRCVqNuRyglYurq615GXmIfDQt3lKB/W7T3K7IN7XGDmTu
uTQ5gO8GFmC9aS7L6Az3kKFnLaaStQULAletyEZ2MbTciRN9g2LBn98UeYLU
w6gbw6fXWIzXaNGkWAl4WVmfMO0NZ6HDw6g0I7aS1x5l3GCKRnp5ePJNJZ3r
lyqDhNLbVarbiqnNKNaAQUkGTQRZO04ohmQ48XfFL4NUE186WpwIyjPN0bIb
UqVTDOVMrbjzV1U3nqgcZBwMrtQX1rM0gte9PW/cj4DXTZNVB+2NcLkqLG+X
m8p6QFsGjrVq8LtdvPQgj6cYlpkgMjp2PblvguYdGMRFL9llZbA+hs7speS4
SZFCZSfmUZuV/xzJHJjC2mct0WbJ2vtMFt7PSvEfP2FGrxmPz1PDE/As4Wc1
qB4cUJ3a4TXd554B3ZuymWF0unVFdxIBER1d7Km3qdZmCLj8yds6WZvxU8iv
nYMCvTDGqWMXlkGk6a3m6LP3lweWq/RSd6XXGfH8VQ4w1XUXiXgUjECYPe+h
fwNhENQKopyJWWmuUe5U+7xp7lAD4dLwnHTNI1dVFjG4TaU+inNNjx9Walhw
GLdIMB0bdw1cDYZW+ow24IkkFJSnoXDZytMGypJ8RxRFGd9TyM/H8y89TT+2
Zmo84Jd6qOqtKsWCiXFcwUgfS1l63qcW8TM9UnhJA5PWNBvxBqsk2yBD0E6N
rsTS2sMRVkP7IHjlsN41prH4aQRcDEqn9G+evzUnm81YQkzK5ufUCBT1M6JH
5y6eeI4OUaCX5r8crTZctuUi3lJ04ooCL2DL2qGmIYLXNGtuR0pEus0vPGKs
2JrNCcncdkK4IYtmmQcHBzZYXtkF1IueTK/0QmJ3wdrNycwwVFV80Libag//
zvQ8XRLBEobzCgL0h7bM2oYoIG7jZaXs1sBmhQFpbGpueQ5B9ItkR3y0PXTs
2zUpXK6pcGZlT+LzAHjM75h4or/Wpq8yRx48lqg1Ca7ueBerevfedloRn9gk
ZmT3rdi6b6XkEOiz4U17R5eHkmqXBHb0GWezadLryaOESyhZXT7Fzv7WQxh2
YIxJUNH0QLPmQa9rLp2O2uhJM/sVWcDbRbkpx2GUMRWYxEuOAbX/NGEh3T00
PEcXk9ZCL6R7H7bFrXeLheotB8dgeWV33uu5qXo54NouO4+wUXVFS+XSu+EY
DVYjT+9Ls6aNlClllFyxHb1VfR1au/I6U/lyK2tEMCGjFHAPS9+PhQ95AY4b
JSqu/e5us0+hM/Vitve3KEe9rmILZcOTTCtFf7nt3twzIss18ryLxjv8Bg5T
dz+BZ9efnnEastIIVsj49xO9hG+8ubU3BrAi4dexdm7Yl9gaK8tHtJcRHcwn
R2E8PMaDLPBMy3l6Jv++edjJqwibMT6BDiLNxotiFW/PiH5n868fmU8evrBQ
xO+fnk1lID50jt+mOuz8r46LEfnW6XHjYPrQ2WjsE9s9P9pu1plNRh6MYfWc
2QjCY6pfrQP/rvB2pN7tLzbtsssYBBmInNH+NwP3hGfAW4tQe80C0HzsXfTr
ofTh/Ivi8+L+eTHLV/bgs8fyRfZK8jn8AelA93Gx5XQ03Lg10AMt2vAlDtd+
aoV4YjDT2fzskazubP70rPgM8fB8ib/PqkdVfbDSS2vrbN2itf8bREENh7+z
AXRxjt238yQRTS236ktTT4tczy037AgDG00vITf0CYNKT+OhWRlrvF1R6xI0
CZJsza8np003upSYTHjMVmPMQpRhFRvKgr08luUHfVI2vCye3EG1Ns8PX8Rb
3oy5CsOCnnqwNtLMpVaYWPcmpg8zu8nYeIOq8ugczQlAC1rNLiQasOEnR1Os
VYV/eXta47c+JmrraiHLtdiQ0UjjG4mNKfNL+ezuf+qHZGedxI+N+Lw1CtrW
lt0SLXflm1cxjvrh3lI//mid5azzbdLJUv/JgXfp1HVYnkDVlvt6tbnV62dj
ZTGD5OVJq4VXt1gDoTRwPAOEybTiRkCHM8J2X4ruBA8s79BCH8VNrT14O/xu
gvn+9y9fP2C2BiR32XbqJL7UgEPnVVd8q9RMQUSxfRFCYj3VF3BhOFt9HkWC
YLc80haKPcez1Wi1zPkZrzV56h3AjzJawmDVnoP16sWLN/88kw3wtpOoYiRw
/+mwZUk+vWbnWmwNT8BTNd/CNxZyhGSut0OnO4ejXd9EvYQ4RxdBlo0wLfxy
Fuoj7lMi+lpIMF6JRb9m1p3JfvNKCbooTOX/wu/U1qYnO7IoNUc4albom5ki
fNWrltTF1KSsgGCAwHl8YRauvnL+1VOqhLMZ1A1DYr8rQCDsmKCmctBIimKn
yrD8eIlyTP71cxa9uWExN2txqApFRSOgTaeS2OSZx59jaMTBKTip0EkX0uBG
GusSVf3ahwHgcap42jDFo/wzewhhe80OQldaG443tej3XSj1AhM7uDytgvg4
6EkSM7ZlsNR1h3m2h73ekTe63YKda5C7mrUNYRY57yPMg2iDMK32W0cb5Zbw
fZ41GPneAoIvMsPnwz1PU2Bk2Nu3WOKB3qRNYalD3nmFgGU4al0gC/7VzcUL
TPIuJx6V1Itbtdu7B+h5RVlmEiCR4cSro4np0mtEtWtLbeuU8uzXFdPc/L5t
1V/tQ7/lyIwGL6KKdwq7khrjdtpSdtjKJEsNSg3D/Abi0hKXOQj7UKuaHwZC
IL9MJjXwzy9ItCsHuCAkwI8irymCarStHvvsiYlbFSgrH3X7dD0gLuOGZaKj
0KQGPfswOXEeem/aWsPPWteS9fuiOEovBT9EzaIgqYq8ZO+gqm0bZk3vYG/E
W2H0885SZoPlKMQKt5SGN2qmEk3mE0Cdm3VqQ7PwXWxoba2Q95/NO+slX1bu
SZ/wzgH1TSQjOaOTLLuZhAQeOGeidoxX4nIDlQNmwR3H/ZESoUFSxdE7Lyu2
WJlno9qdVd45Ru3NEQmcSFcYJYgyzVTzJkYRog8f8vSSj0YFeTYspodXWAGp
tf0x24WN7rTxPCx1bfkWPUlVlzmSQsxZV6cxlGSNlWlHJr2F+TR/mpO7lSs0
trxp1HfvzEF1Ql64nt4Og0sT4Ox8JiBnq09yGRcl8Vq1+FUmQHTJMX/M3yEf
+YTYUU0zmyyl8pyah3clkJzsQGIqwMfxnUwug9gJEOlQ8dI4zW/2/Iusg5c3
cindu7xt1EqIQXBvLU79FpAKWe1SnJlcjFbSyLHo2Y3qYPScb/Z91sBUYS1T
BrKACMR+V17Hm9ghJjEhrEQcM2ZQxGGRm/Q8V6u5f2L2wNL0pREmXitjxMxr
cFrtieI5i7vqBuE4S1mexDtyqD6U0IXg8uqmJzS/RZVSiKJmY0858EW1FQlX
r/LEEJ86xs7j9Pnsnm5kC4j365wWyTHpWPNcrBQdpiOv22uoP1mOmJNict/K
SrI6u5yBjRQUS7keL9jymaEcHnoE5bVX0hgYWpY/GDIu9X0lIs7s2+EdYGw+
T85WWKNP1n0hEKRFGcLOL5G27XCdoCEtcgTNxW2gC/n5xPJwtNpoY+VeffIC
aHSjLW+0ZqTCbSXB2ie4C16r4m02CypES3LAEi0RPKU1RV9sdYluD5yNF/b0
0W98Uh7NQyylje0gQAXWsmKYD5Ll6OLh3YrhZV3vNOTZwbEGj7L/KFoQ+2jI
OzfIDqIQDlYzTOPtu5cvHqTE5mm8MQXXjl7XK/RXGC4OJdf1lvvrtQ8MB4NW
lt17b70Emq3ifmWVajIbM7aY6Flu2AHRWscV3lqV6RgLu3MEhducV8WK3i/s
tbBZV4JXWX8YpL8yk4o1hL1VsjN1re61UlbN5jzze8g/Typffvai2dndqCv1
3bDa3RTrfMzRpVy8hG1wf2oQdqxdGYZdDjKTy4mRWyTrzC0cSBfouC5cTNdL
yfr5EjzXXK92iNnw6htIfDJZTTkFmhLp4oY5x1a4gNAZz/MYZipg/93FsktX
5s5+uHckkXnGJ7pLDON7nl4/5leWGZ5L5QfHFxkOeaSX2aBjqlioAMpx+q1V
Havw3crOlp224BRLgyULTcyLhO9nUaXb13iVi3V9jIWIQ3WJ6bX1bgjp6dHp
acW+hjG1oCGkgoaTGcOjTskwS8Rwa/3SlaC9xsjHVH3M1Ge3FIbdMgrtloHD
dnlntbmDrCDTFZu9hrzy1mGDHExkfFt1XtcfhCCVXRELj5NzU68R97TZLdKH
btxvA3ynFUh0KRqXp4LAMEvuOjqKhQ/ju8Wh3pDvevU73Yzbcld60/N86Ns8
M/Kk7slW90k7cDTzy7ByZwPsVdoWu0HibiiOBHxyswy1HG8hbs6Zo5nVZYO5
R/fCj/OEExK9+nHqC/jEdMyTIILc1FrcwWB/4epvXBHU51Het/tzP05VJ9OL
idJtlGn9NKvZaMTXfgJiyh29u3N0wbMhqbZliDdY3dVaNbsyN7FC8x/Fps0/
06l0YgG43maHYStPEGWhi11mxXDSgkO8REYsXByOh3gupjaHkYKXevZpXkrK
ezYpmqMSQD0+00GGvsYZv81qOwYdUaYBbCyNqH6Mwe04v8QMiVSubllrHkGi
opKqbnNoTQYNKs4fPhrf2uBVCXlLGGOwflFjUvxYq+xhmbJLV4lBrsULCVa2
DeEDqB2AIs1tIIlVY37EFQ9/Ibz48IkbN/kdwaNlqYj7KTc4KVwUry3saGYz
1WycHtX72OcGhc/XpRWimMlvXrmQnFnzOAmv8BLbGkm8ostuYkS5e5baAJxm
mVO9L/T295pos+kaNZUKqsuugo1723cha6SThetYa6hZvQOt9HTVNyFh3UdG
drIaaFhCs15jiXWMwXWj2lhzf/a3e6txMz4UVZPcCTXqXh9BGGLt2zBT0S50
jWbWSB5M2ccQBkR0VOkMyXs1TMU5dPEL3ThifrbNefgp8ebSsNNaQ7/38EN+
QcS1iSC1YnWKF08sK4rdV1zd17E0Ncx5urK95Dq9ubrN+tCBh6jRaSIqZiT7
rZzqRbbi59TmLBKu6mBR3lrzKLi9alVZdBX83UK+FRvJMrGKIcp0ifGlnB/K
BHgRHg17RWdtLplnMlsVCrUyBY9ngHqkyLYSmGa3rNvlYYvo2bKK3cYEk95H
qEXSTJ2s9CjEcNLA0PmjOXNXK0tVzJqPsWYGJTVVm9RSg7n3EgvWPzdpEs/Y
/oFFiEW8nHg6SCplgyfm5cn/qL8oDx/dImLXz+T1NGzEVIt+1S6ytjCDULlf
GQJU2KAG+kliIymqCWUzXk75M254DLFDmqt3MXY0vBvFEWtYbhqSo4twTeFJ
l7bAOTIsORsebCYsQ2Ru2bXjsOLNp6QxehzCbuV9t3SrpCS42PMLyDWR6l7x
lnbODyW6EFXmyeVHW/3ICl7sGsX8Bi4+pqUsMdz0zIhDQ+um2R9ZJGaIiMLk
t6bc1JZX6vbiO9qL6sDIMiIL3nHRRYD53RD+1vJmt5pEXYGXBpkgwzYeDzSt
VNUWqzl5mUXGqweTMU5t/RtShjxSddJfz8ZKvz3vvW+8hBK3bDjMpsnfyXjp
IG83BTxq7L7M/Kqmm6EaTvXOLt6bET2ueAV3Y8N6jZV82PI0s5nz0nu9f6Kg
BzdVARyV3L/WPODScyU7V5uzrV/iFMD88VWThSTt0GhSPx4pvKac+kXGupJB
AHTYfoAQ80NJLuciqec8oPjN0QHRHlbtJoZjcDp5Fwy7onmUcDTxnkuv7orb
nQCKYlR0X/CSS64ecyaAa7iNDNolr+dn5Ysjus/lk9e+25Nu+WSZDBzzA+dJ
RJMn3gLjePFWRw1nURahe24X8xQFb9IckEl05OMQ/I9TZxAb4A6cFVDsi8Gt
CJ86F2UZ8sJdFIGbao5KWr2z6Gl6II+CgcTQZarOO6IIg4LBAHjbJ+LP73jM
TarOKqTNLqOb+rDdimX9l1QkNeK4YVWXl22JO409z2787y0o7fTNesVvd3x+
fcfnn//D7PS/z++a4PTHn5j5v/DC59n8f9MLvykwRFScWthdM/z2X5jBprnj
33/zpk8sCi/IE9lD/yiriVVOx0vDC/+b/xvO8Jv9/+QMv50aqPAP3yCjLKat
ffKFwWZ8wZ/f9cIdn18ff55jxvG3v6V6sBNjfo59/zZ8NzWV0Q/08+HqP4+f
H/276/Pr48+PVz6EmVtNp8bMsfUTKzlFxJ87/6AWiEby0DPdvrXWI1mK6KLa
1JWbZMjnsazwijHKzgdg+EsUqtetabm9189Og9sXN0lxoLabx8PtTol102yU
BcbLqfWeQFTuCmKL9bvZjuK81AXi40x3Y8tW+MIGz2cTot0N/Q9/BzC8ev7j
8yMQaBFtI2YUIojaa0SfVH8tQjUzpK5B1ZFBnke5oFcBfHimWUPV6g+TtZhH
1QS3i73+9nUuQWQM/fd/AJgLy6CpuAAA

-->

</rfc>

