Message-ID: <CADVnQyn-R7ibSjVMp3jjBWn7S-Qear0DgEU4xnmMv6rP7q7M1Q@mail.gmail.com>
Date:   Tue, 19 Apr 2022 22:11:43 -0400
From:   Neal Cardwell <ncardwell@...gle.com>
To:     Pengcheng Yang <yangpc@...gsu.com>
Cc:     Paolo Abeni <pabeni@...hat.com>,
        Eric Dumazet <edumazet@...gle.com>,
        Yuchung Cheng <ycheng@...gle.com>, netdev@...r.kernel.org,
        "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>
Subject: Re: [PATCH net-next v2 1/2] tcp: ensure to use the most recently sent
 skb when filling the rate sample


On Tue, Apr 19, 2022 at 9:48 PM Pengcheng Yang <yangpc@...gsu.com> wrote:
>
> On Tue, Apr 19, 2022 at 10:00 PM Paolo Abeni <pabeni@...hat.com> wrote:
> >
> > On Sun, 2022-04-17 at 14:51 -0400, Neal Cardwell wrote:
> > > On Sat, Apr 16, 2022 at 5:20 AM Pengcheng Yang <yangpc@...gsu.com> wrote:
> > > >
> > > > If an ACK (s)acks multiple skbs, we favor the information
> > > > from the most recently sent skb by choosing the skb with
> > > > the highest prior_delivered count. But in the interval
> > > > between receiving ACKs, we send multiple skbs with the same
> > > > prior_delivered, because tp->delivered only changes
> > > > when we receive an ACK.
> > > >
> > > > We use RACK's solution, copying tcp_rack_sent_after() into a
> > > > tcp_skb_sent_after() helper to determine "which packet was
> > > > sent last?". A follow-up patch will switch RACK itself over to
> > > > using tcp_skb_sent_after().
> > > >
> > > > Signed-off-by: Pengcheng Yang <yangpc@...gsu.com>
> > > > Cc: Neal Cardwell <ncardwell@...gle.com>
> > > > ---
> > > >  include/net/tcp.h   |  6 ++++++
> > > >  net/ipv4/tcp_rate.c | 11 ++++++++---
> > > >  2 files changed, 14 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/include/net/tcp.h b/include/net/tcp.h
> > > > index 6d50a66..fcd69fc 100644
> > > > --- a/include/net/tcp.h
> > > > +++ b/include/net/tcp.h
> > > > @@ -1042,6 +1042,7 @@ struct rate_sample {
> > > >         int  losses;            /* number of packets marked lost upon ACK */
> > > >         u32  acked_sacked;      /* number of packets newly (S)ACKed upon ACK */
> > > >         u32  prior_in_flight;   /* in flight before this ACK */
> > > > +       u32  last_end_seq;      /* end_seq of most recently ACKed packet */
> > > >         bool is_app_limited;    /* is sample from packet with bubble in pipe? */
> > > >         bool is_retrans;        /* is sample from retransmission? */
> > > >         bool is_ack_delayed;    /* is this (likely) a delayed ACK? */
> > > > @@ -1158,6 +1159,11 @@ void tcp_rate_gen(struct sock *sk, u32 delivered, u32 lost,
> > > >                   bool is_sack_reneg, struct rate_sample *rs);
> > > >  void tcp_rate_check_app_limited(struct sock *sk);
> > > >
> > > > +static inline bool tcp_skb_sent_after(u64 t1, u64 t2, u32 seq1, u32 seq2)
> > > > +{
> > > > +       return t1 > t2 || (t1 == t2 && after(seq1, seq2));
> > > > +}
> > > > +
> > > >  /* These functions determine how the current flow behaves in respect of SACK
> > > >   * handling. SACK is negotiated with the peer, and therefore it can vary
> > > >   * between different flows.
> > > > diff --git a/net/ipv4/tcp_rate.c b/net/ipv4/tcp_rate.c
> > > > index 617b818..a8f6d9d 100644
> > > > --- a/net/ipv4/tcp_rate.c
> > > > +++ b/net/ipv4/tcp_rate.c
> > > > @@ -74,27 +74,32 @@ void tcp_rate_skb_sent(struct sock *sk, struct sk_buff *skb)
> > > >   *
> > > >   * If an ACK (s)acks multiple skbs (e.g., stretched-acks), this function is
> > > >   * called multiple times. We favor the information from the most recently
> > > > - * sent skb, i.e., the skb with the highest prior_delivered count.
> > > > + * sent skb, i.e., the skb with the most recently sent time and the highest
> > > > + * sequence.
> > > >   */
> > > >  void tcp_rate_skb_delivered(struct sock *sk, struct sk_buff *skb,
> > > >                             struct rate_sample *rs)
> > > >  {
> > > >         struct tcp_sock *tp = tcp_sk(sk);
> > > >         struct tcp_skb_cb *scb = TCP_SKB_CB(skb);
> > > > +       u64 tx_tstamp;
> > > >
> > > >         if (!scb->tx.delivered_mstamp)
> > > >                 return;
> > > >
> > > > +       tx_tstamp = tcp_skb_timestamp_us(skb);
> > > >         if (!rs->prior_delivered ||
> > > > -           after(scb->tx.delivered, rs->prior_delivered)) {
> > > > +           tcp_skb_sent_after(tx_tstamp, tp->first_tx_mstamp,
> > > > +                              scb->end_seq, rs->last_end_seq)) {
> > > >                 rs->prior_delivered_ce  = scb->tx.delivered_ce;
> > > >                 rs->prior_delivered  = scb->tx.delivered;
> > > >                 rs->prior_mstamp     = scb->tx.delivered_mstamp;
> > > >                 rs->is_app_limited   = scb->tx.is_app_limited;
> > > >                 rs->is_retrans       = scb->sacked & TCPCB_RETRANS;
> > > > +               rs->last_end_seq     = scb->end_seq;
> > > >
> > > >                 /* Record send time of most recently ACKed packet: */
> > > > -               tp->first_tx_mstamp  = tcp_skb_timestamp_us(skb);
> > > > +               tp->first_tx_mstamp  = tx_tstamp;
> > > >                 /* Find the duration of the "send phase" of this window: */
> > > >                 rs->interval_us = tcp_stamp_us_delta(tp->first_tx_mstamp,
> > > >                                                      scb->tx.first_tx_mstamp);
> > > > --
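
To make the problem in the commit message concrete: skbs sent in the
interval between two ACKs all carry the same prior_delivered, so the
old after(scb->tx.delivered, rs->prior_delivered) check cannot pick
the most recently sent one among them; the new helper compares the
send timestamp first and falls back to the sequence number when the
timestamps are equal. A minimal user-space sketch of that tie-break
(not kernel code; the kernel's after() is open-coded here as a signed
32-bit compare, and skbs A/B are hypothetical):

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* same shape as the tcp_skb_sent_after() helper added above */
  static bool skb_sent_after(uint64_t t1, uint64_t t2, uint32_t seq1, uint32_t seq2)
  {
          return t1 > t2 || (t1 == t2 && (int32_t)(seq1 - seq2) > 0);
  }

  int main(void)
  {
          /* skb A (end_seq 2000) and skb B (end_seq 3000), both sent
           * at t=100us with the same prior_delivered: the sequence
           * number decides that B was sent last.
           */
          printf("B sent after A: %d\n", skb_sent_after(100, 100, 3000, 2000)); /* 1 */
          printf("A sent after B: %d\n", skb_sent_after(100, 100, 2000, 3000)); /* 0 */
          return 0;
  }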
> > >
> > > Thanks for the patch! The change looks good to me, and it passes our
> > > team's packetdrill tests.
> > >
> > > One suggestion: currently this patch seems to be targeted to the
> > > net-next branch. However, since it's a bug fix my sense is that it
> > > would be best to target this to the net branch, so that it gets
> > > backported to stable releases.
> > >
> > > One complication is that the follow-on patch in this series ("tcp: use
> > > tcp_skb_sent_after() instead in RACK") is a pure re-factor/cleanup,
> > > which is more appropriate for net-next. So the plan I was trying to
> > > describe in the previous thread was that this series could be
> > > implemented as:
> > >
> > > (1) first, submit "tcp: ensure to use the most recently sent skb when
> > > filling the rate sample" to the net branch
> > > (2) wait for the fix in the net branch to be merged into the net-next branch
> > > (3) second, submit "tcp: use tcp_skb_sent_after() instead in RACK" to
> > > the net-next branch
> > >
> > > What do folks think?
> >
> > +1 for the above.
> >
> > @Pengcheng: please additionally provide a suitable 'fixes' tag for
> > patch 1/2.
>
> Fixes: b9f64820fb22 ("tcp: track data delivery rate for a TCP connection")

Thanks. That looks like the correct SHA1. However, I think there may
be a miscommunication. :-)

I think what Paolo and I are suggesting is:

(1) e-mail the patch "tcp: ensure to use the most recently sent skb
when filling the rate sample" as a submission to the net branch
("[PATCH net v3] tcp: ensure to use the most recently sent skb when
filling the rate sample"), with the "Fixes:" footer in the commit
message on the line above your "Signed-off-by:" footer (see the
example footer block after this list).

(2) wait for the fix in the net branch to be merged into the net-next branch

(3) submit "tcp: use tcp_skb_sent_after() instead in RACK" to the
net-next branch
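
For reference, the footer block of that v3 commit message would then
look something like this (using the tag you provided above):

  Fixes: b9f64820fb22 ("tcp: track data delivery rate for a TCP connection")
  Signed-off-by: Pengcheng Yang <yangpc@...gsu.com>
  Cc: Neal Cardwell <ncardwell@...gle.com>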

thanks,
neal
