Message-ID: <CANn89iLuqGdbHkyUcTZd+Ww6vUxqNg0L4eC5Xt8bqLMDmDM18w@mail.gmail.com>
Date:   Tue, 26 Apr 2022 06:11:40 -0700
From:   Eric Dumazet <edumazet@...gle.com>
To:     Paolo Abeni <pabeni@...hat.com>
Cc:     Eric Dumazet <eric.dumazet@...il.com>,
        "David S . Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH v2 net-next] net: generalize skb freeing deferral to
 per-cpu lists

On Tue, Apr 26, 2022 at 12:38 AM Paolo Abeni <pabeni@...hat.com> wrote:
>
>
> Hello,
>
> I'm sorry for the late feedback. I have only a possibly relevant point
> below.
>
> On Fri, 2022-04-22 at 13:12 -0700, Eric Dumazet wrote:
> [...]
> > @@ -6571,6 +6577,28 @@ static int napi_threaded_poll(void *data)
> >       return 0;
> >  }
> >
> > +static void skb_defer_free_flush(struct softnet_data *sd)
> > +{
> > +     struct sk_buff *skb, *next;
> > +     unsigned long flags;
> > +
> > +     /* Paired with WRITE_ONCE() in skb_attempt_defer_free() */
> > +     if (!READ_ONCE(sd->defer_list))
> > +             return;
> > +
> > +     spin_lock_irqsave(&sd->defer_lock, flags);
> > +     skb = sd->defer_list;
>
> I *think* that this read can possibly be fused with the previous one,
> and another READ_ONCE() should avoid that.

Only the lockless read needs READ_ONCE()

For the one after spin_lock_irqsave(&sd->defer_lock, flags),
there is no need for any additional barrier.

If the compiler really wants to use multiple one-byte-at-a-time loads,
we are not going to fight it; there might be good reasons for that.

(We do not want to spread READ_ONCE / WRITE_ONCE for all
loads/stores, as this has performance implications)
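
For anyone reading along in the archive, a minimal sketch of the idiom
under discussion (assuming, beyond the quoted hunk, that the flush
detaches the list under the lock and frees the skbs afterwards; the
function name and the napi_consume_skb() call here are illustrative,
not the exact patch):

/* Sketch only. The lockless early-return peek is the one access that
 * needs READ_ONCE(), paired with WRITE_ONCE() on the producer side.
 * Every access made after spin_lock_irqsave() is serialized by the
 * lock, so plain loads/stores are sufficient there.
 */
static void skb_defer_free_flush_sketch(struct softnet_data *sd)
{
	struct sk_buff *skb, *next;
	unsigned long flags;

	if (!READ_ONCE(sd->defer_list))	/* lockless peek */
		return;

	spin_lock_irqsave(&sd->defer_lock, flags);
	skb = sd->defer_list;		/* plain load: the lock orders it */
	sd->defer_list = NULL;		/* detach the whole list */
	spin_unlock_irqrestore(&sd->defer_lock, flags);

	while (skb) {			/* free outside the lock */
		next = skb->next;
		napi_consume_skb(skb, 1);
		skb = next;
	}
}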

>
> BTW it looks like this version gives slightly better results than the
> previous one, perhaps due to the singly-linked list usage?

Yes, this could be the case, or maybe it is because 10 runs are not enough
on a host with 32 RX queues, with a 50/50 split between the two NUMA nodes.

When reaching high throughput, every detail matters, such as background
traffic on the network from monitoring and machine-health daemons.

Thanks.
