Date:   Tue, 08 May 2018 18:17:14 +0200
From:   Paolo Abeni <pabeni@...hat.com>
To:     John Fastabend <john.fastabend@...il.com>,
        Cong Wang <xiyou.wangcong@...il.com>,
        Jamal Hadi Salim <jhs@...atatu.com>
Cc:     Eric Dumazet <eric.dumazet@...il.com>,
        Jiri Pirko <jiri@...nulli.us>,
        David Miller <davem@...emloft.net>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>
Subject: Re: [net PATCH v2] net: sched, fix OOO packets with pfifo_fast

Hi all,

I'm still banging my head on this item...

On Wed, 2018-04-18 at 09:44 -0700, John Fastabend wrote:
> There is a set of conditions
> that if met we can run without the lock. Possibly ONETXQUEUE and
> aligned cpu_map is sufficient. We could detect this case and drop
> the locking. For existing systems and high Gbps NICs I think (feel
> free to correct me) assuming a core per cpu is OK. At some point
> though we probably need to revisit this assumption.

I think we can improve things measurably by moving the
__QDISC_STATE_RUNNING bit fiddling from inside __qdisc_restart() to
around the __qdisc_run() call in the 'lockless' path.

Currently, in the single-sender scenario with a packet rate below the
link limit, we pay the atomic bit overhead twice per transmitted
packet: once for the successful dequeue, plus once more for the next,
failing, dequeue attempt. With the wider scope we would pay it only
once per packet.
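Something along these lines (a hand-written sketch of the idea, not a
tested patch; it assumes __qdisc_run() itself is changed to no longer
touch the bit):

    /* Sketch of the NOLOCK branch of __dev_xmit_skb(): pay the
     * atomic test_and_set_bit()/clear_bit() pair once per run,
     * instead of once per dequeue inside __qdisc_restart().
     */
    if (q->flags & TCQ_F_NOLOCK) {
        rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;

        if (!test_and_set_bit(__QDISC_STATE_RUNNING, &q->state)) {
            __qdisc_run(q);    /* dequeue loop, no bit fiddling */
            clear_bit(__QDISC_STATE_RUNNING, &q->state);
        }
    }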

After that change, __QDISC_STATE_RUNNING usage will look a lot like
qdisc_lock(), for the dequeue part at least. So I'm wondering: could
we replace __QDISC_STATE_RUNNING with spin_trylock(qdisc_lock())
_and_ keep that lock held for the whole qdisc_run()?!

The comment above qdisc_restart() states clearly we can't (it asserts
that qdisc_lock(q) and the device xmit lock are mutually exclusive:
if one is grabbed, the other must be free), but I don't see why.
Acquiring qdisc_lock() and the xmit lock always in the same order
looks safe to me. Can someone please explain? Is there some possible
deadlock condition I'm missing?
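To make the ordering explicit, this is roughly what I have in mind
(sketch only, requeue/error handling elided; dequeue_skb() and
HARD_TX_LOCK() as in net/sched/sch_generic.c and net/core/dev.c):

    /* Always take qdisc_lock(q) first and the device xmit lock
     * second, never the reverse; with a fixed order I don't see
     * how two CPUs could deadlock on this pair.
     */
    if (spin_trylock(qdisc_lock(q))) {
        while ((skb = dequeue_skb(q, &validate, &packets))) {
            HARD_TX_LOCK(dev, txq, smp_processor_id());
            skb = dev_hard_start_xmit(skb, dev, txq, &ret);
            HARD_TX_UNLOCK(dev, txq);
        }
        spin_unlock(qdisc_lock(q));
    }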

It looks like the comment itself comes directly from the pre-BitKeeper
era (modulo lock name changes).

Performance-wise, acquiring qdisc_lock() only once per transmitted
packet should considerably improve 'locked' qdisc performance, in both
the contended and the uncontended scenario (and some quick experiments
seem to confirm that).

Thanks,

Paolo
