Message-ID: <CAHmME9rzEjKg41eq5jBtsLXF+vZSEnvdomZJ-rTzx8Q=ac1ayg@mail.gmail.com>
Date:   Mon, 20 Dec 2021 18:29:49 +0100
From:   "Jason A. Donenfeld" <Jason@...c4.com>
To:     Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc:     WireGuard mailing list <wireguard@...ts.zx2c4.com>,
        Netdev <netdev@...r.kernel.org>,
        "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>
Subject: Re: [RFC] wiregard RX packet processing.

Hi Sebastian,

Seems like you've identified two things: the use of need_resched, and
potentially surrounding napi_schedule in local_bh_{disable,enable}.

Regarding need_resched, I pulled that out of other code that seemed to
have the "same requirements", as vaguely conceived. It indeed might
not be right. The intent is to have that worker running at maximum
throughput for extended periods of time, while not preventing other
threads from running elsewhere, so that, e.g., a user's machine
doesn't have a janky mouse while downloading a file.

What are the effects of calling cond_resched() unconditionally,
without first checking need_resched()? Sounds like you're saying there
are none at all?
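
For reference, the shape of what I have now is roughly this (a
simplified sketch, not the real worker; next_queued_skb() and
process_one() are made-up stand-ins for the actual dequeue and decrypt
steps):

#include <linux/sched.h>
#include <linux/skbuff.h>
#include <linux/workqueue.h>

/* Simplified sketch of the RX worker loop in question; the helpers here
 * are hypothetical stand-ins, not the real wireguard functions.
 */
static void rx_worker(struct work_struct *work)
{
	struct sk_buff *skb;

	while ((skb = next_queued_skb(work)) != NULL) {
		process_one(skb);

		/* Current form: only yield when the scheduler asks for it. */
		if (need_resched())
			cond_resched();

		/* Alternative being discussed: call cond_resched()
		 * unconditionally on every iteration instead.
		 */
	}
}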

Regarding napi_schedule, I actually wasn't aware that its requirement
to _only_ ever run from softirq was a strict one. When I switched to
using napi_schedule in this way, throughput really jumped up
significantly. Part of this is indeed from the batching, so that the
napi callback can then handle more packets in one go later. But I
assumed it was something inside of NAPI that was doing the batching
and scheduling, rather than it being a mistake on my part to call this
from a wq and not from a softirq.
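
Concretely, what I'm doing now looks roughly like this (again a
simplified sketch with illustrative names, not the actual code):

#include <linux/netdevice.h>
#include <linux/workqueue.h>

/* Sketch of the current call pattern: the worker runs in process
 * (workqueue) context, queues up a batch of decrypted skbs for the
 * napi poll callback, and then kicks NAPI directly from there.
 */
static void decrypt_worker(struct work_struct *work)
{
	struct napi_struct *napi = worker_to_napi(work);	/* hypothetical */

	/* ... decrypt skbs and queue them for the poll callback ... */

	napi_schedule(napi);	/* called from process context, not softirq */
}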

What, then, are the effects of wrapping that in
local_bh_{disable,enable} as you've done in the patch? You mentioned
one aspect is that it will "invoke wg_packet_rx_poll() where you see
only one skb." It sounds like that'd be bad for performance, though,
given that the design of NAPI is really geared toward batching.
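
If I'm reading your patch right, the change effectively turns that
last call into something like the following (my own rough rendering of
the idea, not the patch verbatim):

	local_bh_disable();
	napi_schedule(napi);
	/* local_bh_enable() runs any softirq raised above right here, so
	 * the poll callback fires immediately in this context instead of
	 * after more skbs have had a chance to queue up.
	 */
	local_bh_enable();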

Jason
