Date:   Fri, 22 Apr 2022 23:23:01 +0100
From:   Charles-François Natali <cf.natali@...il.com>
To:     "Jason A. Donenfeld" <Jason@...c4.com>
Cc:     wireguard@...ts.zx2c4.com, netdev@...r.kernel.org,
        linux-crypto@...r.kernel.org,
        Daniel Jordan <daniel.m.jordan@...cle.com>,
        Steffen Klassert <steffen.klassert@...unet.com>
Subject: Re: [PATCH] WireGuard: restrict packet handling to non-isolated CPUs.

Hi,

On Fri, 22 Apr 2022 at 01:02, Jason A. Donenfeld <Jason@...c4.com> wrote:
>
> netdev@ - Original thread is at
> https://lore.kernel.org/wireguard/20220405212129.2270-1-cf.natali@gmail.com/
>
> Hi Charles-François,
>
> On Tue, Apr 05, 2022 at 10:21:29PM +0100, Charles-Francois Natali wrote:
> > WireGuard currently uses round-robin to dispatch the handling of
> > packets, handling them on all online CPUs, including isolated ones
> > (isolcpus).
> >
> > This is unfortunate because it causes significant latency on isolated
> > CPUs - see e.g. below over 240 usec:
> >
> > kworker/47:1-2373323 [047] 243644.756405: funcgraph_entry:                  |  process_one_work() {
> > kworker/47:1-2373323 [047] 243644.756406: funcgraph_entry:                  |    wg_packet_decrypt_worker() {
> > [...]
> > kworker/47:1-2373323 [047] 243644.756647: funcgraph_exit:        0.591 us   |    }
> > kworker/47:1-2373323 [047] 243644.756647: funcgraph_exit:      ! 242.655 us |  }
> >
> > Instead, restrict to non-isolated CPUs.
>
> Huh, interesting... I haven't seen this feature before. What's the
> intended use case? To never run _anything_ on those cores except
> processes you choose? To run some things but not intensive things? Is it
> sort of a RT-lite?

Yes, the idea is to not run anything on those cores: no user tasks, no unbound
workqueues, etc.
Typically one would also set IRQ affinities to steer (soft)IRQs away from
those cores, since they cause significant latency as well.
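
To give a rough idea of the kind of tuning involved (the CPU list and IRQ
number below are made up for illustration, not an actual configuration), it
boils down to boot parameters plus per-IRQ affinities along these lines:

    isolcpus=domain,managed_irq,2-47 nohz_full=2-47 rcu_nocbs=2-47 irqaffinity=0-1

    # keep a given IRQ on the housekeeping CPUs only
    echo 0-1 > /proc/irq/123/smp_affinity_list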

This series by Frederic Weisbecker is a good introduction:
https://www.suse.com/c/cpu-isolation-introduction-part-1/

The idea is to achieve low latency and low jitter.
With a reasonably tuned kernel one can reach around 10 usec latency. However,
as soon as we start using wireguard, the bound workqueues used for round-robin
dispatch cause stalls of up to 1 ms, which is just not acceptable for us.
Currently our only option is to either patch the wireguard code or stop using
it, which would be a shame :).

> I took a look in padata/pcrypt and it doesn't look like they're
> examining the housekeeping mask at all. Grepping for
> housekeeping_cpumask doesn't appear to show many results in things like
> workqueues, but rather in core scheduling stuff. So I'm not quite sure
> what to make of this patch.

Thanks, I didn't know about padata, but after skimming through the code it does
seem that it would suffer from the same issue.

> I suspect the thing to do might be to patch both wireguard and padata,
> and send a patch series to me, the padata people, and
> netdev@...r.kernel.org, and we can all hash this out together.

Sure, I'll try to have a look at the padata code and write something up.

> Regarding your patch, is there a way to make that a bit more succinct,
> without introducing all of those helper functions? It seems awfully
> verbose for something that seems like a matter of replacing the online
> mask with the housekeeping mask.

Indeed, I wasn't really happy about that.
The reason I wrote those helper functions is that the housekeeping mask is
based on possible CPUs (cpu_possible_mask), so unfortunately it's not just a
matter of e.g. replacing cpu_online_mask with
housekeeping_cpumask(HK_FLAG_DOMAIN): we have to AND it with the online mask
whenever we compute the weight, find the next CPU in the mask, etc.

And I'd rather have the operations and mask in a single location instead of
scattered throughout the code, to make it easier to understand and maintain.
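
To give an idea of the shape of those helpers, here is a rough sketch (the
names are made up for illustration, not the actual patch, and a real
implementation would avoid an on-stack cpumask for large NR_CPUS):

    #include <linux/cpumask.h>
    #include <linux/sched/isolation.h>

    /* Online, non-isolated CPUs: housekeeping_cpumask() is based on
     * cpu_possible_mask, so it has to be ANDed with cpu_online_mask.
     */
    static unsigned int wg_cpumask_weight(void)
    {
            cpumask_t mask;

            cpumask_and(&mask, housekeeping_cpumask(HK_FLAG_DOMAIN),
                        cpu_online_mask);
            return cpumask_weight(&mask);
    }

    static int wg_cpumask_next_online(int cpu)
    {
            cpumask_t mask;

            cpumask_and(&mask, housekeeping_cpumask(HK_FLAG_DOMAIN),
                        cpu_online_mask);
            return cpumask_next(cpu, &mask);
    }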

Happy to change to something more inline though, or open to suggestions.

Cheers,

Charles


>
> Jason
