Date:   Tue, 2 Aug 2022 10:06:45 -0700
From:   Peter Oskolkov <posk@...gle.com>
To:     Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc:     Peter Oskolkov <posk@...k.io>,
        Peter Zijlstra <peterz@...radead.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        "Paul E . McKenney" <paulmck@...nel.org>,
        Boqun Feng <boqun.feng@...il.com>,
        "H. Peter Anvin" <hpa@...or.com>, Paul Turner <pjt@...gle.com>,
        linux-api <linux-api@...r.kernel.org>,
        Christian Brauner <christian.brauner@...ntu.com>,
        Florian Weimer <fw@...eb.enyo.de>,
        David Laight <David.Laight@...lab.com>,
        carlos <carlos@...hat.com>, Chris Kennelly <ckennelly@...gle.com>
Subject: Re: [PATCH v3 00/23] RSEQ node id and virtual cpu id extensions

On Tue, Aug 2, 2022 at 8:01 AM Mathieu Desnoyers
<mathieu.desnoyers@...icios.com> wrote:
>

[...]

> >
> > We have experimented with several approaches here. The one that we are
> > currently using is the "flat" model: we allocate vcpu IDs ignoring numa nodes.
> >
> > We did try per-numa-node vcpus, but it did not show any material improvement
> > over the "flat" model, perhaps because on our most "wide" servers the CPU
> > topology is multi-level. Chris Kennelly may provide more details here.
>
> I would really like to know more about Google's per-numa-node vcpus implementation.
> I suspect you guys may have taken a different turn somewhere in the design which
> led to these results. But having not seen that implementation, I can only guess.
>
> I notice the following Google-specific prototype extension in tcmalloc:
>
>   // This is a prototype extension to the rseq() syscall.  Since a process may
>   // run on only a few cores at a time, we can use a dense set of "v(irtual)
>   // cpus."  This can reduce cache requirements, as we only need N caches for
>   // the cores we actually run on simultaneously, rather than a cache for every
>   // physical core.
>   union {
>     struct {
>       short numa_node_id;
>       short vcpu_id;
>     };
>     int vcpu_flat;
>   };
>
> Can you tell me more about the way the numa_node_id and vcpu_id are allocated
> internally, and how they are expected to be used by userspace ?

Based on a "VCPU policy" flag passed by userspace in the rseq registration
request, our kernel would do one of the following (see the enum sketch below):
- do nothing regarding vcpus, i.e. behave as it currently does upstream;
- allocate VCPUs in a "flat" manner, ignoring NUMA;
- populate numa_node_id with the value of numa_node_id() from
  https://elixir.bootlin.com/linux/latest/source/include/linux/topology.h
  and allocate vcpu_id densely within the NUMA node.
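
Purely for illustration, the three policies could be spelled as an enum like
the one below; these names (and any such registration flag) are hypothetical,
nothing like them exists upstream:

  /*
   * Hypothetical illustration of the three registration-time policies
   * described above; RSEQ_VCPU_POLICY_* does not exist upstream.
   */
  enum rseq_vcpu_policy {
          RSEQ_VCPU_POLICY_NONE,     /* current upstream behaviour: no vcpu IDs */
          RSEQ_VCPU_POLICY_FLAT,     /* dense vcpu IDs across the whole process */
          RSEQ_VCPU_POLICY_PER_NODE, /* numa_node_id + per-node dense vcpu_id */
  };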

Basically, if at time T there are M threads running on node 0 and N threads
running on node 1, then at that moment vcpu IDs [0, M-1] are associated with
node 0 and vcpu IDs [0, N-1] are associated with node 1. If a thread migrates
across nodes, the balance changes accordingly.
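
As a conceptual sketch of that bookkeeping (not our actual kernel code, and
ignoring locking and scheduler integration), per-node dense allocation could
look roughly like this:

  #include <stdbool.h>

  #define MAX_NODES          8       /* illustrative bounds only */
  #define MAX_VCPUS_PER_NODE 512

  /* in_use[node][id] is true while some runnable thread owns (node, id). */
  static bool in_use[MAX_NODES][MAX_VCPUS_PER_NODE];

  /* Return the lowest free vcpu ID on @node, or -1 if the pool is full. */
  static int vcpu_id_alloc(int node)
  {
          for (int id = 0; id < MAX_VCPUS_PER_NODE; id++) {
                  if (!in_use[node][id]) {
                          in_use[node][id] = true;
                          return id;
                  }
          }
          return -1;
  }

  /* Release (node, id), e.g. when the owning thread migrates to another node. */
  static void vcpu_id_free(int node, int id)
  {
          in_use[node][id] = false;
  }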

I'm not sure exactly how tcmalloc tried to use VCPUs under this policy, or
what benefits were expected. The simplest way would be to keep a freelist per
node_id/vcpu_id pair (basically, per vcpu_flat in the union), but thread
migrations would tend to increase the number of freelists, so the benefits
should come from memory locality and are therefore somewhat difficult to
measure precisely.
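
A rough sketch of that simplest scheme, with read_numa_node_id() and
read_vcpu_id() standing in as hypothetical accessors for the prototype fields
in the calling thread's rseq area (they are not an existing API):

  #define MAX_NODES          8       /* illustrative bounds only */
  #define MAX_VCPUS_PER_NODE 512

  struct freelist {
          void *head;                /* singly linked list of free objects */
  };

  /* One freelist per (node_id, vcpu_id) pair, i.e. per vcpu_flat value. */
  static struct freelist freelists[MAX_NODES][MAX_VCPUS_PER_NODE];

  extern short read_numa_node_id(void);  /* hypothetical accessor */
  extern short read_vcpu_id(void);       /* hypothetical accessor */

  static void *freelist_pop(void)
  {
          struct freelist *fl = &freelists[read_numa_node_id()][read_vcpu_id()];
          void *obj = fl->head;

          /* A real allocator would do this pop inside an rseq critical
           * section so that a cross-node migration restarts the operation
           * instead of racing with another thread on the old vcpu. */
          if (obj)
                  fl->head = *(void **)obj;
          return obj;
  }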

Chris Kennelly may offer more details here.

Thanks,
Peter
