Message-ID: <CAFTs51UsQgZkc-Xg3XaBvxssVCyfA=JCS4npBGBAeWJK0yUOuw@mail.gmail.com>
Date: Fri, 28 Jan 2022 08:57:53 -0800
From: Peter Oskolkov <posk@...k.io>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc: Paul Turner <pjt@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
Michael Jeanson <mjeanson@...icios.com>,
paulmck <paulmck@...nel.org>, Boqun Feng <boqun.feng@...il.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
Chris Kennelly <ckennelly@...gle.com>
Subject: Re: rseq vcpu_id ideas
On Wed, Jan 26, 2022 at 5:22 PM Mathieu Desnoyers
<mathieu.desnoyers@...icios.com> wrote:
>
> Hi Paul,
>
> I remember our LPC discussions about your virtual cpu id ideas, and noticed some tcmalloc code
> with "prototype" fields for vcpu_id and numa node id
> (https://github.com/google/tcmalloc/blob/master/tcmalloc/internal/linux_syscall_support.h#L34).
>
> I'm currently toying with ideas very close to vcpu_ids to solve issues with overzealous
> memory allocation for LTTng-UST (user-space tracer) in use-cases where containers use few
> cores.
>
> My current thinking is that we could use your vcpu_id idea, but apply it on a per-pid-namespace
> basis rather than per-process. We may have to be clever with NUMA as well to ensure good NUMA
> locality.
>
> Do you have any thoughts about this, and perhaps some prototype rseq extension code you could
> share as a starting point?
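For reference, the "prototype" fields in the tcmalloc header linked
above look roughly like this. This is a sketch only; the exact field
order and the union layout are my assumptions here, not a settled ABI:

        struct kernel_rseq {
                /* Upstream struct rseq fields. */
                unsigned cpu_id_start;
                unsigned cpu_id;
                unsigned long long rseq_cs;
                unsigned flags;
                unsigned padding[2];
                /* Prototype extension: dense "virtual cpu" ids, so a
                 * process confined to a few cores sees small, reusable
                 * ids instead of sparse physical cpu numbers. */
                union {
                        struct {
                                short numa_node_id; /* node, for per-node accounting */
                                short vcpu_id;      /* dense id within the node */
                        };
                        int vcpu_flat;              /* dense id across the process */
                };
        };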
We've been using rseq vcpu extensions in production for more than a
year, with good results. We have a perfect use case, though: wide
machines (hundreds of CPUs) running many narrow processes (each
restricted to a small number of CPUs). Our extension can be configured
to do either "flat" vcpu accounting or "per NUMA node" vcpu
accounting. We currently use only "flat" accounting, I guess because
most of our processes are affined to a single NUMA node.
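To make the two modes concrete, here is a minimal sketch of how a
consumer might turn the rseq fields into an index for per-vcpu state.
The helper names and the vcpus_per_node parameter are illustrative,
not a real API:

        /* Illustrative only: index per-vcpu state under each accounting
         * mode, using the sketched kernel_rseq layout above. */
        static inline unsigned int
        vcpu_slot_flat(const volatile struct kernel_rseq *rs)
        {
                /* "flat": ids are dense across the whole process, so
                 * the id itself is the slot. */
                return (unsigned int)rs->vcpu_flat;
        }

        static inline unsigned int
        vcpu_slot_per_node(const volatile struct kernel_rseq *rs,
                           unsigned int vcpus_per_node)
        {
                /* "per numa node": ids are dense within each node, so
                 * combine the node id with the per-node vcpu id; one
                 * node's slots stay contiguous, which helps NUMA
                 * locality of the backing memory. */
                return (unsigned int)rs->numa_node_id * vcpus_per_node
                       + (unsigned int)rs->vcpu_id;
        }

The trade-off is roughly: flat accounting keeps the id space as small
as possible, while per-node accounting accepts a somewhat larger id
space in exchange for node-local backing memory.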
I plan to post the code to the list after the UMCG saga comes to a
clear resolution.
>
> Thanks,
>
> Mathieu
>
> --
> Mathieu Desnoyers
> EfficiOS Inc.
> http://www.efficios.com