Message-ID: <YXGd/T0YHG/xEAkw@slm.duckdns.org>
Date:   Thu, 21 Oct 2021 07:06:05 -1000
From:   Tejun Heo <tj@...nel.org>
To:     Pratik Sampat <psampat@...ux.ibm.com>
Cc:     Christian Brauner <christian.brauner@...ntu.com>,
        bristot@...hat.com, christian@...uner.io, ebiederm@...ssion.com,
        lizefan.x@...edance.com, hannes@...xchg.org, mingo@...nel.org,
        juri.lelli@...hat.com, linux-kernel@...r.kernel.org,
        linux-fsdevel@...r.kernel.org, cgroups@...r.kernel.org,
        containers@...ts.linux.dev, containers@...ts.linux-foundation.org,
        pratik.r.sampat@...il.com
Subject: Re: [RFC 0/5] kernel: Introduce CPU Namespace

Hello,

On Thu, Oct 21, 2021 at 01:14:10PM +0530, Pratik Sampat wrote:
> I'm speculating, and please correct me if I'm wrong; suggesting
> an optimal number of threads to spawn to saturate the available
> resources can get convoluted, right?
> 
> In the nginx example illustrated in the cover patch, it worked best
> when the thread count was N+1 (N worker threads, 1 master thread);
> however, different applications can work better with a different
> configuration of threads spawned based on their use case and
> multi-threading requirements.

Yeah, I mean, the number would have to be based on ideal conditions - i.e.
the cgroup needs N always-runnable threads to saturate all the available
CPUs, and then applications can do what they need to do based on that
information. Note that this is equivalent to making these decisions based on
the number of CPUs.
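
For example, something like the following (completely untested; assumes
cgroup2 is mounted at /sys/fs/cgroup and that the task reads its own
cpu.max - a real implementation would resolve the path from
/proc/self/cgroup) could derive that N:

/*
 * Untested sketch: "N CPUs worth of runtime" from cgroup v2 cpu.max,
 * whose content is "$QUOTA $PERIOD" or "max $PERIOD" (microseconds).
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int effective_cpus(void)
{
	FILE *f = fopen("/sys/fs/cgroup/cpu.max", "r");
	char buf[64];
	long quota, period;

	if (!f)
		return (int)sysconf(_SC_NPROCESSORS_ONLN);
	if (!fgets(buf, sizeof(buf), f)) {
		fclose(f);
		return (int)sysconf(_SC_NPROCESSORS_ONLN);
	}
	fclose(f);

	/* "max $PERIOD" means no bandwidth limit */
	if (!strncmp(buf, "max", 3))
		return (int)sysconf(_SC_NPROCESSORS_ONLN);

	if (sscanf(buf, "%ld %ld", &quota, &period) != 2 || period <= 0)
		return (int)sysconf(_SC_NPROCESSORS_ONLN);

	/* N CPUs worth of runtime == ceil(quota / period) */
	return (int)((quota + period - 1) / period);
}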

> Eventually, looking at the load, we may be able to suggest more/fewer
> threads to spawn, but initially we may have to suggest threads
> to spawn as a direct function of N CPUs available or N CPUs worth of
> runtime available?

That kind of dynamic tuning is best done with PSI, which can reliably
indicate saturation and the degree of contention.
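
e.g. a tuner could poll the "some" avg10 in the cgroup's cpu.pressure
(untested sketch; the path is assumed as above):

/*
 * cpu.pressure lines look like:
 *   some avg10=0.00 avg60=0.00 avg300=0.00 total=0
 * avg10 is the percentage of the last 10s in which some runnable
 * tasks were stalled waiting for CPU.
 */
#include <stdio.h>
#include <string.h>

static double cpu_some_avg10(void)
{
	FILE *f = fopen("/sys/fs/cgroup/cpu.pressure", "r");
	char line[256];
	double avg10 = -1.0;

	if (!f)
		return -1.0;
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "some", 4)) {
			sscanf(line, "some avg10=%lf", &avg10);
			break;
		}
	}
	fclose(f);
	return avg10;
}

and then shrink the pool when avg10 climbs and grow it while it stays
near zero - the thresholds and the grow/shrink policy being whatever
fits the application.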

> > The other
> > metric would be the maximum available fraction of CPUs available to the
> > cgroup subtree if the cgroup stays saturated. This number is trickier as it
> > has to consider how much others are using, but would be determined by the
> > smaller of what would be available through cpu.weight and cpu.max.
> 
> I agree, this would be a very useful metric to have. Having the
> knowledge of how much further we can scale when we're saturating our
> limits, keeping in mind the other running applications, could
> be really useful not just for the applications themselves but also
> for container orchestrators.

Similarly, availability metrics would be useful for ballpark sizing so that
applications don't have to dynamically tune across the entire range; the
actual adjustments to stay saturated are likely best done through PSI, which
is the direct metric indicating resource saturation.
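
FWIW, once the inputs are in hand - own cpu.weight and cpu.max plus the
sum of the competing siblings' cpu.weight, which is the part that's
awkward to collect from userspace - the ceiling calculation itself is
trivial. Sketch only, with the sibling walk left out:

/*
 * quota_us < 0 stands for "max" in cpu.max;
 * sibling_weight_sum includes our own weight.
 */
static double available_cpus(double parent_cpus, long my_weight,
			     long sibling_weight_sum, long quota_us,
			     long period_us)
{
	/* proportional share promised by cpu.weight under contention */
	double weight_share = parent_cpus * (double)my_weight /
			      (double)sibling_weight_sum;
	/* hard bandwidth ceiling from cpu.max */
	double max_share = quota_us < 0 ? parent_cpus :
			   (double)quota_us / (double)period_us;

	return weight_share < max_share ? weight_share : max_share;
}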

Thanks.

-- 
tejun
