Message-ID: <CAG48ez01rRTsB0PcxsrzcbMeVnr2bPjigc15GpFCoKQmdzmGrg@mail.gmail.com>
Date: Mon, 19 May 2025 23:24:16 +0200
From: Jann Horn <jannh@...gle.com>
To: Alexey Gladkov <legion@...nel.org>
Cc: Chen Ridong <chenridong@...weicloud.com>, akpm@...ux-foundation.org,
Liam.Howlett@...cle.com, lorenzo.stoakes@...cle.com, vbabka@...e.cz,
pfalcato@...e.de, bigeasy@...utronix.de, paulmck@...nel.org,
chenridong@...wei.com, roman.gushchin@...ux.dev, brauner@...nel.org,
pmladek@...e.com, geert@...ux-m68k.org, mingo@...nel.org,
rrangel@...omium.org, francesco@...la.it, kpsingh@...nel.org,
guoweikang.kernel@...il.com, link@...o.com, viro@...iv.linux.org.uk,
neil@...wn.name, nichen@...as.ac.cn, tglx@...utronix.de, frederic@...nel.org,
peterz@...radead.org, oleg@...hat.com, joel.granados@...nel.org,
linux@...ssschuh.net, avagin@...gle.com, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, lujialin4@...wei.com,
"Serge E. Hallyn" <serge@...lyn.com>, David Howells <dhowells@...hat.com>
Subject: Re: [RFC next v2 0/2] ucounts: turn the atomic rlimit to percpu_counter
On Mon, May 19, 2025 at 11:01 PM Alexey Gladkov <legion@...nel.org> wrote:
> On Mon, May 19, 2025 at 09:32:17PM +0200, Jann Horn wrote:
> > On Mon, May 19, 2025 at 3:25 PM Chen Ridong <chenridong@...weicloud.com> wrote:
> > > From: Chen Ridong <chenridong@...wei.com>
> > >
> > > We examined the will-it-scale test case signal1 [1], and the results
> > > reveal that the signal-sending system call does not scale linearly.
> > > To investigate further, we ran a series of tests launching varying
> > > numbers of Docker containers and monitored the throughput of each
> > > individual container. The detailed results are as follows:
> > >
> > > | Containers |1      |4      |8      |16     |32     |64     |
> > > | Throughput |380068 |353204 |308948 |306453 |180659 |129152 |
> > >
> > > The data demonstrates a clear trend: as the number of containers
> > > increases, the per-container throughput progressively declines.
> >
> > But is that actually a problem? Do you have real workloads that
> > concurrently send so many signals, or create inotify watches so
> > quickly, that this has an actual performance impact?
> >
> > > In-depth analysis has identified the root cause of this performance
> > > degradation. The ucounts module tracks rlimit usage with a
> > > significant number of atomic operations. When these atomic
> > > operations act on the same variable, they trigger a substantial
> > > number of cache misses or remote accesses, ultimately resulting in
> > > a drop in performance.
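> > >
> > > As a rough sketch of the two approaches (not the exact kernel code;
> > > rlimit_pcpu is a made-up field name): today every charge bumps one
> > > shared atomic_long_t, so all CPUs bounce the same cacheline, while
> > > a percpu_counter only touches a per-CPU slot on the fast path:
> > >
> > >     /* Today: one atomic_long_t shared by every task charging this
> > >      * ucounts instance; each increment pulls the cacheline
> > >      * exclusive to the current CPU. */
> > >     atomic_long_add_return(1, &ucounts->rlimit[UCOUNT_RLIMIT_SIGPENDING]);
> > >
> > >     /* With a percpu_counter, the fast path updates a per-CPU delta
> > >      * and only folds it into the shared count once the delta
> > >      * exceeds the batch size, avoiding the bouncing. */
> > >     percpu_counter_add(&ucounts->rlimit_pcpu[UCOUNT_RLIMIT_SIGPENDING], 1);
> > >
> > > Reading the value for a limit check would then use the cheap but
> > > approximate percpu_counter_read_positive(), falling back to the
> > > exact percpu_counter_sum() when close to the limit.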
> >
> > You're probably running into the namespace-associated ucounts here? So
> > the issue is probably that Docker creates all your containers with the
> > same owner UID (EUID at namespace creation), causing them all to
> > account towards a single ucount, while normally outside of containers,
> > each RUID has its own ucount instance?
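> >
> > (Roughly what I mean, simplified from inc_rlimit_ucounts() in
> > kernel/ucount.c: the charge walks from the task's ucounts up through
> > each owning namespace's ucounts, so containers sharing an owner UID
> > all end up doing atomic ops on the same ancestor entry:)
> >
> >     for (iter = ucounts; iter; iter = iter->ns->ucounts) {
> >             /* For containers created with the same owner UID, iter
> >              * reaches the same shared instance here, which is where
> >              * the cacheline bouncing concentrates. */
> >             new = atomic_long_add_return(v, &iter->rlimit[type]);
> >             ...
> >     }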
> >
> > Sharing of rlimits between containers is probably normally undesirable
> > even without the cacheline bouncing, because it means that too much
> > resource usage in one container can cause resource allocations in
> > another container to fail... so I think the real problem here is at a
> > higher level, in the namespace setup code. Maybe root should be able
> > to create a namespace that doesn't inherit ucount limits of its owner
> > UID, or something like that...
>
> If we allow rlimits not to be inherited in the userns being created, the
> user will be able to bypass their rlimits by running a fork bomb inside
> the new userns.
>
> Or did I miss your point?
You're right, I guess it would actually still be necessary to have one
shared limit across the entire container, so rather than not having a
namespace-level ucount, maybe it would make more sense to have a
private ucount instance for a container...
(But to be clear I'm not invested in this suggestion at all, I just
looked at that patch and was wondering about alternatives if that is
actually a real performance problem...)
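
(Purely to illustrate the shape of that idea, with made-up names rather
than any real API; the else branch mirrors what create_user_ns() does
today: a privileged creator could opt the new userns out of chaining to
the owner UID's shared ucounts and get a dedicated instance instead:)

    /* Hypothetical flag and helper, not existing kernel interfaces.
     * The container still has one limit shared by everything inside
     * it, but separate containers no longer contend on, or exhaust,
     * the same counters. */
    if ((flags & USERNS_PRIVATE_UCOUNTS) && capable(CAP_SYS_RESOURCE))
            ns->ucounts = alloc_private_ucounts(ns, owner_uid);
    else
            ns->ucounts = inc_user_namespaces(parent_ns, owner_uid);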