Message-ID: <ZfCnhPjU9dQfmDh7@P9FQF9L96D>
Date: Tue, 12 Mar 2024 12:05:40 -0700
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Josh Poimboeuf <jpoimboe@...nel.org>,
Jeff Layton <jlayton@...nel.org>,
Chuck Lever <chuck.lever@...cle.com>, Kees Cook <kees@...nel.org>,
Christoph Lameter <cl@...ux.com>, Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Hyeonggon Yoo <42.hyeyoo@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <muchun.song@...ux.dev>,
Alexander Viro <viro@...iv.linux.org.uk>,
Christian Brauner <brauner@...nel.org>, Jan Kara <jack@...e.cz>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH RFC 4/4] UNFINISHED mm, fs: use kmem_cache_charge() in
path_openat()
On Tue, Mar 12, 2024 at 10:22:54AM +0100, Vlastimil Babka wrote:
> On 3/1/24 19:53, Roman Gushchin wrote:
> > On Fri, Mar 01, 2024 at 09:51:18AM -0800, Linus Torvalds wrote:
> >> What I *think* I'd want for this case is
> >>
> >> (a) allow the accounting to go over by a bit
> >>
> >> (b) make sure there's a cheap way to ask (before) about "did we go
> >> over the limit"
> >>
> >> IOW, the accounting never needed to be byte-accurate to begin with,
> >> and making it fail (cheaply and early) on the next file allocation is
> >> fine.
> >>
> >> Just make it really cheap. Can we do that?
> >>
> >> For example, maybe don't bother with the whole "bytes and pages"
> >> stuff. Just a simple "are we more than one page over?" kind of
> >> question. Without the 'stock_lock' mess for sub-page bytes etc
> >>
> >> How would that look? Would it result in something that can be done
> >> cheaply without locking and atomics and without excessive pointer
> >> indirection through many levels of memcg data structures?
> >
> > I think it's possible and I'm currently looking into batching charge,
> > objcg refcnt management and vmstats using per-task caching. It should
> > speed up things for the majority of allocations.
> > For allocations from an irq context and targeted allocations
> > (where the target memcg != memcg of the current task) we'd probably need to
> > keep the old scheme. I hope to post some patches relatively soon.
>
> Do you think this will work on top of this series, i.e. patches 1+2 could be
> eventually put to slab/for-next after the merge window, or would it
> interfere with your changes?
Please go ahead and merge them, I'll rebase on top of it; it will be even better
for my work. I made a couple of comments there, but overall the patches look
very good to me, thank you for doing this work!
>
> > I tried to optimize the current implementation but failed to get any
> > significant gains. It seems that the overhead is very evenly spread across
> > objcg pointer access, charge management, objcg refcnt management and vmstats.
I started working on this, but it's a bit more complicated than I initially
thought, because:
1) there are allocations made from a !in_task() context, so we need to handle
them correctly;
2) tasks can be moved between cgroups concurrently with memory allocations.
Fortunately my recent changes provide a path here, but it adds to the complexity.
In an alternative world where tasks couldn't move between cgroups, life would
be so much better (and faster too, we could remove a ton of synchronization);
3) we do have per-numa-node per-memcg stats, which are less trivial to cache
on struct task.
I hope to resolve these issues somehow and post patches, but it will probably
take a bit more time.
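
To make the direction a bit more concrete, here is a rough sketch of the kind
of per-task cache I have in mind (purely illustrative: the struct layout and
helper names below are made up, not from an actual patch):

struct task_objcg_cache {
	struct obj_cgroup *objcg;	/* cached objcg reference */
	unsigned int nr_bytes;		/* pre-charged bytes left in the cache */
};

/*
 * Illustrative fast path: consume pre-charged bytes from the current
 * task's cache instead of going through the shared per-cpu stock and
 * atomics. Returns false when the caller has to fall back to the
 * existing charging scheme: !in_task() context, targeted allocations
 * where the target memcg != memcg of the current task, or an
 * exhausted cache.
 */
static bool task_objcg_charge_cached(struct task_objcg_cache *cache,
				     struct obj_cgroup *objcg,
				     unsigned int size)
{
	if (!in_task() || cache->objcg != objcg || cache->nr_bytes < size)
		return false;

	cache->nr_bytes -= size;
	return true;
}

The objcg refcnt batching and vmstat updates would hang off the same per-task
structure; the per-numa-node stats from (3) are what makes that last part less
trivial.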
Thanks!