Message-ID: <20200821200530.GA2250889@carbon.dhcp.thefacebook.com>
Date: Fri, 21 Aug 2020 13:05:30 -0700
From: Roman Gushchin <guro@...com>
To: Shakeel Butt <shakeelb@...gle.com>
CC: Dan Schatzberg <schatzberg.dan@...il.com>,
Jens Axboe <axboe@...nel.dk>,
Alexander Viro <viro@...iv.linux.org.uk>,
Jan Kara <jack@...e.cz>, Amir Goldstein <amir73il@...il.com>,
Tejun Heo <tj@...nel.org>, Li Zefan <lizefan@...wei.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Hugh Dickins <hughd@...gle.com>,
Chris Down <chris@...isdown.name>,
Yang Shi <yang.shi@...ux.alibaba.com>,
Thomas Gleixner <tglx@...utronix.de>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Andrea Arcangeli <aarcange@...hat.com>,
"open list:BLOCK LAYER" <linux-block@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>,
"open list:FILESYSTEMS (VFS and infrastructure)"
<linux-fsdevel@...r.kernel.org>,
"open list:CONTROL GROUP (CGROUP)" <cgroups@...r.kernel.org>,
"open list:CONTROL GROUP - MEMORY RESOURCE CONTROLLER (MEMCG)"
<linux-mm@...ck.org>
Subject: Re: [PATCH v6 0/4] Charge loop device i/o to issuing cgroup

On Fri, Aug 21, 2020 at 09:27:56AM -0700, Shakeel Butt wrote:
> On Fri, Aug 21, 2020 at 9:02 AM Roman Gushchin <guro@...com> wrote:
> >
> > On Fri, Aug 21, 2020 at 11:04:05AM -0400, Dan Schatzberg wrote:
> > > On Thu, Aug 20, 2020 at 10:06:44AM -0700, Shakeel Butt wrote:
> > > > On Thu, May 28, 2020 at 6:55 AM Dan Schatzberg <schatzberg.dan@...il.com> wrote:
> > > > >
> > > > > Much of the discussion about this has died down. There's been a
> > > > > concern raised that we could generalize infrastructure across loop,
> > > > > md, etc. This may be possible in the future, but it isn't clear to me
> > > > > what this would look like. I'm inclined to fix the existing issue with
> > > > > loop devices now (this is a problem we hit at FB) and address
> > > > > consolidation with other cases if and when those need to be addressed.
> > > > >
> > > >
> > > > What's the status of this series?
> > >
> > > Thanks for reminding me about this. I haven't got any further
> > > feedback. I'll bug Jens to take a look and see if he has any concerns
> > > and if not send a rebased version.
> >
> > Just as a note, I stole a patch from this series called
> > "mm: support nesting memalloc_use_memcg()" to use for the bpf memory accounting.
> > I rewrote the commit log and rebased to the tot with some trivial changes.
> >
> > I just sent it upstream:
> > https://lore.kernel.org/bpf/20200821150134.2581465-1-guro@fb.com/T/#md7edb6b5b940cee1c4d15e3cef17aa8b07328c2e
> >
> > It looks like we need it for two independent sub-systems, so I wonder
> > if we want to route it first through the mm tree as a standalone patch?
> >
>
> Another way is to push that patch into Linus' tree for 5.9-rc2, so both the
> block and mm branches for 5.10 will have it. (Not sure if that's ok.)

Ok, it looks like the patch provides a generally useful API enhancement.
And we do have at least two potential use cases for it.
Let me send it as a standalone patch to linux-mm@.
Btw, Shakeel, what do you think of s/memalloc_use_memcg()/set_active_memcg() ?
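
For anyone skimming the thread, the save/restore pattern the nesting patch
enables would look roughly like this. This is only a sketch: it assumes the
helper returns the previously active memcg, as the patch proposes, and
charge_work_to()/do_work() are made-up placeholders for a real caller and
its allocation path:

	#include <linux/memcontrol.h>
	#include <linux/sched/mm.h>

	/*
	 * Sketch only: assumes memalloc_use_memcg() returns the previously
	 * active memcg so callers can nest, and do_work() stands in for
	 * whatever allocation path should be charged to @memcg.
	 */
	static void charge_work_to(struct mem_cgroup *memcg)
	{
		struct mem_cgroup *old_memcg;

		/* redirect subsequent allocations to @memcg */
		old_memcg = memalloc_use_memcg(memcg);

		do_work();

		/* restore whatever was active before (may be NULL or a nested user) */
		memalloc_use_memcg(old_memcg);
	}

With the rename the same pattern would just read set_active_memcg(memcg) /
set_active_memcg(old_memcg).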
And thank you for reviews!
Roman