Message-ID: <YLEY1RX3FhR9eWrv@carbon.DHCP.thefacebook.com>
Date: Fri, 28 May 2021 09:22:45 -0700
From: Roman Gushchin <guro@...com>
To: Ming Lei <ming.lei@...hat.com>
CC: Jan Kara <jack@...e.cz>, Tejun Heo <tj@...nel.org>,
<linux-fsdevel@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>, Alexander Viro <viro@...iv.linux.org.uk>,
Dennis Zhou <dennis@...nel.org>,
Dave Chinner <dchinner@...hat.com>, <cgroups@...r.kernel.org>
Subject: Re: [PATCH v5 2/2] writeback, cgroup: release dying cgwbs by
switching attached inodes
On Fri, May 28, 2021 at 10:58:04AM +0800, Ming Lei wrote:
> On Wed, May 26, 2021 at 03:25:57PM -0700, Roman Gushchin wrote:
> > Asynchronously try to release dying cgwbs by switching clean attached
> > inodes to the bdi's wb. This gets rid of the per-cgroup writeback
> > structures themselves and of the pinned memory and block cgroups,
> > which are much larger structures (mostly due to large per-cpu
> > statistics data). It helps to prevent memory waste and various
> > scalability problems caused by large piles of dying cgroups.
> >
> > A cgwb cleanup operation can fail for various reasons (e.g. the
> > cgwb has in-flight/pending io, an attached inode is locked or isn't
> > clean, etc.). In this case the next scheduled cleanup will make a new
> > attempt. An attempt is made each time a new cgwb is offlined (in other
> > words a memcg and/or a blkcg is deleted by a user). In the future an
> > additional attempt scheduled by a timer can be implemented.
> >
> > Signed-off-by: Roman Gushchin <guro@...com>
> > ---
> > fs/fs-writeback.c | 35 ++++++++++++++++++
> > include/linux/backing-dev-defs.h | 1 +
> > include/linux/writeback.h | 1 +
> > mm/backing-dev.c | 61 ++++++++++++++++++++++++++++++--
> > 4 files changed, 96 insertions(+), 2 deletions(-)
> >
>
> Hello Roman,
>
> The following kernel panic is triggered by this patch:
Hello Ming!
Thank you for the report and for trying my patches!
I think I know what it is and will fix it in the next version.
Thanks!
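
For anyone following the thread, here is a rough sketch of the mechanism
the changelog above describes. It is illustrative only: offline_cgwbs,
the offline_node member and try_switch_clean_inodes() are made-up names
for this sketch and are not necessarily what the patch itself uses.

#include <linux/backing-dev-defs.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

/* dying cgwbs waiting for cleanup; protected by offline_cgwbs_lock */
static LIST_HEAD(offline_cgwbs);
static DEFINE_SPINLOCK(offline_cgwbs_lock);

/*
 * Hypothetical helper: switch every clean, unlocked attached inode to
 * the bdi's wb; returns true if nothing keeps the cgwb pinned anymore.
 * Assumes struct bdi_writeback grew a list_head named offline_node.
 */
static bool try_switch_clean_inodes(struct bdi_writeback *wb);

static void cleanup_offline_cgwbs_workfn(struct work_struct *work)
{
        struct bdi_writeback *wb;
        bool fully_cleaned;
        LIST_HEAD(processed);

        spin_lock(&offline_cgwbs_lock);

        while (!list_empty(&offline_cgwbs)) {
                wb = list_first_entry(&offline_cgwbs, struct bdi_writeback,
                                      offline_node);
                list_move(&wb->offline_node, &processed);

                /* switching inodes may sleep, so drop the lock around it */
                spin_unlock(&offline_cgwbs_lock);
                fully_cleaned = try_switch_clean_inodes(wb);
                spin_lock(&offline_cgwbs_lock);

                /*
                 * A fully cleaned cgwb leaves the queue and can be
                 * released; anything still pinned (dirty or locked inode,
                 * in-flight IO) stays queued and is retried by a later
                 * run, triggered by the next cgwb offlining.
                 */
                if (fully_cleaned)
                        list_del_init(&wb->offline_node);
        }

        /* re-queue the cgwbs that still need another attempt */
        list_splice(&processed, &offline_cgwbs);
        spin_unlock(&offline_cgwbs_lock);
}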
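And the scheduling side, again only as a sketch: each cgwb offlining
(i.e. each memcg and/or blkcg deletion by a user) queues the dying cgwb
and kicks one more cleanup attempt. cgwb_queue_offline() is a
hypothetical hook name for illustration.

static DECLARE_WORK(cleanup_offline_cgwbs_work, cleanup_offline_cgwbs_workfn);

/* called when a cgwb goes offline because its memcg/blkcg was deleted */
static void cgwb_queue_offline(struct bdi_writeback *wb)
{
        spin_lock(&offline_cgwbs_lock);
        list_add_tail(&wb->offline_node, &offline_cgwbs);
        spin_unlock(&offline_cgwbs_lock);

        /* every offlining schedules one more cleanup attempt */
        schedule_work(&cleanup_offline_cgwbs_work);
}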