Message-ID: <20150204182811.GC18858@htj.dyndns.org>
Date: Wed, 4 Feb 2015 13:28:11 -0500
From: Tejun Heo <tj@...nel.org>
To: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
Cc: Greg Thelen <gthelen@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.cz>,
Cgroups <cgroups@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Jan Kara <jack@...e.cz>, Dave Chinner <david@...morbit.com>,
Jens Axboe <axboe@...nel.dk>,
Christoph Hellwig <hch@...radead.org>,
Li Zefan <lizefan@...wei.com>, Hugh Dickins <hughd@...gle.com>,
Roman Gushchin <klamm@...dex-team.ru>
Subject: Re: [RFC] Making memcg track ownership per address_space or anon_vma
On Wed, Feb 04, 2015 at 08:58:21PM +0300, Konstantin Khlebnikov wrote:
> >>Generally, incidental sharing could be handled as temporary sharing:
> >>the default policy (if the inode isn't pinned to a memory cgroup) should,
> >>after some time, detect that the inode is no longer shared and migrate
> >>it back into the original cgroup. Of course a task could provide a hint
> >>(e.g. O_NO_MOVEMEM), or even the whole memory cgroup where it runs could
> >>be marked as a "scanner" which shouldn't disturb memory classification.
> >
> >Ditto for annotating each file individually. Let's please try to stay
> >away from things like that. That's mostly a cop-out which is unlikely
> >to actually benefit the majority of users.
>
> A process which scans all files once isn't such a rare use case.
> Linux still sometimes cannot handle this pattern.
Yeah, sure, tagging usages with m/fadvise's is fine. We can just look
at the policy and ignore those accesses for the purpose of determining
who's using the inode, but let's stay away from tagging the files on
the filesystem if at all possible.
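
For illustration only, here is a minimal userspace sketch of the kind of
per-usage tagging meant above, using the existing posix_fadvise() calls a
one-pass scanner could already issue today. Whether memcg would actually
consult this advice when attributing the inode is exactly the open proposal
in this thread, not current kernel behavior, and the program structure is
just an assumption for the example:

/*
 * Hypothetical scanner that reads every file exactly once.  It tags its
 * accesses with posix_fadvise() so a policy like the one discussed above
 * could ignore these touches when deciding which memcg "owns" the inode.
 * The memcg-side handling is the proposal under discussion, not existing
 * behavior.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static int scan_file(const char *path)
{
	int fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;

	struct stat st;
	if (fstat(fd, &st) < 0 || st.st_size == 0) {
		close(fd);
		return -1;
	}

	/* One-pass access: tell the kernel the cache won't be reused. */
	posix_fadvise(fd, 0, st.st_size, POSIX_FADV_SEQUENTIAL);
	posix_fadvise(fd, 0, st.st_size, POSIX_FADV_NOREUSE);

	char buf[1 << 16];
	ssize_t n;
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		;	/* ... checksum/index the data ... */

	/* Drop the pages we just pulled in so they don't stick around. */
	posix_fadvise(fd, 0, st.st_size, POSIX_FADV_DONTNEED);
	close(fd);
	return 0;
}

int main(int argc, char **argv)
{
	for (int i = 1; i < argc; i++)
		if (scan_file(argv[i]) < 0)
			perror(argv[i]);
	return 0;
}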
Thanks.
--
tejun