Message-ID: <CAHH2K0bxvc34u1PugVQsSfxXhmN8qU6KRpiCWwOVBa6BPqMDOg@mail.gmail.com>
Date: Fri, 6 Feb 2015 15:43:11 -0800
From: Greg Thelen <gthelen@...gle.com>
To: Tejun Heo <tj@...nel.org>
Cc: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.cz>,
Cgroups <cgroups@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Jan Kara <jack@...e.cz>, Dave Chinner <david@...morbit.com>,
Jens Axboe <axboe@...nel.dk>,
Christoph Hellwig <hch@...radead.org>,
Li Zefan <lizefan@...wei.com>, Hugh Dickins <hughd@...gle.com>
Subject: Re: [RFC] Making memcg track ownership per address_space or anon_vma
On Fri, Feb 6, 2015 at 6:17 AM, Tejun Heo <tj@...nel.org> wrote:
> Hello, Greg.
>
> On Thu, Feb 05, 2015 at 04:03:34PM -0800, Greg Thelen wrote:
>> So this is a system which charges all cgroups using a shared inode
>> (recharge on read) for all resident pages of that shared inode. There's
>> only one copy of the page in memory on just one LRU, but the page may be
>> charged to multiple containers' (shared_)usage.
>
> Yeap.
>
>> Perhaps I missed it, but what happens when a child's limit is
>> insufficient to accept all pages shared by its siblings? Example
>> starting with 2M cached of a shared file:
>>
>> A
>> +-B (usage=2M lim=3M hosted_usage=2M)
>> +-C (usage=0 lim=2M shared_usage=2M)
>> +-D (usage=0 lim=2M shared_usage=2M)
>> \-E (usage=0 lim=1M shared_usage=0)
>>
>> If E faults in a new 4K page within the shared file, then E is a sharing
>> participant so it'd be charged the 2M+4K, which pushes E over its
>> limit.
>
> OOM? It shouldn't be participating in sharing of an inode if it can't
> match others' protection on the inode, I think. What we're doing now
> w/ page-based charging is kinda unfair because in situations like the
> one above the cgroup under pressure can end up siphoning off the larger
> cgroups' protection if they actually use overlapping areas; however,
> for disjoint areas, per-page charging would behave correctly.
>
> So, this part comes down to the same question - whether multiple
> cgroups accessing disjoint areas of a single inode is an important
> enough use case. If we say yes to that, we better make writeback
> support that too.
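To put numbers on my example above, here's a toy userspace sketch (not
kernel code; the struct fields and the fault_shared_page() helper are
made up, only the recharge-on-read arithmetic matters) of why E ends up
over its limit:

#include <stdio.h>
#include <stdbool.h>

#define KB 1024UL
#define MB (1024UL * KB)

struct memcg {
	const char *name;
	unsigned long limit;        /* hard limit, bytes */
	unsigned long usage;        /* privately charged, bytes */
	unsigned long shared_usage; /* charged for shared inodes hosted elsewhere */
};

static bool over_limit(const struct memcg *cg)
{
	return cg->usage + cg->shared_usage > cg->limit;
}

/*
 * Recharge on read: a new sharing participant is charged the shared
 * inode's full resident size, plus whatever page it just faulted in.
 */
static void fault_shared_page(struct memcg *cg, unsigned long inode_resident,
			      unsigned long new_page)
{
	cg->usage += new_page;
	cg->shared_usage += inode_resident;
}

int main(void)
{
	struct memcg E = { "E", 1 * MB, 0, 0 };

	fault_shared_page(&E, 2 * MB, 4 * KB);
	printf("%s: charged %lu of limit %lu -> %s\n", E.name,
	       E.usage + E.shared_usage, E.limit,
	       over_limit(&E) ? "over limit (reclaim/OOM)" : "ok");
	return 0;
}

With recharge on read, the moment E touches any page of the inode it
owes the inode's full resident size, so a 1M limit can never host a 2M
shared file regardless of how little of it E actually touches.
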
If cgroups are about isolation, then writing to shared files should be
rare, so I'm willing to say that we don't need to handle shared
writers well. Shared readers seem like a more valuable use case
(thin provisioning). I'm getting overwhelmed by the thought
exercise of automatically moving inodes to common ancestors and
back-charging the sharers for shared_usage. I haven't wrapped my head
around how these shared data pages will get protected. It seems like
they'd no longer be protected by child min watermarks.
So I know this thread opened with the claim "both memcg and blkcg must
be looking at the same picture. Deviating them is highly likely to
lead to long-term issues forcing us to look at this again anyway, only
with far more baggage." But I'm still wondering if the following is
simpler:
(1) leave memcg as a per-page controller.
(2) maintain a per-inode i_memcg which is set to the common dirtying
ancestor (rough sketch after this list). If the inode is not shared,
then it'll point to the memcg the pages were charged to.
(3) when memcg dirty page pressure is seen, walk up the cgroup tree
writing dirty inodes; this will write shared inodes using the blkcg
priority of the respective levels.
(4) the background limit wb_check_background_flush() and the time-based
wb_check_old_data_flush() can feel free to attack shared inodes to
hopefully restore them to a non-shared state.
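Here's a rough userspace sketch of (2)'s i_memcg bookkeeping. Everything
in it (the structs, common_ancestor(), inode_dirtied_by()) is made up to
illustrate the common-dirtying-ancestor idea, not existing kernel
interfaces, and it uses a small made-up hierarchy (B and C are children
of the root A, D is a child of B), not the A..E example above:

#include <stdio.h>
#include <stddef.h>

struct memcg {
	const char *name;
	struct memcg *parent;	/* NULL for the root */
};

struct inode_x {
	struct memcg *i_memcg;	/* memcg owning writeback for this inode */
};

static int depth(const struct memcg *cg)
{
	int d = 0;

	for (; cg; cg = cg->parent)
		d++;
	return d;
}

/* Nearest common ancestor of two memcgs in the hierarchy. */
static struct memcg *common_ancestor(struct memcg *a, struct memcg *b)
{
	int da = depth(a), db = depth(b);

	while (da > db) { a = a->parent; da--; }
	while (db > da) { b = b->parent; db--; }
	while (a != b) { a = a->parent; b = b->parent; }
	return a;
}

/*
 * (2): the first dirtier owns the inode; a dirtier from another memcg
 * promotes i_memcg to the common dirtying ancestor.  (4) would reset
 * i_memcg once writeback cleans the inode, restoring the non-shared
 * state.
 */
static void inode_dirtied_by(struct inode_x *inode, struct memcg *dirtier)
{
	if (!inode->i_memcg)
		inode->i_memcg = dirtier;
	else if (inode->i_memcg != dirtier)
		inode->i_memcg = common_ancestor(inode->i_memcg, dirtier);
}

int main(void)
{
	struct memcg A = { "A", NULL };
	struct memcg B = { "B", &A }, C = { "C", &A };
	struct memcg D = { "D", &B };
	struct inode_x inode = { NULL };

	inode_dirtied_by(&inode, &D);	/* not shared: i_memcg = D       */
	inode_dirtied_by(&inode, &C);	/* shared by C and D: i_memcg = A */
	printf("i_memcg = %s\n", inode.i_memcg->name);
	return 0;
}

Under dirty page pressure in, say, D, step (3) would then walk D -> B ->
A and write back dirty inodes whose i_memcg matches each level, so a
shared inode gets written with the ancestor's blkcg priority; once the
inode is clean, (4) lets i_memcg fall back to a single memcg.
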
For non-shared inodes, this should behave the same. For shared inodes,
it should only affect those in the part of the hierarchy that is sharing.