Message-ID: <20160617064330.GD2374@bbox>
Date: Fri, 17 Jun 2016 15:43:30 +0900
From: Minchan Kim <minchan@...nel.org>
To: Johannes Weiner <hannes@...xchg.org>
CC: Andrew Morton <akpm@...ux-foundation.org>,
<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
Rik van Riel <riel@...hat.com>,
Sangwoo Park <sangwoo2.park@....com>
Subject: Re: [PATCH v1 3/3] mm: per-process reclaim
Hi Hannes,
On Thu, Jun 16, 2016 at 10:41:02AM -0400, Johannes Weiner wrote:
> On Wed, Jun 15, 2016 at 09:40:27AM +0900, Minchan Kim wrote:
> > A question is it seems cgroup2 doesn't have per-cgroup swappiness.
> > Why?
> >
> > I think we need it in one-cgroup-per-app model.
>
> Can you explain why you think that?
>
> As we have talked about this recently in the LRU balancing thread,
> swappiness is the cost factor between file IO and swapping, so the
> only situation I can imagine you'd need a memcg swappiness setting is
> when you have different cgroups use different storage devices that do
> not have comparable speeds.
>
> So I'm not sure I understand the relationship to an app-group model.
Sorry for the lack of information. I should have written more clearly.
In fact, what we need is *per-memcg-swap-device*.
What I want is to avoid killing background applications even when
memory runs short, because cold launching an app takes a very long
time compared to resuming it (i.e., just switching). I also want to
keep an amount of free pages in memory so that new application startup
does not get stuck behind reclaim activities.
To get free memory, I want to reclaim from less important apps rather
than kill them. In this case, we can support two swap devices:
one is zram, the other is slow storage that is much bigger than the
zram size. Then we can use the storage swap to reclaim pages from
not-important apps, while we use the zram swap for important apps
(e.g., the foreground app, system services, daemons, and so on).
IOW, we want to support multiple swap devices with one-cgroup-per-app,
where the devices' speeds are totally different.
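
To make the two-device setup concrete, here is a minimal userspace
sketch (not part of this patchset, just an illustration) that activates
both swap devices with different priorities via swapon(2). The paths
and priority values are assumptions; today priorities only order which
device fills first, so the per-memcg routing of pages between the two
devices is exactly the missing piece discussed here.

/*
 * swap_setup.c: sketch that activates zram + storage-backed swap.
 * Assumes /dev/zram0 is already sized and mkswap'd, and /swapfile
 * exists on the slow storage and is mkswap'd too (both hypothetical
 * paths). Needs CAP_SYS_ADMIN.
 */
#include <stdio.h>
#include <sys/swap.h>

static int add_swap(const char *path, int prio)
{
	/* Encode the priority into the swapon flags. */
	int flags = SWAP_FLAG_PREFER |
		    ((prio << SWAP_FLAG_PRIO_SHIFT) & SWAP_FLAG_PRIO_MASK);

	if (swapon(path, flags)) {
		perror(path);
		return -1;
	}
	return 0;
}

int main(void)
{
	/* Fast zram swap, intended for important (foreground) apps. */
	add_swap("/dev/zram0", 100);
	/* Slow but big storage-backed swap for background apps. */
	add_swap("/swapfile", 10);
	return 0;
}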