Message-ID: <20110609233154.GA26745@cmpxchg.org>
Date: Fri, 10 Jun 2011 01:31:54 +0200
From: Johannes Weiner <hannes@...xchg.org>
To: Ying Han <yinghan@...gle.com>
Cc: Hiroyuki Kamezawa <kamezawa.hiroyuki@...il.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Daisuke Nishimura <nishimura@....nes.nec.co.jp>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
Michal Hocko <mhocko@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
Rik van Riel <riel@...hat.com>,
Minchan Kim <minchan.kim@...il.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Mel Gorman <mgorman@...e.de>, Greg Thelen <gthelen@...gle.com>,
Michel Lespinasse <walken@...gle.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [patch 0/8] mm: memcg naturalization -rc2
On Thu, Jun 09, 2011 at 03:30:27PM -0700, Ying Han wrote:
> On Thu, Jun 9, 2011 at 11:36 AM, Johannes Weiner <hannes@...xchg.org> wrote:
> > On Thu, Jun 09, 2011 at 10:36:47AM -0700, Ying Han wrote:
> >> On Thu, Jun 9, 2011 at 1:35 AM, Johannes Weiner <hannes@...xchg.org> wrote:
> >> > On Wed, Jun 08, 2011 at 08:52:03PM -0700, Ying Han wrote:
> >> >> On Wed, Jun 8, 2011 at 8:32 AM, Johannes Weiner <hannes@...xchg.org> wrote:
> >> >> > I guess it would make much more sense to evaluate if reclaiming from
> >> >> > memcgs while there are others exceeding their soft limit is even a
> >> >> > problem. Otherwise this discussion is pretty pointless.
> >> >>
> >> >> AFAIK it is a problem since it changes the spec of the kernel API
> >> >> memory.soft_limit_in_bytes. That value is set per-memcg; all the
> >> >> pages allocated above it are best effort and targeted for reclaim
> >> >> prior to others.
> >> >
> >> > That's not really true. Quoting the documentation:
> >> >
> >> > When the system detects memory contention or low memory, control groups
> >> > are pushed back to their soft limits. If the soft limit of each control
> >> > group is very high, they are pushed back as much as possible to make
> >> > sure that one control group does not starve the others of memory.
> >> >
> >> > I am language lawyering here, but I don't think it says it won't touch
> >> > other memcgs at all while there are memcgs exceeding their soft limit.
> >>
> >> Well... :) I would say that the documentation of soft_limit needs lots
> >> of work, especially after the many discussions we have had since LSF.
> >>
> >> The RFC I sent after our discussion has the following documentation,
> >> and I only cut & pasted the content relevant to our conversation here:
> >>
> >> What is "soft_limit"?
> >> The "soft_limit was introduced in memcg to support over-committing the
> >> memory resource on the host. Each cgroup can be configured with
> >> "hard_limit", where it will be throttled or OOM killed by going over
> >> the limit. However, the allocation can go above the "soft_limit" as
> >> long as there is no memory contention. The "soft_limit" is the kernel
> >> mechanism for re-distributing spare memory resource among cgroups.
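
For illustration, both limits are plain cgroup control files; a minimal
userspace sketch, assuming the v1 memory controller is mounted at
/sys/fs/cgroup/memory and a group "A" already exists (paths and values are
examples, not from this thread):

/* Illustrative only: set a hard and a soft limit on one memcg. */
#include <stdio.h>

static int write_file(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f)
                return -1;
        fputs(val, f);
        return fclose(f);
}

int main(void)
{
        /* Hard limit: going over it triggers reclaim and eventually OOM. */
        write_file("/sys/fs/cgroup/memory/A/memory.limit_in_bytes", "512M");
        /* Soft limit: may be exceeded as long as there is no contention. */
        write_file("/sys/fs/cgroup/memory/A/memory.soft_limit_in_bytes", "256M");
        return 0;
}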
> >>
> >> What we have now?
> >> The current implementation of softlimit is based on a per-zone RB tree,
> >> where only the cgroup that exceeds its soft_limit the most is selected
> >> for reclaim.
> >>
> >> It makes less sense to reclaim from only one cgroup than to reclaim
> >> from all cgroups in a calculated proportion. This is required for
> >> fairness.
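
Roughly, the biggest-excessor selection described above amounts to something
like the following sketch (simplified, with made-up names rather than the
real mem_cgroup symbols):

/*
 * Simplified sketch of the current scheme: each zone keeps an RB tree
 * of memcgs keyed by how far they exceed their soft limit, and reclaim
 * picks the rightmost node, i.e. the biggest excessor.
 */
#include <linux/rbtree.h>

struct mem_cgroup;

struct soft_limit_node {
        struct rb_node     rb;
        unsigned long      excess;      /* usage - soft_limit, in pages */
        struct mem_cgroup *memcg;
};

static struct soft_limit_node *pick_biggest_excessor(struct rb_root *root)
{
        struct rb_node *node = rb_last(root);  /* largest key in the tree */

        return node ? rb_entry(node, struct soft_limit_node, rb) : NULL;
}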
> >>
> >> Proposed design:
> >> round-robin across the cgroups that have memory allocated on the zone
> >> and also exceed their configured soft_limit.
> >>
> >> There was a question on how to do zone balancing w/o a global LRU. This
> >> could be solved by building another cgroup list per-zone, where we also
> >> link the cgroups that are under their soft_limit. We won't scan that
> >> second list unless the first list has been exhausted and the free pages
> >> are still under the high_wmark.
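
A minimal sketch of that round-robin idea, assuming a hypothetical per-zone
list of over-soft-limit groups (the structure and function names are made
up, not from any posted patch):

/*
 * Hypothetical sketch: each zone keeps a list of memcgs that are over
 * their soft limit, and reclaim rotates through it instead of hammering
 * a single group.
 */
#include <linux/list.h>

struct mem_cgroup;

struct memcg_zone_link {
        struct list_head   link;        /* entry on the zone's over-limit list */
        struct mem_cgroup *memcg;
};

static struct memcg_zone_link *next_over_softlimit(struct list_head *head)
{
        struct memcg_zone_link *victim;

        if (list_empty(head))
                return NULL;
        /* Take the first entry and rotate it to the tail for next time. */
        victim = list_first_entry(head, struct memcg_zone_link, link);
        list_move_tail(&victim->link, head);
        return victim;
}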
> >>
> >> Since the per-zone memcg list design is being replaced by your
> >> patchset, some of the details don't apply. But the concept still
> >> remains that we would like to scan some memcgs (those above their
> >> soft_limit) first.
> >
> > I think the most important thing we wanted was to round-robin scan all
> > soft limit excessors instead of just the biggest one. I understood
> > this is the biggest fault with soft limits right now.
> >
> > We came up with maintaining a list of excessors, rather than a tree,
> > and from this particular implementation it followed naturally that this
> > list is scanned BEFORE we look at other memcgs at all.
> >
> > This is a nice to have, but it was never the primary problem with the
> > soft limit implementation, as far as I understood.
> >
> >> > It would be a lie about the current code in the first place, which
> >> > does soft limit reclaim and then regular reclaim, no matter the
> >> > outcome of the soft limit reclaim cycle. It will go for the soft
> >> > limit first, but after an allocation under pressure the VM is likely
> >> > to have reclaimed from other memcgs as well.
> >> >
> >> > I saw your patch to fix that and break out of reclaim if soft limit
> >> > reclaim did enough. But this fix is not much newer than my changes.
> >>
> >> My soft_limit patch was developed in parallel with your patchset, and
> >> most of that wouldn't apply here.
> >> Is that what you are referring to?
> >
> > No, I meant that the current behaviour is old and we are only changing
> > it now, so we are not really breaking backward compatibility.
> >
> >> > The second part of this is:
> >> >
> >> > Please note that soft limits is a best effort feature, it comes with
> >> > no guarantees, but it does its best to make sure that when memory is
> >> > heavily contended for, memory is allocated based on the soft limit
> >> > hints/setup. Currently soft limit based reclaim is setup such that
> >> > it gets invoked from balance_pgdat (kswapd).
> >>
> >> We had a patch merged which adds the soft_limit reclaim also in the
> >> global ttfp (try_to_free_pages) path.
> >>
> >> memcg-add-the-soft_limit-reclaim-in-global-direct-reclaim.patch
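
In terms of call flow, that patch roughly means the soft limit pass runs on
the direct reclaim path too, not just from kswapd; a simplified sketch,
where the *_sketch names are placeholders for the real vmscan/memcg
functions, not the actual patch:

struct zone;
struct scan_control;

unsigned long soft_limit_reclaim_sketch(struct zone *zone,
                                        struct scan_control *sc);
unsigned long shrink_zone_sketch(struct zone *zone, struct scan_control *sc);

static unsigned long direct_reclaim_zone_sketch(struct zone *zone,
                                                struct scan_control *sc)
{
        unsigned long reclaimed;

        /* First push back the groups that exceed their soft limit ... */
        reclaimed = soft_limit_reclaim_sketch(zone, sc);

        /* ... then fall back to regular reclaim of the zone. */
        reclaimed += shrink_zone_sketch(zone, sc);

        return reclaimed;
}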
> >>
> >> > It's not the pages-over-soft-limit that are best effort. It says that
> >> > it tries its best to take soft limits into account while reclaiming.
> >> Hmm. Both cases are true. By "best effort pages" I mean that the pages
> >> above the soft_limit are targeted for reclaim first under memory
> >> contention.
> >
> > I really don't know where you are taking this from. That is neither
> > documented anywhere, nor is it the current behaviour.
>
> I got the email from Andrew on May 27 and you were cc-ed :)
> Anyway, I just forwarded you that one.
I wasn't asking about this patch at all... This is the conversation:
Me:
> >> > It's not the pages-over-soft-limit that are best effort. It says that
> >> > it tries its best to take soft limits into account while reclaiming.
You:
> >> Hmm. Both cases are true. By "best effort pages" I mean that the pages
> >> above the soft_limit are targeted for reclaim first under memory
> >> contention.
Me:
> > I really don't know where you are taking this from. That is neither
> > documented anywhere, nor is it the current behaviour.
And this is still my question.
Current: scan up to all pages of the biggest soft limit offender, then
reclaim from random memcgs (because of the global LRU).
After my patch: scan all memcgs according to their size, with double
the pressure on those over their soft limit.
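Concretely, "double the pressure" can be sketched like this (illustrative
names only, not the actual diff):

struct mem_cgroup;

unsigned long memcg_zone_size(struct mem_cgroup *memcg); /* pages on this zone */
int memcg_over_soft_limit(struct mem_cgroup *memcg);

static unsigned long memcg_scan_target(struct mem_cgroup *memcg, int priority)
{
        /* Proportional: bigger groups contribute more pages per pass. */
        unsigned long scan = memcg_zone_size(memcg) >> priority;

        /* Double the pressure on soft limit excessors. */
        if (memcg_over_soft_limit(memcg))
                scan *= 2;

        return scan;
}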
Please tell me exactly how my patch regresses existing behaviour, a
user interface, a documented feature, etc.
If you have an even better idea, please propose it.