Message-ID: <CALWz4iyj4SMMyYhbuZ3HUq-jvcZUCGarceYY7vxm4b7X=yvCMg@mail.gmail.com>
Date:	Fri, 13 Jan 2012 13:45:30 -0800
From:	Ying Han <yinghan@...gle.com>
To:	Michal Hocko <mhocko@...e.cz>
Cc:	Johannes Weiner <hannes@...xchg.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Balbir Singh <bsingharora@...il.com>, cgroups@...r.kernel.org,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [patch 2/2] mm: memcg: hierarchical soft limit reclaim

On Fri, Jan 13, 2012 at 8:34 AM, Michal Hocko <mhocko@...e.cz> wrote:
> On Fri 13-01-12 16:50:01, Johannes Weiner wrote:
>> On Fri, Jan 13, 2012 at 01:04:06PM +0100, Michal Hocko wrote:
>> > On Tue 10-01-12 16:02:52, Johannes Weiner wrote:
> [...]
>> > > +bool mem_cgroup_over_softlimit(struct mem_cgroup *root,
>> > > +                        struct mem_cgroup *memcg)
>> > > +{
>> > > + if (mem_cgroup_disabled())
>> > > +         return false;
>> > > +
>> > > + if (!root)
>> > > +         root = root_mem_cgroup;
>> > > +
>> > > + for (; memcg; memcg = parent_mem_cgroup(memcg)) {
>> > > +         /* root_mem_cgroup does not have a soft limit */
>> > > +         if (memcg == root_mem_cgroup)
>> > > +                 break;
>> > > +         if (res_counter_soft_limit_excess(&memcg->res))
>> > > +                 return true;
>> > > +         if (memcg == root)
>> > > +                 break;
>> > > + }
>> > > + return false;
>> > > +}
>> >
>> > Well, this might be a little bit tricky. We do not check whether memcg
>> > and root are in a hierarchical relation (in terms of use_hierarchy).
>> >
>> > If we are under global reclaim then we iterate over all memcgs, so
>> > there is no guarantee of a hierarchical relation between the given
>> > memcg and its parent. If, on the other hand, we are doing memcg
>> > reclaim, then we do have this guarantee.
>> >
>> > Why should we punish a group (subtree) which is perfectly under its soft
>> > limit just because some other subtree contributes to the common parent's
>> > usage and makes it over its limit?
>> > Should we check memcg->use_hierarchy here?
>>
>> We do, actually.  parent_mem_cgroup() checks the res_counter parent,
>> which is only set when ->use_hierarchy is also set.
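
(For context: parent_mem_cgroup() resolves the parent through the
res_counter rather than through the cgroup tree directly. A rough sketch of
how it looks in mm/memcontrol.c around this time is below;
mem_cgroup_from_res_counter() maps a res_counter back to its owning memcg,
and exact details may differ between kernel versions.)

	/*
	 * Walk up via the res_counter hierarchy.  res.parent is only
	 * wired up when the parent has use_hierarchy set, so the walk
	 * stops at hierarchy boundaries.
	 */
	static struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg)
	{
		if (!memcg->res.parent)
			return NULL;
		return mem_cgroup_from_res_counter(memcg->res.parent, res);
	}
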
>
> Of course, I am blind... We do not set up the res_counter parent in the
> !use_hierarchy case. Sorry for the noise...
> Now it makes much better sense. I was wondering how !use_hierarchy could
> ever work, which should have been a signal that I was badly overlooking
> something.
>
> [...]
>> > > @@ -2121,8 +2121,16 @@ static void shrink_zone(int priority, struct zone *zone,
>> > >                   .mem_cgroup = memcg,
>> > >                   .zone = zone,
>> > >           };
>> > > +         int epriority = priority;
>> > > +         /*
>> > > +          * Put more pressure on hierarchies that exceed their
>> > > +          * soft limit, to push them back harder than their
>> > > +          * well-behaving siblings.
>> > > +          */
>> > > +         if (mem_cgroup_over_softlimit(root, memcg))
>> > > +                 epriority = 0;
>> >
>> > This sounds too aggressive to me. Shouldn't we just double the pressure
>> > or something like that?
>>
>> That's the historical value.  When I tried priority - 1, it was not
>> aggressive enough.
>
> Probably because we then try to reclaim too much. Maybe we should
> reduce nr_to_reclaim (ugly), or reclaim only over-limit groups until a
> certain priority level, as Ying suggested in her patchset.

I plan to post that change on top of this; this patch set does the
basic work that allows us to make further improvements.

I still like the design of reclaiming only from over-soft-limit cgroups,
and skipping the others, until a certain priority. One way to set up the
soft limit for each cgroup is to base it on its actual working set size;
then we would rather punish A, which has lots of page cache (cold file
pages above its soft limit), than reclaim anon pages from B (below its
soft limit). Only if we cannot get enough pages reclaimed from A do we
start reclaiming from B.
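
(As a rough illustration of that idea, and not part of this patch set: the
helper name and the priority cutoff below are made up. The per-memcg loop
in shrink_zone() could gate scanning on the soft limit until reclaim
priority has dropped far enough, so group A above gets reclaimed before
group B.)

	#define SOFTLIMIT_ONLY_PRIORITY	2	/* illustrative cutoff */

	/*
	 * At high reclaim priority, scan only hierarchies that exceed
	 * their soft limit; once priority has dropped to the cutoff,
	 * scan everything so reclaim is guaranteed to make progress.
	 */
	static bool should_scan_memcg(struct mem_cgroup *root,
				      struct mem_cgroup *memcg, int priority)
	{
		if (priority <= SOFTLIMIT_ONLY_PRIORITY)
			return true;
		return mem_cgroup_over_softlimit(root, memcg);
	}
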

This might not be the ideal solution, but it should be a good start. Thoughts?

--Ying

> --
> Michal Hocko
> SUSE Labs
> SUSE LINUX s.r.o.
> Lihovarska 1060/12
> 190 00 Praha 9
> Czech Republic
