Message-ID: <20160504194019.GE21490@dhcp22.suse.cz>
Date: Wed, 4 May 2016 21:40:19 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Joonsoo Kim <js1304@...il.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Mel Gorman <mgorman@...hsingularity.net>,
Minchan Kim <minchan@...nel.org>,
Alexander Potapenko <glider@...gle.com>,
Linux Memory Management List <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 6/6] mm/page_owner: use stackdepot to store stacktrace
On Thu 05-05-16 00:30:35, Joonsoo Kim wrote:
> 2016-05-04 18:21 GMT+09:00 Michal Hocko <mhocko@...nel.org>:
[...]
> > Do we really consume 512B of stack during reclaim? That sounds more than
> > worrying to me.
>
> Hmm... I checked it with ./scripts/stackusage and the result is as below.
>
> shrink_zone() 128
> shrink_zone_memcg() 248
> shrink_active_list() 176
>
> We have a call path shrink_zone() -> shrink_zone_memcg() ->
> shrink_active_list().
> I'm not sure whether it is the deepest path or not.
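(Summing just those three frames gives 128 + 248 + 176 = 552 bytes, so that
part of the path alone is already on the order of the 512B being discussed.)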
This is definitely not the deepest path. Slab shrinkers can take more,
but 512B is still a lot. Some call paths are already too deep when
calling into the allocator, and some of them already use GFP_NOFS to
keep out of the potentially deep call chains of slab shrinkers. Anyway,
this is worth exploring for better solutions.
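To illustrate the GFP_NOFS point, a minimal, purely illustrative sketch of
such a call site (the surrounding function and size are hypothetical; only
kmalloc() and the gfp flag are real):

	/*
	 * Hypothetical allocation on an already deep call path.  GFP_NOFS
	 * keeps direct reclaim from calling back into filesystem shrinkers,
	 * which would stack their own (potentially deep) call chains on top.
	 */
	void *buf = kmalloc(len, GFP_NOFS);
	if (!buf)
		return -ENOMEM;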
And I believe it would be better to solve this in stackdepot directly,
so that other users do not have to invent their own ways around the
same issue. I have just checked the code: set_track() uses save_stack(),
which does the same thing, and it seems to be called from the slab
allocator. I had missed this usage before, so the problem already
exists. It would be unfair to ask you to fix that in order to add a
new user, but it would be great if this got addressed.
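For reference, a rough sketch of the pattern both users end up with (based on
the current stackdepot interface; the 64-entry depth is illustrative and is
what produces ~512B of on-stack buffer on 64-bit):

	#include <linux/stacktrace.h>
	#include <linux/stackdepot.h>

	/* ~512B on the stack on 64-bit just to hand the trace over */
	unsigned long entries[64];
	struct stack_trace trace = {
		.entries	= entries,
		.max_entries	= ARRAY_SIZE(entries),
		.skip		= 0,
	};
	depot_stack_handle_t handle;

	save_stack_trace(&trace);
	handle = depot_save_stack(&trace, GFP_NOWAIT);

If the on-stack entries[] buffer moved behind the stackdepot interface, every
caller would benefit instead of each one working around it separately.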
--
Michal Hocko
SUSE Labs