Date:	Tue, 10 May 2016 16:07:14 +0900
From:	Joonsoo Kim <js1304@...il.com>
To:	Michal Hocko <mhocko@...nel.org>
Cc:	Joonsoo Kim <iamjoonsoo.kim@....com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Vlastimil Babka <vbabka@...e.cz>,
	Mel Gorman <mgorman@...hsingularity.net>,
	Minchan Kim <minchan@...nel.org>,
	Alexander Potapenko <glider@...gle.com>,
	Linux Memory Management List <linux-mm@...ck.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 6/6] mm/page_owner: use stackdepot to store stacktrace

2016-05-05 4:40 GMT+09:00 Michal Hocko <mhocko@...nel.org>:
> On Thu 05-05-16 00:30:35, Joonsoo Kim wrote:
>> 2016-05-04 18:21 GMT+09:00 Michal Hocko <mhocko@...nel.org>:
> [...]
>> > Do we really consume 512B of stack during reclaim? That sounds more than
>> > worrying to me.
>>
>> Hmm... I checked it with ./scripts/stackusage and the result is as below.
>>
>> shrink_zone() 128
>> shrink_zone_memcg() 248
>> shrink_active_list() 176
>>
>> We have a call path: shrink_zone() -> shrink_zone_memcg() ->
>> shrink_active_list().
>> I'm not sure whether it is the deepest path or not.
>
> This is definitely not the deepest path. Slab shrinkers can take more,
> but 512B is still a lot. Some call paths are already too deep when
> calling into the allocator, and some of them already use GFP_NOFS to
> prevent potentially deep call chains into slab shrinkers. Anyway, this
> is worth exploring for better solutions.
>
> And I believe it would be better to solve this in stackdepot directly,
> so that other users do not have to invent their own ways around the
> same issue. I have just checked the code: set_track uses save_stack,
> which does the same thing, and it seems to be called from the slab
> allocator. I had missed this usage before, so the problem already
> exists. It would be unfair to ask you to fix that in order to add a
> new user, but it would be great if this got addressed.
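
(For reference, the 512B figure comes from save_stack() keeping its
trace buffer on the kernel stack before handing it off to stackdepot.
A minimal sketch of that shape, assuming the v4.6-era struct
stack_trace and depot_save_stack() interfaces; illustrative only, not
the exact patch:)

#include <linux/stacktrace.h>
#include <linux/stackdepot.h>

#define PAGE_OWNER_STACK_DEPTH (64)

static noinline depot_stack_handle_t save_stack(gfp_t flags)
{
	/* 64 entries * 8 bytes = 512B of stack on a 64-bit kernel */
	unsigned long entries[PAGE_OWNER_STACK_DEPTH];
	struct stack_trace trace = {
		.nr_entries	= 0,
		.entries	= entries,
		.max_entries	= PAGE_OWNER_STACK_DEPTH,
		.skip		= 0,
	};

	save_stack_trace(&trace);

	/* dedup into the depot; only a small handle is kept per page */
	return depot_save_stack(&trace, flags);
}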

Yes, fixing it in stackdepot looks more reasonable.
For now, I will just change PAGE_OWNER_STACK_DEPTH from 64 to 16 and
leave the code as is. With this change, we will only consume 128B of
stack, which should not cause a stack problem. A sketch of the change
is below; if anyone has an objection, please let me know.
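
Sketch of the change, assuming the buffer in save_stack() stays an
on-stack array of unsigned long (not the exact diff):

#define PAGE_OWNER_STACK_DEPTH (16)	/* was 64 */

	/* in save_stack(): 16 entries * 8 bytes = 128B on 64-bit */
	unsigned long entries[PAGE_OWNER_STACK_DEPTH];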

Thanks.
