Date: Fri, 13 May 2016 12:05:49 +0800
From: "Hillf Danton" <hillf.zj@...baba-inc.com>
To: "'Michal Hocko'" <mhocko@...nel.org>,
	"'Andrew Morton'" <akpm@...ux-foundation.org>
Cc: "'Linus Torvalds'" <torvalds@...ux-foundation.org>,
	"'Johannes Weiner'" <hannes@...xchg.org>,
	"'Mel Gorman'" <mgorman@...e.de>,
	"'David Rientjes'" <rientjes@...gle.com>,
	"'Tetsuo Handa'" <penguin-kernel@...ove.SAKURA.ne.jp>,
	"'Joonsoo Kim'" <js1304@...il.com>,
	"'Vlastimil Babka'" <vbabka@...e.cz>,
	<linux-mm@...ck.org>,
	"'LKML'" <linux-kernel@...r.kernel.org>,
	"'Michal Hocko'" <mhocko@...e.com>
Subject: Re: [PATCH 2/2] mm, oom: protect !costly allocations some more for !CONFIG_COMPACTION

> From: Michal Hocko <mhocko@...e.com>
>
> Joonsoo has reported that he is able to trigger OOM for !costly high
> order requests (a heavy fork() workload close to OOM) with the new
> oom detection rework. This is because we rely only on should_reclaim_retry
> when compaction is disabled, and it only checks watermarks for the
> requested order, so we might trigger OOM even when there is a lot of free
> memory.
>
> It is not very clear what the usual workloads are when compaction
> is disabled. Relying heavily on high order allocations without any
> mechanism to create those orders except for an unbounded amount of
> reclaim is certainly not a good idea.
>
> To prevent potential regressions, let's help this configuration some.
> We have to sacrifice determinism though, because there simply is
> none possible here. The should_compact_retry implementation for
> !CONFIG_COMPACTION, which was empty so far, will do a watermark check
> for order-0 on all eligible zones. This will cause retrying until either
> the reclaim cannot make any further progress or all the zones are
> depleted even for order-0 pages. This means that the number of retries
> is basically unbounded for !costly orders, but that was the case before
> the rework as well, so this shouldn't regress.
>
> Reported-by: Joonsoo Kim <iamjoonsoo.kim@....com>
> Signed-off-by: Michal Hocko <mhocko@...e.com>
> ---

Acked-by: Hillf Danton <hillf.zj@...baba-inc.com>

>  mm/page_alloc.c | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 620ec002aea2..7e2defbfe55b 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3310,6 +3310,24 @@ should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_fla
>  		     enum migrate_mode *migrate_mode,
>  		     int compaction_retries)
>  {
> +	struct zone *zone;
> +	struct zoneref *z;
> +
> +	if (!order || order > PAGE_ALLOC_COSTLY_ORDER)
> +		return false;
> +
> +	/*
> +	 * There are setups with compaction disabled which would prefer to loop
> +	 * inside the allocator rather than hit the oom killer prematurely. Let's
> +	 * give them a good hope and keep retrying while the order-0 watermarks
> +	 * are OK.
> +	 */
> +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx,
> +					ac->nodemask) {
> +		if(zone_watermark_ok(zone, 0, min_wmark_pages(zone),

s/if(zone_/if (zone_/

> +				ac_classzone_idx(ac), alloc_flags))
> +			return true;
> +	}
>  	return false;
>  }
>  #endif /* CONFIG_COMPACTION */
> --
> 2.8.1