lists.openwall.net - Open Source and information security mailing list archives
Date: Tue, 15 Jan 2019 14:18:03 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Mel Gorman <mgorman@...hsingularity.net>, Linux-MM <linux-mm@...ck.org>
Cc: David Rientjes <rientjes@...gle.com>, Andrea Arcangeli <aarcange@...hat.com>,
	ying.huang@...el.com, kirill@...temov.name,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linux List Kernel Mailing <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 10/25] mm, compaction: Ignore the fragmentation avoidance
 boost for isolation and compaction

On 1/4/19 1:49 PM, Mel Gorman wrote:
> When pageblocks get fragmented, watermarks are artificially boosted to
> reclaim pages to avoid further fragmentation events. However, compaction
> is often either fragmentation-neutral or moving movable pages away from
> unmovable/reclaimable pages. As the true watermarks are preserved, allow
> compaction to ignore the boost factor.
> 
> The expected impact is very slight as the main benefit is that compaction
> is slightly more likely to succeed when the system has been fragmented
> very recently. On both 1-socket and 2-socket machines for THP-intensive
> allocation during fragmentation the success rate was increased by less
> than 1%, which is marginal. However, detailed tracing indicated that
> failures of migration due to a premature ENOMEM triggered by watermark
> checks were eliminated.
> 
> Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>

Acked-by: Vlastimil Babka <vbabka@...e.cz>

> ---
>  mm/page_alloc.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 57ba9d1da519..05c9a81d54ed 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2958,7 +2958,7 @@ int __isolate_free_page(struct page *page, unsigned int order)
>  	 * watermark, because we already know our high-order page
>  	 * exists.
>  	 */
> -	watermark = min_wmark_pages(zone) + (1UL << order);
> +	watermark = zone->_watermark[WMARK_MIN] + (1UL << order);
>  	if (!zone_watermark_ok(zone, 0, watermark, 0, ALLOC_CMA))
>  		return 0;
> 