Open Source and information security mailing list archives
Date: Wed, 29 Jun 2016 13:42:12 +0800
From: "Hillf Danton" <hillf.zj@...baba-inc.com>
To: "'Mel Gorman'" <mgorman@...hsingularity.net>
Cc: "'Johannes Weiner'" <hannes@...xchg.org>,
	"'Vlastimil Babka'" <vbabka@...e.cz>,
	"'linux-kernel'" <linux-kernel@...r.kernel.org>,
	<linux-mm@...ck.org>,
	"'Andrew Morton'" <akpm@...ux-foundation.org>
Subject: [PATCH] mm, vmscan: Give up balancing node for high order allocations earlier

To avoid excessive reclaim, we give up rebalancing for high order
allocations right after reclaiming enough pages.

Signed-off-by: Hillf Danton <hillf.zj@...baba-inc.com>
---
 mm/vmscan.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index ee7e531..d080fb2 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3159,8 +3159,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
 
 	do {
 		bool raise_priority = true;
-
-		sc.nr_reclaimed = 0;
+		unsigned long reclaimed_pages = sc.nr_reclaimed;
 
 		/*
 		 * If the number of buffer_heads in the machine exceeds the
@@ -3254,7 +3253,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
 		 * Raise priority if scanning rate is too low or there was no
 		 * progress in reclaiming pages
 		 */
-		if (raise_priority || !sc.nr_reclaimed)
+		if (raise_priority || sc.nr_reclaimed == reclaimed_pages)
 			sc.priority--;
 	} while (sc.priority >= 1);
--