Date: Mon, 23 Jun 2008 09:55:29 +0900
From: Takenori Nagano <t-nagano@...jp.nec.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org
CC: Keiichi KII <kii@...ux.bs1.fc.nec.co.jp>
Subject: [patch] memory reclaim more efficiently

Hi,

Efficiency of memory reclaim has recently been one of the hot topics
(LRU splitting, pageout throttling, etc.). I would like to contribute
to it, so I made this patch.

In shrink_zone(), the system cannot return to user mode before it
finishes searching the LRU list. IMHO this is very wasteful, since
user processes stay unnecessarily long in the shrink_zone() loop and
application response time becomes relatively bad.

This patch changes shrink_zone() so that it stops memory reclaim once
it has reclaimed enough memory.

Conditions to stop searching:
 1. the order of the requested page is 0
 2. the process is not kswapd
 3. the condition to return from try_to_free_pages() is satisfied
    (nr_reclaimed > SWAP_CLUSTER_MAX)

Signed-off-by: Takenori Nagano <t-nagano@...jp.nec.com>
Signed-off-by: Keiichi Kii <k-keiichi@...jp.nec.com>
---
diff -uprN linux-2.6.26-rc6.orig/mm/vmscan.c linux-2.6.26-rc6/mm/vmscan.c
--- linux-2.6.26-rc6.orig/mm/vmscan.c	2008-06-13 06:22:24.000000000 +0900
+++ linux-2.6.26-rc6/mm/vmscan.c	2008-06-20 15:05:03.492700863 +0900
@@ -1224,6 +1224,9 @@ static unsigned long shrink_zone(int pri
 			nr_reclaimed += shrink_inactive_list(nr_to_scan,
 								zone, sc);
 		}
+		if (nr_reclaimed > sc->swap_cluster_max && !sc->order
+		    && !current_is_kswapd())
+			break;
 	}

 	throttle_vm_writeout(sc->gfp_mask);