Date:	Mon,  8 Jun 2009 14:01:29 +0100
From:	Mel Gorman <mel@....ul.ie>
To:	Mel Gorman <mel@....ul.ie>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Rik van Riel <riel@...hat.com>,
	Christoph Lameter <cl@...ux-foundation.org>,
	yanmin.zhang@...el.com, Wu Fengguang <fengguang.wu@...el.com>,
	linuxram@...ibm.com
Cc:	linux-mm <linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>
Subject: [PATCH 2/3] Properly account for the number of page cache pages zone_reclaim() can reclaim

On NUMA machines, the administrator can configure zone_reclaim_mode, which
is a more targeted form of direct reclaim. On machines with large NUMA
distances for example, zone_reclaim_mode defaults to 1, meaning that clean
unmapped pages will be reclaimed if the zone watermarks are not being met.
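
(For reference, zone_reclaim_mode is a bitmask. The bits are defined in
mm/vmscan.c along these lines; the comments here are mine, not quoted
verbatim from the tree:

	#define RECLAIM_OFF	0
	#define RECLAIM_ZONE	(1<<0)	/* scan the zone at all */
	#define RECLAIM_WRITE	(1<<1)	/* allow writing out dirty pages */
	#define RECLAIM_SWAP	(1<<2)	/* allow swapping mapped pages out */

so the default of 1 sets only RECLAIM_ZONE.)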

There is a heuristic that determines whether the scan is worthwhile, but
the problem is that the heuristic is not being properly applied: it
basically assumes zone_reclaim_mode is 1 whenever it is enabled at all.

This patch makes zone_reclaim() make a better attempt at working out how
many pages it might be able to reclaim given the current zone_reclaim_mode.
If it cannot clean pages, then the NR_FILE_DIRTY pages are not candidates.
If it cannot swap, then the NR_FILE_MAPPED pages are not. This indirectly
addresses tmpfs, as those pages tend to be dirty because they are not
cleaned by pdflush or sync.

Ideally, the number of tmpfs pages would also be known and accounted for
like NR_FILE_MAPPED, as swap is required to discard them. A means of
working this out quickly was not obvious, but a comment is added noting
the problem.
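
To illustrate the effect of the new estimate, here is a minimal userspace
sketch, not kernel code; every counter value in it is a made-up example,
not taken from a real zone:

	#include <stdio.h>

	#define RECLAIM_WRITE	(1<<1)	/* allow writing out dirty pages */
	#define RECLAIM_SWAP	(1<<2)	/* allow swapping mapped pages out */

	int main(void)
	{
		long nr_file_pages  = 10000;	/* NR_FILE_PAGES (hypothetical) */
		long nr_file_dirty  =  4000;	/* NR_FILE_DIRTY (hypothetical) */
		long nr_file_mapped =  3000;	/* NR_FILE_MAPPED (hypothetical) */
		long min_unmapped   =  2500;	/* zone->min_unmapped_pages */
		int zone_reclaim_mode = 1;	/* RECLAIM_ZONE only, the default */
		long reclaimable = nr_file_pages;

		if (!(zone_reclaim_mode & RECLAIM_WRITE))
			reclaimable -= nr_file_dirty;
		if (!(zone_reclaim_mode & RECLAIM_SWAP))
			reclaimable -= nr_file_mapped;

		/*
		 * New estimate: 10000 - 4000 - 3000 = 3000 pages, versus the
		 * old NR_FILE_PAGES - NR_FILE_MAPPED = 7000, which wrongly
		 * counted dirty pages the mode cannot write out.
		 */
		printf("estimated reclaimable: %ld (threshold %ld)\n",
		       reclaimable, min_unmapped);
		return 0;
	}

With mode 1, both subtractions apply and the scan only goes ahead when the
remaining clean unmapped page cache exceeds min_unmapped_pages.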

Signed-off-by: Mel Gorman <mel@....ul.ie>
---
 mm/vmscan.c |   18 ++++++++++++++++--
 1 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index ba211c1..ffe2f32 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2380,6 +2380,21 @@ int zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
 {
 	int node_id;
 	int ret;
+	int pagecache_reclaimable;
+
+	/*
+	 * Work out how many page cache pages we can reclaim in this mode.
+	 *
+	 * NOTE: Ideally, tmpfs pages would be accounted as if they were
+	 *       NR_FILE_MAPPED as swap is required to discard those
+	 *       pages even when they are clean. However, there is no
+	 *       way of quickly identifying the number of tmpfs pages
+	 */
+	pagecache_reclaimable = zone_page_state(zone, NR_FILE_PAGES);
+	if (!(zone_reclaim_mode & RECLAIM_WRITE))
+		pagecache_reclaimable -= zone_page_state(zone, NR_FILE_DIRTY);
+	if (!(zone_reclaim_mode & RECLAIM_SWAP))
+		pagecache_reclaimable -= zone_page_state(zone, NR_FILE_MAPPED);
 
 	/*
 	 * Zone reclaim reclaims unmapped file backed pages and
@@ -2391,8 +2406,7 @@ int zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
 	 * if less than a specified percentage of the zone is used by
 	 * unmapped file backed pages.
 	 */
-	if (zone_page_state(zone, NR_FILE_PAGES) -
-	    zone_page_state(zone, NR_FILE_MAPPED) <= zone->min_unmapped_pages
+	if (pagecache_reclaimable <= zone->min_unmapped_pages
 	    && zone_page_state(zone, NR_SLAB_RECLAIMABLE)
 			<= zone->min_slab_pages)
 		return 0;
-- 
1.5.6.5

