Message-ID: <20100226143232.GA13001@cmpxchg.org>
Date:	Fri, 26 Feb 2010 15:32:32 +0100
From:	Johannes Weiner <hannes@...xchg.org>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Minchan Kim <minchan.kim@...il.com>,
	Rik van Riel <riel@...hat.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: mm: used-once mapped file page detection

On Wed, Feb 24, 2010 at 01:39:46PM -0800, Andrew Morton wrote:
> On Mon, 22 Feb 2010 20:49:07 +0100 Johannes Weiner <hannes@...xchg.org> wrote:
> 
> > This patch makes the VM be more careful about activating mapped file
> > pages in the first place.  The minimum granted lifetime without
> > another memory access becomes an inactive list cycle instead of the
> > full memory cycle, which is more natural given the mentioned loads.
> 
> iirc from a long time ago, the insta-activation of mapped pages was
> done because people were getting peeved about having their interactive
> applications (X, browser, etc) getting paged out, and bumping the pages
> immediately was found to help with this subjective problem.
> 
> So it was a latency issue more than a throughput issue.  I wouldn't be
> surprised if we get some complaints from people for the same reasons as
> a result of this patch.

Agreed, although we now have other mechanisms in place to protect those
pages once they are active (VM_EXEC protection, lazy active list scanning).

> I guess that during the evaluation period of this change, it would be
> useful to have a /proc knob which people can toggle to revert to the
> old behaviour.  So they can verify that this patchset was indeed the
> cause of the deterioration, and so they can easily quantify any
> deterioration?

Sounds like a good idea.  By evaluation period, do you mean -mm?  Or
would this knob make it upstream as well?

	Hannes

From: Johannes Weiner <hannes@...xchg.org>
Subject: vmscan: add sysctl to revert mapped file heuristics

During the evaluation period of the used-once mapped file detection,
provide a sysctl to disable the heuristic at runtime, so that users can
verify whether it is the cause of any regression they observe.

Signed-off-by: Johannes Weiner <hannes@...xchg.org>
---
 include/linux/swap.h |    1 +
 kernel/sysctl.c      |    7 +++++++
 mm/vmscan.c          |    4 +++-
 3 files changed, 11 insertions(+), 1 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index a2602a8..0c1e724 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -254,6 +254,7 @@ extern unsigned long shrink_all_memory(unsigned long nr_pages);
 extern int vm_swappiness;
 extern int remove_mapping(struct address_space *mapping, struct page *page);
 extern long vm_total_pages;
+extern int vm_rigid_filemap_protection;
 
 #ifdef CONFIG_NUMA
 extern int zone_reclaim_mode;
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 8a68b24..9fa46fb 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1050,6 +1050,13 @@ static struct ctl_table vm_table[] = {
 		.extra1		= &zero,
 		.extra2		= &one_hundred,
 	},
+	{
+		.procname	= "rigid_filemap_protection",
+		.data		= &vm_rigid_filemap_protection,
+		.maxlen		= sizeof(vm_rigid_filemap_protection),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec,
+	},
 #ifdef CONFIG_HUGETLB_PAGE
 	{
 		.procname	= "nr_hugepages",
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 819fff7..d494153 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -565,6 +565,8 @@ enum page_references {
 	PAGEREF_ACTIVATE,
 };
 
+int vm_rigid_filemap_protection __read_mostly;
+
 static enum page_references page_check_references(struct page *page,
 						  struct scan_control *sc)
 {
@@ -586,7 +588,7 @@ static enum page_references page_check_references(struct page *page,
 		return PAGEREF_RECLAIM;
 
 	if (referenced_ptes) {
-		if (PageAnon(page))
+		if (PageAnon(page) || vm_rigid_filemap_protection)
 			return PAGEREF_ACTIVATE;
 		/*
 		 * All mapped pages start out with page table
-- 
1.6.6.1
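For anyone testing: the knob should show up under /proc/sys/vm (the path
follows from the vm_table entry above, and the variable defaults to 0,
i.e. the new used-once detection stays active).  A quick usage sketch,
assuming a kernel with this patch applied:

```shell
# Read the current setting (0 = used-once detection active,
# non-zero = old insta-activation of mapped file pages restored).
cat /proc/sys/vm/rigid_filemap_protection

# Revert to the old behaviour for an A/B comparison ...
sysctl -w vm.rigid_filemap_protection=1

# ... or equivalently:
echo 1 > /proc/sys/vm/rigid_filemap_protection
```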
