Message-ID: <20090609100229.GE14820@wotan.suse.de>
Date:	Tue, 9 Jun 2009 12:02:29 +0200
From:	Nick Piggin <npiggin@...e.de>
To:	Andi Kleen <andi@...stfloor.org>
Cc:	fengguang.wu@...el.com, akpm@...ux-foundation.org,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] [11/16] HWPOISON: check and isolate corrupted free pages v2

On Wed, Jun 03, 2009 at 08:46:45PM +0200, Andi Kleen wrote:
> 
> From: Wu Fengguang <fengguang.wu@...el.com>
> 
> If memory corruption hits the free buddy pages, we can safely ignore them.
> No one will access them until page allocation time, then prep_new_page()
> will automatically check and isolate PG_hwpoison page for us (for 0-order
> allocation).

It would be kinda nice if this could be done in the handler
directly (ie. take the page directly out of the allocator
or pcp list). Completely avoiding fastpaths would be a
wonderful goal.

> 
> This patch expands prep_new_page() to check every component page in a high
> order page allocation, in order to completely stop PG_hwpoison pages from
> being recirculated.
> 
> Note that the common case -- allocating a single page -- doesn't
> do any more work than before. Allocating at order > 0 does a bit more
> work, but that's relatively uncommon.
> 
> This simple implementation may drop some innocent neighbor pages;
> hopefully that is not a big problem, because such events should be rare.
> 
> This patch adds some runtime costs to high order page users.
> 
> [AK: Improved description]
> 
> v2: Andi Kleen:
> Port to -mm code
> Move check into separate function.
> Don't dump stack in bad_pages for hwpoisoned pages.
> Signed-off-by: Wu Fengguang <fengguang.wu@...el.com>
> Signed-off-by: Andi Kleen <ak@...ux.intel.com>
> 
> ---
>  mm/page_alloc.c |   20 +++++++++++++++++++-
>  1 file changed, 19 insertions(+), 1 deletion(-)
> 
> Index: linux/mm/page_alloc.c
> ===================================================================
> --- linux.orig/mm/page_alloc.c	2009-06-03 19:37:39.000000000 +0200
> +++ linux/mm/page_alloc.c	2009-06-03 20:13:43.000000000 +0200
> @@ -237,6 +237,12 @@
>  	static unsigned long nr_shown;
>  	static unsigned long nr_unshown;
>  
> +	/* Don't complain about poisoned pages */
> +	if (PageHWPoison(page)) {
> +		__ClearPageBuddy(page);
> +		return;
> +	}
> +
>  	/*
>  	 * Allow a burst of 60 reports, then keep quiet for that minute;
>  	 * or allow a steady drip of one report per second.
> @@ -650,7 +656,7 @@
>  /*
>   * This page is about to be returned from the page allocator
>   */
> -static int prep_new_page(struct page *page, int order, gfp_t gfp_flags)
> +static inline int check_new_page(struct page *page)
>  {
>  	if (unlikely(page_mapcount(page) |
>  		(page->mapping != NULL)  |
> @@ -659,6 +665,18 @@
>  		bad_page(page);
>  		return 1;
>  	}
> +	return 0;
> +}
> +
> +static int prep_new_page(struct page *page, int order, gfp_t gfp_flags)
> +{
> +	int i;
> +
> +	for (i = 0; i < (1 << order); i++) {
> +		struct page *p = page + i;
> +		if (unlikely(check_new_page(p)))
> +			return 1;
> +	}
>  
>  	set_page_private(page, 0);
>  	set_page_refcounted(page);