Date:   Mon, 8 May 2017 02:58:36 +0000
From:   Naoya Horiguchi <n-horiguchi@...jp.nec.com>
To:     Michal Hocko <mhocko@...nel.org>
CC:     Andi Kleen <andi@...stfloor.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Laurent Dufour <ldufour@...ux.vnet.ibm.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
        Vladimir Davydov <vdavydov.dev@...il.com>
Subject: Re: [PATCH v2 1/2] mm: Uncharge poisoned pages

On Tue, May 02, 2017 at 08:55:07PM +0200, Michal Hocko wrote:
> On Tue 02-05-17 16:59:30, Laurent Dufour wrote:
> > On 28/04/2017 15:48, Michal Hocko wrote:
> [...]
> > > This is getting quite hairy. What is the expected page count of the
> > > hwpoison page?
> 
> OK, so from a quick check of the hwpoison code it seems that the ref
> count will be > 1 (from get_hwpoison_page).
> 
> > > I guess we would need to update the VM_BUG_ON in the
> > > memcg uncharge code to ignore the page count of hwpoison pages if it can
> > > be arbitrary.
> > 
> > Based on the experiment I did, page count == 2 when isolate_lru_page()
> > succeeds, even in the case of a poisoned page.
> 
> that would make some sense to me. The page should have already been
> unmapped by then, but memory_failure() elevates the ref count and one
> more reference comes from isolate_lru_page().

# sorry for the late reply, I was on holiday last week...

Right, and the refcount taken by memory_failure() is not released after
memory_failure() returns; it is unpoison_memory() that releases it.
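
As a rough sketch of the resulting refcount accounting for such an in-use
user page (paraphrased from the code paths named in this thread, not quoted
from mm/memory-failure.c):

  +1  get_hwpoison_page() in memory_failure(), kept after memory_failure()
      returns and released only by unpoison_memory()
  +1  isolate_lru_page() in delete_from_lru_cache()
  -1  put_page() at the end of delete_from_lru_cache(), i.e. "drop the
      page count elevated by isolate_lru_page()"

on top of whatever references the page still holds from being in use
(mapping, swap or page cache), so page_count() of a poisoned page normally
never drops to 0 and the regular uncharge-on-free path is never taken.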

> 
> > In my case I think this
> > is because the page is still used by the process which is calling madvise().
> > 
> > I'm wondering if I'm looking at the right place. Maybe the poisoned
> > page should remain attached to the memory_cgroup until no one is using it.
> > In that case this means that something should be done when the page is
> > off-lined... I have to dig further here.
> 
> No, AFAIU the page will not drop its reference count down to 0 in most
> cases. Maybe there are some scenarios where this can happen, but I would
> expect that the poisoned page will be mapped and in use most of the time
> and won't drop down to 0. And then we should really uncharge it, because it
> will pin the memcg and make it unfreeable, which doesn't seem to be what
> we want. So does the following look reasonable? Andi, Johannes, what do
> you think? I cannot say I would be really comfortable touching the hwpoison
> code as I really do not understand the workflow. Maybe we want to move
> this uncharge down to memory_failure(), right before we report success?

memory_failure() can be called for any type of page (including slab or
other kernel/driver pages), and the reported problem seems to happen only on
in-use user pages, so uncharging in delete_from_lru_cache() as done below
looks better to me.
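
Purely for illustration (not part of the patch below), a minimal userspace
sketch of that in-use case: an anonymous page, mapped and touched by the
caller, then poisoned via madvise(MADV_HWPOISON). It assumes CAP_SYS_ADMIN
and a kernel built with CONFIG_MEMORY_FAILURE; the memcg setup (running the
program inside a test memory cgroup to exercise the charge/uncharge path
discussed here) is left out.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MADV_HWPOISON
#define MADV_HWPOISON 100	/* from <asm-generic/mman-common.h> */
#endif

int main(void)
{
	long pagesize = sysconf(_SC_PAGESIZE);
	char *p;

	p = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Touch the page so it is actually allocated (and charged). */
	memset(p, 0xa5, pagesize);

	/* Inject a software memory failure on the still-mapped page. */
	if (madvise(p, pagesize, MADV_HWPOISON) != 0) {
		perror("madvise(MADV_HWPOISON)");
		return 1;
	}

	fprintf(stderr, "poisoned one page at %p, pausing\n", (void *)p);

	/* Keep the process (and its VMA) alive while the page is poisoned. */
	pause();
	return 0;
}

Compiled with plain gcc and run as root, this should leave a hwpoisoned page
behind while the mapping still exists, which is the kind of situation the
uncharge added in the patch below is aimed at.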

> ---
> From 8bf0791bcf35996a859b6d33fb5494e5b53de49d Mon Sep 17 00:00:00 2001
> From: Michal Hocko <mhocko@...e.com>
> Date: Tue, 2 May 2017 20:32:24 +0200
> Subject: [PATCH] hwpoison, memcg: forcibly uncharge LRU pages
> 
> Laurent Dufour has noticed that hwpoisoned pages are kept charged. In
> his particular case he has hit a bad_page("page still charged to cgroup")
> when onlining a hwpoison page. While this looks like something that
> shouldn't happen in the first place, because onlining hwpoison pages and
> returning them to the page allocator makes little sense, it shows a real
> problem.
> 
> hwpoison pages usually do not get freed, so we do not uncharge them (at
> least not since 0a31bc97c80c ("mm: memcontrol: rewrite uncharge API")).
> Each charge also pins the memcg (since e8ea14cc6ead ("mm: memcontrol: take
> a css reference for each charged page")), so the mem_cgroup and the
> associated state will never go away. Fix this leak by forcibly
> uncharging an LRU hwpoisoned page in delete_from_lru_cache(). We also
> have to tweak uncharge_list because it cannot rely on a zero ref count
> for these pages.
> 
> Fixes: 0a31bc97c80c ("mm: memcontrol: rewrite uncharge API")
> Reported-by: Laurent Dufour <ldufour@...ux.vnet.ibm.com>
> Signed-off-by: Michal Hocko <mhocko@...e.com>

Reviewed-by: Naoya Horiguchi <n-horiguchi@...jp.nec.com>

> ---
>  mm/memcontrol.c     | 2 +-
>  mm/memory-failure.c | 7 +++++++
>  2 files changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 16c556ac103d..4cf26059adb1 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -5527,7 +5527,7 @@ static void uncharge_list(struct list_head *page_list)
>  		next = page->lru.next;
>  
>  		VM_BUG_ON_PAGE(PageLRU(page), page);
> -		VM_BUG_ON_PAGE(page_count(page), page);
> +		VM_BUG_ON_PAGE(!PageHWPoison(page) && page_count(page), page);
>  
>  		if (!page->mem_cgroup)
>  			continue;
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 8a6bd3a9eb1e..4497d9619bb4 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -541,6 +541,13 @@ static int delete_from_lru_cache(struct page *p)
>  		 */
>  		ClearPageActive(p);
>  		ClearPageUnevictable(p);
> +
> +		/*
> +		 * Poisoned page might never drop its ref count to 0 so we have to
> +		 * uncharge it manually from its memcg.
> +		 */
> +		mem_cgroup_uncharge(p);
> +
>  		/*
>  		 * drop the page count elevated by isolate_lru_page()
>  		 */
> -- 
> 2.11.0
> 
> -- 
> Michal Hocko
> SUSE Labs
> 
