Date:   Fri, 25 Jun 2021 16:07:54 -0700
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     wangbin <wangbin224@...wei.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Cc:     nao.horiguchi@...il.com, akpm@...ux-foundation.org,
        wuxu.wu@...wei.com
Subject: Re: [PATCH v2] mm: hugetlb: add hwcrp_hugepages to record memory
 failure on hugetlbfs

On 6/23/21 1:51 AM, wangbin wrote:
> From: Bin Wang <wangbin224@...wei.com>
> 
> In the current hugetlbfs memory failure handler, reserved huge page
> counts are used to record the number of huge pages with hwpoison.
> There are two problems:

Please review the comments on the first patch.  Reserved huge page
counts are NOT used to record the number of huge pages with hwpoison.

> 
> 1. We call hugetlb_fix_reserve_counts() to change reserved counts
> in hugetlbfs_error_remove_page(). But this function is only called if
> hugetlb_unreserve_pages() fails, and hugetlb_unreserve_pages() fails
> only if the kmalloc in region_del() fails, which is almost impossible.
> As a result, the reserved count is not corrected as expected when a
> memory failure occurs.
> 
> 2. Reserved counts are designed to display the number of hugepages
> reserved at mmap() time. This means that even if we fix the first
> issue, reserved counts will be confusing because we can't tell
> whether a page is hwpoisoned or reserved.
> 
> This patch adds hardware corrupt huge page counts to record memory
> failures on hugetlbfs instead of using reserved counts.
> 
> Signed-off-by: Bin Wang <wangbin224@...wei.com>
> ---
>  fs/hugetlbfs/inode.c    |  3 +--
>  include/linux/hugetlb.h |  3 +++
>  mm/hugetlb.c            | 30 ++++++++++++++++++++++++++++++
>  3 files changed, 34 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index 926eeb9bf4eb..ffb6e7b6756b 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -986,8 +986,7 @@ static int hugetlbfs_error_remove_page(struct address_space *mapping,
>  	pgoff_t index = page->index;
>  
>  	remove_huge_page(page);
> -	if (unlikely(hugetlb_unreserve_pages(inode, index, index + 1, 1)))
> -		hugetlb_fix_reserve_counts(inode);

As mentioned, huge page reserve counts are not used to record the
number of poisoned pages.  The calls to hugetlb_unreserve_pages and
possibly hugetlb_fix_reserve_counts are necessary for reserve
accounting.  They cannot be removed.

> +	hugetlb_fix_hwcrp_counts(page);

This new routine just counts memory errors on 'in use' huge pages.
I do not see a call anywhere to count memory errors on huge pages
not in use.
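
For reference, I assume the new routine looks something like the
sketch below (the mm/hugetlb.c hunk is not quoted here, so this is
only a guess at the implementation), bumping the new per-hstate
counters for the page's node:

	/* Hypothetical sketch; the actual mm/hugetlb.c hunk is not shown. */
	void hugetlb_fix_hwcrp_counts(struct page *page)
	{
		struct hstate *h = page_hstate(page);
		int nid = page_to_nid(page);

		spin_lock_irq(&hugetlb_lock);
		h->hwcrp_huge_pages++;
		h->hwcrp_huge_pages_node[nid]++;
		spin_unlock_irq(&hugetlb_lock);
	}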

>  
>  	return 0;
>  }
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index f7ca1a3870ea..1d5bada80aa5 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -171,6 +171,7 @@ void putback_active_hugepage(struct page *page);
>  void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason);
>  void free_huge_page(struct page *page);
>  void hugetlb_fix_reserve_counts(struct inode *inode);
> +void hugetlb_fix_hwcrp_counts(struct page *page);
>  extern struct mutex *hugetlb_fault_mutex_table;
>  u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx);
>  
> @@ -602,12 +603,14 @@ struct hstate {
>  	unsigned long free_huge_pages;
>  	unsigned long resv_huge_pages;
>  	unsigned long surplus_huge_pages;
> +	unsigned long hwcrp_huge_pages;
>  	unsigned long nr_overcommit_huge_pages;
>  	struct list_head hugepage_activelist;
>  	struct list_head hugepage_freelists[MAX_NUMNODES];
>  	unsigned int nr_huge_pages_node[MAX_NUMNODES];
>  	unsigned int free_huge_pages_node[MAX_NUMNODES];
>  	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
> +	unsigned int hwcrp_huge_pages_node[MAX_NUMNODES];

I understand your requirement to count the number of memory errors on
hugetlb pages.  However, we need to think carefully about how we
represent that count.

Naoya, do you have opinions on where the best place to store this
information would be?  The hugetlb memory error code has the comment
'needs work'.  Ideally, we could isolate memory errors to a single
base (4K for x86) page and free the remaining base pages to buddy.
We could also potentially allocate a 'replacement' hugetlb page doing
something like alloc_and_dissolve_huge_page.
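
Roughly, I am thinking of something like the sketch below.  This is
hand-wavy pseudocode, not a real patch: take_page_off_buddy() and
SetPageHWPoison() exist, but alloc_and_dissolve_huge_page() is static
today and takes an extra list argument, and all the error handling
and locking subtleties are omitted.

	/*
	 * Hand-wavy sketch: replace a poisoned huge page and keep only
	 * the bad base page out of circulation.
	 */
	static int hugetlb_replace_hwpoison(struct page *hpage,
					    struct page *bad_subpage)
	{
		struct hstate *h = page_hstate(hpage);

		/*
		 * Allocate a fresh huge page and dissolve the poisoned
		 * one, as alloc_and_dissolve_huge_page() does for
		 * alloc_contig_range users (simplified signature here).
		 */
		if (alloc_and_dissolve_huge_page(h, hpage))
			return -ENOMEM;

		/*
		 * Dissolving returns the base pages to buddy; pull the
		 * single bad 4K page back out and mark it poisoned, so
		 * the existing poisoned-page accounting covers it.
		 */
		if (!take_page_off_buddy(bad_subpage))
			return -EBUSY;
		SetPageHWPoison(bad_subpage);

		return 0;
	}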

If we get an error on a hugetlb page and can isolate it to a base page
and replace the huge page, is it still a huge page memory error?

IMO, we should work on isolating memory errors to a base page and
replacing the huge page.  Then the existing count of base pages with
memory errors should be sufficient.

This is something I would like to work on, but I have higher
priorities right now.

-- 
Mike Kravetz
