Message-ID: <b4a6c7ee-5b0c-2390-35c5-3a5255d77f5d@oracle.com>
Date:   Mon, 7 Jun 2021 12:13:03 -0700
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     wangbin <wangbin224@...wei.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Cc:     Naoya Horiguchi <nao.horiguchi@...il.com>,
        akpm@...ux-foundation.org, wuxu.wu@...wei.com
Subject: Re: [PATCH] mm: hugetlbfs: add hwcrp_hugepages to record memory
 failure on hugetlbfs

Resending with Naoya's new e-mail address.

On 6/7/21 7:16 AM, wangbin wrote:
> From: Bin Wang <wangbin224@...wei.com>
> 
> In the current hugetlbfs memory failure handler, reserved huge page
> counts are used to record the number of huge pages with hwpoison.

I do not believe this is an accurate statement.  Naoya is the memory
error expert and may disagree, but I do not see reserve counts being
used anywhere to track huge pages with memory errors.

IIUC, the routine hugetlbfs_error_remove_page is called after the page
has been unmapped from all user mappings.  The routine simply removes
the page from the page cache, which effectively removes it from the
file since hugetlbfs is a memory-only filesystem.  The subsequent call
to hugetlb_unreserve_pages cleans up any reserve map entries associated
with the page and adjusts the reserve count if necessary.  That reserve
count adjustment is based on removing the page from the file, not on
the memory error itself.  The same adjustment would be made if the page
were hole punched from the file.
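
For reference, the error path described above looks roughly like the
following (paraphrased from the fs/hugetlbfs/inode.c code of this era;
the name of the helper that drops the page from the cache is from
memory and may not match the exact source):

static int hugetlbfs_error_remove_page(struct address_space *mapping,
				       struct page *page)
{
	struct inode *inode = mapping->host;
	pgoff_t index = page->index;

	/* Drop the page from the page cache, removing it from the file. */
	remove_huge_page(page);

	/*
	 * Clean up any reserve map entry for this offset.  The reserve
	 * count adjustment here is about removing the page from the file,
	 * not about the memory error; a hole punch takes the same path.
	 */
	if (unlikely(hugetlb_unreserve_pages(inode, index, index + 1, 1)))
		hugetlb_fix_reserve_counts(inode);

	return 0;
}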

What specific problem are you trying to solve?  Are you trying to see
how many huge pages have been hit by memory errors?
-- 
Mike Kravetz
