Message-Id: <20181007233848.13397-1-mike.kravetz@oracle.com>
Date:   Sun,  7 Oct 2018 16:38:47 -0700
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Michal Hocko <mhocko@...nel.org>,
        Hugh Dickins <hughd@...gle.com>,
        Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
        "Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        Davidlohr Bueso <dave@...olabs.net>,
        Mike Kravetz <mike.kravetz@...cle.com>
Subject: [PATCH RFC 0/1] hugetlbfs: fix truncate/fault races

Our DB team noticed negative hugetlb reserved page counts during development
testing.  Related meminfo fields were as follows on one system:

HugePages_Total:   47143
HugePages_Free:    45610
HugePages_Rsvd:    18446744073709551613
HugePages_Surp:        0
Hugepagesize:       2048 kB 

Code inspection revealed that the most likely cause was races between truncate
and page faults.  In fact, I was able to write a not-too-complicated program
that triggers the races and recreates the issue.
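
For illustration, here is a sketch of that kind of reproducer (not my actual
test program): one thread repeatedly touches the pages of a mapped hugetlbfs
file while the main thread shrinks and re-extends the file.  It assumes a
hugetlbfs mount at /dev/hugepages, 2 MB huge pages, and enough free huge
pages; build with -pthread.

#include <fcntl.h>
#include <pthread.h>
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE	(2UL * 1024 * 1024)
#define NR_PAGES	64UL

static char *map;
static sigjmp_buf fault_env;

static void sigbus_handler(int sig)
{
	/* A touch past EOF (file truncated under us) raises SIGBUS; resume. */
	siglongjmp(fault_env, 1);
}

static void *faulter(void *arg)
{
	signal(SIGBUS, sigbus_handler);
	sigsetjmp(fault_env, 1);
	for (;;) {
		/* Each first touch of a huge page is a hugetlb page fault. */
		for (unsigned long i = 0; i < NR_PAGES; i++)
			map[i * HPAGE_SIZE] = 1;
	}
	return NULL;
}

int main(void)
{
	int fd = open("/dev/hugepages/race-test", O_CREAT | O_RDWR, 0600);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ftruncate(fd, NR_PAGES * HPAGE_SIZE)) {
		perror("ftruncate");
		return 1;
	}
	map = mmap(NULL, NR_PAGES * HPAGE_SIZE, PROT_READ | PROT_WRITE,
		   MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	pthread_t thread;
	pthread_create(&thread, NULL, faulter, NULL);

	/* Repeatedly shrink and re-extend the file, racing with the faults. */
	for (;;) {
		ftruncate(fd, 0);
		ftruncate(fd, NR_PAGES * HPAGE_SIZE);
	}
	return 0;
}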

Way back in 2006, Hugh Dickins created a patch (ebed4bfc8da8) with this
message:

"[PATCH] hugetlb: fix absurd HugePages_Rsvd
    
 If you truncated an mmap'ed hugetlbfs file, then faulted on the truncated
 area, /proc/meminfo's HugePages_Rsvd wrapped hugely "negative".  Reinstate my
 preliminary i_size check before attempting to allocate the page (though this
 only fixes the most obvious case: more work will be needed here)."

Looks like we need to do more work.

While looking at the code, I found that correctly handling the races and
backing out partially made changes would require addressing many issues.
Instead, why not just introduce a rw mutex to prevent the races?  Page faults
would take the mutex in read mode, allowing multiple faults in parallel as
happens today.  Truncate code would take the mutex in write mode and prevent
faults for the duration of truncate processing.  This seems almost too obvious.
Something must be wrong with this approach, or others would have employed it
earlier.
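
To make the idea concrete, here is a minimal sketch of the locking scheme,
assuming a kernel rw_semaphore; identifiers such as fault_rwsem are
illustrative only, and the actual patch may name and place the lock
differently.

#include <linux/rwsem.h>

/* Illustrative per-inode lock guarding the fault and truncate paths. */
struct hugetlbfs_inode_info_example {
	struct rw_semaphore fault_rwsem;
};

/* Fault path: read (shared) mode, so faults still run in parallel. */
static void example_hugetlb_fault(struct hugetlbfs_inode_info_example *info)
{
	down_read(&info->fault_rwsem);
	/* allocate the huge page, adjust reservations, install the pte */
	up_read(&info->fault_rwsem);
}

/* Truncate path: write (exclusive) mode, so no fault can race with it. */
static void example_hugetlbfs_truncate(struct hugetlbfs_inode_info_example *info)
{
	down_write(&info->fault_rwsem);
	/* update i_size, unmap and remove pages past EOF, fix reserve counts */
	up_write(&info->fault_rwsem);
}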

The following patch describes the current race in detail and adds the mutex
to prevent truncate/fault races.

Mike Kravetz (1):
  hugetlbfs: introduce truncation/fault mutex to avoid races

 fs/hugetlbfs/inode.c    | 24 ++++++++++++++++++++----
 include/linux/hugetlb.h |  1 +
 mm/hugetlb.c            | 25 +++++++++++++++++++------
 mm/userfaultfd.c        |  8 +++++++-
 4 files changed, 47 insertions(+), 11 deletions(-)

-- 
2.17.1
