Message-Id: <20200316205756.146666-1-mike.kravetz@oracle.com>
Date: Mon, 16 Mar 2020 13:57:54 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: Michal Hocko <mhocko@...nel.org>, Hugh Dickins <hughd@...gle.com>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
Andrea Arcangeli <aarcange@...hat.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Davidlohr Bueso <dave@...olabs.net>,
Prakash Sangappa <prakash.sangappa@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>
Subject: [PATCH v2 0/2] hugetlbfs: use i_mmap_rwsem for more synchronization
v2
- Fixed a hang that could be reproduced via an ltp test [4].
  Note that the issue was in one of the return paths of one of the
  callers of hugetlb_page_mapping_lock_write which left a huge page
  locked. The routine hugetlb_page_mapping_lock_write was not modified
  in v2, and is still in need of review/comments.
- Cleaned up warnings produced on powerpc builds [5].
While discussing the issue with huge_pte_offset [1], I remembered that
there were more outstanding hugetlb races. These issues are:
1) For shared pmds, huge PTE pointers returned by huge_pte_alloc can become
   invalid via a call to huge_pmd_unshare by another thread (sketched below).
2) hugetlbfs page faults can race with truncation causing invalid global
   reserve counts and state.
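To make race 1) concrete, here is a rough sketch of the window. This is
not actual kernel code; the huge_pte_alloc()/huge_pmd_unshare() calls are
shown with the signatures they have in the tree this series is based on,
and the surrounding fault/unmap context is elided:

  /* Thread A: hugetlb fault path */
  ptep = huge_pte_alloc(mm, address, huge_page_size(h));
  /*
   * ptep may point into a PMD page shared with other mappings of the
   * same hugetlbfs file. Nothing taken so far keeps that PMD page
   * from being unshared.
   */

  /* Thread B: same mm, concurrently unmapping the shared range */
  huge_pmd_unshare(mm, &address, ptep);
  /*
   * Unsharing clears the PUD entry and drops the reference on the
   * shared PMD page. Thread A's ptep now points into a page table
   * page this mm no longer uses, and which may already be freed.
   */

  /* Thread A: continues with a stale pointer */
  entry = huge_ptep_get(ptep);  /* potential use-after-free */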
A previous attempt was made to use i_mmap_rwsem in this manner as described
at [2]. However, those patches were reverted starting with [3] due to
locking issues.
To effectively use i_mmap_rwsem to address the above issues it needs to
be held (in read mode) during page fault processing. However, during
fault processing we need to lock the page we will be adding. Lock
ordering requires we take page lock before i_mmap_rwsem. Waiting until
after taking the page lock is too late in the fault process for the
synchronization we want to do.
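Roughly, the conflict looks like this (not actual kernel code; the
alloc_huge_page() placement is only for illustration):

  /* Established ordering in mm: page lock, then i_mmap_rwsem */
  lock_page(page);
  i_mmap_lock_read(mapping);
  ...
  i_mmap_unlock_read(mapping);
  unlock_page(page);

  /*
   * What hugetlb fault processing wants: i_mmap_rwsem held across the
   * whole fault, including allocating and locking the new page, which
   * inverts the ordering above.
   */
  i_mmap_lock_read(mapping);
  page = alloc_huge_page(vma, address, 0);
  lock_page(page);
  /* ... install the huge PTE while still holding the semaphore ... */
  unlock_page(page);
  i_mmap_unlock_read(mapping);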
To address this lock ordering issue, the following patches change the
lock ordering for hugetlb pages. This is not too invasive as hugetlbfs
processing is done separately from core mm in many places. However, I
don't really like this idea. Much ugliness is contained in the new
routine hugetlb_page_mapping_lock_write() of patch 1.
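For reference, callers that start with a locked huge page (such as the
memory-failure and migration paths) are expected to use the new helper
roughly as follows; this is only a usage sketch, the exact semantics are
defined in patch 1:

  struct address_space *mapping;

  /* hpage is locked; find its mapping and take i_mmap_rwsem */
  mapping = hugetlb_page_mapping_lock_write(hpage);
  if (mapping) {
          /*
           * i_mmap_rwsem is now held in write mode, so shared PMDs
           * cannot be unshared out from under the unmap walk.
           */
          unmap_success = try_to_unmap(hpage, ttu);
          i_mmap_unlock_write(mapping);
  }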
The only other way I can think of to address these issues is by catching
all the races. After catching a race, clean up, back out, retry, etc.,
as needed. This can get really ugly, especially for huge page reservations.
At one time, I started writing some of the reservation backout code for
page faults and it got so ugly and complicated I went down the path of
adding synchronization to avoid the races. Any other suggestions would
be welcome.
[1] https://lore.kernel.org/linux-mm/1582342427-230392-1-git-send-email-longpeng2@huawei.com/
[2] https://lore.kernel.org/linux-mm/20181222223013.22193-1-mike.kravetz@oracle.com/
[3] https://lore.kernel.org/linux-mm/20190103235452.29335-1-mike.kravetz@oracle.com
[4] https://lore.kernel.org/linux-mm/1584028670.7365.182.camel@lca.pw/
[5] https://lore.kernel.org/lkml/20200312183142.108df9ac@canb.auug.org.au/
Mike Kravetz (2):
hugetlbfs: use i_mmap_rwsem for more pmd sharing synchronization
hugetlbfs: Use i_mmap_rwsem to address page fault/truncate race
fs/hugetlbfs/inode.c    |  30 +++++--
include/linux/fs.h      |   5 ++
include/linux/hugetlb.h |   8 ++
mm/hugetlb.c            | 175 +++++++++++++++++++++++++++++++++++-----
mm/memory-failure.c     |  29 ++++++-
mm/migrate.c            |  25 +++++-
mm/rmap.c               |  17 +++-
mm/userfaultfd.c        |  11 ++-
8 files changed, 263 insertions(+), 37 deletions(-)
--
2.24.1