Message-ID: <7374a8cc-ab8d-ad25-2b8b-2451ba8f7131@oracle.com>
Date:   Thu, 5 Nov 2020 13:59:24 -0800
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc:     Hugh Dickins <hughd@...gle.com>,
        Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
        Michal Hocko <mhocko@...nel.org>,
        "Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        Davidlohr Bueso <dave@...olabs.net>,
        Prakash Sangappa <prakash.sangappa@...cle.com>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 0/4] hugetlbfs: use hinode_rwsem for pmd sharing
 synchronization

On 11/2/20 4:28 PM, Mike Kravetz wrote:
> The RFC series reverted all patches where i_mmap_rwsem was used for
> pmd sharing synchronization, and then added code to use hinode_rwsem.
> This series ends up with the same code in the end, but is structured
> as follows:
> 
> - Revert the page fault/truncation race patch which depends on i_mmap_rwsem
>   always being taken in page fault path.
> - Add hinode_rwsem to hugetlbfs specific inode and supporting routines.
> - Convert code from using i_mmap_rwsem for pmd sharing synchronization
>   to using hinode_rwsem.
> - Add code to more robustly deal with page fault/truncation races.
> 
> My hope is that this will be easier to review.

My apologies.  Please do not spend any time on this patch series; it is
certainly not something to be sent to stable.

I have created a patch to address the BUG and propose that for stable [1].

After further thought, the approach in this series also has lock ordering
issues.  Here is a simple summary of the problem.

With pmd sharing, the pte pointer returned by huge_pte_alloc is not
guaranteed to be valid.  This is because another thread could have made
a call to huge_pmd_unshare and 'unshared' the pmd page containing the
pte pointer.

The current i_mmap_rwsem locking and the inode locking proposed in this
series acquire the semaphore in read mode before calling huge_pte_alloc.
Code continues to hold the semaphore until finished with the pte pointer.
Callers of huge_pmd_unshare hold the semaphore in write mode.  Thus, the
semaphore prevents the race.

The problem with this type of approach is lock ordering.  The semaphore is
acquired in read mode during page fault processing.  The first thing this
code does is 'allocate' a pte with huge_pte_alloc.  It will then find or
allocate a page and lock it.  Finally, it will lock the page table to update
the pte.  Two instances where we may need to take the semaphore in write
mode are page migration and memory failure.  In these cases, the first thing
we need to do is lock the page.  Only after locking the page can we locate
the semaphore which needs to be acquired in write mode.  Hence we end up with
a classic cause for ABBA deadlocks.

I'm starting to think that adding more synchronization is not the best way
to approach this issue.  Rather, we should always validate pte pointers after
acquiring the page table lock.  At the lowest level, pmd sharing is
synchronized by the page table lock.  We already do some validation of the
pte after acquiring the page table lock; for example, we check whether
huge_pte_none() is still true.  Before even checking for none, we would need
to look up the pte again (huge_pte_offset) and compare it to the pte we
previously acquired.  If they are not the same, then we would need to backout
and retry.

Unless someone has another suggestion, I'll start exploring this approach.

[1] https://lore.kernel.org/linux-mm/20201105195058.78401-1-mike.kravetz@oracle.com/
-- 
Mike Kravetz
