Message-ID: <6b91dd42-b903-1f6c-729a-bd9f51273986@oracle.com>
Date: Fri, 21 Dec 2018 10:20:18 -0800
From: Mike Kravetz <mike.kravetz@...cle.com>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Michal Hocko <mhocko@...nel.org>,
Hugh Dickins <hughd@...gle.com>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
Andrea Arcangeli <aarcange@...hat.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Davidlohr Bueso <dave@...olabs.net>,
Prakash Sangappa <prakash.sangappa@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
stable@...r.kernel.org
Subject: Re: [PATCH v2 1/2] hugetlbfs: use i_mmap_rwsem for more pmd sharing
synchronization
On 12/21/18 2:05 AM, Kirill A. Shutemov wrote:
> On Tue, Dec 18, 2018 at 02:35:56PM -0800, Mike Kravetz wrote:
>> While looking at BUGs associated with invalid huge page map counts,
>> it was discovered that a huge pte pointer could become 'invalid' and
>> point to another task's page table. Consider the
>> following:
>>
>> A task takes a page fault on a shared hugetlbfs file and calls
>> huge_pte_alloc to get a ptep. Suppose the returned ptep points to a
>> shared pmd.
>>
>> Now, another task truncates the hugetlbfs file. As part of truncation,
>> it unmaps everyone who has the file mapped. If the range being
>> truncated is covered by a shared pmd, huge_pmd_unshare will be called.
>> For all but the last user of the shared pmd, huge_pmd_unshare will
>> clear the pud pointing to the pmd. If the task in the middle of the
>> page fault is not the last user, the ptep returned by huge_pte_alloc
>> now points to another task's page table or worse. This leads to bad
>> things such as incorrect page map/reference counts or invalid memory
>> references.
>>
>> To fix, expand the use of i_mmap_rwsem as follows:
>> - i_mmap_rwsem is held in read mode whenever huge_pmd_share is called.
>> huge_pmd_share is only called via huge_pte_alloc, so callers of
>> huge_pte_alloc take i_mmap_rwsem before calling. In addition, callers
>> of huge_pte_alloc continue to hold the semaphore until finished with
>> the ptep.
>> - i_mmap_rwsem is held in write mode whenever huge_pmd_unshare is called.
>>
>> Cc: <stable@...r.kernel.org>
>> Fixes: 39dde65c9940 ("shared page table for hugetlb page")
>> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
>
> Other than the few questions below, the patch looks reasonable to me.
Thanks for taking a look.
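For context, the protocol from the commit message means callers of
huge_pte_alloc end up following a pattern roughly like this (a simplified
sketch, not an actual hunk from the patch; error handling abbreviated):

	struct address_space *mapping = vma->vm_file->f_mapping;
	pte_t *ptep;

	/* taking i_mmap_rwsem in read mode blocks huge_pmd_unshare */
	i_mmap_lock_read(mapping);
	ptep = huge_pte_alloc(mm, haddr, huge_page_size(h));
	if (!ptep) {
		i_mmap_unlock_read(mapping);
		return VM_FAULT_OOM;
	}
	/* ... fault handling; ptep stays valid while the lock is held ... */
	i_mmap_unlock_read(mapping);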
>
>> @@ -3252,11 +3253,23 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
>>
>> for (addr = vma->vm_start; addr < vma->vm_end; addr += sz) {
>> spinlock_t *src_ptl, *dst_ptl;
>> +
>> src_pte = huge_pte_offset(src, addr, sz);
>> if (!src_pte)
>> continue;
>> +
>> + /*
>> + * i_mmap_rwsem must be held to call huge_pte_alloc.
>> + * Continue to hold until finished with dst_pte, otherwise
>> + * it could go away if part of a shared pmd.
>> + *
>> + * Technically, i_mmap_rwsem is only needed in the non-cow
>> + * case as cow mappings are not shared.
>> + */
>> + i_mmap_lock_read(mapping);
>
> Any reason you do lock/unlock on each iteration rather than around the
> whole loop?
I am simply mirroring the page table locking, but that is not necessary.
The page table lock can change while processing the range, whereas the
i_mmap_rwsem cannot. Therefore, we can hold it around the whole loop.
I will modify, test, and put out an updated patch later today.
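Roughly, the result would look like this (an untested sketch):

	struct address_space *mapping = vma->vm_file->f_mapping;

	/*
	 * Hold i_mmap_rwsem across the whole range walk; unlike the
	 * page table lock, it can not change mid-range.
	 */
	i_mmap_lock_read(mapping);
	for (addr = vma->vm_start; addr < vma->vm_end; addr += sz) {
		src_pte = huge_pte_offset(src, addr, sz);
		if (!src_pte)
			continue;
		dst_pte = huge_pte_alloc(dst, addr, sz);
		if (!dst_pte) {
			ret = -ENOMEM;
			break;
		}
		/* ... existing copy logic, with page table locking ... */
	}
	i_mmap_unlock_read(mapping);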
>> dst_pte = huge_pte_alloc(dst, addr, sz);
>> if (!dst_pte) {
>> + i_mmap_unlock_read(mapping);
>> ret = -ENOMEM;
>> break;
>> }
>
> ...
>
>> @@ -3772,14 +3789,18 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>> };
>>
>> /*
>> - * hugetlb_fault_mutex must be dropped before
>> - * handling userfault. Reacquire after handling
>> - * fault to make calling code simpler.
>> + * hugetlb_fault_mutex and i_mmap_rwsem must be
>> + * dropped before handling userfault. Reacquire
>> + * after handling fault to make calling code simpler.
>> */
>> hash = hugetlb_fault_mutex_hash(h, mm, vma, mapping,
>> idx, haddr);
>> mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>> + i_mmap_unlock_read(mapping);
>> +
>
> Do we have the order of hugetlb_fault_mutex vs. i_mmap_lock documented?
> It *looks* correct to me, but it's better to write it down somewhere.
> Maybe add it to the header of mm/rmap.c?
No, it is not documented. I don't think there is much, if any, documentation
for hugetlb_fault_mutex at all. I will add it to the lock documentation
in mm/rmap.c as you suggest.
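Something along these lines in the lock ordering comment at the top of
mm/rmap.c (draft wording; the final text may differ):

	 * hugetlbfs PageHuge() pages take locks in this order:
	 *   mapping->i_mmap_rwsem
	 *     hugetlb_fault_mutex (hugetlbfs specific page fault mutex)
	 *       page->flags PG_locked (lock_page)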
--
Mike Kravetz