Message-ID: <20181221100528.bkvddcqom7qaxwbe@kshutemo-mobl1>
Date:   Fri, 21 Dec 2018 13:05:28 +0300
From:   "Kirill A. Shutemov" <kirill@...temov.name>
To:     Mike Kravetz <mike.kravetz@...cle.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Michal Hocko <mhocko@...nel.org>,
        Hugh Dickins <hughd@...gle.com>,
        Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
        "Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        Davidlohr Bueso <dave@...olabs.net>,
        Prakash Sangappa <prakash.sangappa@...cle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        stable@...r.kernel.org
Subject: Re: [PATCH v2 1/2] hugetlbfs: use i_mmap_rwsem for more pmd sharing
 synchronization

On Tue, Dec 18, 2018 at 02:35:56PM -0800, Mike Kravetz wrote:
> While looking at BUGs associated with invalid huge page map counts,
> it was observed that a huge pte pointer could become 'invalid' and
> point to another task's page table.  Consider the following:
> 
> A task takes a page fault on a shared hugetlbfs file and calls
> huge_pte_alloc to get a ptep.  Suppose the returned ptep points to a
> shared pmd.
> 
> Now, another task truncates the hugetlbfs file.  As part of truncation,
> it unmaps everyone who has the file mapped.  If the range being
> truncated is covered by a shared pmd, huge_pmd_unshare will be called.
> For all but the last user of the shared pmd, huge_pmd_unshare will
> clear the pud pointing to the pmd.  If the task in the middle of the
> page fault is not the last user, the ptep returned by huge_pte_alloc
> now points to another task's page table or worse.  This leads to bad
> things such as incorrect page map/reference counts or invalid memory
> references.
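
To make the interleaving concrete (illustrative only, not part of the
patch):

   faulting task                        truncating task
   -------------                        ---------------
   huge_pte_alloc() returns ptep
   into a shared pmd page
                                        truncation unmaps the file;
                                        huge_pmd_unshare() clears the pud
                                        the ptep was derived from and
                                        drops a reference on the pmd page
   dereferences ptep, which may now
   point into a freed or reused
   page table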
> 
> To fix, expand the use of i_mmap_rwsem as follows:
> - i_mmap_rwsem is held in read mode whenever huge_pmd_share is called.
>   huge_pmd_share is only called via huge_pte_alloc, so callers of
>   huge_pte_alloc take i_mmap_rwsem before calling.  In addition, callers
>   of huge_pte_alloc continue to hold the semaphore until finished with
>   the ptep.
> - i_mmap_rwsem is held in write mode whenever huge_pmd_unshare is called.
> 
> Cc: <stable@...r.kernel.org>
> Fixes: 39dde65c9940 ("shared page table for hugetlb page")
> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>

Other than the few questions below, the patch looks reasonable to me.
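
So the rule for fault-path callers boils down to something like this
(a condensed sketch, not a hunk from the patch; 'mapping', 'haddr' and
'h' named as in hugetlb_fault()):

	i_mmap_lock_read(mapping);	/* blocks huge_pmd_unshare() */
	ptep = huge_pte_alloc(mm, haddr, huge_page_size(h));
	if (ptep) {
		/* ptep is stable here: the shared pmd cannot go away */
		...
	}
	i_mmap_unlock_read(mapping);	/* ptep must not be used after this */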

> @@ -3252,11 +3253,23 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
>  
>  	for (addr = vma->vm_start; addr < vma->vm_end; addr += sz) {
>  		spinlock_t *src_ptl, *dst_ptl;
> +
>  		src_pte = huge_pte_offset(src, addr, sz);
>  		if (!src_pte)
>  			continue;
> +
> +		/*
> +		 * i_mmap_rwsem must be held to call huge_pte_alloc.
> +		 * Continue to hold until finished with dst_pte, otherwise
> +		 * it could go away if part of a shared pmd.
> +		 *
> +		 * Technically, i_mmap_rwsem is only needed in the non-cow
> +		 * case as cow mappings are not shared.
> +		 */
> +		i_mmap_lock_read(mapping);

Any reason you do lock/unlock on each iteration rather than around the
whole loop?
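
i.e. something like this (untested, just to illustrate the question):

	i_mmap_lock_read(mapping);
	for (addr = vma->vm_start; addr < vma->vm_end; addr += sz) {
		spinlock_t *src_ptl, *dst_ptl;

		src_pte = huge_pte_offset(src, addr, sz);
		if (!src_pte)
			continue;

		dst_pte = huge_pte_alloc(dst, addr, sz);
		if (!dst_pte) {
			ret = -ENOMEM;
			break;
		}
		...
	}
	i_mmap_unlock_read(mapping);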

>  		dst_pte = huge_pte_alloc(dst, addr, sz);
>  		if (!dst_pte) {
> +			i_mmap_unlock_read(mapping);
>  			ret = -ENOMEM;
>  			break;
>  		}

...

> @@ -3772,14 +3789,18 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>  			};
>  
>  			/*
> -			 * hugetlb_fault_mutex must be dropped before
> -			 * handling userfault.  Reacquire after handling
> -			 * fault to make calling code simpler.
> +			 * hugetlb_fault_mutex and i_mmap_rwsem must be
> +			 * dropped before handling userfault.  Reacquire
> +			 * after handling fault to make calling code simpler.
>  			 */
>  			hash = hugetlb_fault_mutex_hash(h, mm, vma, mapping,
>  							idx, haddr);
>  			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
> +			i_mmap_unlock_read(mapping);
> +

Do we have the order of hugetlb_fault_mutex vs. i_mmap_lock documented
anywhere? It *looks* correct to me, but it's better to write it down
somewhere. Maybe add it to the header of mm/rmap.c?
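
Something along these lines, for example (just a sketch of where the
hugetlbfs locks would slot into the existing lock ordering comment
there):

 * hugetlbfs PageHuge() pages take locks in this order:
 *   mapping->i_mmap_rwsem
 *     hugetlb_fault_mutex (hugetlbfs specific page fault mutex)
 *       page->flags PG_locked (lock_page)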

>  			ret = handle_userfault(&vmf, VM_UFFD_MISSING);
> +
> +			i_mmap_lock_read(mapping);
>  			mutex_lock(&hugetlb_fault_mutex_table[hash]);
>  			goto out;
>  		}

-- 
 Kirill A. Shutemov
