Date:   Mon, 24 Oct 2022 09:54:30 -0700
From:   Ira Weiny <ira.weiny@...el.com>
To:     Andrew Morton <akpm@...ux-foundation.org>
CC:     Randy Dunlap <rdunlap@...radead.org>, Peter Xu <peterx@...hat.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Matthew Wilcox <willy@...radead.org>,
        kernel test robot <yujie.liu@...el.com>,
        <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH for rc] mm/shmem: Ensure proper fallback if page faults

On Sun, Oct 23, 2022 at 09:33:05PM -0700, Ira wrote:
> From: Ira Weiny <ira.weiny@...el.com>
> 
> The kernel test robot flagged a recursive lock as a result of a
> conversion from kmap_atomic() to kmap_local_folio()[Link]
> 
> The cause was due to the code depending on the kmap_atomic() side effect
> of disabling page faults.  In that case the code expects the fault to
> fail and take the fallback case.
> 
> git archaeology implied that the recursion may not be an actual bug.[1]
> However, the mmap_lock needed in the fault may be the one held.[2]
> 
> Add an explicit pagefault_disable() and a big comment to explain this
> for future souls looking at this code.
> 
> [1] https://lore.kernel.org/all/Y1MymJ%2FINb45AdaY@iweiny-desk3/
> [2] https://lore.kernel.org/all/Y1M2p9OtBGnKwGUE@x1n/
> 
> Fixes: 7a7256d5f512 ("shmem: convert shmem_mfill_atomic_pte() to use a folio")
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Randy Dunlap <rdunlap@...radead.org>
> Cc: Peter Xu <peterx@...hat.com>
> Cc: Andrea Arcangeli <aarcange@...hat.com>
> Reported-by: Matthew Wilcox (Oracle) <willy@...radead.org>
> Reported-by: kernel test robot <yujie.liu@...el.com>
> Link: https://lore.kernel.org/r/202210211215.9dc6efb5-yujie.liu@intel.com
> Signed-off-by: Ira Weiny <ira.weiny@...el.com>
> 
> ---
> Thanks to Matt and Andrew for initial diagnosis.
> Thanks to Randy for pointing out C code needs ';'  :-D
> Thanks to Andrew for suggesting an elaborate comment
> Thanks to Peter for pointing out that the mm's may be the same.
> ---
>  mm/shmem.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 8280a5cb48df..c1bca31cd485 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2424,9 +2424,16 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
>  
>  		if (!zeropage) {	/* COPY */
>  			page_kaddr = kmap_local_folio(folio, 0);
> +			/*
> +			 * The mmap_lock is held here.  Disable page faults to
> +			 * prevent deadlock should copy_from_user() fault.  The
> +			 * copy will be retried outside the mmap_lock.
> +			 */

Offline, Dave Hansen and I were discussing this, and he was concerned that the
comment implies a deadlock would always occur rather than might occur.

I was not clear on this, as I was assuming the read side of the mmap_lock was
non-recursive.

So I think we have 3 cases, only 1 of which will actually deadlock, and that
one is, as Dave puts it, currently theoretical.

	1) Different mm's are in play (no issue)
	2) Readlock implementation is recursive and same mm is in play (no issue)
	3) Readlock implementation is _not_ recursive (issue)
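
As I read it, case 3 is the call chain the lockdep report is pointing at;
roughly (simplified, my sketch of the paths involved):

	mmap_read_lock(dst_mm)                   <- held by the caller
	shmem_mfill_atomic_pte()
	  copy_from_user(page_kaddr, src_addr)   <- src_addr not populated
	    page fault on src_addr
	      mmap_read_lock(dst_mm)             <- same mm; if the rwsem
	                                            read side is not
	                                            recursive: deadlock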

In cases 1 and 2, lockdep is incorrectly flagging an issue, but 3 is a real
problem, and I think this is what Andrea was thinking about.

Is that the case?

If so the above comment is incorrectly worded and I should update it.

Ira

> +			pagefault_disable();
>  			ret = copy_from_user(page_kaddr,
>  					     (const void __user *)src_addr,
>  					     PAGE_SIZE);
> +			pagefault_enable();
>  			kunmap_local(page_kaddr);
>  
>  			/* fallback to copy_from_user outside mmap_lock */
> -- 
> 2.37.2
> 
