Date:   Mon, 11 Jan 2021 17:26:20 +0300
From:   "Kirill A. Shutemov" <kirill@...temov.name>
To:     Will Deacon <will@...nel.org>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        linux-arm-kernel@...ts.infradead.org,
        Catalin Marinas <catalin.marinas@....com>,
        Jan Kara <jack@...e.cz>, Minchan Kim <minchan@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Vinayak Menon <vinmenon@...eaurora.org>,
        Hugh Dickins <hughd@...gle.com>, kernel-team@...roid.com
Subject: Re: [PATCH v2 1/3] mm: Cleanup faultaround and finish_fault()
 codepaths

On Fri, Jan 08, 2021 at 05:15:15PM +0000, Will Deacon wrote:
> From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
> 
> alloc_set_pte() has two users with different requirements: in the
> faultaround code, it is called from an atomic context and the PTE page
> table has to be preallocated. finish_fault() can sleep and allocate the
> page table as needed.
> 
> The PTL locking rules are also strange, hard to follow, and overkill
> for finish_fault().
> 
> Let's untangle the mess. alloc_set_pte() is now gone. All locking is
> explicit.
> 
> The price is some code duplication to handle huge pages in the
> faultaround path, but that should be fine given the overall improvement
> in readability.
> 
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> Signed-off-by: Will Deacon <will@...nel.org>
> ---
>  fs/xfs/xfs_file.c       |   6 +-
>  include/linux/mm.h      |  12 ++-
>  include/linux/pgtable.h |  11 +++
>  mm/filemap.c            | 177 ++++++++++++++++++++++++++---------
>  mm/memory.c             | 199 ++++++++++++----------------------------
>  5 files changed, 213 insertions(+), 192 deletions(-)
> 
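
To spell out the constraint described in the commit message: the
faultaround path runs under the PTE lock in an atomic context and must
not sleep, so the page table has to exist up front (vmf->prealloc_pte),
while finish_fault() is free to allocate it on demand. A rough,
illustrative-only sketch of the atomic side, with made-up names that are
not part of this patch (rmap, counters and accounting omitted):

static vm_fault_t map_page_atomic(struct vm_fault *vmf, struct page *page,
				  unsigned long addr)
{
	struct mm_struct *mm = vmf->vma->vm_mm;
	spinlock_t *ptl;
	pte_t *pte;

	/* The page table was preallocated into vmf->prealloc_pte earlier. */
	pte = pte_offset_map_lock(mm, vmf->pmd, addr, &ptl);
	if (!pte_none(*pte)) {
		/* Somebody else populated the PTE; back off. */
		pte_unmap_unlock(pte, ptl);
		return VM_FAULT_NOPAGE;
	}
	/* No sleeping here: the PTL is held and the caller is atomic. */
	set_pte_at(mm, addr, pte, mk_pte(page, vmf->vma->vm_page_prot));
	pte_unmap_unlock(pte, ptl);
	return 0;
}
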
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index 5b0f93f73837..111fe73bb8a7 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -1319,17 +1319,19 @@ xfs_filemap_pfn_mkwrite(
>  	return __xfs_filemap_fault(vmf, PE_SIZE_PTE, true);
>  }
>  
> -static void
> +static vm_fault_t
>  xfs_filemap_map_pages(
>  	struct vm_fault		*vmf,
>  	pgoff_t			start_pgoff,
>  	pgoff_t			end_pgoff)
>  {
>  	struct inode		*inode = file_inode(vmf->vma->vm_file);
> +	vm_fault_t ret;
>  
>  	xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
> -	filemap_map_pages(vmf, start_pgoff, end_pgoff);
> +	ret = filemap_map_pages(vmf, start_pgoff, end_pgoff);
>  	xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
> +	return ret;
>  }
>  
>  static const struct vm_operations_struct xfs_file_vm_ops = {
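
As a comparison point (hypothetical example, not part of this patch): a
filesystem that needs no extra locking around fault-around can keep
pointing ->map_pages at the generic helper directly, and only wrappers
like the XFS one above have to propagate the new vm_fault_t return
value:

static const struct vm_operations_struct example_file_vm_ops = {
	.fault		= filemap_fault,
	.map_pages	= filemap_map_pages,	/* now returns vm_fault_t */
	.page_mkwrite	= filemap_page_mkwrite,
};
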
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ecdf8a8cd6ae..801dd99f733c 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -542,8 +542,8 @@ struct vm_fault {
>  					 * is not NULL, otherwise pmd.
>  					 */
>  	pgtable_t prealloc_pte;		/* Pre-allocated pte page table.
> -					 * vm_ops->map_pages() calls
> -					 * alloc_set_pte() from atomic context.
> +					 * vm_ops->map_pages() sets up a page
> +					 * table from from atomic context.
>  					 * do_fault_around() pre-allocates
>  					 * page table to avoid allocation from
>  					 * atomic context.

Here is a fixup for the typo Matthew pointed out:

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d3d4e307fa09..358fc8616d8b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -535,7 +535,7 @@ struct vm_fault {
 					 */
 	pgtable_t prealloc_pte;		/* Pre-allocated pte page table.
 					 * vm_ops->map_pages() sets up a page
-					 * table from from atomic context.
+					 * table from atomic context.
 					 * do_fault_around() pre-allocates
 					 * page table to avoid allocation from
 					 * atomic context.
-- 
 Kirill A. Shutemov
