Message-ID: <mhng-c8581870-6152-43a6-9d9f-28a9cc5ce39e@palmerdabbelt-glaptop1>
Date:   Thu, 18 Jun 2020 18:53:23 -0700 (PDT)
From:   Palmer Dabbelt <palmer@...belt.com>
To:     Atish Patra <Atish.Patra@....com>,
        Will Deacon <willdeacon@...gle.com>
CC:     linux-kernel@...r.kernel.org, Atish Patra <Atish.Patra@....com>,
        aou@...s.berkeley.edu, akpm@...ux-foundation.org,
        daniel.m.jordan@...cle.com, linux-riscv@...ts.infradead.org,
        walken@...gle.com, rppt@...ux.ibm.com,
        Paul Walmsley <paul.walmsley@...ive.com>, zong.li@...ive.com
Subject:     Re: [PATCH] RISC-V: Acquire mmap lock before invoking walk_page_range

On Wed, 17 Jun 2020 13:37:32 PDT (-0700), Atish Patra wrote:
> As per the walk_page_range documentation, the mmap lock should be acquired
> by the caller before invoking walk_page_range; mmap_assert_locked is
> triggered otherwise. The details can be found here:
>
> http://lists.infradead.org/pipermail/linux-riscv/2020-June/010335.html
>
> Fixes: 395a21ff859c ("riscv: add ARCH_HAS_SET_DIRECT_MAP support")
> Signed-off-by: Atish Patra <atish.patra@....com>
> ---
>  arch/riscv/mm/pageattr.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
> index ec2c70f84994..289a9a5ea5b5 100644
> --- a/arch/riscv/mm/pageattr.c
> +++ b/arch/riscv/mm/pageattr.c
> @@ -151,6 +151,7 @@ int set_memory_nx(unsigned long addr, int numpages)
>
>  int set_direct_map_invalid_noflush(struct page *page)
>  {
> +	int ret;
>  	unsigned long start = (unsigned long)page_address(page);
>  	unsigned long end = start + PAGE_SIZE;
>  	struct pageattr_masks masks = {
> @@ -158,11 +159,16 @@ int set_direct_map_invalid_noflush(struct page *page)
>  		.clear_mask = __pgprot(_PAGE_PRESENT)
>  	};
>
> -	return walk_page_range(&init_mm, start, end, &pageattr_ops, &masks);
> +	mmap_read_lock(&init_mm);
> +	ret = walk_page_range(&init_mm, start, end, &pageattr_ops, &masks);
> +	mmap_read_unlock(&init_mm);
> +
> +	return ret;
>  }
>
>  int set_direct_map_default_noflush(struct page *page)
>  {
> +	int ret;
>  	unsigned long start = (unsigned long)page_address(page);
>  	unsigned long end = start + PAGE_SIZE;
>  	struct pageattr_masks masks = {
> @@ -170,7 +176,11 @@ int set_direct_map_default_noflush(struct page *page)
>  		.clear_mask = __pgprot(0)
>  	};
>
> -	return walk_page_range(&init_mm, start, end, &pageattr_ops, &masks);
> +	mmap_read_lock(&init_mm);
> +	ret = walk_page_range(&init_mm, start, end, &pageattr_ops, &masks);
> +	mmap_read_unlock(&init_mm);
> +
> +	return ret;
>  }
>
>  void __kernel_map_pages(struct page *page, int numpages, int enable)

+Will, who pointed out that we could avoid the lock entirely by using
apply_to_page_range() (a rough sketch of what that might look like is below).
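
For reference, here's a rough, untested sketch of that alternative for the
invalid case, assuming pte_fn_t's current (pte_t *, unsigned long, void *)
callback signature; set_pte_nopresent() is a made-up helper name, and this
only handles a PTE-mapped direct map (the pageattr_ops walker also has
pud/pmd hooks for huge mappings):

    #include <linux/mm.h>
    #include <asm/pgtable.h>

    /* Clear _PAGE_PRESENT on one direct-map PTE. */
    static int set_pte_nopresent(pte_t *ptep, unsigned long addr, void *data)
    {
    	pte_t pte = READ_ONCE(*ptep);

    	set_pte(ptep, __pte(pte_val(pte) & ~_PAGE_PRESENT));
    	return 0;
    }

    int set_direct_map_invalid_noflush(struct page *page)
    {
    	unsigned long start = (unsigned long)page_address(page);

    	/*
    	 * apply_to_page_range() does its own page-table locking, so
    	 * no mmap_read_lock(&init_mm) is needed here.
    	 */
    	return apply_to_page_range(&init_mm, start, PAGE_SIZE,
    				   set_pte_nopresent, NULL);
    }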

Given that the bug doesn't reproduce for me, that we don't otherwise use
apply_to_page_range(), and that the commit being fixed is somewhat suspect
(I screwed up that PR, and the original patch mentions avoiding caching
invalid states), I'm going to take this as-is and add the
apply_to_page_range() conversion to the list of things to look at.

I've put this on fixes: the documentation for walk_page_range() directly
says the caller must take the lock, and I don't want to hold up a fix for a
boot issue over pedantic review concerns, even if it's an issue that doesn't
reproduce for me.
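
For anyone wondering why this fires even without lockdep: walk_page_range()
calls mmap_assert_locked() on entry, which (abridged from memory, as of the
current -rc tree; check include/linux/mmap_lock.h) also has a VM_BUG_ON_MM
behind CONFIG_DEBUG_VM, so the splat doesn't depend on CONFIG_LOCKDEP:

    /* include/linux/mmap_lock.h (abridged) */
    static inline void mmap_assert_locked(struct mm_struct *mm)
    {
    	lockdep_assert_held(&mm->mmap_lock);
    	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
    }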

Thanks!
