Message-ID: <Yedgj+Lo2eru8197@casper.infradead.org>
Date: Wed, 19 Jan 2022 00:51:27 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Yury Norov <yury.norov@...il.com>
Cc: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Nicholas Piggin <npiggin@...il.com>,
Ding Tianhong <dingtianhong@...wei.com>,
Anshuman Khandual <anshuman.khandual@....com>,
Alexey Klimov <aklimov@...hat.com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Uladzislau Rezki <urezki@...il.com>
Subject: Re: [PATCH] vmap(): don't allow invalid pages
On Tue, Jan 18, 2022 at 03:52:44PM -0800, Yury Norov wrote:
> vmap() takes struct page *pages as one of arguments, and user may provide
> an invalid pointer which would lead to DABT at address translation later.
Could we spell out 'DABT'? Presumably that's an ARM-specific thing.
Just like we don't say #PF for Intel page faults, I think this is
probably a 'data abort'?
> Currently, kernel checks the pages against NULL. In my case, however, the
> address was not NULL, and was big enough so that the hardware generated
> Address Size Abort on arm64.
>
> Interestingly, this abort happens even if copy_from_kernel_nofault() is
> used, which is quite inconvenient for debugging purposes.
>
> This patch adds a pfn_valid() check into vmap() path, so that invalid
> mapping will not be created.
>
> RFC: https://lkml.org/lkml/2022/1/18/815
> v1: use pfn_valid() instead of adding an arch-specific
> arch_vmap_page_valid(). Thanks to Matthew Wilcox for the hint.
>
> Signed-off-by: Yury Norov <yury.norov@...il.com>
Suggested-by: Matthew Wilcox (Oracle) <willy@...radead.org>
> ---
> mm/vmalloc.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index d2a00ad4e1dd..a4134ee56b10 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -477,6 +477,8 @@ static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
> return -EBUSY;
> if (WARN_ON(!page))
> return -ENOMEM;
> + if (WARN_ON(!pfn_valid(page_to_pfn(page))))
> + return -EINVAL;
> set_pte_at(&init_mm, addr, pte, mk_pte(page, prot));
> (*nr)++;
> } while (pte++, addr += PAGE_SIZE, addr != end);
> --
> 2.30.2
>