Message-ID: <87lekwmjg1.fsf@all.your.base.are.belong.to.us>
Date: Fri, 17 Feb 2023 15:54:06 +0100
From: Björn Töpel <bjorn@...nel.org>
To: Alexandre Ghiti <alexghiti@...osinc.com>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>,
Andrey Ryabinin <ryabinin.a.a@...il.com>,
Alexander Potapenko <glider@...gle.com>,
Andrey Konovalov <andreyknvl@...il.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Vincenzo Frascino <vincenzo.frascino@....com>,
Ard Biesheuvel <ardb@...nel.org>,
Conor Dooley <conor@...nel.org>,
linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org,
kasan-dev@...glegroups.com, linux-efi@...r.kernel.org
Cc: Alexandre Ghiti <alexghiti@...osinc.com>
Subject: Re: [PATCH v4 2/6] riscv: Rework kasan population functions
Alexandre Ghiti <alexghiti@...osinc.com> writes:
> Our previous kasan population implementation used to have the final kasan
> shadow region mapped with kasan_early_shadow_page: we did not clean up the
> early mapping, so we had to populate the kasan region "in-place", which
> made the code cumbersome.
>
> Now we clear the early mapping and establish a temporary mapping while we
> populate the kasan shadow region with just the kernel regions that will
> be used.
>
> This new version uses the "generic" way of going through a page table
> that may be folded at runtime (avoiding the XXX_next macros).
>
> It was tested successfully with outline instrumentation on an Ubuntu
> kernel configuration.
>
> Signed-off-by: Alexandre Ghiti <alexghiti@...osinc.com>
(One minor nit that can be addressed later.)
Reviewed-by: Björn Töpel <bjorn@...osinc.com>
> arch/riscv/mm/kasan_init.c | 361 +++++++++++++++++++------------------
> 1 file changed, 183 insertions(+), 178 deletions(-)
> @@ -482,7 +437,37 @@ static void __init kasan_shallow_populate(void *start, void *end)
> unsigned long vend = PAGE_ALIGN((unsigned long)end);
>
> kasan_shallow_populate_pgd(vaddr, vend);
> - local_flush_tlb_all();
> +}
> +
> +static void create_tmp_mapping(void)
> +{
> + void *ptr;
> + p4d_t *base_p4d;
> +
> + /*
> + * We need to clean the early mapping: this is hard to achieve "in-place",
> + * so install a temporary mapping like arm64 and x86 do.
> + */
> + memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(pgd_t) * PTRS_PER_PGD);
> +
> + /* Copy the last p4d since it is shared with the kernel mapping. */
> + if (pgtable_l5_enabled) {
> + ptr = (p4d_t *)pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_END));
> + memcpy(tmp_p4d, ptr, sizeof(p4d_t) * PTRS_PER_P4D);
> + set_pgd(&tmp_pg_dir[pgd_index(KASAN_SHADOW_END)],
> + pfn_pgd(PFN_DOWN(__pa(tmp_p4d)), PAGE_TABLE));
> + base_p4d = tmp_p4d;
> + } else {
> + base_p4d = (p4d_t *)tmp_pg_dir;
> + }
> +
> + /* Copy the last pud since it is shared with the kernel mapping. */
> + if (pgtable_l4_enabled) {
> + ptr = (pud_t *)p4d_page_vaddr(*(base_p4d + p4d_index(KASAN_SHADOW_END)));
> + memcpy(tmp_pud, ptr, sizeof(pud_t) * PTRS_PER_PUD);
> + set_p4d(&base_p4d[p4d_index(KASAN_SHADOW_END)],
> + pfn_p4d(PFN_DOWN(__pa(tmp_pud)), PAGE_TABLE));
> + }
> }
>
> void __init kasan_init(void)
> @@ -490,10 +475,27 @@ void __init kasan_init(void)
> phys_addr_t p_start, p_end;
> u64 i;
>
> - if (IS_ENABLED(CONFIG_KASAN_VMALLOC))
> + create_tmp_mapping();
> + csr_write(CSR_SATP, PFN_DOWN(__pa(tmp_pg_dir)) | satp_mode);
Nit: Maybe add a comment on why the sfence.vma is *not* required here. I
tripped over it.
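For what it's worth, a comment along these lines might do. The wording and
the rationale below are my own assumption (not stated in the patch): since
create_tmp_mapping() fills tmp_pg_dir, tmp_p4d and tmp_pud with verbatim
copies of the live tables, every translation cached in the TLB is still
valid at the moment of the satp switch, so no flush is needed yet.

```c
	create_tmp_mapping();
	/*
	 * No sfence.vma needed here: tmp_pg_dir (and the copied
	 * p4d/pud levels) translate identically to swapper_pg_dir at
	 * this point, so any TLB entry cached before the satp switch
	 * is still correct under the temporary page table.
	 */
	csr_write(CSR_SATP, PFN_DOWN(__pa(tmp_pg_dir)) | satp_mode);
```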
Björn