Message-ID: <87r1gvj45t.fsf@dja-thinkpad.axtens.net>
Date: Mon, 21 Jun 2021 13:19:10 +1000
From: Daniel Axtens <dja@...ens.net>
To: "Christopher M. Riedl" <cmr@...ux.ibm.com>,
linuxppc-dev@...ts.ozlabs.org
Cc: tglx@...utronix.de, x86@...nel.org,
linux-hardening@...r.kernel.org, keescook@...omium.org
Subject: Re: [RESEND PATCH v4 08/11] powerpc: Initialize and use a temporary mm for patching
Hi Chris,
> + /*
> + * Choose a randomized, page-aligned address from the range:
> + * [PAGE_SIZE, DEFAULT_MAP_WINDOW - PAGE_SIZE]
> + * The lower address bound is PAGE_SIZE to avoid the zero-page.
> + * The upper address bound is DEFAULT_MAP_WINDOW - PAGE_SIZE to stay
> + * under DEFAULT_MAP_WINDOW with the Book3s64 Hash MMU.
> + */
> + patching_addr = PAGE_SIZE + ((get_random_long() & PAGE_MASK)
> + % (DEFAULT_MAP_WINDOW - 2 * PAGE_SIZE));
I checked, and poking_init() comes after the functions that init the RNG,
so this should be fine. The maths, while a bit fiddly to reason about,
does check out.
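For the record, the way I convinced myself: (get_random_long() & PAGE_MASK)
is page-aligned, the divisor is a multiple of PAGE_SIZE, so the remainder is
page-aligned and at most DEFAULT_MAP_WINDOW - 3 * PAGE_SIZE, and adding
PAGE_SIZE keeps you inside [PAGE_SIZE, DEFAULT_MAP_WINDOW - PAGE_SIZE]. A
throwaway userspace sketch along these lines also agrees (the PAGE_SIZE and
DEFAULT_MAP_WINDOW values are made-up stand-ins, not the real Book3s64
definitions):

	#include <stdio.h>
	#include <stdlib.h>

	/* Illustrative values only -- not the real Book3s64 definitions. */
	#define PAGE_SIZE		0x10000UL
	#define PAGE_MASK		(~(PAGE_SIZE - 1))
	#define DEFAULT_MAP_WINDOW	(1UL << 47)

	int main(void)
	{
		unsigned long r, addr;
		int i;

		for (i = 0; i < 1000000; i++) {
			/* Cobble together ~62 random bits; enough for a sketch. */
			r = ((unsigned long)random() << 31) | (unsigned long)random();
			addr = PAGE_SIZE + ((r & PAGE_MASK) %
					    (DEFAULT_MAP_WINDOW - 2 * PAGE_SIZE));
			if (addr < PAGE_SIZE ||
			    addr > DEFAULT_MAP_WINDOW - PAGE_SIZE ||
			    (addr & ~PAGE_MASK)) {
				printf("out of range: %#lx\n", addr);
				return 1;
			}
		}
		printf("all addresses page-aligned and in bounds\n");
		return 0;
	}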
> +
> + /*
> + * PTE allocation uses GFP_KERNEL which means we need to pre-allocate
> + * the PTE here. We cannot do the allocation during patching with IRQs
> + * disabled (ie. "atomic" context).
> + */
> + ptep = get_locked_pte(patching_mm, patching_addr, &ptl);
> + BUG_ON(!ptep);
> + pte_unmap_unlock(ptep, ptl);
> +}
>
> #if IS_BUILTIN(CONFIG_LKDTM)
> unsigned long read_cpu_patching_addr(unsigned int cpu)
> {
> - return (unsigned long)(per_cpu(text_poke_area, cpu))->addr;
> + return patching_addr;
> }
> #endif
>
> -static int text_area_cpu_up(unsigned int cpu)
> +struct patch_mapping {
> + spinlock_t *ptl; /* for protecting pte table */
> + pte_t *ptep;
> + struct temp_mm temp_mm;
> +};
> +
> +#ifdef CONFIG_PPC_BOOK3S_64
> +
> +static inline int hash_prefault_mapping(pgprot_t pgprot)
> {
> - struct vm_struct *area;
> + int err;
>
> - area = get_vm_area(PAGE_SIZE, VM_ALLOC);
> - if (!area) {
> - WARN_ONCE(1, "Failed to create text area for cpu %d\n",
> - cpu);
> - return -1;
> - }
> - this_cpu_write(text_poke_area, area);
> + if (radix_enabled())
> + return 0;
>
> - return 0;
> -}
> + err = slb_allocate_user(patching_mm, patching_addr);
> + if (err)
> + pr_warn("map patch: failed to allocate slb entry\n");
>
Here if slb_allocate_user() fails, you'll print a warning and then fall
through to the rest of the function. You do return err, but there's a
later call to hash_page_mm() that also sets err. Can slb_allocate_user()
fail while hash_page_mm() succeeds, and would that be a problem?
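If that combination would be a problem, an early return might make the two
failure paths easier to follow. Purely an untested sketch of what I mean:

	err = slb_allocate_user(patching_mm, patching_addr);
	if (err) {
		pr_warn("map patch: failed to allocate slb entry\n");
		return err;
	}

	err = hash_page_mm(patching_mm, patching_addr, pgprot_val(pgprot), 0,
			   HPTE_USE_KERNEL_KEY);
	if (err)
		pr_warn("map patch: failed to insert hashed page\n");

	/* See comment in switch_slb() in mm/book3s64/slb.c */
	isync();

	return err;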
> -static int text_area_cpu_down(unsigned int cpu)
> -{
> - free_vm_area(this_cpu_read(text_poke_area));
> - return 0;
> + err = hash_page_mm(patching_mm, patching_addr, pgprot_val(pgprot), 0,
> + HPTE_USE_KERNEL_KEY);
> + if (err)
> + pr_warn("map patch: failed to insert hashed page\n");
> +
> + /* See comment in switch_slb() in mm/book3s64/slb.c */
> + isync();
> +
The comment reads:
	/*
	 * Synchronize slbmte preloads with possible subsequent user memory
	 * address accesses by the kernel (user mode won't happen until
	 * rfid, which is safe).
	 */
	isync();
I have to say, having read the description of isync, I'm not 100% sure why
that's enough (don't we also need stores to complete?), but I'm happy to
take commit 5434ae74629a ("powerpc/64s/hash: Add a SLB preload cache")
on trust here!
I think it does make sense for you to have that barrier here: you are
potentially about to start poking at the memory mapped through that SLB
entry, so you should make sure you're fully synchronised.
> + return err;
> }
>
> + init_temp_mm(&patch_mapping->temp_mm, patching_mm);
> + use_temporary_mm(&patch_mapping->temp_mm);
>
> - pmdp = pmd_offset(pudp, addr);
> - if (unlikely(!pmdp))
> - return -EINVAL;
> + /*
> + * On Book3s64 with the Hash MMU we have to manually insert the SLB
> + * entry and HPTE to prevent taking faults on the patching_addr later.
> + */
> + return(hash_prefault_mapping(pgprot));
hmm, `return hash_prefault_mapping(pgprot);` or
`return (hash_prefault_mapping(pgprot));` maybe?
Kind regards,
Daniel