Message-ID: <20170918205609.hntcd3nfaq2gjj64@docker>
Date: Mon, 18 Sep 2017 14:56:09 -0600
From: Tycho Andersen <tycho@...ker.com>
To: Mark Rutland <mark.rutland@....com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
kernel-hardening@...ts.openwall.com,
Marco Benatto <marco.antonio.780@...il.com>,
Juerg Haefliger <juerg.haefliger@...onical.com>,
linux-arm-kernel@...ts.infradead.org, x86@...nel.org
Subject: Re: [kernel-hardening] [PATCH v6 10/11] mm: add a user_virt_to_phys
symbol
Hi Mark,
On Thu, Sep 14, 2017 at 07:34:02PM +0100, Mark Rutland wrote:
> On Thu, Sep 07, 2017 at 11:36:08AM -0600, Tycho Andersen wrote:
> > We need something like this for testing XPFO. Since it's architecture
> > specific, putting it in the test code is slightly awkward, so let's make it
> > an arch-specific symbol and export it for use in LKDTM.
> >
> > v6: * add a definition of user_virt_to_phys in the !CONFIG_XPFO case
> >
> > CC: linux-arm-kernel@...ts.infradead.org
> > CC: x86@...nel.org
> > Signed-off-by: Tycho Andersen <tycho@...ker.com>
> > Tested-by: Marco Benatto <marco.antonio.780@...il.com>
> > ---
> > arch/arm64/mm/xpfo.c | 51 ++++++++++++++++++++++++++++++++++++++++++++++
> > arch/x86/mm/xpfo.c | 57 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> > include/linux/xpfo.h | 5 +++++
> > 3 files changed, 113 insertions(+)
> >
> > diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
> > index 342a9ccb93c1..94a667d94e15 100644
> > --- a/arch/arm64/mm/xpfo.c
> > +++ b/arch/arm64/mm/xpfo.c
> > @@ -74,3 +74,54 @@ void xpfo_dma_map_unmap_area(bool map, const void *addr, size_t size,
> >
> > xpfo_temp_unmap(addr, size, mapping, sizeof(mapping[0]) * num_pages);
> > }
> > +
> > +/* Convert a user space virtual address to a physical address.
> > + * Shamelessly copied from slow_virt_to_phys() and lookup_address() in
> > + * arch/x86/mm/pageattr.c
> > + */
>
> When can this be called? What prevents concurrent modification of the user page
> tables?
>
> i.e. must mmap_sem be held?
Yes, it should be. Since we're moving this back into the lkdtm test
code I think it's less important, as nothing should be modifying the
tables while the thread is doing the lookup, but I'll add it in the
next version.
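For the lkdtm side I'm thinking of something like this (just a sketch,
the surrounding test code is elided and the variable names are
placeholders):
	/* hold mmap_sem across the walk so the tables can't change under us */
	down_read(&current->mm->mmap_sem);
	phys_addr = user_virt_to_phys(user_addr);
	up_read(&current->mm->mmap_sem);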
> > +phys_addr_t user_virt_to_phys(unsigned long addr)
>
> Does this really need to be architecture specific?
>
> Core mm code manages to walk user page tables just fine...
I think so: we don't support section mappings right now, so p*d_sect()
will always be false.
> > +{
> > + phys_addr_t phys_addr;
> > + unsigned long offset;
> > + pgd_t *pgd;
> > + p4d_t *p4d;
> > + pud_t *pud;
> > + pmd_t *pmd;
> > + pte_t *pte;
> > +
> > + pgd = pgd_offset(current->mm, addr);
> > + if (pgd_none(*pgd))
> > + return 0;
>
> Can we please separate the address and return value? e.g. pass the PA by
> reference and return an error code.
>
> AFAIK, zero is a valid PA, and even if the tables exist, they might point there
> in the presence of an error.
Sure, I'll rearrange this.
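Roughly like this (sketch only, the exact error code is up for debate):
	/* fill *phys_addr on success, return a negative errno on failure */
	int user_virt_to_phys(unsigned long addr, phys_addr_t *phys_addr)
	{
		pgd_t *pgd = pgd_offset(current->mm, addr);

		if (pgd_none(*pgd))
			return -EINVAL;
		/* ... walk the remaining levels the same way ... */
	}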
> > +
> > + p4d = p4d_offset(pgd, addr);
> > + if (p4d_none(*p4d))
> > + return 0;
> > +
> > + pud = pud_offset(p4d, addr);
> > + if (pud_none(*pud))
> > + return 0;
> > +
> > + if (pud_sect(*pud) || !pud_present(*pud)) {
> > + phys_addr = (unsigned long)pud_pfn(*pud) << PAGE_SHIFT;
>
> Was there some problem with:
>
> phys_addr = pmd_page_paddr(*pud);
>
> ... and similar for the other levels?
>
> ... I'd rather introduce new helpers than more open-coded calculations.
I wasn't aware of these; we could define a similar set of functions
for x86 and then make it not arch-specific.
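On the x86 side that could be something like the below (completely
untested, the names just mirror the arm64 helpers):
	/* hypothetical x86 counterparts of arm64's p*d_page_paddr() helpers */
	static inline phys_addr_t pmd_page_paddr(pmd_t pmd)
	{
		return (phys_addr_t)pmd_pfn(pmd) << PAGE_SHIFT;
	}
	static inline phys_addr_t pud_page_paddr(pud_t pud)
	{
		return (phys_addr_t)pud_pfn(pud) << PAGE_SHIFT;
	}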
I wonder if we could also use follow_page_pte(), since we know that
the page is always present (given that it's been allocated).
Unfortunately follow_page_pte() is not exported.
Tycho