Message-Id: <8b50991d-2ab5-4577-83e9-a2d74135c5f5@www.fastmail.com>
Date: Mon, 23 Aug 2021 20:34:05 -0700
From: "Andy Lutomirski" <luto@...nel.org>
To: "Dave Hansen" <dave.hansen@...el.com>,
"Mike Rapoport" <rppt@...nel.org>, linux-mm@...ck.org
Cc: "Andrew Morton" <akpm@...ux-foundation.org>,
"Dave Hansen" <dave.hansen@...ux.intel.com>,
"Ira Weiny" <ira.weiny@...el.com>,
"Kees Cook" <keescook@...omium.org>,
"Mike Rapoport" <rppt@...ux.ibm.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
"Rick P Edgecombe" <rick.p.edgecombe@...el.com>,
"Vlastimil Babka" <vbabka@...e.cz>,
"the arch/x86 maintainers" <x86@...nel.org>,
"Linux Kernel Mailing List" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 4/4] x86/mm: write protect (most) page tables
On Mon, Aug 23, 2021, at 4:50 PM, Dave Hansen wrote:
> On 8/23/21 6:25 AM, Mike Rapoport wrote:
> > void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
> > {
> > + enable_pgtable_write(page_address(pte));
> > pgtable_pte_page_dtor(pte);
> > paravirt_release_pte(page_to_pfn(pte));
> > paravirt_tlb_remove_table(tlb, pte);
> > @@ -69,6 +73,7 @@ void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
> > #ifdef CONFIG_X86_PAE
> > tlb->need_flush_all = 1;
> > #endif
> > + enable_pgtable_write(pmd);
> > pgtable_pmd_page_dtor(page);
> > paravirt_tlb_remove_table(tlb, page);
> > }
>
> I would have expected this to have leveraged the pte_offset_map/unmap() code
> to enable/disable write access. Granted, it would enable write access
> even when only a read is needed, but that could be trivially fixed with
> having a variant like:
>
> pte_offset_map_write()
> pte_offset_unmap_write()
I would also like to see a discussion of how races are handled when multiple threads or CPUs access ptes in the same page at the same time.
>
> in addition to the existing (presumably read-only) versions:
>
> pte_offset_map()
> pte_offset_unmap()
>
> Although those only work for the leaf levels, it seems a shame not to
> use them.
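
For concreteness, a minimal sketch of what such write-enabled leaf helpers
might look like, assuming the enable_pgtable_write()/disable_pgtable_write()
pair from this series; the helper names below are illustrative, not an
existing API:

	static inline pte_t *pte_offset_map_write(pmd_t *pmd, unsigned long addr)
	{
		pte_t *pte = pte_offset_map(pmd, addr);

		/* Make the backing page-table page writable for the caller. */
		enable_pgtable_write(pte);
		return pte;
	}

	static inline void pte_offset_unmap_write(pte_t *pte)
	{
		/* Drop write access again before tearing down the mapping. */
		disable_pgtable_write(pte);
		pte_unmap(pte);
	}

Callers that actually modify the page table would then use the _write
variants, while read-only walkers keep the existing pte_offset_map()/
pte_unmap() and never make the page writable.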
>
> I'm also cringing a bit at hacking this into the page allocator. A
> *lot* of what you're trying to do with getting large allocations out and
> splitting them up is done very well today by the slab allocators. It
> might take some rearrangement of 'struct page' metadata to be more slab
> friendly, but it does seem like a close enough fit to warrant investigating.
>
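
To make the slab idea concrete, a purely illustrative sketch (not something
in this series) of backing page-table pages with a dedicated, page-aligned
kmem_cache; the split-ptlock and other 'struct page' metadata issues
mentioned above are glossed over here:

	static struct kmem_cache *pte_page_cache;

	static int __init pte_page_cache_init(void)
	{
		/*
		 * Page-sized, page-aligned objects: the slab allocator takes
		 * care of grabbing larger slabs and carving out individual
		 * page-table pages from them.
		 */
		pte_page_cache = kmem_cache_create("pte_page_cache",
						   PAGE_SIZE, PAGE_SIZE,
						   SLAB_PANIC, NULL);
		return 0;
	}

	static pte_t *pte_alloc_from_cache(void)
	{
		return kmem_cache_alloc(pte_page_cache,
					GFP_KERNEL | __GFP_ZERO);
	}

	static void pte_free_to_cache(pte_t *pte)
	{
		kmem_cache_free(pte_page_cache, pte);
	}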