Message-ID: <de31a1c7-1a35-831f-3eed-e4a6e77f9e44@suse.cz>
Date: Thu, 26 Aug 2021 11:01:18 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Mike Rapoport <rppt@...nel.org>,
Dave Hansen <dave.hansen@...el.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Ira Weiny <ira.weiny@...el.com>,
Kees Cook <keescook@...omium.org>,
Mike Rapoport <rppt@...ux.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Rick Edgecombe <rick.p.edgecombe@...el.com>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 4/4] x86/mm: write protect (most) page tables
On 8/26/21 10:02, Mike Rapoport wrote:
> On Mon, Aug 23, 2021 at 04:50:10PM -0700, Dave Hansen wrote:
>> On 8/23/21 6:25 AM, Mike Rapoport wrote:
>> >  void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
>> >  {
>> > +	enable_pgtable_write(page_address(pte));
>> >  	pgtable_pte_page_dtor(pte);
>> >  	paravirt_release_pte(page_to_pfn(pte));
>> >  	paravirt_tlb_remove_table(tlb, pte);
>> > @@ -69,6 +73,7 @@ void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
>> >  #ifdef CONFIG_X86_PAE
>> >  	tlb->need_flush_all = 1;
>> >  #endif
>> > +	enable_pgtable_write(pmd);
>> >  	pgtable_pmd_page_dtor(page);
>> >  	paravirt_tlb_remove_table(tlb, page);
>> >  }
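
FWIW, a minimal sketch of what these helpers could look like, assuming the
tables are write-protected via set_memory_ro()/set_memory_rw() underneath
(the actual RFC implementation may batch or track the protection
differently):

#include <asm/set_memory.h>

/* Sketch only: toggle a single page-table page between RO and RW. */
static void enable_pgtable_write(void *table)
{
	set_memory_rw((unsigned long)table, 1);
}

static void disable_pgtable_write(void *table)
{
	set_memory_ro((unsigned long)table, 1);
}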
>>
>> I'm also cringing a bit at hacking this into the page allocator. A
>> *lot* of what you're trying to do with getting large allocations out and
>> splitting them up is done very well today by the slab allocators. It
>> might take some rearrangement of 'struct page' metadata to be more slab
>> friendly, but it does seem like a close enough fit to warrant investigating.
>
> I thought more about using slab, but it seems to me the least suitable
> option. The use cases at hand (page tables, secretmem, SEV/TDX) allocate at
> page granularity and some of them use struct page metadata, so even its
> rearrangement won't help. And adding support for 2M slabs to SLUB would be
> quite intrusive.
Agreed, and there would be unnecessary memory overhead too: SLUB would be
happy to cache a 2MB block on each CPU, etc.
> I think that better options are moving such a cache deeper into buddy or
> using e.g. genalloc instead of a list to deal with higher-order allocations.
>
> The choice between these two will mostly depend on the API selection, i.e.
> a GFP flag or a dedicated alloc/free.
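
For reference, the genalloc variant could look roughly like this (a sketch
only: the pool name, refill policy and error handling are mine, and the
write-protection of the 2M blocks is elided):

#include <linux/genalloc.h>
#include <linux/mm.h>
#include <linux/numa.h>

/* Sketch: hand out PAGE_SIZE page-table pages carved from 2M blocks. */
static struct gen_pool *pgtable_pool;

static int pgtable_pool_init(void)
{
	pgtable_pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
	return pgtable_pool ? 0 : -ENOMEM;
}

static void *pgtable_pool_alloc(void)
{
	unsigned long addr = gen_pool_alloc(pgtable_pool, PAGE_SIZE);

	if (!addr) {
		/* Pool is empty: refill it with a fresh 2M block. */
		struct page *block = alloc_pages(GFP_KERNEL | __GFP_ZERO,
						 get_order(PMD_SIZE));
		if (!block)
			return NULL;
		if (gen_pool_add(pgtable_pool,
				 (unsigned long)page_address(block),
				 PMD_SIZE, NUMA_NO_NODE)) {
			__free_pages(block, get_order(PMD_SIZE));
			return NULL;
		}
		addr = gen_pool_alloc(pgtable_pool, PAGE_SIZE);
	}
	return (void *)addr;
}

static void pgtable_pool_free(void *table)
{
	gen_pool_free(pgtable_pool, (unsigned long)table, PAGE_SIZE);
}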
Implementing this on top of buddy still seems like the better option to me.
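
If it lives in buddy behind a GFP flag, the caller side could stay as simple
as this (a sketch assuming a made-up __GFP_UNMAPPED flag; no such flag
exists in mainline):

#include <linux/gfp.h>
#include <linux/mm.h>

/* __GFP_UNMAPPED is an assumed name for illustration only. */
static pte_t *pte_alloc_one_kernel_unmapped(void)
{
	/* The flag would steer the allocation to the write-protected cache. */
	struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO |
				       __GFP_UNMAPPED);

	if (!page)
		return NULL;
	return (pte_t *)page_address(page);
}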