Message-ID: <e00b452952e7aaef0d94bc25a32261aafeeff7ea.camel@intel.com>
Date: Fri, 22 Apr 2022 16:54:52 +0000
From: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
To: "Torvalds, Linus" <torvalds@...ux-foundation.org>
CC: "songliubraving@...com" <songliubraving@...com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"daniel@...earbox.net" <daniel@...earbox.net>,
"hch@...radead.org" <hch@...radead.org>,
"ast@...nel.org" <ast@...nel.org>,
"bpf@...r.kernel.org" <bpf@...r.kernel.org>,
"Kernel-team@...com" <Kernel-team@...com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"rppt@...nel.org" <rppt@...nel.org>,
"song@...nel.org" <song@...nel.org>,
"pmladek@...e.com" <pmladek@...e.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"hpa@...or.com" <hpa@...or.com>,
"dborkman@...hat.com" <dborkman@...hat.com>,
"edumazet@...gle.com" <edumazet@...gle.com>,
"bp@...en8.de" <bp@...en8.de>,
"mcgrof@...nel.org" <mcgrof@...nel.org>,
"mbenes@...e.cz" <mbenes@...e.cz>,
"npiggin@...il.com" <npiggin@...il.com>,
"imbrenda@...ux.ibm.com" <imbrenda@...ux.ibm.com>
Subject: Re: [PATCH v4 bpf 0/4] vmalloc: bpf: introduce VM_ALLOW_HUGE_VMAP
On Thu, 2022-04-21 at 19:47 -0700, Linus Torvalds wrote:
> I don't disagree, but I think the real problem is that the whole "one
> page_order per vmalloc() area" itself is a bit broken.
Yea. It is the main reason it has to round up to huge page sizes
AFAICT. I'd really like to see it use memory a little more efficiently
if it is going to be an opt-out thing again.
>
> For example, AMD already does this "automatic TLB size" thing for when
> you have multiple contiguous PTE entries (shades of the old alpha
> "page size hint" thing, except it's automatic and doesn't have
> explicit hints).
>
> And I'm hoping Intel will do something similar in the future.
>
> End result? It would actually be really good to just map contiguous
> pages, but it doesn't have anything to do with the 2MB PMD size.
>
> And there's no "fixed order" needed either. If you have mapping that
> is 17 pages in size, it would still be good to allocate them as a
> block of 16 pages ("page_order = 4") and as a single page, because
> just laying them out in the page tables that way will already allow
> AMD to use a 64kB TLB entry for that 16-page block.
>
> But it would also work to just do the allocations as a set of 8, 4, 4
> and 1.
Hmm, that's neat.
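To make that concrete, here is a quick userspace sketch (my own
illustration, nothing from the kernel) that splits an N-page request
into descending power-of-two blocks:

/*
 * Split an N-page request into descending power-of-two blocks, e.g.
 * 17 pages -> 16 + 1. Laying the 16-page block out contiguous and
 * aligned in the page tables is what lets hardware like AMD's PTE
 * coalescing use one larger TLB entry for it, with no fixed "one
 * order per area" needed.
 */
#include <stdio.h>

static void split_into_blocks(unsigned long nr_pages)
{
	while (nr_pages) {
		/* Largest power-of-two block that still fits. */
		int order = (int)(8 * sizeof(unsigned long) - 1) -
			    __builtin_clzl(nr_pages);

		printf("block: %lu pages (order %d)\n",
		       1UL << order, order);
		nr_pages -= 1UL << order;
	}
}

int main(void)
{
	split_into_blocks(17);	/* prints 16 (order 4), then 1 (order 0) */
	return 0;
}

And if a contiguous 16-page block isn't available, the allocator could
fall back to smaller blocks (8 + 4 + 4 + 1, as above) and still get
some coalescing.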
>
> But the whole "one page order for one vmalloc" means that doesn't
> work very well.
>
> Where I disagree (violently) with Nick is his contention that (a)
> this is x86-specific and (b) this is somehow trivial to fix.
>
> Let's face it - the current code is broken. I think the sub-page
> issue is not entirely trivial, and the current design isn't even
> very good for it.
>
> But the *easy* cases are the ones that simply don't care - the ones
> that powerpc has actually been testing.
>
> So for 5.18, I think it's quite likely reasonable to re-enable
> large-page vmalloc for the easy case (ie those big hash tables).
>
> Re-enabling it *all*, considering how broken it has been, and how
> little testing it has clearly gotten? And potentially not enabling it
> on x86 because x86 is so much better at showing issues? That's not
> what I want to do.
>
> If the code is so broken that it can't be used on x86, then it's too
> broken to be enabled on powerpc and s390 too. Never mind that those
> architectures might have such limited use that they never realized
> how broken they were...
I think there is another cross-arch issue here that we shouldn't lose
sight of: there are not enough warnings in the code about the
assumptions it makes about the arches. The other issues are x86
specific in terms of who gets affected in rc1, but I dug up this
prophetic assessment:
https://lore.kernel.org/lkml/4488d39f-0698-7bfd-b81c-1e609821818f@intel.com/
That is pretty much what happened. Song came along and, given the
current state of the code, took it as a knob that could just be
flipped. It seems pretty likely that could happen again.
So IMHO, the other general issue is the lack of guard rails or warnings
for the next arch that comes along. Probably VM_FLUSH_RESET_PERMS
should get some warnings as well.
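For example (purely a sketch with an invented config name, not a real
kernel interface), something like this kind of check could warn the
next arch that flips the feature on without auditing it:

/*
 * Hypothetical guard rail: fall back to small pages and warn if huge
 * vmalloc is enabled without the arch declaring that it audited the
 * interaction with permission changes and VM_FLUSH_RESET_PERMS.
 * CONFIG_ARCH_AUDITED_HUGE_VMAP is an invented name for illustration.
 */
static bool vmap_huge_audited(void)
{
	if (IS_ENABLED(CONFIG_ARCH_AUDITED_HUGE_VMAP))
		return true;

	WARN_ONCE(1, "huge vmalloc enabled without arch audit, using small pages\n");
	return false;
}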
I kind of like the idea in that thread of adding functions or configs
that force arches to explicitly declare that they have specific
properties. Does that seem reasonable at this point? It's probably not
necessary as a 5.18 fix, though.
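Roughly along these lines (invented names again; today the opt-in is
the single HAVE_ARCH_HUGE_VMALLOC Kconfig symbol, which doesn't say
anything about *which* properties were actually checked):

/*
 * Sketch of arches declaring specific properties instead of one
 * blanket opt-in. Generic code would query these; an arch overrides
 * them only after auditing. Function names are invented for
 * illustration.
 */
bool __weak arch_huge_vmap_supported(void)
{
	return false;	/* opt-in: arch must override */
}

bool __weak arch_huge_vmap_perms_safe(void)
{
	return false;	/* direct-map permission handling audited */
}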