Message-ID: <67aef839-0757-37b1-a42d-154c0116cbf5@intel.com>
Date: Thu, 12 May 2022 17:46:15 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: "H.J. Lu" <hjl.tools@...il.com>,
Thomas Gleixner <tglx@...utronix.de>
Cc: Peter Zijlstra <peterz@...radead.org>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
the arch/x86 maintainers <x86@...nel.org>,
Alexander Potapenko <glider@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Andi Kleen <ak@...ux.intel.com>,
Rick Edgecombe <rick.p.edgecombe@...el.com>,
Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFCv2 00/10] Linear Address Masking enabling
On 5/12/22 17:08, H.J. Lu wrote:
> I am expecting applications to ask for LAM_U48 or LAM_U57, not just
> LAM.
If AMD comes along with UAI behavior that doesn't match LAM_U48 or
LAM_U57, will apps have to be specifically coded to ask for one of the
three? That seems like an awfully rigid ABI.
That also seems like a surefire way to have non-portable users of this
feature. It basically guarantees that userspace code will look like this:
	if (support_lam_57()) {
		sys_enable_masking(LAM_57);
		mask = LAM_57_MASK;
	} else if (support_lam_48()) {
		sys_enable_masking(LAM_48);
		mask = LAM_48_MASK;
	} else if (...)
		... others
Which is *ENTIRELY* non-portable and needs to get patched if anything
changes in the slightest. Where, if we move that logic into the kernel,
it's something more like:
	mask = sys_enable_masking(...);
	if (bitmap_weight(&mask) < MINIMUM_BITS)
		goto whoops;
That actually works for all underlying implementations and doesn't
hard-code any assumptions about the implementation other than a basic
sanity check.
There are three choices we'd have to make for a more generic ABI that I
can think of:
ABI Question #1:
Should userspace be asking the kernel for a specific type of masking,
like a number of bits to mask or a mask itself? If not, the enabling
syscall is dirt simple: it's "mask = sys_enable_masking()". The kernel
picks what it wants to mask unilaterally and just tells userspace.
ABI Question #2:
Assuming that userspace is asking for a specific kind of address
masking: Should that request be made in terms of an actual mask or a
number of bits? For instance, if userspace asks for 0xf000000000000000,
it would fit UAI or ARM TBI. If it asks for 0x7e00000000000000, it
would match LAM_U57 behavior.
Or, does userspace ask for "8 bits", or "6 bits" or "15 bits"?
ABI Question #3:
If userspace asks for something that the kernel can't satisfy exactly,
like "8 bits" on a LAM system, is it OK for the kernel to fall back to
the next-largest mask? For instance, given sys_enable_masking(bits=8),
could the kernel unilaterally return a LAM_U48 mask, since LAM_U48's 15
bits covers the requested 8? Or should this "fuzzy" behavior be an opt-in?
If I had to take a shot at this today, I think I'd opt for:
mask = sys_enable_masking(bits=6, flags=FUZZY_NR_BITS);
although I'm not super confident about the "fuzzy" flag. I also don't
think I'd totally hate the "blind" interface where the kernel just gets
to pick unilaterally and takes zero input from userspace.