Message-ID: <fc54d360620d436f93785ae5e9f8a23f@AcuMS.aculab.com>
Date: Thu, 12 May 2022 13:01:07 +0000
From: David Laight <David.Laight@...LAB.COM>
To: "'Kirill A. Shutemov'" <kirill.shutemov@...ux.intel.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
"Peter Zijlstra" <peterz@...radead.org>
CC: "x86@...nel.org" <x86@...nel.org>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Alexander Potapenko <glider@...gle.com>,
"Dmitry Vyukov" <dvyukov@...gle.com>,
"H . J . Lu" <hjl.tools@...il.com>,
Andi Kleen <ak@...ux.intel.com>,
Rick Edgecombe <rick.p.edgecombe@...el.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] x86: Implement Linear Address Masking support
From: Kirill A. Shutemov
> Sent: 11 May 2022 03:28
>
> The Linear Address Masking feature makes the CPU ignore some bits of the
> virtual address. These bits can be used to encode metadata.
>
> The feature is enumerated with CPUID.(EAX=07H, ECX=01H):EAX.LAM[bit 26].
>
> CR3.LAM_U57[bit 62] allows encoding 6 bits of metadata in bits 62:57 of
> user pointers.
>
> CR3.LAM_U48[bit 61] allows encoding 15 bits of metadata in bits 62:48
> of user pointers.
>
> CR4.LAM_SUP[bit 28] allows encoding metadata in supervisor pointers.
> If 5-level paging is in use, 6 bits of metadata can be encoded in bits 62:57.
> For 4-level paging, 15 bits of metadata can be encoded in bits 62:48.
>
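Just to illustrate what that means for user space (not part of the patch,
and tag_ptr()/untag_ptr() are made-up names): with LAM_U57 enabled an
allocator could stash a 6-bit tag in the ignored bits roughly like this:

#include <stdint.h>

#define LAM_U57_SHIFT 57
#define LAM_U57_BITS  6
#define LAM_U57_MASK  ((((uint64_t)1 << LAM_U57_BITS) - 1) << LAM_U57_SHIFT)

/* Put a 6-bit tag into bits 62:57, which the CPU ignores under LAM_U57. */
static inline void *tag_ptr(void *p, unsigned int tag)
{
    uintptr_t v = (uintptr_t)p & ~LAM_U57_MASK;
    return (void *)(v | ((uintptr_t)(tag & 0x3f) << LAM_U57_SHIFT));
}

/* Recover the canonical pointer (bit 63 is already clear for user addresses). */
static inline void *untag_ptr(void *p)
{
    return (void *)((uintptr_t)p & ~LAM_U57_MASK);
}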
...
> +static vaddr clean_addr(CPUArchState *env, vaddr addr)
> +{
> + CPUClass *cc = CPU_GET_CLASS(env_cpu(env));
> +
> + if (cc->tcg_ops->do_clean_addr) {
> + addr = cc->tcg_ops->do_clean_addr(env_cpu(env), addr);
The performance of a conditional indirect call will be horrid.
Over-engineered when there is only one possible function.
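FWIW, if x86 is the only target that ever needs this, a sketch of the
direct-call version (illustrative only; whether this file can call into
the x86 code directly is a build-layering question):

static vaddr clean_addr(CPUArchState *env, vaddr addr)
{
#ifdef TARGET_X86_64
    /* Only x86 implements LAM-style address masking today. */
    return x86_cpu_clean_addr(env_cpu(env), addr);
#else
    return addr;
#endif
}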
....
> +
> +static inline int64_t sign_extend64(uint64_t value, int index)
> +{
> + int shift = 63 - index;
> + return (int64_t)(value << shift) >> shift;
> +}
Right shifts of negative signed integers are implementation-defined,
so this relies on the compiler generating an arithmetic shift.
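It can be written without shifting a signed value at all, e.g. (same
semantics assumed; only used with index <= 62 here):

static inline int64_t sign_extend64(uint64_t value, int index)
{
    uint64_t sign = (uint64_t)1 << index;   /* position of the new sign bit */

    /* Keep bits index:0 and subtract 2^index when the sign bit is set. */
    return (int64_t)(value & (sign - 1)) - (int64_t)(value & sign);
}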
> +vaddr x86_cpu_clean_addr(CPUState *cs, vaddr addr)
> +{
> + CPUX86State *env = &X86_CPU(cs)->env;
> + bool la57 = env->cr[4] & CR4_LA57_MASK;
> +
> + if (addr >> 63) {
> + if (env->cr[4] & CR4_LAM_SUP) {
> + return sign_extend64(addr, la57 ? 56 : 47);
> + }
> + } else {
> + if (env->cr[3] & CR3_LAM_U57) {
> + return sign_extend64(addr, 56);
> + } else if (env->cr[3] & CR3_LAM_U48) {
> + return sign_extend64(addr, 47);
> + }
> + }
That is completely horrid.
Surely it can be just:
if (addr & (1ull << 63))
    return addr | env->address_mask;
else
    return addr & ~env->address_mask;
Where 'address_mask' is 0x7ff....
Although, since you really want a big gap between valid user and
valid kernel addresses, allowing masked kernel addresses adds
costs elsewhere.
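To make that concrete - a sketch only, 'lam_user_mask' is an invented
field recomputed on CR3 writes rather than on every access (CR4.LAM_SUP
would need an equivalent mask for the supervisor half):

static void update_lam_user_mask(CPUX86State *env)
{
    if (env->cr[3] & CR3_LAM_U57) {
        env->lam_user_mask = 0x7e00000000000000ull;    /* bits 62:57 */
    } else if (env->cr[3] & CR3_LAM_U48) {
        env->lam_user_mask = 0x7fff000000000000ull;    /* bits 62:48 */
    } else {
        env->lam_user_mask = 0;
    }
}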
I've no idea how often the address masking is required.
Hopefully almost never?
copy_to/from_user() (etc) need to be able to use user addresses
without having to mask them.
David