Message-ID: <ZSEn6BhETrwmry6D@gmail.com>
Date: Sat, 7 Oct 2023 11:42:00 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Brian Gerst <brgerst@...il.com>
Cc: "H. Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org,
x86@...nel.org, Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>,
Andy Lutomirski <luto@...nel.org>,
Mika Penttilä <mpenttil@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Uros Bizjak <ubizjak@...il.com>,
Denys Vlasenko <dvlasenk@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Josh Poimboeuf <jpoimboe@...hat.com>
Subject: Re: [PATCH v2 0/6] x86: Clean up fast syscall return validation

* Brian Gerst <brgerst@...il.com> wrote:

> On Fri, Oct 6, 2023 at 2:59 PM H. Peter Anvin <hpa@...or.com> wrote:
> >
> > On 10/5/23 13:20, Ingo Molnar wrote:
> > >
> > > * Brian Gerst <brgerst@...il.com> wrote:
> > >
> > >> Looking at the compiled output, the only suboptimal code appears to be
> > >> the canonical address test, where the C code uses the CL register for
> > >> the shifts instead of immediates.
> > >>
> > >> 180: e9 00 00 00 00 jmp 185 <do_syscall_64+0x85>
> > >> 181: R_X86_64_PC32 .altinstr_aux-0x4
> > >> 185: b9 07 00 00 00 mov $0x7,%ecx
> > >> 18a: eb 05 jmp 191 <do_syscall_64+0x91>
> > >> 18c: b9 10 00 00 00 mov $0x10,%ecx
> > >> 191: 48 89 c2 mov %rax,%rdx
> > >> 194: 48 d3 e2 shl %cl,%rdx
> > >> 197: 48 d3 fa sar %cl,%rdx
> > >> 19a: 48 39 d0 cmp %rdx,%rax
> > >> 19d: 75 39 jne 1d8 <do_syscall_64+0xd8>
> > >
> > > Yeah, it didn't look equivalent - so I guess we want a C equivalent for:
> > >
> > > - ALTERNATIVE "shl $(64 - 48), %rcx; sar $(64 - 48), %rcx", \
> > > - "shl $(64 - 57), %rcx; sar $(64 - 57), %rcx", X86_FEATURE_LA57
> > >
> > > instead of the pgtable_l5_enabled() runtime test that
> > > __is_canonical_address() uses?
> > >
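> > > For reference, the generic runtime helper in
> > > arch/x86/include/asm/page.h is essentially this sign-extension
> > > round-trip (lightly paraphrased sketch, with vaddr_bits supplied by
> > > the caller based on pgtable_l5_enabled()):
> > >
> > >         static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
> > >         {
> > >                 return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
> > >         }
> > >
> > >         static __always_inline u64 __is_canonical_address(u64 vaddr, u8 vaddr_bits)
> > >         {
> > >                 return __canonical_address(vaddr, vaddr_bits) == vaddr;
> > >         }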
> >
> > I don't think such a thing (without simply duplicating the above as an
> > alternative asm, which is obviously easy enough, and still allows the
> > compiler to pick the register used) would be possible without immediate
> > patching support[*].
> >
> > Incidentally, this is a question for Uros: is there a reason this is a
> > mov to %ecx and not just %cl, which would save 3 bytes?
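> >
> > (Concretely, per the dump above, "mov $0x7,%ecx" encodes as
> > b9 07 00 00 00, five bytes, whereas "mov $0x7,%cl" would be b1 07,
> > two bytes.)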
> >
> > Incidentally, it is possible to save one instruction and use only *one*
> > alternative immediate:
> >
> > leaq (%rax,%rax),%rdx
> > xorq %rax,%rdx
> > shrq $(63 - LA),%rdx # Yes, 63, not 64
> > # ZF=1 if canonical
> >
> > This works because if bit [x] is set in the output, then bits [x] and
> > [x-1] in the input are different (bit [-1] is considered to be zero);
> > and by definition an address is canonical if and only if all the bits
> > [63:LA] are identical, thus bits [63:LA+1] in the output must all be
> > zero.
> >
> > The first two instructions are pure arithmetic and can thus be done in C:
> >
> > bar = foo ^ (foo << 1);
> >
> > ... leaving only one instruction needing to be patched at runtime.
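> >
> > A minimal C sketch of the whole test (untested; va_bits stands in for
> > the 48/57 paging width, making the shift count the one immediate that
> > would need patching):
> >
> >         static inline bool canonical_check(u64 foo, unsigned int va_bits)
> >         {
> >                 u64 bar = foo ^ (foo << 1);     /* leaq + xorq */
> >
> >                 /* bits [63:va_bits] of bar are zero iff foo is canonical */
> >                 return (bar >> va_bits) == 0;   /* shrq; ZF=1 if canonical */
> >         }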
> >
> > -hpa
>
> One other alternative I have been considering is comparing against
> TASK_SIZE_MAX. The only user-executable address above that is the
> long deprecated vsyscall page. IMHO it's not worth optimizing for
> that case, so just let it fall back to using IRET.
>
> if (unlikely(regs->ip >= TASK_SIZE_MAX)) return false;
>
> compiles to:
>
> 180: 48 b9 00 f0 ff ff ff movabs $0x7ffffffff000,%rcx
> 187: 7f 00 00
> 18a: 48 39 c8 cmp %rcx,%rax
> 18d: 73 39 jae 1c8 <do_syscall_64+0xc8>
>
> 0000000000000000 <.altinstr_replacement>:
> 0: 48 b9 00 f0 ff ff ff movabs $0xfffffffffff000,%rcx
> 7: ff ff 00
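>
> For reference, TASK_SIZE_MAX here is
> ((1UL << __VIRTUAL_MASK_SHIFT) - PAGE_SIZE), patched via alternatives,
> which is where the two movabs immediates come from:
>
>         (1UL << 47) - 4096 == 0x00007ffffffff000   /* 4-level paging */
>         (1UL << 56) - 4096 == 0x00fffffffffff000   /* LA57 */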

That sounds good - and we could do this as a separate patch on top
of your existing patches, to keep it bisectable in case there are
any problems.
Thanks,

	Ingo