Message-ID: <CAFULd4byZBKAUrJ2+5EoEaTHTXpk+0FFeFvze9r+Y1dTezG7YQ@mail.gmail.com>
Date: Sat, 7 Oct 2023 11:56:58 +0200
From: Uros Bizjak <ubizjak@...il.com>
To: "H. Peter Anvin" <hpa@...or.com>
Cc: Ingo Molnar <mingo@...nel.org>, Brian Gerst <brgerst@...il.com>,
linux-kernel@...r.kernel.org, x86@...nel.org,
Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>,
Andy Lutomirski <luto@...nel.org>,
Mika Penttilä <mpenttil@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Denys Vlasenko <dvlasenk@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Josh Poimboeuf <jpoimboe@...hat.com>
Subject: Re: [PATCH v2 0/6] x86: Clean up fast syscall return validation
On Fri, Oct 6, 2023 at 8:59 PM H. Peter Anvin <hpa@...or.com> wrote:
>
> On 10/5/23 13:20, Ingo Molnar wrote:
> >
> > * Brian Gerst <brgerst@...il.com> wrote:
> >
> >> Looking at the compiled output, the only suboptimal code appears to be
> >> the canonical address test, where the C code uses the CL register for
> >> the shifts instead of immediates.
> >>
> >> 180: e9 00 00 00 00 jmp 185 <do_syscall_64+0x85>
> >> 181: R_X86_64_PC32 .altinstr_aux-0x4
> >> 185: b9 07 00 00 00 mov $0x7,%ecx
> >> 18a: eb 05 jmp 191 <do_syscall_64+0x91>
> >> 18c: b9 10 00 00 00 mov $0x10,%ecx
> >> 191: 48 89 c2 mov %rax,%rdx
> >> 194: 48 d3 e2 shl %cl,%rdx
> >> 197: 48 d3 fa sar %cl,%rdx
> >> 19a: 48 39 d0 cmp %rdx,%rax
> >> 19d: 75 39 jne 1d8 <do_syscall_64+0xd8>
> >
> > Yeah, it didn't look equivalent - so I guess we want a C equivalent for:
> >
> > - ALTERNATIVE "shl $(64 - 48), %rcx; sar $(64 - 48), %rcx", \
> > - "shl $(64 - 57), %rcx; sar $(64 - 57), %rcx", X86_FEATURE_LA57
> >
> > instead of the pgtable_l5_enabled() runtime test that
> > __is_canonical_address() uses?
> >
>
> I don't think such a thing (short of simply duplicating the above as
> an alternative asm, which is obviously easy enough, and still allows
> the compiler to pick the register used) would be possible without
> immediate patching support[*].
>
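Something along these lines would keep the patched immediates while
still letting the compiler pick the register. A minimal sketch only;
the helper name and operand constraints below are illustrative, not
taken from any actual patch:

    /*
     * Sketch of a C wrapper around the patched shl/sar pair, using
     * the kernel's ALTERNATIVE() inline-asm macro.  The 48/57-bit
     * widths match 4- and 5-level paging; X86_FEATURE_LA57 selects
     * the second variant at patch time.
     */
    static __always_inline bool canonical_rip(unsigned long rip)
    {
    	unsigned long tmp = rip;

    	asm(ALTERNATIVE("shl $(64 - 48), %[r]; sar $(64 - 48), %[r]",
    			"shl $(64 - 57), %[r]; sar $(64 - 57), %[r]",
    			X86_FEATURE_LA57)
    	    : [r] "+r" (tmp));

    	return tmp == rip;
    }
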
> Incidentally, this is a question for Uros: is there a reason this is a
> mov to %ecx and not just %cl, which would save 3 bytes?
The compiler uses 32-bit operations to move values between registers,
even when the values are narrower than 32 bits. To avoid partial
register stalls, it uses 32-bit operations as much as possible (e.g.
it zero-extends an 8-bit value when loading it from memory, loads
constants with 32-bit moves, etc.). Since the kernel is compiled with
-O2, the compiler does not care that much about instruction size, and
it uses the full 32-bit width to initialize a register with a
constant.
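
For example (illustrative, not from the patch under discussion;
compiled with plain gcc -O2 on x86-64):

    /* An 8-bit return value is still set with a 32-bit mov. */
    char shift_amount(void)
    {
    	return 7;
    }

    /*
     * gcc -O2 emits the 5-byte form:
     *
     *   b8 07 00 00 00    mov $0x7,%eax
     *   c3                ret
     *
     * rather than the 2-byte "mov $0x7,%al" (b0 07), because writing
     * %eax zero-extends into %rax and leaves no stale upper bits.
     */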
Please note that the 8-bit movb instruction in fact represents an
insert into a word-mode register. The compiler does not know how that
word-mode register will be used later, so to avoid partial register
stalls it takes a cautious approach and (with -O2) moves the constant
into the register with a word-width instruction.
Also, the compiler is quite eager to CSE constants. When there are
two or more uses of the same constant, it will first load the
constant into a register and then reuse the register.
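
For example (again illustrative), with two stores of the same 64-bit
constant:

    void set_both(unsigned long *a, unsigned long *b)
    {
    	*a = 0x0123456789abcdefUL;
    	*b = 0x0123456789abcdefUL;
    }

    /*
     * gcc -O2 on x86-64 loads the constant once and reuses it
     * (there is no 64-bit immediate-to-memory store form anyway):
     *
     *   movabs $0x123456789abcdef,%rax
     *   mov    %rax,(%rdi)
     *   mov    %rax,(%rsi)
     *   ret
     */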
Uros.