Message-ID: <202209022251.B14BD50B29@keescook>
Date: Sat, 3 Sep 2022 00:18:46 -0700
From: Kees Cook <keescook@...omium.org>
To: Bill Wendling <morbo@...gle.com>
Cc: Juergen Gross <jgross@...e.com>,
"Srivatsa S. Bhat (VMware)" <srivatsa@...il.mit.edu>,
Alexey Makhalov <amakhalov@...are.com>,
VMware PV-Drivers Reviewers <pv-drivers@...are.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>,
virtualization@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org,
Nathan Chancellor <nathan@...nel.org>,
Nick Desaulniers <ndesaulniers@...gle.com>,
llvm@...ts.linux.dev, linux-hardening@...r.kernel.org
Subject: Re: [PATCH 2/2] x86/paravirt: add extra clobbers with
ZERO_CALL_USED_REGS enabled
On Fri, Sep 02, 2022 at 09:37:50PM +0000, Bill Wendling wrote:
> [...]
> callq *pv_ops+536(%rip)
Do you know which pv_ops function this is? I can't figure out where
pte_offset_kernel() gets converted into a pv_ops call...
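For reference, one way I'd try to map that offset back to a member
(assuming a vmlinux built with debug info) is something like:

  $ gdb -batch -ex 'ptype /o struct paravirt_patch_template' vmlinux

and then look for whichever pv_ops member sits at byte offset 536.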
> [...]
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -414,8 +414,17 @@ int paravirt_disable_iospace(void);
> "=c" (__ecx)
> #define PVOP_CALL_CLOBBERS PVOP_VCALL_CLOBBERS, "=a" (__eax)
>
> -/* void functions are still allowed [re]ax for scratch */
> +/*
> + * void functions are still allowed [re]ax for scratch.
> + *
> + * The ZERO_CALL_USED_REGS feature may end up zeroing out callee-saved
> + * registers. Make sure we model this with the appropriate clobbers.
> + */
> +#ifdef CONFIG_ZERO_CALL_USED_REGS
> +#define PVOP_VCALLEE_CLOBBERS "=a" (__eax), PVOP_VCALL_CLOBBERS
> +#else
> #define PVOP_VCALLEE_CLOBBERS "=a" (__eax)
> +#endif
> #define PVOP_CALLEE_CLOBBERS PVOP_VCALLEE_CLOBBERS
I don't think this should depend on CONFIG_ZERO_CALL_USED_REGS; it should
always be present.
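i.e. just drop the #ifdef and keep something like (untested):

#define PVOP_VCALLEE_CLOBBERS	"=a" (__eax), PVOP_VCALL_CLOBBERS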
I've only been looking at this just now, so maybe I'm missing
something. The callee clobbers are for functions with return values,
yes?
For example, 32-bit has to manually deal with returning a 64-bit value,
and even got it wrong originally; it was fixed in commit 0eb592dbba40
("x86/paravirt: return full 64-bit result") with:
-#define PVOP_VCALLEE_CLOBBERS "=a" (__eax)
+#define PVOP_VCALLEE_CLOBBERS "=a" (__eax), "=d" (__edx)
But the naming is confusing, since these aren't actually clobbers;
they're output constraints (note the "=" modifier) that are merely
named as clobbers.
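A trivial, non-paravirt illustration of the difference (just a sketch):

unsigned int lo, hi;

/* "=a"/"=d" are output constraints: the asm hands values back in %eax/%edx. */
asm volatile("rdtsc" : "=a" (lo), "=d" (hi));

/* "eax" is a true clobber: the register is destroyed, nothing is read back. */
asm volatile("xor %%eax, %%eax" : : : "eax");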
Regardless, the note in the comments ...
...
* However, x86_64 also have to clobber all caller saved registers, which
* unfortunately, are quite a bit (r8 - r11)
...
... would indicate that ALL the function argument registers need to be
marked as clobbers (i.e. the compiler can't figure this out on its own).
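(For reference, if I'm reading paravirt_types.h right, the 64-bit side
currently only has:

#define EXTRA_CLOBBERS	 , "r8", "r9", "r10", "r11"

i.e. the caller-saved scratch registers, but none of the argument
registers.)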
I was going to say it seems like they're missing from EXTRA_CLOBBERS,
but that isn't used with any of the macros that use PVOP_VCALLEE_CLOBBERS.
And then I saw the weird alternatives patching that encodes the clobbers
a second time (CLBR_ANY vs CLBR_RET_REG) via:
#define _paravirt_alt(insn_string, type, clobber) \
"771:\n\t" insn_string "\n" "772:\n" \
".pushsection .parainstructions,\"a\"\n" \
_ASM_ALIGN "\n" \
_ASM_PTR " 771b\n" \
" .byte " type "\n" \
" .byte 772b-771b\n" \
" .short " clobber "\n" \
".popsection\n"
And after reading the alternatives patching code which parses this via
the following struct:
/* These all sit in the .parainstructions section to tell us what to patch. */
struct paravirt_patch_site {
u8 *instr; /* original instructions */
u8 type; /* type of this instruction */
u8 len; /* length of original instruction */
};
... I see it _doesn't use the clobbers_ at all! *head explode* I found
that removal in commit 27876f3882fd ("x86/paravirt: Remove clobbers from
struct paravirt_patch_site").
So, I guess the CLBR_* can all be entirely removed. But back to my other
train of thought...
It seems like all the argument registers need to be explicitly listed in
the PVOP_VCALLEE_CLOBBERS list (as you have done), but this likely should
be done unconditionally and for 32-bit as well.
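On the 32-bit side that would presumably end up matching
PVOP_VCALL_CLOBBERS there, i.e. something like (untested, quoting the
32-bit list from memory):

#define PVOP_VCALLEE_CLOBBERS	"=a" (__eax), "=d" (__edx), "=c" (__ecx)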
-Kees
(Also, please CC linux-hardening@...r.kernel.org.)
--
Kees Cook