Message-ID: <20171122163546.ikjzkyzg3n5fyi6z@pd.tnic>
Date: Wed, 22 Nov 2017 17:35:46 +0100
From: Borislav Petkov <bp@...en8.de>
To: Josh Poimboeuf <jpoimboe@...hat.com>
Cc: x86@...nel.org, linux-kernel@...r.kernel.org,
Juergen Gross <jgross@...e.com>,
Andy Lutomirski <luto@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Sasha Levin <alexander.levin@...izon.com>,
live-patching@...r.kernel.org, Jiri Slaby <jslaby@...e.cz>,
Ingo Molnar <mingo@...nel.org>,
"H. Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>,
Mike Galbraith <efault@....de>,
Chris Wright <chrisw@...s-sol.org>,
Alok Kataria <akataria@...are.com>,
Rusty Russell <rusty@...tcorp.com.au>,
virtualization@...ts.linux-foundation.org,
Boris Ostrovsky <boris.ostrovsky@...cle.com>,
xen-devel@...ts.xenproject.org,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 07/13] x86/paravirt: Simplify ____PVOP_CALL()
On Wed, Oct 04, 2017 at 10:58:28AM -0500, Josh Poimboeuf wrote:
> Remove the inline asm duplication in ____PVOP_CALL().
>
> Also add 'IS_ENABLED(CONFIG_X86_32)' to the return variable logic,
> making the code clearer and rendering the comment unnecessary.
>
> Signed-off-by: Josh Poimboeuf <jpoimboe@...hat.com>
> ---
> arch/x86/include/asm/paravirt_types.h | 36 +++++++++++++----------------------
> 1 file changed, 13 insertions(+), 23 deletions(-)
>
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index ab7aabe6b668..01f9e10983c1 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -529,29 +529,19 @@ int paravirt_disable_iospace(void);
> rettype __ret; \
> PVOP_CALL_ARGS; \
> PVOP_TEST_NULL(op); \
Newline here...
> - /* This is 32-bit specific, but is okay in 64-bit */ \
> - /* since this condition will never hold */ \
> - if (sizeof(rettype) > sizeof(unsigned long)) { \
> - asm volatile(pre \
> - paravirt_alt(PARAVIRT_CALL) \
> - post \
> - : call_clbr, ASM_CALL_CONSTRAINT \
> - : paravirt_type(op), \
> - paravirt_clobber(clbr), \
> - ##__VA_ARGS__ \
> - : "memory", "cc" extra_clbr); \
> - __ret = (rettype)((((u64)__edx) << 32) | __eax); \
> - } else { \
> - asm volatile(pre \
> - paravirt_alt(PARAVIRT_CALL) \
> - post \
> - : call_clbr, ASM_CALL_CONSTRAINT \
> - : paravirt_type(op), \
> - paravirt_clobber(clbr), \
> - ##__VA_ARGS__ \
> - : "memory", "cc" extra_clbr); \
> - __ret = (rettype)(__eax & PVOP_RETMASK(rettype)); \
> - } \
> + asm volatile(pre \
> + paravirt_alt(PARAVIRT_CALL) \
> + post \
> + : call_clbr, ASM_CALL_CONSTRAINT \
> + : paravirt_type(op), \
> + paravirt_clobber(clbr), \
> + ##__VA_ARGS__ \
> + : "memory", "cc" extra_clbr); \
... and here goes a long way towards readability. :)
> + if (IS_ENABLED(CONFIG_X86_32) && \
> + sizeof(rettype) > sizeof(unsigned long)) \
> + __ret = (rettype)((((u64)__edx) << 32) | __eax);\
> + else \
> + __ret = (rettype)(__eax & PVOP_RETMASK(rettype));\
> __ret; \
> })
--
Regards/Gruss,
Boris.
Good mailing practices for 400: avoid top-posting and trim the reply.