Message-ID: <20170524141957.GA8174@potion>
Date: Wed, 24 May 2017 16:19:57 +0200
From: Radim Krčmář <rkrcmar@...hat.com>
To: Nick Desaulniers <nick.desaulniers@...il.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: x86: dynamically allocate large struct in em_fxrstor
2017-05-23 23:24-0700, Nick Desaulniers:
> Fixes the warning:
>
> arch/x86/kvm/emulate.c:4018:12: warning: stack frame size of 1080 bytes
>   in function 'em_fxrstor' [-Wframe-larger-than=]
> static int em_fxrstor(struct x86_emulate_ctxt *ctxt)
> ^
>
> Found with CONFIG_FRAME_WARN set to 1024.
>
> Signed-off-by: Nick Desaulniers <nick.desaulniers@...il.com>
> ---
> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> @@ -4017,30 +4017,38 @@ static int fxrstor_fixup(struct x86_emulate_ctxt *ctxt,
>
> static int em_fxrstor(struct x86_emulate_ctxt *ctxt)
> {
> - struct fxregs_state fx_state;
> + struct fxregs_state *fx_state;
> int rc;
>
> rc = check_fxsr(ctxt);
> if (rc != X86EMUL_CONTINUE)
> return rc;
>
> - rc = segmented_read_std(ctxt, ctxt->memop.addr.mem, &fx_state, 512);
> + fx_state = kmalloc(sizeof(*fx_state), GFP_KERNEL);
fx_state must be 16-byte aligned, but x86's ARCH_KMALLOC_MINALIGN is only
8, so this needs manual correction.
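The manual correction can be done by over-allocating and rounding the
pointer up to the next 16-byte boundary. A minimal userspace sketch of
that idea (malloc/free stand in for kmalloc/kfree; the helper name and
the raw-pointer out-parameter are hypothetical, not from the patch):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define FXSTATE_ALIGN 16	/* FXRSTOR requires a 16-byte-aligned operand */

/*
 * Over-allocate by ALIGN-1 bytes and round the pointer up to the next
 * 16-byte boundary.  The raw allocation pointer is returned through
 * *raw so the buffer can still be freed later.
 */
static void *alloc_aligned16(size_t size, void **raw)
{
	uintptr_t p;

	*raw = malloc(size + FXSTATE_ALIGN - 1);
	if (!*raw)
		return NULL;
	p = ((uintptr_t)*raw + FXSTATE_ALIGN - 1) &
	    ~(uintptr_t)(FXSTATE_ALIGN - 1);
	return (void *)p;
}
```

The aligned pointer is what gets passed to the fxrstor path; *raw is
what eventually gets freed.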
Also, please kmalloc fxregs_state in fxrstor_fixup and em_fxsave as
well, so we again have only one storage type.
> + if (!fx_state)
> + return -ENOMEM;
The caller does not understand -ENOMEM. The appropriate return value is
X86EMUL_UNHANDLEABLE.
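The point is that the emulator's callers dispatch on X86EMUL_* codes,
never on errno values. A small sketch of the allocation-failure path as
the review suggests it (the #define values mirror the kernel's
convention of CONTINUE being 0; the helper name is hypothetical, and
malloc stands in for kmalloc):

```c
#include <stddef.h>
#include <stdlib.h>

#define X86EMUL_CONTINUE     0
#define X86EMUL_UNHANDLEABLE 1

/*
 * Allocate the fxregs buffer and translate allocation failure into the
 * emulator's own error code instead of leaking -ENOMEM to a caller
 * that cannot interpret it.
 */
static int alloc_fx_state(void **fx_state, size_t size)
{
	*fx_state = malloc(size);	/* kmalloc(..., GFP_KERNEL) in-kernel */
	if (!*fx_state)
		return X86EMUL_UNHANDLEABLE;
	return X86EMUL_CONTINUE;
}
```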
> if (ctxt->mode < X86EMUL_MODE_PROT64)
> - rc = fxrstor_fixup(ctxt, &fx_state);
> + rc = fxrstor_fixup(ctxt, fx_state);
Ah, fxrstor_fixup most likely got inlined and both of them put ~512 byte
fxregs_state on the stack ... noinline attribute should solve the
warning too.
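For illustration, the noinline alternative amounts to one attribute on
the helper, which keeps its large local buffer in its own stack frame
instead of the caller's. A sketch with a stand-in body (the real
fxrstor_fixup logic is not reproduced here):

```c
#include <string.h>

#define FXSTATE_SIZE 512

/*
 * noinline prevents the compiler from folding this function, and its
 * ~512-byte local buffer, into the caller's stack frame, so the caller
 * stays under -Wframe-larger-than=.  The body is a placeholder.
 */
static __attribute__((noinline)) int fxrstor_fixup_sketch(void)
{
	unsigned char fx_state[FXSTATE_SIZE];

	memset(fx_state, 0, sizeof(fx_state));
	return fx_state[0];	/* use the buffer so it is not optimized away */
}
```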
Thanks.