Message-ID: <20160310111935.GB13102@gmail.com>
Date: Thu, 10 Mar 2016 12:19:35 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Andy Shevchenko <andy.shevchenko@...il.com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>,
Andy Lutomirski <luto@...capital.net>,
Borislav Petkov <bp@...en8.de>,
Fenghua Yu <fenghua.yu@...el.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"H. Peter Anvin" <hpa@...or.com>,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Oleg Nesterov <oleg@...hat.com>,
"Yu, Yu-cheng" <yu-cheng.yu@...el.com>
Subject: Re: Got FPU related warning on Intel Quark during boot

I've Cc:-ed more FPU developers. Mail quoted below. I don't have a Quark system to
test this on, but maybe others have an idea why this warning triggers?
My thinking is that it's related to:
58122bf1d856 x86/fpu: Default eagerfpu=on on all CPUs
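
[Editor's note: a quick way to test this hypothesis, assuming the boot
parameter is still honored in this tree, would be to disable the new
default and see whether the warning goes away. `eagerfpu=` was a
documented x86 boot option in this era (see
Documentation/kernel-parameters.txt); the grep pattern below is just an
illustrative way to confirm which FPU switching mode was taken:]

```
# Append to the kernel command line:
#   eagerfpu=off
# Then, after booting, check the FPU-related boot messages:
#   dmesg | grep -i fpu
# If the fpu__clear() warning no longer triggers with lazy FPU
# switching, the 58122bf1d856 default change is implicated.
```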
Thanks,
Ingo
* Andy Shevchenko <andy.shevchenko@...il.com> wrote:
> Today tried first time after long break to boot Intel Quark SoC with
> most recent linux-next. Got the following warning:
>
> [ 14.714533] WARNING: CPU: 0 PID: 823 at
> arch/x86/include/asm/fpu/internal.h:163 fpu__clear+0x8c/0x160
> [ 14.726603] Modules linked in:
> [ 14.729910] CPU: 0 PID: 823 Comm: kworker/u2:0 Not tainted
> 4.5.0-rc7-next-20160310+ #137
> [ 14.738307] 00000000 00000000 ce691e20 c12b6fc9 ce691e50 c1049fd1
> c1978c6c 00000000
> [ 14.747000] 00000337 c196b530 000000a3 c102050c 000000a3 ce587ac0
> 00000000 ce653000
> [ 14.755722] ce691e64 c104a095 00000009 00000000 00000000 ce691e74
> c102050c ce587500
> [ 14.764468] Call Trace:
> [ 14.767172] [<c12b6fc9>] dump_stack+0x16/0x1d
> [ 14.771889] [<c1049fd1>] __warn+0xd1/0xf0
> [ 14.776253] [<c102050c>] ? fpu__clear+0x8c/0x160
> [ 14.781234] [<c104a095>] warn_slowpath_null+0x25/0x30
> [ 14.786648] [<c102050c>] fpu__clear+0x8c/0x160
> [ 14.791447] [<c101f347>] flush_thread+0x57/0x60
> [ 14.796341] [<c113a5cc>] flush_old_exec+0x4cc/0x600
> [ 14.801594] [<c117ab20>] load_elf_binary+0x2b0/0x1060
> [ 14.807010] [<c1111220>] ? get_user_pages_remote+0x50/0x60
> [ 14.812898] [<c12c4687>] ? _copy_from_user+0x37/0x40
> [ 14.818236] [<c1139f82>] search_binary_handler+0x62/0x150
> [ 14.824007] [<c113b19c>] do_execveat_common+0x45c/0x600
> [ 14.829647] [<c113b35f>] do_execve+0x1f/0x30
> [ 14.834289] [<c1059941>] call_usermodehelper_exec_async+0x91/0xe0
> [ 14.840765] [<c17f2310>] ret_from_kernel_thread+0x20/0x40
> [ 14.846540] [<c10598b0>] ? umh_complete+0x40/0x40
> [ 14.851626] ---[ end trace 137ff5893f9b85bf ]---
>
> Is it a known issue? Or is there something I could try to fix it?
>
> Reproducibility: 3 of 3.
>
> --
> With Best Regards,
> Andy Shevchenko