Message-ID: <56fdfffc-9a5e-4f9e-ae5b-57dd27d647cc@linux.intel.com>
Date: Fri, 13 Dec 2024 14:45:18 +0800
From: "Ning, Hongyu" <hongyu.ning@...ux.intel.com>
To: David Woodhouse <dwmw2@...radead.org>, kexec@...ts.infradead.org
Cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Kai Huang <kai.huang@...el.com>, Nikolay Borisov <nik.borisov@...e.com>,
linux-kernel@...r.kernel.org, Simon Horman <horms@...nel.org>,
Dave Young <dyoung@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
jpoimboe@...nel.org, bsz@...zon.de
Subject: Re: [PATCH v5 13/20] x86/kexec: Mark relocate_kernel page as ROX instead of RWX
On 2024/12/12 18:13, David Woodhouse wrote:
> On Thu, 2024-12-12 at 11:03 +0800, Ning, Hongyu wrote:
>>
>> Hi David,
>>
>> I've hit a kdump/kexec regression for the guest kernel in a KVM/QEMU-based
>> VM and reported it at https://bugzilla.kernel.org/show_bug.cgi?id=219592.
>>
>> Based on further git bisect, it seems to be related to this commit;
>> could you take a look?
>
> Thanks for the report; I'll take a look. Please could you share your
> kernel .config?
>
Kernel config has been updated in the bugzilla:
https://bugzilla.kernel.org/show_bug.cgi?id=219592
> Also, you say that this is in QEMU running on an IA64 host. Is that
> true, or did you mean x86_64 host? Are you using OVMF or SeaBIOS as the
> QEMU firmware?
>
You're right, it's an x86_64 host; I mis-selected it in the bugzilla.
I'm using OVMF as the QEMU firmware.
> In the short term, I think that just reverting the 'offending' commit
> should be OK. I'd *prefer* not to leave the page RWX for the whole time
> the image is loaded, but that's how it's been on i386 forever anyway.
Your latest patch
https://lore.kernel.org/kexec/9c68688625f409104b16164da30aa6d3eb494e5d.camel@infradead.org/
fixes this issue.
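
(For reference, a minimal sketch of the "ROX instead of RWX" approach
named in the Subject. The helper names and the trampoline copy below are
illustrative, not the actual patch; set_memory_rox(), set_memory_nx() and
set_memory_rw() are real <linux/set_memory.h> helpers.)

    #include <linux/set_memory.h>
    #include <linux/string.h>

    /*
     * Illustrative only: copy the relocation trampoline into its
     * control page while the mapping is still writable, then drop
     * write permission so the page stays read-only + executable
     * (ROX) rather than RWX for the whole time the image is loaded.
     */
    static int kexec_seal_control_page(void *control_page,
                                       const void *trampoline, size_t len)
    {
            memcpy(control_page, trampoline, len);

            /* RW -> ROX for the lifetime of the loaded image. */
            return set_memory_rox((unsigned long)control_page, 1);
    }

    /* On unload, make the page NX and writable again before freeing. */
    static void kexec_unseal_control_page(void *control_page)
    {
            set_memory_nx((unsigned long)control_page, 1);
            set_memory_rw((unsigned long)control_page, 1);
    }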