Message-ID: <20250116095422.61742-1-yuntao.wang@linux.dev>
Date: Thu, 16 Jan 2025 17:54:22 +0800
From: Yuntao Wang <yuntao.wang@...ux.dev>
To: bhe@...hat.com
Cc: akpm@...ux-foundation.org,
ebiederm@...ssion.com,
io@...icci.it,
kexec@...ts.infradead.org,
linux-kernel@...r.kernel.org,
linux-pm@...r.kernel.org,
pavel@....cz,
rafael@...nel.org,
regressions@...ts.linux.dev,
ytcoode@...il.com
Subject: Re: [REGRESSION] Kernel booted via kexec fails to resume from hibernation
On Wed, 15 Jan 2025 12:04:10 +0800, Baoquan He <bhe@...hat.com> wrote:
> On 01/14/25 at 02:16pm, Roberto Ricci wrote:
> > On 2025-01-13 Mon 22:28:48 +0100, Roberto Ricci wrote:
> > > I can reproduce this with kernel 6.13-rc7 in a qemu x86_64 virtual machine
> > > running Void Linux, with the following commands:
> > >
> > > ```
> > > # kexec -l /boot/vmlinuz-6.13.0-rc7 --initrd=/boot/initramfs-6.13.0-rc7 --reuse-cmdline
> > > # reboot
> > > # printf reboot >/sys/power/disk
> > > # printf disk >/sys/power/state
> > > ```
> >
> > Looks like it's the kernel performing the kexec which causes the issue,
> > not the target one. E.g.: kexec-ing 6.7 from 6.13-rc7 makes resume from
> > hibernation fail; but if I kexec 6.13-rc7 from 6.7, then it works fine.
>
> I tried the latest mainline kernel with your command sequence above,
> but I didn't see the problem you reported. Can you try kexec from
> 6.7 to 6.7 (or something similar), and bisect to find the culprit
> commit?
>
> As for the commit below, it does not seem to be a suspect.
> 18d565ea95fe ("kexec_file: fix incorrect temp_start value in locate_mem_hole_top_down()")
>
> If possible, can you revert the two commits below together and give it
> a try? I am not sure whether they caused the problem.
>
> 18d565ea95fe kexec_file: fix incorrect temp_start value in locate_mem_hole_top_down()
> 816d334afa85 kexec: modify the meaning of the end parameter in kimage_is_destination_range()
>
> Thanks
> Baoquan
I'm sorry that my two commits caused these weird issues.
Although I'm currently unable to identify the cause of these problems,
I'm REALLY curious why this happened.
I hope we can make some progress this time.
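For reference, the revert Baoquan suggests amounts to backing out both commits, newest first, before rebuilding. A minimal sketch of the revert mechanics in a throwaway repository (in a real run this would happen in a kernel checkout against the hashes 816d334afa85 and 18d565ea95fe; the file names and commit messages here are placeholders):

```shell
# Illustrative sketch in a scratch repo; against a kernel tree it would be:
#   git revert --no-edit 816d334afa85 18d565ea95fe
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
export GIT_AUTHOR_NAME=tester GIT_AUTHOR_EMAIL=tester@example.com
export GIT_COMMITTER_NAME=tester GIT_COMMITTER_EMAIL=tester@example.com
echo base > state; git add state; git commit -qm base
echo change1 > state; git commit -qam change1  # stands in for 18d565ea95fe
echo change2 > state; git commit -qam change2  # stands in for 816d334afa85
# Revert the newer commit first so each revert applies cleanly:
git revert --no-edit HEAD HEAD~1 >/dev/null
cat state  # prints "base": the tree is back to its pre-change contents
```

Reverting in newest-first order matters when the two commits touch the same code, as these do: reverting the older one first can fail to apply against the newer commit's changes.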