Message-ID: <ZDYGpBRENQ6NDo0G@chenyu5-mobl1>
Date: Wed, 12 Apr 2023 09:17:24 +0800
From: Chen Yu <yu.c.chen@...el.com>
To: "Rafael J. Wysocki" <rafael@...nel.org>
CC: Len Brown <len.brown@...el.com>, Ye Bin <yebin10@...wei.com>,
"Pavankumar Kondeti" <quic_pkondeti@...cinc.com>,
<linux-pm@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
Yifan Li <yifan2.li@...el.com>
Subject: Re: [PATCH v2 2/2] PM: hibernate: Do not get block device
exclusively in test_resume mode

On 2023-04-11 at 18:21:36 +0200, Rafael J. Wysocki wrote:
> On Tue, Apr 11, 2023 at 6:23 AM Chen Yu <yu.c.chen@...el.com> wrote:
> >
> > The system refused to do a test_resume because it found that the
> > swap device has already been taken by someone else. Specificly,
>
> "Specifically" I suppose.
>
Yes, will fix it.
> > the swsusp_check()->blkdev_get_by_dev(FMODE_EXCL) is supposed to
> > do this check.
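
(For reference, the check that fails here is the exclusive open in
swsusp_check(). A simplified sketch of the pre-patch code in
kernel/power/swap.c, not the verbatim source:

	hib_resume_bdev = blkdev_get_by_dev(swsusp_resume_device,
					    FMODE_READ | FMODE_EXCL, &holder);
	if (IS_ERR(hib_resume_bdev))
		return PTR_ERR(hib_resume_bdev);

blkdev_get_by_dev() with FMODE_EXCL returns -EBUSY when another holder,
such as a mounted filesystem, has already claimed the device exclusively,
which is the "Image not found (code -16)" seen in the log below.)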
> >
> > Steps to reproduce:
> > dd if=/dev/zero of=/swapfile bs=$(cat /proc/meminfo |
> > awk '/MemTotal/ {print $2}') count=1024 conv=notrunc
> > mkswap /swapfile
> > swapon /swapfile
> > swap-offset /swapfile
> > echo 34816 > /sys/power/resume_offset
> > echo test_resume > /sys/power/disk
> > echo disk > /sys/power/state
> >
> > PM: Using 3 thread(s) for compression
> > PM: Compressing and saving image data (293150 pages)...
> > PM: Image saving progress: 0%
> > PM: Image saving progress: 10%
> > ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
> > ata1.00: configured for UDMA/100
> > ata2: SATA link down (SStatus 0 SControl 300)
> > ata5: SATA link down (SStatus 0 SControl 300)
> > ata6: SATA link down (SStatus 0 SControl 300)
> > ata3: SATA link down (SStatus 0 SControl 300)
> > ata4: SATA link down (SStatus 0 SControl 300)
> > PM: Image saving progress: 20%
> > PM: Image saving progress: 30%
> > PM: Image saving progress: 40%
> > PM: Image saving progress: 50%
> > pcieport 0000:00:02.5: pciehp: Slot(0-5): No device found
> > PM: Image saving progress: 60%
> > PM: Image saving progress: 70%
> > PM: Image saving progress: 80%
> > PM: Image saving progress: 90%
> > PM: Image saving done
> > PM: hibernation: Wrote 1172600 kbytes in 2.70 seconds (434.29 MB/s)
> > PM: S|
> > PM: hibernation: Basic memory bitmaps freed
> > PM: Image not found (code -16)
> >
> > This is because when using the swapfile as the hibernation storage,
> > the block device where the swapfile is located has already been mounted
> > by the OS distribution (usually been mounted as the rootfs). This is not
>
> "usually mounted"
>
OK, will fix it.
> > an issue for normal hibernation, because software_resume()->swsusp_check()
> > happens before the block device (rootfs) is mounted. But it is a problem
> > for test_resume mode, because by the time test_resume runs, the block
> > device has already been mounted.
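
To make the ordering explicit, a rough sketch of the two call paths:

	/*
	 * normal resume:  boot -> software_resume() -> swsusp_check()
	 *                 rootfs is not mounted yet, so the exclusive
	 *                 open (FMODE_EXCL) succeeds
	 *
	 * test_resume:    hibernate() writes the image, then invokes
	 *                 swsusp_check() itself, long after rootfs was
	 *                 mounted, so the exclusive open fails (-EBUSY)
	 */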
> >
> > Thus remove FMODE_EXCL for test_resume mode. This should not be a
> > problem, because at the test_resume stage the processes have already
> > been frozen, so the race condition described in
> > commit 39fbef4b0f77 ("PM: hibernate: Get block device exclusively in swsusp_check()")
> > is unlikely to happen.
> >
> > Fixes: 39fbef4b0f77 ("PM: hibernate: Get block device exclusively in swsusp_check()")
> > Reported-by: Yifan Li <yifan2.li@...el.com>
> > Suggested-by: Pavankumar Kondeti <quic_pkondeti@...cinc.com>
> > Signed-off-by: Chen Yu <yu.c.chen@...el.com>
> > ---
> >  kernel/power/hibernate.c | 5 +++--
> >  kernel/power/swap.c      | 5 +++--
> >  2 files changed, 6 insertions(+), 4 deletions(-)
> >
> > diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
> > index aa551b093c3f..defc2257b052 100644
> > --- a/kernel/power/hibernate.c
> > +++ b/kernel/power/hibernate.c
> > @@ -688,18 +688,19 @@ static int load_image_and_restore(void)
> >  {
> >  	int error;
> >  	unsigned int flags;
> > +	fmode_t mode = snapshot_test ? FMODE_READ : (FMODE_READ | FMODE_EXCL);
>
> fmode_t mode = FMODE_READ;
>
> if (!snapshot_test)
>         mode |= FMODE_EXCL;
>
> pretty please, and analogously below.
>
OK, will fix it in next version.
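
For the swap.c side I plan something like the following (a sketch only;
since snapshot_test is static to kernel/power/hibernate.c, the flag may
need to be passed into swsusp_check() in the final version):

	fmode_t mode = FMODE_READ;

	if (!snapshot_test)
		mode |= FMODE_EXCL;

	hib_resume_bdev = blkdev_get_by_dev(swsusp_resume_device,
					    mode, &holder);
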
thanks,
Chenyu