Message-ID: <CAJZ5v0gMZ1mPOJ697buOmCWxj8TkJmP3bazGQQOXb5tVvymJdQ@mail.gmail.com>
Date: Fri, 15 Jul 2022 19:46:15 +0200
From: "Rafael J. Wysocki" <rafael@...nel.org>
To: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
Cc: Greg KH <gregkh@...uxfoundation.org>,
Oliver Neukum <oneukum@...e.com>,
Wedson Almeida Filho <wedsonaf@...gle.com>,
"Rafael J. Wysocki" <rjw@...k.pl>,
Arjan van de Ven <arjan@...ux.intel.com>,
Len Brown <len.brown@...el.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Linux PM <linux-pm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 4/4] PM: hibernate: don't set PF_FREEZER_SKIP flag when manipulating /dev/snapshot
On Sun, Jul 10, 2022 at 4:26 AM Tetsuo Handa
<penguin-kernel@...ove.sakura.ne.jp> wrote:
>
> Since khungtaskd skips threads that have the PF_FREEZER_SKIP flag set,
> we currently cannot report unbounded uninterruptible sleep when something
> goes wrong while manipulating the /dev/snapshot interface.
>
> Change snapshot_{open,read,write}() to use mutex_lock_killable() and
> snapshot_release() to use mutex_lock(), so that the PF_FREEZER_SKIP flag
> is no longer set and khungtaskd can report unbounded uninterruptible
> sleep.
>
> Since /dev/snapshot is exclusive due to hibernate_acquire(), we could
> use mutex_trylock() in snapshot_{open,read,write}() as snapshot_ioctl()
> does. But until we confirm that this patch does not break anything,
> let's stay with mutex_lock_killable().
>
> Signed-off-by: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
> Cc: "Rafael J. Wysocki" <rafael@...nel.org>
> Cc: Len Brown <len.brown@...el.com>
> Cc: Pavel Machek <pavel@....cz>
> ---
> This patch is only compile tested. Need to review whether anything
> depends on the PF_FREEZER_SKIP flag being set.
Yes, it does. The process operating the snapshot device cannot be
frozen, which is why it sets PF_FREEZER_SKIP in the first place.
>
> kernel/power/user.c | 19 +++++++++++--------
> 1 file changed, 11 insertions(+), 8 deletions(-)
>
> diff --git a/kernel/power/user.c b/kernel/power/user.c
> index 32dd5a855e8c..9936efa07022 100644
> --- a/kernel/power/user.c
> +++ b/kernel/power/user.c
> @@ -68,7 +68,8 @@ static int snapshot_open(struct inode *inode, struct file *filp)
> break;
> }
>
> - lock_system_sleep();
> + if (mutex_lock_killable(&system_transition_mutex))
> + return -EINTR;
>
> if (!hibernate_acquire()) {
> error = -EBUSY;
> @@ -102,7 +103,7 @@ static int snapshot_open(struct inode *inode, struct file *filp)
> data->dev = 0;
>
> Unlock:
> - unlock_system_sleep();
> + mutex_unlock(&system_transition_mutex);
>
> return error;
> }
> @@ -111,7 +112,7 @@ static int snapshot_release(struct inode *inode, struct file *filp)
> {
> struct snapshot_data *data;
>
> - lock_system_sleep();
> + mutex_lock(&system_transition_mutex);
>
> swsusp_free();
> data = filp->private_data;
> @@ -128,7 +129,7 @@ static int snapshot_release(struct inode *inode, struct file *filp)
> PM_POST_HIBERNATION : PM_POST_RESTORE);
> hibernate_release();
>
> - unlock_system_sleep();
> + mutex_unlock(&system_transition_mutex);
>
> return 0;
> }
> @@ -140,7 +141,8 @@ static ssize_t snapshot_read(struct file *filp, char __user *buf,
> ssize_t res;
> loff_t pg_offp = *offp & ~PAGE_MASK;
>
> - lock_system_sleep();
> + if (mutex_lock_killable(&system_transition_mutex))
> + return -EINTR;
>
> data = filp->private_data;
> if (!data->ready) {
> @@ -161,7 +163,7 @@ static ssize_t snapshot_read(struct file *filp, char __user *buf,
> *offp += res;
>
> Unlock:
> - unlock_system_sleep();
> + mutex_unlock(&system_transition_mutex);
>
> return res;
> }
> @@ -173,7 +175,8 @@ static ssize_t snapshot_write(struct file *filp, const char __user *buf,
> ssize_t res;
> loff_t pg_offp = *offp & ~PAGE_MASK;
>
> - lock_system_sleep();
> + if (mutex_lock_killable(&system_transition_mutex))
> + return -EINTR;
>
> data = filp->private_data;
>
> @@ -195,7 +198,7 @@ static ssize_t snapshot_write(struct file *filp, const char __user *buf,
> if (res > 0)
> *offp += res;
> unlock:
> - unlock_system_sleep();
> + mutex_unlock(&system_transition_mutex);
>
> return res;
> }
> --
> 2.18.4
>