Message-Id: <20211206145600.848736634@linuxfoundation.org>
Date: Mon, 6 Dec 2021 15:55:51 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
Ioanna Alifieraki <ioanna-maria.alifieraki@...onical.com>,
Corey Minyard <cminyard@...sta.com>
Subject: [PATCH 5.10 034/130] ipmi: Move remove_work to dedicated workqueue
From: Ioanna Alifieraki <ioanna-maria.alifieraki@...onical.com>
commit 1d49eb91e86e8c1c1614c72e3e958b6b7e2472a9 upstream.
Currently, when an ipmi_user is removed, the removal is deferred as a work
item on the system workqueue. Although this guarantees the free operation
occurs in non-atomic context, it can race with removal of the
ipmi_msghandler module (see [1]). If a remove_work item is scheduled and
the ipmi_msghandler module is removed shortly afterwards, the module can be
removed first, and when the work is then executed the system crashes with:
BUG: unable to handle page fault for address: ffffffffc05c3450
PF: supervisor instruction fetch in kernel mode
PF: error_code(0x0010) - not-present page
because the module's pages are gone. In cleanup_ipmi() there is no easy way
to detect whether any work items are still pending so that they could be
flushed before the module is removed. This patch creates a dedicated
workqueue and queues the remove_work items on it. When the module is
removed, the workqueue is drained as it is destroyed, avoiding the race.
[1] https://bugs.launchpad.net/bugs/1950666
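As an illustration of the pattern only (the names example_obj,
example_remove_wq and "example-remove-wq" below are invented for the sketch
and do not exist in the driver), a minimal self-contained module using the
same dedicated-workqueue scheme could look like this; the key point is that
destroy_workqueue() in the exit path drains any still-queued remove work
before the module text can disappear:

/*
 * Illustrative sketch, not ipmi_msghandler code.  Deferred frees are
 * queued on a module-private workqueue so that destroy_workqueue() in
 * the exit path drains them before the module is unloaded.
 */
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_remove_wq;

struct example_obj {
	struct work_struct remove_work;
};

static void example_remove_work_fn(struct work_struct *work)
{
	struct example_obj *obj = container_of(work, struct example_obj,
					       remove_work);

	/* Runs in task context, so freeing (or sleeping) is fine here. */
	kfree(obj);
}

static int __init example_init(void)
{
	struct example_obj *obj;

	example_remove_wq = create_singlethread_workqueue("example-remove-wq");
	if (!example_remove_wq)
		return -ENOMEM;

	/* Exercise the deferred-free path once, purely as a demo. */
	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
	if (!obj) {
		destroy_workqueue(example_remove_wq);
		return -ENOMEM;
	}
	INIT_WORK(&obj->remove_work, example_remove_work_fn);
	queue_work(example_remove_wq, &obj->remove_work);

	return 0;
}

static void __exit example_exit(void)
{
	/*
	 * destroy_workqueue() drains pending work first, so no queued
	 * remove_work can run after this module is gone.
	 */
	destroy_workqueue(example_remove_wq);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");

In the hunks below the same three pieces appear as the global
remove_work_wq, the queue_work() call in free_user(), and the
destroy_workqueue() call in cleanup_ipmi().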
Cc: stable@...r.kernel.org # 5.1
Fixes: 3b9a907223d7 ("ipmi: fix sleep-in-atomic in free_user at cleanup SRCU user->release_barrier")
Signed-off-by: Ioanna Alifieraki <ioanna-maria.alifieraki@...onical.com>
Message-Id: <20211115131645.25116-1-ioanna-maria.alifieraki@...onical.com>
Signed-off-by: Corey Minyard <cminyard@...sta.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/char/ipmi/ipmi_msghandler.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
--- a/drivers/char/ipmi/ipmi_msghandler.c
+++ b/drivers/char/ipmi/ipmi_msghandler.c
@@ -203,6 +203,8 @@ struct ipmi_user {
 	struct work_struct remove_work;
 };
 
+struct workqueue_struct *remove_work_wq;
+
 static struct ipmi_user *acquire_ipmi_user(struct ipmi_user *user, int *index)
 	__acquires(user->release_barrier)
 {
@@ -1272,7 +1274,7 @@ static void free_user(struct kref *ref)
 	struct ipmi_user *user = container_of(ref, struct ipmi_user, refcount);
 
 	/* SRCU cleanup must happen in task context. */
-	schedule_work(&user->remove_work);
+	queue_work(remove_work_wq, &user->remove_work);
 }
 
 static void _ipmi_destroy_user(struct ipmi_user *user)
@@ -5166,6 +5168,13 @@ static int ipmi_init_msghandler(void)
 
 	atomic_notifier_chain_register(&panic_notifier_list, &panic_block);
 
+	remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq");
+	if (!remove_work_wq) {
+		pr_err("unable to create ipmi-msghandler-remove-wq workqueue");
+		rv = -ENOMEM;
+		goto out;
+	}
+
 	initialized = true;
 
 out:
@@ -5191,6 +5200,8 @@ static void __exit cleanup_ipmi(void)
 	int count;
 
 	if (initialized) {
+		destroy_workqueue(remove_work_wq);
+
 		atomic_notifier_chain_unregister(&panic_notifier_list,
 						 &panic_block);
 