Message-Id: <20160725203523.963832287@linuxfoundation.org>
Date: Mon, 25 Jul 2016 13:55:17 -0700
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Junichi Nomura <j-nomura@...jp.nec.com>,
Corey Minyard <cminyard@...sta.com>,
Ye Feng <yefeng.yl@...baba-inc.com>
Subject: [PATCH 4.4 062/146] ipmi: Remove smi_msg from waiting_rcv_msgs list before handle_one_recv_msg()
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Junichi Nomura <j-nomura@...jp.nec.com>
commit ae4ea9a2460c7fee2ae8feeb4dfe96f5f6c3e562 upstream.
Commit 7ea0ed2b5be8 ("ipmi: Make the message handler easier to use for
SMI interfaces") changed handle_new_recv_msgs() to call handle_one_recv_msg()
for a smi_msg while the smi_msg is still connected to the waiting_rcv_msgs
list. That could lead to the following list corruption problems:
1) A low-level function treats the smi_msg as not connected to a list

handle_one_recv_msg() could end up calling smi_send(), which assumes
the msg is not connected to any list.

For example, the following sequence could corrupt the list by doing
list_add_tail() for an entry that is still connected to another list
(a minimal userspace sketch follows the call chain):
  handle_new_recv_msgs()
    msg = list_entry(waiting_rcv_msgs)
    handle_one_recv_msg(msg)
      handle_ipmb_get_msg_cmd(msg)
        smi_send(msg)
          spin_lock(xmit_msgs_lock)
          list_add_tail(msg)
          spin_unlock(xmit_msgs_lock)
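
To make the pointer damage concrete, here is a minimal userspace sketch
(my illustration, not the kernel source) whose struct and helpers mimic
include/linux/list.h, performing the same double list_add_tail():

#include <stdio.h>

struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *h)
{
	h->next = h->prev = h;
}

/* Same pointer surgery as the kernel's __list_add(). */
static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

int main(void)
{
	struct list_head waiting_rcv_msgs, xmit_msgs, msg;

	INIT_LIST_HEAD(&waiting_rcv_msgs);
	INIT_LIST_HEAD(&xmit_msgs);

	/* msg is queued on waiting_rcv_msgs... */
	list_add_tail(&msg, &waiting_rcv_msgs);
	/* ...and, as in the buggy path, added to xmit_msgs while still
	 * linked.  waiting_rcv_msgs still points at msg, but msg's own
	 * next/prev now belong to xmit_msgs. */
	list_add_tail(&msg, &xmit_msgs);

	printf("waiting_rcv_msgs.next == &msg?  %s\n",
	       waiting_rcv_msgs.next == &msg ? "yes" : "no");
	printf("msg.next == &waiting_rcv_msgs? %s (points into xmit_msgs)\n",
	       msg.next == &waiting_rcv_msgs ? "yes" : "no");
	return 0;
}

A walk of waiting_rcv_msgs now wanders into xmit_msgs and never returns
to its own head: exactly the corruption described above.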
2) A race between multiple handle_new_recv_msgs() instances

handle_new_recv_msgs() releases waiting_rcv_msgs_lock before calling
handle_one_recv_msg(), then retakes the lock and calls list_del() on the
entry. If another context enters handle_new_recv_msgs() during the window
shown below, list_del() will be done twice for the same smi_msg (see the
sketch after the diagram):
  handle_new_recv_msgs()
    spin_lock(waiting_rcv_msgs_lock)
    msg = list_entry(waiting_rcv_msgs)
    spin_unlock(waiting_rcv_msgs_lock)
      |
      | handle_one_recv_msg(msg)
      |
    spin_lock(waiting_rcv_msgs_lock)
    list_del(msg)
    spin_unlock(waiting_rcv_msgs_lock)
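
Again as a userspace sketch (helpers mimic include/linux/list.h; the
poison values are illustrative), the danger of the second list_del() is
visible from the kernel's pointer poisoning: the first call leaves
msg.next/msg.prev pointing at poison, so a second call dereferences it:

#include <stdio.h>

#define LIST_POISON1 ((struct list_head *)0x100)
#define LIST_POISON2 ((struct list_head *)0x122)

struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *h)
{
	h->next = h->prev = h;
}

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

static void list_del(struct list_head *entry)
{
	/* Kernel __list_del() plus the poisoning done by list_del(). */
	entry->next->prev = entry->prev;  /* 2nd call writes via poison */
	entry->prev->next = entry->next;
	entry->next = LIST_POISON1;
	entry->prev = LIST_POISON2;
}

int main(void)
{
	struct list_head waiting_rcv_msgs, msg;

	INIT_LIST_HEAD(&waiting_rcv_msgs);
	list_add_tail(&msg, &waiting_rcv_msgs);

	list_del(&msg);	/* context A, after handle_one_recv_msg() */
	printf("msg.next after first list_del: %p (poison)\n",
	       (void *)msg.next);
	/* list_del(&msg);   context B's duplicate delete would now
	 *                   dereference the poison pointers -> oops. */
	return 0;
}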
Fixes: 7ea0ed2b5be8 ("ipmi: Make the message handler easier to use for SMI interfaces")
Signed-off-by: Jun'ichi Nomura <j-nomura@...jp.nec.com>
[Added a comment to describe why this works.]
Signed-off-by: Corey Minyard <cminyard@...sta.com>
Tested-by: Ye Feng <yefeng.yl@...baba-inc.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/char/ipmi/ipmi_msghandler.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
--- a/drivers/char/ipmi/ipmi_msghandler.c
+++ b/drivers/char/ipmi/ipmi_msghandler.c
@@ -3819,6 +3819,7 @@ static void handle_new_recv_msgs(ipmi_sm
 	while (!list_empty(&intf->waiting_rcv_msgs)) {
 		smi_msg = list_entry(intf->waiting_rcv_msgs.next,
 				     struct ipmi_smi_msg, link);
+		list_del(&smi_msg->link);
 		if (!run_to_completion)
 			spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock,
 					       flags);
@@ -3828,11 +3829,14 @@ static void handle_new_recv_msgs(ipmi_sm
 		if (rv > 0) {
 			/*
 			 * To preserve message order, quit if we
-			 * can't handle a message.
+			 * can't handle a message.  Add the message
+			 * back at the head, this is safe because this
+			 * tasklet is the only thing that pulls the
+			 * messages.
 			 */
+			list_add(&smi_msg->link, &intf->waiting_rcv_msgs);
 			break;
 		} else {
-			list_del(&smi_msg->link);
 			if (rv == 0)
 				/* Message handled */
 				ipmi_free_smi_msg(smi_msg);
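
For reference, here is how the loop reads with the patch applied. This is
a condensed sketch: the lines between the two hunks (the unlocked
handle_one_recv_msg() call and the re-lock) are reconstructed from context
and are not shown in the diff above, so they may differ in detail from the
actual 4.4 source:

	while (!list_empty(&intf->waiting_rcv_msgs)) {
		smi_msg = list_entry(intf->waiting_rcv_msgs.next,
				     struct ipmi_smi_msg, link);
		list_del(&smi_msg->link);	/* detach before handling */
		if (!run_to_completion)
			spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock,
					       flags);
		rv = handle_one_recv_msg(intf, smi_msg);
		if (!run_to_completion)
			spin_lock_irqsave(&intf->waiting_rcv_msgs_lock, flags);
		if (rv > 0) {
			/*
			 * To preserve message order, quit if we
			 * can't handle a message.  Add the message
			 * back at the head, this is safe because this
			 * tasklet is the only thing that pulls the
			 * messages.
			 */
			list_add(&smi_msg->link, &intf->waiting_rcv_msgs);
			break;
		} else {
			if (rv == 0)
				/* Message handled */
				ipmi_free_smi_msg(smi_msg);
			/* rv < 0: the message is not freed here */
		}
	}

The key invariant: the smi_msg is off waiting_rcv_msgs before
handle_one_recv_msg() runs, so smi_send() sees an unlinked entry (fixing
problem 1), and putting it back at the head is race-free because this
tasklet is the only consumer of the list (fixing problem 2).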