Message-ID: <7727de24-6867-1a37-3fed-1bdb1fd6084f@ce.jp.nec.com>
Date: Fri, 10 Jun 2016 04:31:52 +0000
From: Junichi Nomura <j-nomura@...jp.nec.com>
To: "minyard@....org" <minyard@....org>,
"openipmi-developer@...ts.sourceforge.net"
<openipmi-developer@...ts.sourceforge.net>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"cminyard@...sta.com" <cminyard@...sta.com>
Subject: [PATCH] ipmi: Remove smi_msg from waiting_rcv_msgs list before
handle_one_recv_msg()

Commit 7ea0ed2b5be8 ("ipmi: Make the message handler easier to use for
SMI interfaces") changed handle_new_recv_msgs() to call handle_one_recv_msg()
for a smi_msg while the smi_msg is still connected to the waiting_rcv_msgs
list. That could lead to the following list corruption problems:

1) A low-level function treats smi_msg as not connected to any list

   handle_one_recv_msg() could end up calling smi_send(), which
   assumes the msg is not connected to a list.

   For example, the following sequence could corrupt the list by
   doing list_add_tail() for an entry that is still connected to
   another list:

     handle_new_recv_msgs()
       msg = list_entry(waiting_rcv_msgs)
       handle_one_recv_msg(msg)
         handle_ipmb_get_msg_cmd(msg)
           smi_send(msg)
             spin_lock(xmit_msgs_lock)
             list_add_tail(msg)
             spin_unlock(xmit_msgs_lock)
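
For illustration only, this kind of corruption can be reproduced in
userspace with a minimal mock of the list helpers (the structure and the
list_add_tail() below are simplified re-implementations, not the ones
from <linux/list.h>; 'msg' stands in for smi_msg->link):

/*
 * Sketch: list_add_tail() on an entry that is still linked into
 * another list leaves the first list pointing at a node whose
 * next/prev now belong to the second list.
 */
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }

static void list_add_tail(struct list_head *new, struct list_head *head)
{
        /* Insert 'new' between head->prev and head. */
        new->prev = head->prev;
        new->next = head;
        head->prev->next = new;
        head->prev = new;
}

int main(void)
{
        struct list_head waiting = LIST_HEAD_INIT(waiting); /* waiting_rcv_msgs */
        struct list_head xmit    = LIST_HEAD_INIT(xmit);    /* xmit_msgs        */
        struct list_head msg;

        list_add_tail(&msg, &waiting); /* msg is queued for receive handling */
        list_add_tail(&msg, &xmit);    /* bug pattern: re-queued while still linked */

        /*
         * 'waiting' still points at msg as its first entry, but walking
         * onward from msg now escapes into 'xmit' and never returns to
         * 'waiting', so iteration over 'waiting' is broken.
         */
        printf("waiting.next == &msg    : %d\n", waiting.next == &msg);
        printf("msg.next     == &waiting: %d\n", msg.next == &waiting);
        printf("msg.next     == &xmit   : %d\n", msg.next == &xmit);
        return 0;
}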

2) A race between multiple handle_new_recv_msgs() instances

   handle_new_recv_msgs() temporarily releases waiting_rcv_msgs_lock
   before calling handle_one_recv_msg(), then retakes the lock and does
   list_del() on the entry.

   If another instance of handle_new_recv_msgs() runs during the window
   shown below, list_del() will be done twice for the same smi_msg:

     handle_new_recv_msgs()
       spin_lock(waiting_rcv_msgs_lock)
       msg = list_entry(waiting_rcv_msgs)
       spin_unlock(waiting_rcv_msgs_lock)
          |
          | handle_one_recv_msg(msg)   <-- window: another instance can
          |                                pick up and delete the same msg
       spin_lock(waiting_rcv_msgs_lock)
       list_del(msg)
       spin_unlock(waiting_rcv_msgs_lock)
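
Again for illustration only, here is one way a double list_del() breaks
things, using the same kind of userspace mock (the real kernel list_del()
poisons next/prev, so the second deletion would normally fault on the
LIST_POISON values; the mock below skips poisoning to show the silent
corruption variant instead):

/*
 * Sketch: deleting the same entry twice splices out whatever was
 * queued in between, because the stale next/prev pointers of the
 * already-removed entry are reused for the second unlink.
 */
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }

static void list_add_tail(struct list_head *new, struct list_head *head)
{
        new->prev = head->prev;
        new->next = head;
        head->prev->next = new;
        head->prev = new;
}

static void list_del(struct list_head *entry)
{
        /* Unlink; the kernel would also poison entry->next/prev here. */
        entry->prev->next = entry->next;
        entry->next->prev = entry->prev;
}

static int list_empty(const struct list_head *head)
{
        return head->next == head;
}

int main(void)
{
        struct list_head waiting = LIST_HEAD_INIT(waiting);
        struct list_head msg, other;

        list_add_tail(&msg, &waiting);

        list_del(&msg);                  /* first instance removes msg        */
        list_add_tail(&other, &waiting); /* another entry gets queued         */
        list_del(&msg);                  /* second instance removes msg again */

        /* The second del spliced 'other' out behind our back. */
        printf("waiting empty: %d ('other' was never removed)\n",
               list_empty(&waiting));
        return 0;
}
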
Fixes: 7ea0ed2b5be8 ("ipmi: Make the message handler easier to use for SMI interfaces")
Signed-off-by: Jun'ichi Nomura <j-nomura@...jp.nec.com>
---
diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
index 94fb407..94e4a88 100644
--- a/drivers/char/ipmi/ipmi_msghandler.c
+++ b/drivers/char/ipmi/ipmi_msghandler.c
@@ -3820,6 +3820,7 @@ static void handle_new_recv_msgs(ipmi_smi_t intf)
 	while (!list_empty(&intf->waiting_rcv_msgs)) {
 		smi_msg = list_entry(intf->waiting_rcv_msgs.next,
 				     struct ipmi_smi_msg, link);
+		list_del(&smi_msg->link);
 		if (!run_to_completion)
 			spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock,
 					       flags);
@@ -3831,9 +3832,9 @@ static void handle_new_recv_msgs(ipmi_smi_t intf)
 			 * To preserve message order, quit if we
 			 * can't handle a message.
 			 */
+			list_add(&smi_msg->link, &intf->waiting_rcv_msgs);
 			break;
 		} else {
-			list_del(&smi_msg->link);
 			if (rv == 0)
 				/* Message handled */
 				ipmi_free_smi_msg(smi_msg);
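
For reference, the dequeue-then-handle pattern the patch switches to can
be sketched in userspace as below (mock list helpers again, not the
driver code; struct msg and handle_one() are made up for the example):

/*
 * Sketch: take the entry off the list before handling it, and put it
 * back at the head only when the handler asks for a retry, so message
 * order is preserved and the entry is never on two lists at once.
 */
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }

static void list_add(struct list_head *new, struct list_head *head)
{
        /* Insert right after 'head', i.e. at the head of the list. */
        new->next = head->next;
        new->prev = head;
        head->next->prev = new;
        head->next = new;
}

static void list_del(struct list_head *entry)
{
        entry->prev->next = entry->next;
        entry->next->prev = entry->prev;
}

static int list_empty(const struct list_head *head)
{
        return head->next == head;
}

/* 'link' is the first member, so a cast stands in for container_of(). */
struct msg { struct list_head link; int id; };

/* Pretend message 2 cannot be handled yet (> 0 means "retry later"). */
static int handle_one(struct msg *m)
{
        return m->id == 2 ? 1 : 0;
}

int main(void)
{
        struct list_head waiting = LIST_HEAD_INIT(waiting);
        struct msg msgs[3] = { { .id = 1 }, { .id = 2 }, { .id = 3 } };
        int i;

        /* Queue messages 1, 2, 3 (added in reverse; list_add inserts at head). */
        for (i = 2; i >= 0; i--)
                list_add(&msgs[i].link, &waiting);

        while (!list_empty(&waiting)) {
                struct msg *m = (struct msg *)waiting.next;

                list_del(&m->link); /* off the list before handling */
                if (handle_one(m) > 0) {
                        /* Requeue at the head and stop; order is kept. */
                        list_add(&m->link, &waiting);
                        break;
                }
                printf("handled message %d\n", m->id);
        }

        printf("next pending message: %d\n", ((struct msg *)waiting.next)->id);
        return 0;
}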