Date:	Thu, 9 Jun 2016 23:36:55 -0500
From:	Corey Minyard <minyard@....org>
To:	Junichi Nomura <j-nomura@...jp.nec.com>,
	"openipmi-developer@...ts.sourceforge.net" 
	<openipmi-developer@...ts.sourceforge.net>
Cc:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"cminyard@...sta.com" <cminyard@...sta.com>
Subject: Re: [PATCH] ipmi: Remove smi_msg from waiting_rcv_msgs list before
 handle_one_recv_msg()

I actually just wrote this exact patch moments ago.  But you deserve
credit; I'll use yours :).

-corey

On 06/09/2016 11:31 PM, Junichi Nomura wrote:
> Commit 7ea0ed2b5be8 ("ipmi: Make the message handler easier to use for
> SMI interfaces") changed handle_new_recv_msgs() to call handle_one_recv_msg()
> for a smi_msg while the smi_msg is still connected to the waiting_rcv_msgs
> list. That can lead to the following list corruption problems:
>
> 1) A low-level function treats the smi_msg as not connected to any list
>
>    handle_one_recv_msg() could end up calling smi_send(), which
>    assumes the msg is not connected to any list.
>
>    For example, the following sequence could corrupt the list by doing
>    list_add_tail() on an entry that is still connected to another list.
>
>      handle_new_recv_msgs()
>        msg = list_entry(waiting_rcv_msgs)
>        handle_one_recv_msg(msg)
>          handle_ipmb_get_msg_cmd(msg)
>            smi_send(msg)
>              spin_lock(xmit_msgs_lock)
>              list_add_tail(msg)
>              spin_unlock(xmit_msgs_lock)
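>
>    As an illustration only, a simplified sketch of the generic helper
>    (paraphrased from include/linux/list.h, not part of this patch):
>
>      static inline void list_add_tail(struct list_head *new,
>                                       struct list_head *head)
>      {
>              new->prev = head->prev;  /* overwrites links that still  */
>              new->next = head;        /* point into waiting_rcv_msgs  */
>              head->prev->next = new;
>              head->prev = new;
>      }
>
>    The entry's old neighbours on waiting_rcv_msgs still reference it,
>    while its own next/prev now point into the xmit_msgs list, so the
>    two lists end up sharing a node and a later list_del() or list walk
>    touches pointers belonging to the wrong list.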
>
> 2) A race between multiple handle_new_recv_msgs() instances
>
>    handle_new_recv_msgs() temporarily releases waiting_rcv_msgs_lock
>    before calling handle_one_recv_msg(), then retakes the lock and does
>    list_del() on the message.
>
>    If another caller enters handle_new_recv_msgs() during the window shown
>    below, list_del() will be done twice on the same smi_msg.
>
>    handle_new_recv_msgs()
>      spin_lock(waiting_rcv_msgs_lock)
>      msg = list_entry(waiting_rcv_msgs)
>      spin_unlock(waiting_rcv_msgs_lock)
>    |
>    | handle_one_recv_msg(msg)
>    |
>      spin_lock(waiting_rcv_msgs_lock)
>      list_del(msg)
>      spin_unlock(waiting_rcv_msgs_lock)
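>
>    As an illustration only, a simplified list_del() (paraphrased from
>    include/linux/list.h, not part of this patch):
>
>      static inline void list_del(struct list_head *entry)
>      {
>              entry->prev->next = entry->next;
>              entry->next->prev = entry->prev;
>              entry->next = LIST_POISON1;  /* a second list_del() on the */
>              entry->prev = LIST_POISON2;  /* same entry dereferences    */
>      }                                    /* these poison pointers      */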
>
> Fixes: 7ea0ed2b5be8 ("ipmi: Make the message handler easier to use for SMI interfaces")
> Signed-off-by: Jun'ichi Nomura <j-nomura@...jp.nec.com>
>
> diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
> index 94fb407..94e4a88 100644
> --- a/drivers/char/ipmi/ipmi_msghandler.c
> +++ b/drivers/char/ipmi/ipmi_msghandler.c
> @@ -3820,6 +3820,7 @@ static void handle_new_recv_msgs(ipmi_smi_t intf)
>   	while (!list_empty(&intf->waiting_rcv_msgs)) {
>   		smi_msg = list_entry(intf->waiting_rcv_msgs.next,
>   				     struct ipmi_smi_msg, link);
> +		list_del(&smi_msg->link);
>   		if (!run_to_completion)
>   			spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock,
>   					       flags);
> @@ -3831,9 +3832,9 @@ static void handle_new_recv_msgs(ipmi_smi_t intf)
>   			 * To preserve message order, quit if we
>   			 * can't handle a message.
>   			 */
> +			list_add(&smi_msg->link, &intf->waiting_rcv_msgs);
>   			break;
>   		} else {
> -			list_del(&smi_msg->link);
>   			if (rv == 0)
>   				/* Message handled */
>   				ipmi_free_smi_msg(smi_msg);
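
For reference, with both hunks applied the receive loop unlinks each message
while still holding waiting_rcv_msgs_lock, processes it with the lock dropped,
and re-queues it at the head only when it cannot be handled yet.  A condensed
sketch of the resulting shape (paraphrased from the hunks and their context;
the run_to_completion checks around the lock calls are abbreviated):

	while (!list_empty(&intf->waiting_rcv_msgs)) {
		smi_msg = list_entry(intf->waiting_rcv_msgs.next,
				     struct ipmi_smi_msg, link);
		/* Unlink before dropping waiting_rcv_msgs_lock. */
		list_del(&smi_msg->link);

		spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock, flags);
		rv = handle_one_recv_msg(intf, smi_msg);
		spin_lock_irqsave(&intf->waiting_rcv_msgs_lock, flags);

		if (rv > 0) {
			/* Not handled yet: put it back at the head to
			 * preserve message order, and stop for now. */
			list_add(&smi_msg->link, &intf->waiting_rcv_msgs);
			break;
		}
		if (rv == 0)
			/* Message handled. */
			ipmi_free_smi_msg(smi_msg);
		/* rv < 0: fatal error; leave it unlinked, don't free. */
	}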
