Date:	Mon, 27 Jul 2015 14:29:28 +0200
From:	Oliver Neukum <oneukum@...e.com>
To:	Eugene Shatokhin <eugene.shatokhin@...alab.ru>
Cc:	LKML <linux-kernel@...r.kernel.org>, linux-usb@...r.kernel.org,
	netdev@...r.kernel.org
Subject: Re: Several races in "usbnet" module (kernel 4.1.x)

On Fri, 2015-07-24 at 20:38 +0300, Eugene Shatokhin wrote:
> 21.07.2015 15:04, Oliver Neukum wrote:

> > your analysis is correct and it looks like, in addition to your proposed
> > fix, locking needs to be simplified and a common lock taken.
> > Suggestions?
> 
> Just an idea, I haven't tested it.
> 
> How about moving the operations on dev->done under &list->lock in 
> defer_bh, while keeping dev->done.lock too, and changing 

Why keep dev->done.lock?
Does it make sense at all?

> usbnet_terminate_urbs() as described below?
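
For context on the dev->done.lock question above: dev->done is also drained
from the driver's tasklet, roughly as sketched below (from memory of 4.1's
drivers/net/usb/usbnet.c, not verbatim; the wrapper function name is only for
the sketch). Note that skb_dequeue() takes the queue lock internally.

static void usbnet_done_drain_sketch(struct usbnet *dev)
{
	struct sk_buff *skb;
	struct skb_data *entry;

	/* Sketch of the dev->done consumer in usbnet_bh(): each
	 * skb_dequeue() call takes dev->done.lock internally. */
	while ((skb = skb_dequeue(&dev->done))) {
		entry = (struct skb_data *) skb->cb;
		switch (entry->state) {
		case rx_done:
			entry->state = rx_cleanup;
			rx_process(dev, skb);
			continue;
		case tx_done:
		case rx_cleanup:
			usb_free_urb(entry->urb);
			dev_kfree_skb(skb);
			continue;
		default:
			netdev_dbg(dev->net, "bogus skb state %d\n",
				   entry->state);
		}
	}
}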
> 
> Like this:
> @@ -428,12 +428,12 @@ static enum skb_state defer_bh(struct usbnet *dev, struct sk_buff *skb,
>   	old_state = entry->state;
>   	entry->state = state;
>   	__skb_unlink(skb, list);
> -	spin_unlock(&list->lock);
>   	spin_lock(&dev->done.lock);
>   	__skb_queue_tail(&dev->done, skb);
>   	if (dev->done.qlen == 1)
>   		tasklet_schedule(&dev->bh);
> -	spin_unlock_irqrestore(&dev->done.lock, flags);
> +	spin_unlock(&dev->done.lock);
> +	spin_unlock_irqrestore(&list->lock, flags);
>   	return old_state;
>   }
> -------------------
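
For reference, with that hunk applied defer_bh() would read roughly as below
(a sketch against the 4.1 usbnet source from memory, untested). The point is
that list->lock is now held across the move of the skb onto dev->done, with
dev->done.lock nested inside it.

static enum skb_state defer_bh(struct usbnet *dev, struct sk_buff *skb,
		struct sk_buff_head *list, enum skb_state state)
{
	unsigned long flags;
	enum skb_state old_state;
	struct skb_data *entry = (struct skb_data *) skb->cb;

	spin_lock_irqsave(&list->lock, flags);
	old_state = entry->state;
	entry->state = state;
	__skb_unlink(skb, list);

	/* list->lock stays held while the skb moves to dev->done, so
	 * usbnet_terminate_urbs() cannot observe both queues empty while
	 * an skb is in flight between them. */
	spin_lock(&dev->done.lock);
	__skb_queue_tail(&dev->done, skb);
	if (dev->done.qlen == 1)
		tasklet_schedule(&dev->bh);
	spin_unlock(&dev->done.lock);
	spin_unlock_irqrestore(&list->lock, flags);

	return old_state;
}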
> 
> usbnet_terminate_urbs() can then be changed as follows:
> 
> @@ -749,6 +749,20 @@ EXPORT_SYMBOL_GPL(usbnet_unlink_rx_urbs);
> 
>  
> /*-------------------------------------------------------------------------*/
> 
> +static void wait_skb_queue_empty(struct sk_buff_head *q)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&q->lock, flags);
> +	while (!skb_queue_empty(q)) {
> +		spin_unlock_irqrestore(&q->lock, flags);
> +		schedule_timeout(msecs_to_jiffies(UNLINK_TIMEOUT_MS));
> +		set_current_state(TASK_UNINTERRUPTIBLE);

I suppose you want to invert those lines

> +		spin_lock_irqsave(&q->lock, flags);
> +	}
> +	spin_unlock_irqrestore(&q->lock, flags);
> +}
> +

Your changes make sense, but it looks to me as if a lock would
become totally redundant.
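
Concretely, with the two lines inverted as suggested above, the helper would
look like this (sketch only, untested). Setting the task state before
schedule_timeout() is the usual idiom and also leaves the task in
TASK_RUNNING when the loop exits.

static void wait_skb_queue_empty(struct sk_buff_head *q)
{
	unsigned long flags;

	spin_lock_irqsave(&q->lock, flags);
	while (!skb_queue_empty(q)) {
		spin_unlock_irqrestore(&q->lock, flags);
		/* Mark ourselves sleeping before the timed sleep, not after. */
		set_current_state(TASK_UNINTERRUPTIBLE);
		schedule_timeout(msecs_to_jiffies(UNLINK_TIMEOUT_MS));
		spin_lock_irqsave(&q->lock, flags);
	}
	spin_unlock_irqrestore(&q->lock, flags);
}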

	Regards
		Oliver


