Date:	Fri, 27 Dec 2013 12:55:43 +0800
From:	Jason Wang <jasowang@...hat.com>
To:	Eric Dumazet <eric.dumazet@...il.com>,
	Michael Dalton <mwdalton@...gle.com>
CC:	"Michael S. Tsirkin" <mst@...hat.com>,
	"David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org,
	Eric Dumazet <edumazet@...gle.com>,
	Rusty Russell <rusty@...tcorp.com.au>,
	lf-virt <virtualization@...ts.linux-foundation.org>
Subject: Re: [PATCH net-next 2/3] virtio-net: use per-receive queue page frag
 alloc for mergeable bufs

On 12/27/2013 05:56 AM, Eric Dumazet wrote:
> On Thu, 2013-12-26 at 13:28 -0800, Michael Dalton wrote:
>> On Mon, Dec 23, 2013 at 11:37 AM, Michael S. Tsirkin <mst@...hat.com> wrote:
>>> So there isn't a conflict with respect to locking.
>>>
>>> Is it problematic to use the same page_frag with both GFP_ATOMIC and
>>> GFP_KERNEL? If yes, why?
>> I believe it is safe to use the same page_frag and I will send out a
>> followup patchset using just the per-receive page_frags. For future
>> consideration, Eric noted that disabling NAPI before GFP_KERNEL
>> allocs can potentially inhibit virtio-net network processing for some
>> time (e.g., during a blocking memory allocation or preemption).
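
(For the archive, a rough sketch of what I understand the shared
per-receive-queue frag to look like -- the constant and the helper name
below are only illustrative, this is not the actual patch:)

/* assumes linux/skbuff.h, linux/netdevice.h, linux/scatterlist.h,
 * linux/virtio.h
 */
#define MERGE_BUFFER_LEN 1536		/* illustrative buffer size */

struct receive_queue {
	struct virtqueue *vq;
	struct napi_struct napi;
	struct page_frag alloc_frag;	/* per-rq frag for mergeable bufs */
	/* ... */
};

static bool add_recvbuf_mergeable_sketch(struct receive_queue *rq, gfp_t gfp)
{
	struct page_frag *alloc_frag = &rq->alloc_frag;
	struct scatterlist sg;
	char *buf;

	/* Called with GFP_ATOMIC from the napi poll path and with
	 * GFP_KERNEL from refill_work(); refill_work() disables napi
	 * first, so the two paths never touch the frag concurrently.
	 */
	if (!skb_page_frag_refill(MERGE_BUFFER_LEN, alloc_frag, gfp))
		return false;

	buf = (char *)page_address(alloc_frag->page) + alloc_frag->offset;
	sg_init_one(&sg, buf, MERGE_BUFFER_LEN);

	if (virtqueue_add_inbuf(rq->vq, &sg, 1, buf, gfp) < 0)
		return false;

	get_page(alloc_frag->page);	/* the buffer keeps its own page ref */
	alloc_frag->offset += MERGE_BUFFER_LEN;
	return true;
}
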
> Yep, using napi_disable() in the refill process looks quite inefficient
> to me, if not buggy.
>
> napi_disable() is a big hammer, while the whole idea of having a process
> block on GFP_KERNEL allocations is to allow some asynchronous behavior.
>
> I have a hard time convincing myself that virtio_net is safe with this
> work queue thing.
>
> virtnet_open() seems racy, for example:
>
>         for (i = 0; i < vi->max_queue_pairs; i++) {
>                 if (i < vi->curr_queue_pairs)
>                         /* Make sure we have some buffers: if oom use wq. */
>                         if (!try_fill_recv(&vi->rq[i], GFP_KERNEL))
>                                 schedule_delayed_work(&vi->refill, 0);
>                 virtnet_napi_enable(&vi->rq[i]);
>
>
> What if the workqueue is scheduled _before_ the call to virtnet_napi_enable(&vi->rq[i])?

Then napi_disable() in refill_work() will busy wait until napi is
enabled by virtnet_napi_enable(), which looks safe. The real issue looks
to be in virtnet_restore(), which calls try_fill_recv() neither from napi
context nor with napi disabled.
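
Something like this ordering (from memory and simplified, not meant to be
the exact net-next code) is what worries me there -- the rx queues are
filled only after napi has been enabled and the device reattached, so
try_fill_recv() can run concurrently with the napi poll path:

static int virtnet_restore(struct virtio_device *vdev)
{
	struct virtnet_info *vi = vdev->priv;
	int i;

	/* ... re-init the virtqueues ... */

	if (netif_running(vi->dev))
		for (i = 0; i < vi->max_queue_pairs; i++)
			virtnet_napi_enable(&vi->rq[i]);

	netif_device_attach(vi->dev);

	/* napi may already be polling these queues at this point */
	for (i = 0; i < vi->curr_queue_pairs; i++)
		if (!try_fill_recv(&vi->rq[i], GFP_KERNEL))
			schedule_delayed_work(&vi->refill, 0);

	return 0;
}
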
>
> refill_work() will happily conflict with another cpu: two cpus could
> call try_fill_recv() at the same time, or worse, napi_enable() would crash.
>
> I do not have time to make a full check, but I guess there are
> other races like this one.
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index c51a98867a40..b8e2adb5d0c2 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -726,16 +726,18 @@ again:
>  static int virtnet_open(struct net_device *dev)
>  {
>  	struct virtnet_info *vi = netdev_priv(dev);
> +	bool refill = false;
>  	int i;
>  
>  	for (i = 0; i < vi->max_queue_pairs; i++) {
>  		if (i < vi->curr_queue_pairs)
>  			/* Make sure we have some buffers: if oom use wq. */
>  			if (!try_fill_recv(&vi->rq[i], GFP_KERNEL))
> -				schedule_delayed_work(&vi->refill, 0);
> +				refill = true;
>  		virtnet_napi_enable(&vi->rq[i]);
>  	}
> -
> +	if (refill)
> +		schedule_delayed_work(&vi->refill, 0);
>  	return 0;
>  }
>  
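
For reference, the refill path I have in mind pairs napi_disable() with
virtnet_napi_enable() around the GFP_KERNEL fill, roughly like this
(simplified sketch, not a patch):

static void refill_work(struct work_struct *work)
{
	struct virtnet_info *vi =
		container_of(work, struct virtnet_info, refill.work);
	bool still_empty;
	int i;

	for (i = 0; i < vi->curr_queue_pairs; i++) {
		struct receive_queue *rq = &vi->rq[i];

		/* napi_disable() busy waits until it can grab
		 * NAPI_STATE_SCHED, which napi_enable() clears -- so if
		 * this work is scheduled before virtnet_open() has enabled
		 * napi, it only spins for a while, it does not break.
		 */
		napi_disable(&rq->napi);
		still_empty = !try_fill_recv(rq, GFP_KERNEL);
		virtnet_napi_enable(rq);

		/* if we still got no buffers, retry later */
		if (still_empty)
			schedule_delayed_work(&vi->refill, HZ/2);
	}
}

That is also why I agree napi_disable() here is a big hammer: it stops rx
processing on the queue for the whole duration of a possibly blocking
GFP_KERNEL allocation.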
