Message-ID: <87y5hfj3vl.fsf@rustcorp.com.au>
Date: Mon, 03 Dec 2012 12:25:42 +1030
From: Rusty Russell <rusty@...tcorp.com.au>
To: Jason Wang <jasowang@...hat.com>, mst@...hat.com,
krkumar2@...ibm.com, virtualization@...ts.linux-foundation.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Cc: kvm@...r.kernel.org, bhutchings@...arflare.com,
jwhan@...ewood.snu.ac.kr, shiyer@...hat.com,
Jason Wang <jasowang@...hat.com>
Subject: Re: [net-next rfc v7 1/3] virtio-net: separate fields of sending/receiving queue from virtnet_info
Jason Wang <jasowang@...hat.com> writes:
> To support multiqueue transmitq/receiveq, the first step is to separate queue
> related structure from virtnet_info. This patch introduces the send_queue and
> receive_queue structures and uses pointers to them as parameters in the
> functions handling sending/receiving.
OK, seems like a straightforward xform; a few nit-picks:
> +/* Internal representation of a receive virtqueue */
> +struct receive_queue {
> + /* Virtqueue associated with this receive_queue */
> + struct virtqueue *vq;
> +
> + struct napi_struct napi;
> +
> + /* Number of input buffers, and max we've ever had. */
> + unsigned int num, max;
Weird whitespace here.
> +
> + /* Work struct for refilling if we run low on memory. */
> + struct delayed_work refill;
I can't really see the justification for a refill per queue. Just have
one work iterate all the queues if it happens, unless it happens often
(in which case, we need to look harder at this anyway).
> struct virtnet_info {
> struct virtio_device *vdev;
> - struct virtqueue *rvq, *svq, *cvq;
> + struct virtqueue *cvq;
> struct net_device *dev;
> struct napi_struct napi;
You leave napi here, and take it away in the next patch. I think it's
supposed to go away now.
Cheers,
Rusty.