Message-ID: <20110719190945.GB8667@redhat.com>
Date: Tue, 19 Jul 2011 22:09:45 +0300
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Shirley Ma <mashirle@...ibm.com>
Cc: David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
jasowang@...hat.com
Subject: Re: [PATCH net-next] vhost: fix condition check for # of outstanding
	DMA buffers
On Tue, Jul 19, 2011 at 11:37:58AM -0700, Shirley Ma wrote:
> Signed-off-by: Shirley Ma <xma@...ibm.com>
> ---
>
> drivers/vhost/net.c | 6 ++++--
> 1 files changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 70ac604..83cb738 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -189,8 +189,10 @@ static void handle_tx(struct vhost_net *net)
>  				break;
>  			}
>  			/* If more outstanding DMAs, queue the work */
> -			if (unlikely(vq->upend_idx - vq->done_idx >
> -				     VHOST_MAX_PEND)) {
> +			if (unlikely((vq->upend_idx - vq->done_idx >
> +				      VHOST_MAX_PEND) ||
> +				     (vq->upend_idx - vq->done_idx >
> +				      VHOST_MAX_PEND - UIO_MAXIOV))) {
Could you please explain why this makes sense?
VHOST_MAX_PEND is 128 and UIO_MAXIOV is 1024, so
VHOST_MAX_PEND - UIO_MAXIOV is negative, which makes the second
comparison true whenever upend_idx >= done_idx.
I thought upend_idx - done_idx is exactly the number
of outstanding buffers, so once we get too many we stop until
one gets freed?
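
If the intent is to handle upend_idx wrapping around UIO_MAXIOV, I would
have expected the outstanding count to be taken modulo UIO_MAXIOV and
compared against VHOST_MAX_PEND directly, something along these lines
(only a sketch of the arithmetic I have in mind, not a tested patch;
the helper name is made up):

	/* Illustrative only: number of outstanding zerocopy buffers,
	 * with the wraparound at UIO_MAXIOV handled explicitly.  Both
	 * indices are assumed to advance modulo UIO_MAXIOV as they do
	 * today, so the result is always in [0, UIO_MAXIOV).
	 */
	static int vhost_zcopy_pend(struct vhost_virtqueue *vq)
	{
		return (vq->upend_idx - vq->done_idx + UIO_MAXIOV) % UIO_MAXIOV;
	}

	...
			/* If more outstanding DMAs, queue the work */
			if (unlikely(vhost_zcopy_pend(vq) > VHOST_MAX_PEND)) {
				tx_poll_start(net, sock);
				set_bit(SOCK_ASYNC_NOSPACE, &sock->flags);
				break;
			}

That would avoid the negative constant entirely. Or am I missing what
the second comparison is supposed to catch?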
>  				tx_poll_start(net, sock);
>  				set_bit(SOCK_ASYNC_NOSPACE, &sock->flags);
>  				break;
>