Message-ID: <1311176589.8573.33.camel@localhost.localdomain>
Date:	Wed, 20 Jul 2011 08:43:09 -0700
From:	Shirley Ma <mashirle@...ibm.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>
Cc:	David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
	jasowang@...hat.com
Subject: Re: [PATCH net-next]vhost: fix condition check for # of outstanding dma buffers

On Wed, 2011-07-20 at 13:28 +0300, Michael S. Tsirkin wrote:
> On Tue, Jul 19, 2011 at 01:56:25PM -0700, Shirley Ma wrote:
> > On Tue, 2011-07-19 at 22:09 +0300, Michael S. Tsirkin wrote:
> > > On Tue, Jul 19, 2011 at 11:37:58AM -0700, Shirley Ma wrote:
> > > > Signed-off-by: Shirley Ma <xma@...ibm.com>
> > > > ---
> > > > 
> > > >  drivers/vhost/net.c |    6 ++++--
> > > >  1 files changed, 4 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> > > > index 70ac604..83cb738 100644
> > > > --- a/drivers/vhost/net.c
> > > > +++ b/drivers/vhost/net.c
> > > > @@ -189,8 +189,10 @@ static void handle_tx(struct vhost_net *net)
> > > >                               break;
> > > >                       }
> > > >                       /* If more outstanding DMAs, queue the work */
> > > > -                     if (unlikely(vq->upend_idx - vq->done_idx >
> > > > -                                  VHOST_MAX_PEND)) {
> > > > +                     if (unlikely((vq->upend_idx - vq->done_idx >
> > > > +                                     VHOST_MAX_PEND) ||
> > > > +                                  (vq->upend_idx - vq->done_idx >
> > > > +                                      VHOST_MAX_PEND - UIO_MAXIOV))) {
> > > 
> > > Could you please explain why this makes sense please?
> > > VHOST_MAX_PEND is 128 UIO_MAXIOV is 1024 so
> > > the result is negative?
> > 
> > I thought it is equal to:
> > 
> > if (vq->upend_idx > vq->done_idx) 
> >       check vq->upend_idx - vq->done_idx > VHOST_MAX_PEND
> > if (vq->upend_idx < vq->done_idx)
> >       check vq->upend_idx + UIO_MAXIOV - vq->done_idx > VHOST_MAX_PEND
> >       
> 
> Check it out: upend_idx == done_idx == 0 does not satisfy the
> above conditions but does trigger in your code, right?

We don't hit upend_idx == done_idx == 0. Only upend_idx == done_idx ==
UIO_MAXIOV could happen, if the lower device has an issue and never DMAs
any packets out.

> Better keep it simple. Maybe:
> 
>         if (unlikely(vq->upend_idx - vq->done_idx > VHOST_MAX_PEND) ||
>                 (unlikely(vq->upend_idx < vq->done_idx) &&
>                 unlikely(vq->upend_idx + UIO_MAXIOV - vq->done_idx >
>                          VHOST_MAX_PEND)))
> 
> ?
> 
> Also, please add commit log documenting what does the patch
> fix: something like:
>         'the test for # of outstanding buffers returned
>          incorrect results when, due to wrap around,
>          upend_idx < done_idx'?

Sure, will modify it and resubmit.

> > > I thought upend_idx - done_idx is exactly the number
> > > of buffers, so once we get too many we stop until
> > > one gets freed?
> > 
> > They are indexes, so in the vhost zerocopy callback, we can get the
> > idx right away.
> > 
> > > 
> > > >                               tx_poll_start(net, sock);
> > > >                               set_bit(SOCK_ASYNC_NOSPACE, &sock->flags);
> > > >                               break;
> > > > 

Thanks
Shirley

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
