Message-ID: <1322726977.3259.3.camel@lappy>
Date:	Thu, 01 Dec 2011 10:09:37 +0200
From:	Sasha Levin <levinsasha928@...il.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>
Cc:	Rusty Russell <rusty@...tcorp.com.au>, Avi Kivity <avi@...hat.com>,
	linux-kernel@...r.kernel.org,
	virtualization@...ts.linux-foundation.org, kvm@...r.kernel.org,
	markmc@...hat.com
Subject: Re: [PATCH] virtio-ring: Use threshold for switching to indirect
 descriptors

On Thu, 2011-12-01 at 09:58 +0200, Michael S. Tsirkin wrote:
> On Thu, Dec 01, 2011 at 01:12:25PM +1030, Rusty Russell wrote:
> > On Wed, 30 Nov 2011 18:11:51 +0200, Sasha Levin <levinsasha928@...il.com> wrote:
> > > On Tue, 2011-11-29 at 16:58 +0200, Avi Kivity wrote:
> > > > On 11/29/2011 04:54 PM, Michael S. Tsirkin wrote:
> > > > > > 
> > > > > > Which is actually strange, weren't indirect buffers introduced to make
> > > > > > the performance *better*? From what I see it's pretty much the
> > > > > > same/worse for virtio-blk.
> > > > >
> > > > > I know they were introduced to allow adding very large bufs.
> > > > > See 9fa29b9df32ba4db055f3977933cd0c1b8fe67cd
> > > > > Mark, you wrote the patch, could you tell us which workloads
> > > > > benefit the most from indirect bufs?
> > > > >
> > > > 
> > > > Indirects are really for block devices with many spindles, since there
> > > > the limiting factor is the number of requests in flight.  Network
> > > > interfaces are limited by bandwidth; it's better to increase the ring
> > > > size and use direct buffers there (so the ring size more or less
> > > > corresponds to the buffer size).
> > > > 
> > > 
> > > I did some testing of indirect descriptors under different workloads.
> > 
> > MST and I discussed getting clever with dynamic limits ages ago, but it
> > was down low on the TODO list.  Thanks for diving into this...
> > 
> > AFAICT, if the ring never fills, direct is optimal.  When the ring
> > fills, indirect is optimal (we're better to queue now than later).
> > 
> > Why not something simple, like a threshold which drops every time we
> > fill the ring?
> > 
> > struct vring_virtqueue
> > {
> > ...
> >         int indirect_thresh;
> > ...
> > }
> > 
> > virtqueue_add_buf_gfp()
> > {
> > ...
> > 
> >         if (vq->indirect &&
> >             (vq->vring.num - vq->num_free) + out + in > vq->indirect_thresh)
> >                 return indirect()
> > ...
> > 
> > 	if (vq->num_free < out + in) {
> >                 if (vq->indirect && vq->indirect_thresh > 0)
> >                         vq->indirect_thresh--;
> >         
> > ...
> > }
> > 
> > Too dumb?
> > 
> > Cheers,
> > Rusty.
> 
> We'll presumably need some logic to increment it back,
> to account for random workload changes.
> Something like slow start?

We can increment it each time the queue was less than 10% full; it
should act like slow start, no?
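
For illustration only, here is a minimal user-space sketch of that scheme
(all names here are made up for the example -- ring_state, on_ring_full,
on_completion and so on are not the actual virtio_ring code): the threshold
drops whenever the ring fills, and creeps back up whenever the ring is
observed below 10% occupancy, roughly like slow start.

	/*
	 * Hypothetical sketch of a dynamic indirect-descriptor threshold.
	 * Not kernel code; names and call sites are invented for clarity.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	struct ring_state {
		unsigned int num;             /* total ring entries */
		unsigned int num_free;        /* currently free entries */
		unsigned int indirect_thresh; /* go indirect above this in-flight count */
	};

	/* Decide direct vs. indirect for a request needing (out + in) descriptors. */
	static bool use_indirect(const struct ring_state *r,
				 unsigned int out, unsigned int in)
	{
		unsigned int in_flight = r->num - r->num_free;

		return in_flight + out + in > r->indirect_thresh;
	}

	/* Ring filled up: back the threshold off so more requests go indirect. */
	static void on_ring_full(struct ring_state *r)
	{
		if (r->indirect_thresh > 0)
			r->indirect_thresh--;
	}

	/* On completion: if the ring was under 10% full, nudge the threshold
	 * back up toward the ring size, slow-start style. */
	static void on_completion(struct ring_state *r)
	{
		unsigned int in_flight = r->num - r->num_free;

		if (in_flight * 10 < r->num && r->indirect_thresh < r->num)
			r->indirect_thresh++;
	}

	int main(void)
	{
		struct ring_state r = { .num = 256, .num_free = 256,
					.indirect_thresh = 256 };

		r.num_free = 0;            /* simulate a burst that fills the ring */
		on_ring_full(&r);
		printf("after fill: thresh=%u\n", r.indirect_thresh);  /* 255 */

		r.num_free = 250;          /* mostly idle again (~2% occupancy) */
		on_completion(&r);
		printf("after idle: thresh=%u\n", r.indirect_thresh);  /* 256 */

		printf("indirect for a 4-seg request? %d\n",
		       use_indirect(&r, 2, 2));                        /* 0 */
		return 0;
	}

The +1/-1 steps are just the simplest possible choice; a multiplicative
decrease on fill would react faster to sudden bursts, at the cost of more
indirect allocations afterwards.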

-- 

Sasha.
