Message-ID: <1346789466.4162.181.camel@haakon2.linux-iscsi.org>
Date:	Tue, 04 Sep 2012 13:11:06 -0700
From:	"Nicholas A. Bellinger" <nab@...ux-iscsi.org>
To:	Paolo Bonzini <pbonzini@...hat.com>
Cc:	linux-kernel@...r.kernel.org, linux-scsi@...r.kernel.org,
	kvm@...r.kernel.org, rusty@...tcorp.com.au, jasowang@...hat.com,
	mst@...hat.com, virtualization@...ts.linux-foundation.org,
	Christoph Hellwig <hch@....de>, Jens Axboe <axboe@...nel.dk>,
	target-devel <target-devel@...r.kernel.org>,
	Asias He <asias@...hat.com>
Subject: Re: [PATCH 5/5] virtio-scsi: introduce multiqueue support

On Tue, 2012-09-04 at 08:46 +0200, Paolo Bonzini wrote:
> On 04/09/2012 04:21, Nicholas A. Bellinger wrote:
> >> @@ -112,6 +118,9 @@ static void virtscsi_complete_cmd(struct virtio_scsi *vscsi, void *buf)
> >>  	struct virtio_scsi_cmd *cmd = buf;
> >>  	struct scsi_cmnd *sc = cmd->sc;
> >>  	struct virtio_scsi_cmd_resp *resp = &cmd->resp.cmd;
> >> +	struct virtio_scsi_target_state *tgt = vscsi->tgt[sc->device->id];
> >> +
> >> +	atomic_dec(&tgt->reqs);
> >>  
> > 
> > As tgt->tgt_lock is taken in virtscsi_queuecommand_multi() before the
> > atomic_inc_return(tgt->reqs) check, it seems like using atomic_dec() w/o
> > smp_mb__after_atomic_dec or tgt_lock access here is not using atomic.h
> > accessors properly, no..?
> 
> No, only a single "thing" is being accessed, and there is no need to
> order the decrement with respect to preceding or subsequent accesses to
> other locations.
> 
> In other words, tgt->reqs is already synchronized with itself, and that
> is enough.
> 
> (Besides, on x86 smp_mb__after_atomic_dec is a nop).
> 

So the implementation detail wrt requests to the same target being
processed in FIFO ordering + the queue only being able to change when no
requests are pending (i.e., tgt->req_vq can only be rebound when
tgt->reqs rises from zero) helps make this code easier to follow.
Thanks for the explanation on that bit..

However, it's still my understanding that the use of atomic_dec() in the
completion path means that smp_mb__after_atomic_dec() is a requirement
for proper portable atomic.h code, no..?  Otherwise tgt->reqs should be
using something other than an atomic_t, right..?
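
To make the question concrete, here's a minimal sketch of the two usage
patterns I have in mind (the struct, names, and barrier placement are
illustrative only, not from this patch):

#include <linux/atomic.h>
#include <linux/completion.h>

struct foo {
	int waiting;
	struct completion done;
};

static atomic_t reqs = ATOMIC_INIT(0);

/* Pattern 1: the counter is only synchronized with itself, per the
 * explanation above.  Nothing else depends on the ordering of the
 * decrement, so no barrier is needed.
 */
static void complete_value_only(void)
{
	atomic_dec(&reqs);
}

/* Pattern 2: the decrement must be ordered against a later load of
 * another location.  Here atomic_dec() alone is not enough on
 * architectures where it's unordered, hence the barrier.
 */
static void complete_and_check(struct foo *foo)
{
	atomic_dec(&reqs);
	smp_mb__after_atomic_dec();
	if (foo->waiting)
		complete(&foo->done);
}

If tgt->reqs only ever falls into pattern 1, then I see your point; my
concern was whether anything in the completion path is really pattern 2.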

> >> +static int virtscsi_queuecommand_multi(struct Scsi_Host *sh,
> >> +				       struct scsi_cmnd *sc)
> >> +{
> >> +	struct virtio_scsi *vscsi = shost_priv(sh);
> >> +	struct virtio_scsi_target_state *tgt = vscsi->tgt[sc->device->id];
> >> +	unsigned long flags;
> >> +	u32 queue_num;
> >> +
> >> +	/* Using an atomic_t for tgt->reqs lets the virtqueue handler
> >> +	 * decrement it without taking the spinlock.
> >> +	 */
> >> +	spin_lock_irqsave(&tgt->tgt_lock, flags);
> >> +	if (atomic_inc_return(&tgt->reqs) == 1) {
> >> +		queue_num = smp_processor_id();
> >> +		while (unlikely(queue_num >= vscsi->num_queues))
> >> +			queue_num -= vscsi->num_queues;
> >> +		tgt->req_vq = &vscsi->req_vqs[queue_num];
> >> +	}
> >> +	spin_unlock_irqrestore(&tgt->tgt_lock, flags);
> >> +	return virtscsi_queuecommand(vscsi, tgt, sc);
> >> +}
> >> +
> > 
> > The extra memory barriers to get this right for the current approach are
> > just going to slow things down even more for virtio-scsi-mq..
> 
> virtio-scsi multiqueue has a performance benefit up to 20% (for a single
> LUN) or 40% (on overall bandwidth across multiple LUNs).  I doubt that a
> single memory barrier can have that much impact. :)
> 

I've no doubt that this series increases large block, high bandwidth
performance for virtio-scsi, but historically that has always been the
easier workload to scale.  ;)

> The way to go to improve performance even more is to add new virtio APIs
> for finer control of the usage of the ring.  These should let us avoid
> copying the sg list and almost get rid of the tgt_lock; even though the
> locking is quite efficient in virtio-scsi (see how tgt_lock and vq_lock
> are "pipelined" so as to overlap the preparation of two requests), it
> should give a nice improvement and especially avoid a kmalloc with small
> requests.  I may have some time for it next month.
> 
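
(For anyone following along: the "pipelining" being described looks
roughly like the sketch below.  All names here are illustrative, not
actual virtio-scsi code; the point is just that CPU A can be in stage 1
for request N+1 while CPU B is in stage 2 for request N.)

#include <linux/spinlock.h>

struct my_tgt { spinlock_t tgt_lock; };
struct my_vq  { spinlock_t vq_lock; };
struct my_req { int tag; };

static void my_prepare_sg(struct my_tgt *tgt, struct my_req *req) { }
static void my_add_and_kick(struct my_vq *vq, struct my_req *req) { }

/* Two short critical sections instead of one long one, so request
 * preparation and virtqueue submission can overlap across CPUs.
 */
static void queue_pipelined(struct my_tgt *tgt, struct my_vq *vq,
			    struct my_req *req)
{
	spin_lock(&tgt->tgt_lock);
	my_prepare_sg(tgt, req);	/* stage 1: per-target prep */
	spin_unlock(&tgt->tgt_lock);

	spin_lock(&vq->vq_lock);
	my_add_and_kick(vq, req);	/* stage 2: per-queue submit */
	spin_unlock(&vq->vq_lock);
}
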
> > Jens's approach is what we will ultimately need to re-architect in SCSI
> > core if we're ever going to move beyond the issues of legacy host_lock,
> > so I'm wondering if maybe this is the direction that virtio-scsi-mq
> > needs to go in as well..?
> 
> We can see after the block layer multiqueue work goes in...  I also need
> to look more closely at Jens's changes.
> 

Yes, I think Jens's new approach is providing some pretty significant
gains for raw block drivers with extremely high packet (small block
random I/O) workloads, esp. with hw block drivers that support genuine
mq with hw num_queues > 1.

He also has virtio-blk converted to run in num_queues=1 mode.

> Have you measured the host_lock to be a bottleneck in high-iops
> benchmarks, even for a modern driver that does not hold it in
> queuecommand?  (Certainly it will become more important as the
> virtio-scsi queuecommand becomes thinner and thinner).

This is exactly why it would make such a good vehicle to re-architect
SCSI core.  I'm thinking it can be the first sw LLD we attempt to get
running on a (currently still future) scsi-mq prototype.

>   If so, we can
> start looking at limiting host_lock usage in the fast path.
> 

That would be a good incremental step for SCSI core, but I'm not sure
that we'll be able to scale compared to blk-mq without a new approach
for sw/hw LLDs along the lines of what Jens is doing.

> BTW, supporting this in tcm-vhost should be quite trivial, as all the
> request queues are the same and all serialization is done in the
> virtio-scsi driver.
> 

Looking forward to that too..  ;)
