Message-Id: <1346154857-12487-1-git-send-email-pbonzini@redhat.com>
Date:	Tue, 28 Aug 2012 13:54:12 +0200
From:	Paolo Bonzini <pbonzini@...hat.com>
To:	linux-kernel@...r.kernel.org
Cc:	linux-scsi@...r.kernel.org, kvm@...r.kernel.org,
	rusty@...tcorp.com.au, jasowang@...hat.com, mst@...hat.com,
	virtualization@...ts.linux-foundation.org
Subject: [PATCH 0/5] Multiqueue virtio-scsi

Hi all,

this series adds multiqueue support to the virtio-scsi driver, based
on Jason Wang's work on virtio-net.  It uses a simple queue steering
algorithm that expects one queue per CPU.  LUNs in the same target
always use the same queue (so that commands are not reordered); a
target switches to another queue only when the request being queued is
the only one outstanding for that target.  Also based on Jason's
patches, the virtqueue affinity is set so that each CPU is associated
with one virtqueue.
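
To make the steering rule concrete, here is a rough sketch in
kernel-style C.  The names and layout (tgt->reqs, tgt->req_vq, the
req_vqs[] array) are illustrative assumptions for this cover letter,
not the driver code itself: a target remembers the virtqueue it last
used and may rebind to the current CPU's queue only while it has no
commands in flight, so commands to one target never cross queues.
(The completion path would decrement tgt->reqs.)

/* Illustrative sketch only; names and layout are assumptions. */
struct virtio_scsi_target_state {
	spinlock_t lock;
	unsigned int reqs;		/* commands in flight for this target */
	struct virtqueue *req_vq;	/* queue the target currently uses */
};

static struct virtqueue *steer_vq(struct virtqueue **req_vqs,
				  struct virtio_scsi_target_state *tgt)
{
	struct virtqueue *vq;
	unsigned long flags;

	spin_lock_irqsave(&tgt->lock, flags);
	if (tgt->reqs++ == 0)
		/* Only request for the target: safe to switch queues. */
		tgt->req_vq = req_vqs[smp_processor_id()];
	vq = tgt->req_vq;
	spin_unlock_irqrestore(&tgt->lock, flags);
	return vq;
}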

I tested the patches with fio, using up to 32 virtio-scsi disks backed
by tmpfs on the host, and 1 LUN per target.

FIO configuration
-----------------
[global]
rw=read
bsrange=4k-64k
ioengine=libaio
direct=1
iodepth=4
loops=20
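
Each disk then gets its own job section on top of these [global]
options; an illustrative entry (the device path here is assumed, not
taken from the actual runs) would be:

[vdb]
filename=/dev/vdb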

overall bandwidth (MB/s)
------------------------

# of targets    single-queue    multi-queue, 4 VCPUs    multi-queue, 8 VCPUs
1                  540               626                     599
2                  795               965                     925
4                  997              1376                    1500
8                 1136              2130                    2060
16                1440              2269                    2474
24                1408              2179                    2436
32                1515              1978                    2319

(The single-queue numbers were taken with 4 VCPUs; adding more VCPUs
changes them very little.)

avg bandwidth per LUN (MB/s)
----------------------------

# of targets    single-queue    multi-queue, 4 VCPUs    multi-queue, 8 VCPUs
1                  540               626                     599
2                  397               482                     462
4                  249               344                     375
8                  142               266                     257
16                  90               141                     154
24                  58                90                     101
32                  47                61                      72

Testing this may require an irqbalance daemon built from git, due to
http://code.google.com/p/irqbalance/issues/detail?id=37.  Alternatively,
you can set the affinity manually through /proc/irq/<n>/smp_affinity.

Rusty, can you please give your Acked-by to the first two patches?

Jason Wang (2):
  virtio-ring: move queue_index to vring_virtqueue
  virtio: introduce an API to set affinity for a virtqueue

Paolo Bonzini (3):
  virtio-scsi: allocate target pointers in a separate memory block
  virtio-scsi: pass struct virtio_scsi to virtqueue completion function
  virtio-scsi: introduce multiqueue support
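
For reference, the affinity API from patch 2 would be used roughly as
below.  This is an illustrative sketch: the helper name and the
assumption that the hook is exposed as virtqueue_set_affinity(vq, cpu)
are inferred from the patch subject, and it assumes no more queues
than online CPUs.

/* Illustrative sketch: bind one request virtqueue to each online CPU. */
static void bind_req_vqs(struct virtqueue **req_vqs, int num_queues)
{
	int cpu = -1, i;

	for (i = 0; i < num_queues; i++) {
		cpu = cpumask_next(cpu, cpu_online_mask);
		virtqueue_set_affinity(req_vqs[i], cpu);
	}
}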

 drivers/lguest/lguest_device.c         |    1 +
 drivers/remoteproc/remoteproc_virtio.c |    1 +
 drivers/s390/kvm/kvm_virtio.c          |    1 +
 drivers/scsi/virtio_scsi.c             |  200 ++++++++++++++++++++++++--------
 drivers/virtio/virtio_mmio.c           |   11 +-
 drivers/virtio/virtio_pci.c            |   58 ++++++++-
 drivers/virtio/virtio_ring.c           |   17 +++
 include/linux/virtio.h                 |    4 +
 include/linux/virtio_config.h          |   21 ++++
 9 files changed, 253 insertions(+), 61 deletions(-)

