Date:	Sat, 14 Nov 2015 11:12:09 +0800
From:	Bob Liu <bob.liu@...cle.com>
To:	xen-devel@...ts.xen.org
Cc:	linux-kernel@...r.kernel.org, roger.pau@...rix.com,
	konrad.wilk@...cle.com, felipe.franciosi@...rix.com, axboe@...com,
	avanzini.arianna@...il.com, rafal.mielniczuk@...rix.com,
	jonathan.davies@...rix.com, david.vrabel@...rix.com,
	Bob Liu <bob.liu@...cle.com>
Subject: [PATCH v5 00/10] xen-block: multi hardware-queues/rings support

Note: These patches are based on the original work done during Arianna's
internship for GNOME's Outreach Program for Women.

With the blk-mq API, a guest has more than one software request queue
(nr_vcpus of them) associated with each block front. These queues can be
mapped over several rings (hardware queues) to the backend, which makes it
straightforward to run multiple threads on the backend for a single virtual
disk.

By having different threads issue requests at the same time, guest
performance can be improved significantly.
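
For reference, below is a minimal sketch (not the code from this series) of
how a front driver can expose one blk-mq hardware queue per backend ring.
blk_mq_tag_set, blk_mq_alloc_tag_set() and blk_mq_init_queue() are the
upstream blk-mq API; the setup_mq_queue() helper and its parameters are
illustrative placeholders.

#include <linux/blk-mq.h>
#include <linux/blkdev.h>
#include <linux/err.h>
#include <linux/string.h>

/*
 * Illustrative sketch: allocate a tag set with one blk-mq hardware queue
 * per backend ring and create the request queue.  "ops", "nr_rings" and
 * "ring_size" stand in for the driver's blk_mq_ops, the negotiated ring
 * count and the ring depth.
 */
static struct request_queue *setup_mq_queue(struct blk_mq_tag_set *set,
					    struct blk_mq_ops *ops,
					    unsigned int nr_rings,
					    unsigned int ring_size,
					    void *driver_data)
{
	struct request_queue *rq;

	memset(set, 0, sizeof(*set));
	set->ops = ops;
	set->nr_hw_queues = nr_rings;	/* one hardware queue per ring */
	set->queue_depth = ring_size;
	set->numa_node = NUMA_NO_NODE;
	set->flags = BLK_MQ_F_SHOULD_MERGE;
	set->driver_data = driver_data;

	if (blk_mq_alloc_tag_set(set))
		return ERR_PTR(-EINVAL);

	rq = blk_mq_init_queue(set);
	if (IS_ERR(rq))
		blk_mq_free_tag_set(set);
	return rq;
}

Each of the nr_rings hardware queues then gets its own I/O ring and backend
thread, which is what allows requests from different vcpus to be serviced in
parallel.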

The test was done with the null_blk driver:
dom0: v4.3-rc7 16vcpus 10GB "modprobe null_blk"
domU: v4.3-rc7 16vcpus 10GB

[test]
rw=read
direct=1
ioengine=libaio
bs=4k
time_based
runtime=30
filename=/dev/xvdb
numjobs=16
iodepth=64
iodepth_batch=64
iodepth_batch_complete=64
group_reporting

Results:
Iops orig: before this series.
Iops1: after commit "xen/blkfront: make persistent grants per-queue".
Iops2: after commit "xen/blkback: make persistent grants and free pages pool per-queue".

Queues:                 1           4           8          16
Iops orig(k):         810        1064         780         700
Iops1(k):             810  1230(~20%)  1024(~20%)   850(~20%)
Iops2(k):             810  1410(~35%)  1354(~75%) 1440(~100%)

After this series, IOPS improve by ~35% with 4 queues and by ~75-100% with
8-16 queues, and performance no longer drops as the number of queues
increases.

A chart of these results is available at:
https://www.dropbox.com/s/agrcy2pbzbsvmwv/iops.png?dl=0

---
v5:
 * Rebased to xen/tip.git tags/for-linus-4.4-rc0-tag.
 * Addressed review comments from Konrad.

v4:
 * Rebased to v4.3-rc7.
 * Addressed review comments from Roger.

v3:
 * Rebased to v4.2-rc8.

Bob Liu (10):
  xen/blkif: document blkif multi-queue/ring extension
  xen/blkfront: separate per ring information out of device info
  xen/blkfront: pseudo support for multi hardware queues/rings
  xen/blkfront: split per device io_lock
  xen/blkfront: negotiate number of queues/rings to be used with backend
  xen/blkback: separate ring information out of struct xen_blkif
  xen/blkback: pseudo support for multi hardware queues/rings
  xen/blkback: get the number of hardware queues/rings from blkfront
  xen/blkfront: make persistent grants per-queue
  xen/blkback: make pool of persistent grants and free pages per-queue
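
For the "negotiate number of queues/rings to be used with backend" patch
above, the handshake can be pictured roughly as in the sketch below.
xenbus_scanf()/xenbus_printf() are the existing xenbus helpers; the
negotiate_nr_rings() helper is hypothetical, and the xenstore key names are
assumptions here -- see the blkif multi-queue/ring extension documentation
patch for the actual protocol.

#include <linux/kernel.h>
#include <xen/xenbus.h>

/*
 * Illustrative sketch of the queue-count negotiation: the backend
 * advertises how many rings it supports ("multi-queue-max-queues",
 * assumed key name) and the frontend picks min(what it wants, what the
 * backend offers) and writes its choice back ("multi-queue-num-queues",
 * assumed key name).
 */
static unsigned int negotiate_nr_rings(struct xenbus_device *dev,
				       unsigned int wanted)
{
	unsigned int backend_max = 0;

	/* Read the backend's advertised maximum from its xenstore dir. */
	if (xenbus_scanf(XBT_NIL, dev->otherend,
			 "multi-queue-max-queues", "%u", &backend_max) != 1)
		backend_max = 1;	/* old backend: single ring only */

	wanted = min(wanted, backend_max);

	/* Tell the backend how many rings the frontend will actually use. */
	if (xenbus_printf(XBT_NIL, dev->nodename,
			  "multi-queue-num-queues", "%u", wanted))
		wanted = 1;

	return wanted;
}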

 drivers/block/xen-blkback/blkback.c | 386 ++++++++++---------
 drivers/block/xen-blkback/common.h  |  78 ++--
 drivers/block/xen-blkback/xenbus.c  | 359 ++++++++++++------
 drivers/block/xen-blkfront.c        | 718 ++++++++++++++++++++++--------------
 include/xen/interface/io/blkif.h    |  48 +++
 5 files changed, 971 insertions(+), 618 deletions(-)

-- 
1.8.3.1
