Message-Id: <1423988345-4005-1-git-send-email-bob.liu@oracle.com>
Date:	Sun, 15 Feb 2015 16:18:55 +0800
From:	Bob Liu <bob.liu@...cle.com>
To:	xen-devel@...ts.xen.org
Cc:	david.vrabel@...rix.com, linux-kernel@...r.kernel.org,
	roger.pau@...rix.com, konrad.wilk@...cle.com,
	felipe.franciosi@...rix.com, axboe@...com, hch@...radead.org,
	avanzini.arianna@...il.com, Bob Liu <bob.liu@...cle.com>
Subject: [RFC PATCH 00/10] Multi-queue support for xen-block driver

This patchset converts the Xen PV block driver to the multi-queue block layer API
by sharing multiple I/O rings between the frontend and the backend.

History:
It's based on the result of Arianna's internship for GNOME's Outreach Program
for Women, in which she was mentored by Konrad Rzeszutek Wilk. I also worked on
this patchset with her at that time, and have now fully taken over the task.
I've got her authorization to "change authorship or SoB to the patches as you
like."

A few words on the block multi-queue layer:
The multi-queue block layer greatly improves block-layer scalability by splitting
the single request queue into per-CPU software queues and hardware dispatch
queues. The Linux blk-mq API manages the software queues, while each block
driver must handle its own hardware queues.
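
For reference, the core of a blk-mq conversion is filling in a struct blk_mq_ops
with a .queue_rq callback and registering a tag set. The fragment below is only
an illustrative sketch (the names are made up and the exact callback signatures
differ between kernel versions), not code from this series:

	#include <linux/blk-mq.h>

	/* Called once per request on one of the hardware dispatch queues. */
	static int sketch_queue_rq(struct blk_mq_hw_ctx *hctx,
				   const struct blk_mq_queue_data *bd)
	{
		struct request *req = bd->rq;

		blk_mq_start_request(req);
		/* ... translate req into ring requests for the queue behind hctx ... */
		return BLK_MQ_RQ_QUEUE_OK;
	}

	static struct blk_mq_ops sketch_mq_ops = {
		.queue_rq  = sketch_queue_rq,
		.map_queue = blk_mq_map_queue,
	};

	static struct blk_mq_tag_set sketch_tag_set;

	static struct request_queue *sketch_init_queue(void)
	{
		int err;

		sketch_tag_set.ops = &sketch_mq_ops;
		sketch_tag_set.nr_hw_queues = 1;	/* step 1: a single hardware queue */
		sketch_tag_set.queue_depth = 64;
		sketch_tag_set.numa_node = NUMA_NO_NODE;
		sketch_tag_set.flags = BLK_MQ_F_SHOULD_MERGE;

		err = blk_mq_alloc_tag_set(&sketch_tag_set);
		if (err)
			return ERR_PTR(err);

		return blk_mq_init_queue(&sketch_tag_set);	/* also ERR_PTR() on failure */
	}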

The xen/block implementation:
1) Convert to the blk-mq API with a single hardware queue.
2) Use additional rings to act as multiple hardware queues.
3) Negotiate the number of hardware queues the same way the xen-net driver
does: the backend advertises "multi-queue-max-queues" to the frontend, and the
frontend writes the final number back to "multi-queue-num-queues" (see the
sketch after this list).
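
A rough sketch of what that negotiation could look like from the frontend side,
using the generic xenbus helpers (the function name, the error handling and the
policy of capping at the number of online CPUs are assumptions for illustration,
not necessarily what the patches do):

	/* Read the backend's advertised limit and write back the queue count
	 * the frontend will actually use. */
	static unsigned int sketch_negotiate_queues(struct xenbus_device *dev)
	{
		unsigned int backend_max = 1;
		unsigned int nr_queues;
		int err;

		/* Backend advertises its maximum; if the key is absent, fall
		 * back to a single queue for compatibility with old backends. */
		err = xenbus_scanf(XBT_NIL, dev->otherend,
				   "multi-queue-max-queues", "%u", &backend_max);
		if (err != 1)
			backend_max = 1;

		/* Assumed policy: no more hardware queues than online CPUs. */
		nr_queues = min_t(unsigned int, backend_max, num_online_cpus());

		/* Tell the backend how many rings will be set up. */
		err = xenbus_printf(XBT_NIL, dev->nodename,
				    "multi-queue-num-queues", "%u", nr_queues);
		if (err)
			dev_warn(&dev->dev,
				 "writing multi-queue-num-queues failed (%d)\n", err);

		return nr_queues;
	}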

Test result:
fio's IOmeter emulation on a 16-vCPU domU backed by a null_blk device, with the
hardware queue number set to 16.
nr_fio_jobs   IOPS(before)   IOPS(after)    Diff
          1            57k           58k      0%
          4            95k          201k   +210%
          8            89k          372k   +410%
         16            68k          284k   +410%
         32            65k          196k   +300%
         64            63k          183k   +290%

More results are coming; there were also big improvements in both write IOPS
and latency.

Any comments or suggestions are welcome.
Thank you,
-Bob Liu

Bob Liu (10):
  xen/blkfront: convert to blk-mq API
  xen/blkfront: drop legacy block layer support
  xen/blkfront: reorg info->io_lock after using blk-mq API
  xen/blkfront: separate ring information to a new struct
  xen/blkback: separate ring information out of struct xen_blkif
  xen/blkfront: pseudo support for multi hardware queues
  xen/blkback: pseudo support for multi hardware queues
  xen/blkfront: negotiate hardware queue number with backend
  xen/blkback: get hardware queue number from blkfront
  xen/blkfront: use work queue to speed up blkif interrupt return

 drivers/block/xen-blkback/blkback.c | 370 ++++++++-------
 drivers/block/xen-blkback/common.h  |  54 ++-
 drivers/block/xen-blkback/xenbus.c  | 415 +++++++++++------
 drivers/block/xen-blkfront.c        | 894 +++++++++++++++++++++---------------
 4 files changed, 1018 insertions(+), 715 deletions(-)
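
As background for the "separate ring information" patches above, the idea is to
pull everything that is really per-ring out of the per-device structure, so that
a device can carry an array of rings, one per hardware queue. A hypothetical
frontend-side layout (field and struct names are illustrative only, not the
actual patch contents):

	struct blkfront_ring_info {
		spinlock_t io_lock;		/* protects only this ring */
		struct blkif_front_ring ring;	/* shared I/O ring with the backend */
		unsigned int evtchn;		/* event channel for this ring */
		unsigned int irq;		/* irq bound to the event channel */
		struct work_struct work;	/* deferred completion handling */
		struct blkfront_info *dev_info;	/* back-pointer to the device */
	};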

-- 
1.8.3.1

