Date:	Tue, 26 Apr 2011 13:37:32 -0400
From:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To:	Jens Axboe <jaxboe@...ionio.com>
Cc:	linux-kernel@...r.kernel.org
Subject: submitting read(1%)/write(99%) IO within a kernel thread vs. doing
 it in userspace (aio) with CFQ shows a drastic drop. Ideas?


I was hoping you could shed some light on a peculiar problem I am seeing
(this is with the PV block backend I posted recently [1]).

I am using the IOmeter fio job file with two threads, modified slightly
(please see the bottom of this mail). The "disk" the I/Os are being done on is an
iSCSI disk whose target side is an LIO TCM 10G RAMdisk. The network is 1Gb and
the line speed when doing purely random reads or purely random writes
is 112MB/s (native or from the guest) - essentially line rate.

I launch a guest and inside it I run the 'fio iometer' job. When launching
the guest I have the option of using two different block backends:
the kernel one (simple code [1] doing 'submit_bio') or the userspace one (which
uses the AIO library and opens the disk with O_DIRECT). The throughput and submit
latency are wildly different for this particular workload. If I swap the I/O
scheduler on the host for the iSCSI disk from 'cfq' to 'deadline' or 'noop' -
throughput and latencies become the same (CPU usage does not, but that is not
important here). Here is a simple table with the numbers (read/write
throughput in MB/s); a sketch of the scheduler swap follows the table:

IOmeter       |       |        |          |
64K, randrw   | noop  |  cfq   | deadline |
rwmixread=80  |       |        |          |
--------------+-------+--------+----------+
blkback       |103/27 | 32/10  | 102/27   |
--------------+-------+--------+----------+
QEMU qdisk    |103/27 | 102/27 | 102/27   |
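
(For reference, the scheduler swap on the host is just a sysfs write - the
usual one-liner being 'echo deadline > /sys/block/sdX/queue/scheduler', with
sdX standing in for the iSCSI disk. A minimal C equivalent, not from my
actual setup scripts:)

	#include <stdio.h>

	int main(void)
	{
		/* "sdX" is a placeholder for the host's iSCSI disk */
		FILE *f = fopen("/sys/block/sdX/queue/scheduler", "w");

		if (!f) {
			perror("fopen");
			return 1;
		}
		fputs("deadline", f);	/* or "noop" / "cfq" */
		fclose(f);
		return 0;
	}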

What I found out is that if I pollute the ring with just one
different type of I/O operation (so 99% are WRITEs, and I stick 1% READs in),
throughput plummets if I use the kernel thread. That problem does
not show up when the I/O operations are plumbed through the AIO library.
And if I switch away from the CFQ scheduler, the numbers go up again.
The host and the guest are both running Fedora Core 13 x86_64.


Any ideas what the kernel AIO path or CFQ might be doing differently?

The two code paths, simplified:

The kernel thread is quite simple; it does:

	while (!kthread_should_stop()) {
		struct blk_plug plug;

		.. snip..

		/* plug: batch the bios for everything on the ring */
		blk_start_plug(&plug);

		if (do_block_io_op(blkif))
			blkif->waiting_reqs = 1;

		/* unplug: the batched bios hit the I/O scheduler here */
		blk_finish_plug(&plug);

	}

 and 'do_block_io_op' picks up the requests from the ring buffer:

	/* rc is blkback's private consumer index, rp the producer
	 * index in the shared ring */
	rc = blk_rings->common.req_cons;
	rp = blk_rings->common.sring->req_prod;

	while (rc != rp) {
		.. snip ..
		switch (req.operation) {
		case BLKIF_OP_READ:
			dispatch_rw_block_io(blkif, &req, pending_req);
			break;
		case BLKIF_OP_WRITE:
			blkif->st_wr_req++;
			dispatch_rw_block_io(blkif, &req, pending_req);
		.. snip..
		cond_resched();
	}

and 'dispatch_rw_block_io' takes the request (which can contain up
to 11 pages - so 88 512-byte sectors if desired), sets up 'bio's mapping
to these pages, and then does:

	for (i = 0; i < nbio; i++)
		submit_bio(operation, biolist[i]);

That is it. The interesting thing is that a request can only contain one
type of operation - either all of the pages are READs or all are WRITEs
(I am ignoring barriers here).
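
For clarity, a minimal sketch of that bio setup (simplified from the
actual code in [1]; 'pages', 'nr_pages' and the 'preq'/'seg' fields here
are stand-ins for the real segment bookkeeping):

	struct bio *bio = bio_alloc(GFP_KERNEL, nr_pages);

	bio->bi_bdev    = preq.bdev;
	bio->bi_sector  = preq.sector_number;
	bio->bi_end_io  = end_block_io_op;
	bio->bi_private = pending_req;

	for (i = 0; i < nr_pages; i++) {
		/* seg[i].nsec is the segment length in 512-byte sectors */
		if (!bio_add_page(bio, pages[i],
				  seg[i].nsec << 9, seg[i].offset))
			/* device limit hit: close this bio, start a new one */
			break;
	}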

The userspace code is similar. It has a thread that does:

    rc = blkdev->rings.common.req_cons;
    rp = blkdev->rings.common.sring->req_prod;

    while (rc != rp) {
	.. snip..
	.. picks up the request from the ring buffer and then ..
            /* run i/o in aio mode */
            ioreq_runio_qemu_aio(ioreq);

and 'ioreq_runio_qemu_aio':

    switch (ioreq->req.operation) {
    case BLKIF_OP_READ:
        bdrv_aio_readv(blkdev->bs, ioreq->start / BLOCK_SIZE,
                       &ioreq->v, ioreq->v.size / BLOCK_SIZE,
                       qemu_aio_complete, ioreq);
	.. snip..
    case BLKIF_OP_WRITE_BARRIER:
        bdrv_aio_writev(blkdev->bs, ioreq->start / BLOCK_SIZE,

and the 'bdrv_aio_[read|write]v' ends up calling either io_prep_preadv
or io_prep_pwritev and then io_submit.
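
To make that path concrete, here is a minimal, self-contained libaio
sketch (not the QEMU code itself, just the pattern it boils down to;
'/dev/sdX' and the 64K size are made up, and error checking is omitted):

	#define _GNU_SOURCE		/* for O_DIRECT */
	#include <libaio.h>
	#include <fcntl.h>
	#include <stdlib.h>
	#include <sys/uio.h>

	int main(void)
	{
		io_context_t ctx = 0;
		struct iocb cb, *cbs[1] = { &cb };
		struct io_event ev;
		struct iovec iov;
		void *buf;
		int fd;

		fd = open("/dev/sdX", O_RDONLY | O_DIRECT);
		posix_memalign(&buf, 4096, 65536);	/* O_DIRECT wants alignment */
		iov.iov_base = buf;
		iov.iov_len  = 65536;

		io_queue_init(256, &ctx);		/* matches iodepth=256 */
		io_prep_preadv(&cb, fd, &iov, 1, 0);	/* or io_prep_pwritev() */
		io_submit(ctx, 1, cbs);
		io_getevents(ctx, 1, 1, &ev, NULL);	/* reap the completion */

		io_queue_release(ctx);
		return 0;
	}

(Build with 'gcc -laio'.)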


The iometer fio job file:

# This job file tries to mimic the Intel IOMeter File Server Access Pattern
[global]
description=Emulation of Intel IOmeter File Server Access Pattern
numjobs=2
timeout=60

[/dev/xvda]
#bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
#bssplit=512/10:1k/5:2k/5:4k
bs=64K
rw=randrw
rwmixread=80
direct=1
size=4g
ioengine=libaio
# IOMeter defines the server loads as the following:
# iodepth=1	Linear
# iodepth=4	Very Light
# iodepth=8	Light
# iodepth=64	Moderate
# iodepth=256	Heavy
iodepth=256
write_bw_log=iometer
write_lat_log=iometer


[1]: http://lwn.net/Articles/439629/
    I updated it a bit (moved the plug/unplug higher in the call chain), so I would suggest
    git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git devel/xen-blkback-v3.1
