Date:	Thu, 15 Dec 2011 11:40:49 -0500
From:	Chris Mason <chris.mason@...cle.com>
To:	Jens Axboe <axboe@...nel.dk>
Cc:	Shaohua Li <shli@...nel.org>,
	Dave Kleikamp <dave.kleikamp@...cle.com>, linux-aio@...ck.org,
	linux-kernel@...r.kernel.org, Andi Kleen <ak@...ux.intel.com>,
	Jeff Moyer <jmoyer@...hat.com>
Subject: Re: [PATCH] AIO: Don't plug the I/O queue in do_io_submit()

On Thu, Dec 15, 2011 at 05:15:26PM +0100, Jens Axboe wrote:
> On 2011-12-15 02:09, Shaohua Li wrote:
> > 2011/12/14 Dave Kleikamp <dave.kleikamp@...cle.com>:
> >> Asynchronous I/O latency to a solid-state disk greatly increased
> >> between the 2.6.32 and 3.0 kernels. By removing the plug from
> >> do_io_submit(), we observed a 34% improvement in the I/O latency.
> >>
> >> Unfortunately, at this level, we don't know if the request is to
> >> a rotating disk or not.
> >>
> >> Signed-off-by: Dave Kleikamp <dave.kleikamp@...cle.com>
> >> Cc: linux-aio@...ck.org
> >> Cc: Chris Mason <chris.mason@...cle.com>
> >> Cc: Jens Axboe <axboe@...nel.dk>
> >> Cc: Andi Kleen <ak@...ux.intel.com>
> >> Cc: Jeff Moyer <jmoyer@...hat.com>
> >>
> >> diff --git a/fs/aio.c b/fs/aio.c
> >> index 78c514c..d131a2c 100644
> >> --- a/fs/aio.c
> >> +++ b/fs/aio.c
> >> @@ -1696,7 +1696,6 @@ long do_io_submit(aio_context_t ctx_id, long nr,
> >>        struct kioctx *ctx;
> >>        long ret = 0;
> >>        int i = 0;
> >> -       struct blk_plug plug;
> >>        struct kiocb_batch batch;
> >>
> >>        if (unlikely(nr < 0))
> >> @@ -1716,8 +1715,6 @@ long do_io_submit(aio_context_t ctx_id, long nr,
> >>
> >>        kiocb_batch_init(&batch, nr);
> >>
> >> -       blk_start_plug(&plug);
> >> -
> >>        /*
> >>         * AKPM: should this return a partial result if some of the IOs were
> >>         * successfully submitted?
> >> @@ -1740,7 +1737,6 @@ long do_io_submit(aio_context_t ctx_id, long nr,
> >>                if (ret)
> >>                        break;
> >>        }
> >> -       blk_finish_plug(&plug);
> >>
> >>        kiocb_batch_free(&batch);
> >>        put_ioctx(ctx);
> > Can you explain why this helps? Note that in the 3.1 kernel we now
> > force a flush of the plug list when it grows too long, which should
> > already remove a lot of this latency.
> 
> I think that would indeed be an interesting addition to test on top of
> the 3.0 kernel being used.
> 
> This is a bit of a sticky situation. We want the plugging and merging
> on rotational storage, and on SSDs we want the batched addition to the
> queue to avoid hammering on the queue lock. At this level we have no
> idea which kind of device we are submitting to, but we don't want to
> introduce longer latencies either. So the question is: are these
> latencies due to long queues (in which case the auto-replug on 3.1 and
> newer would help), or are they due to the submissions themselves
> running for too long? If the latter, we could look into reducing the
> time spent between submitting the individual pieces, or at least into
> not holding the plug for too long.
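
As an aside, the 3.1 change Shaohua mentions caps the per-task plug
list and flushes it early. A rough sketch of the idea (simplified;
the real check sits in the block layer's bio submission path and
uses BLK_MAX_REQUEST_COUNT):

	/*
	 * Simplified sketch of the 3.1 auto-flush: once the plugged
	 * request list grows past a fixed limit, flush it to the
	 * device even though the plug is still held.  This bounds
	 * the extra latency that plugging can add.
	 */
	if (plug) {
		if (request_count >= BLK_MAX_REQUEST_COUNT)
			blk_flush_plug_list(plug, false);
		list_add_tail(&req->queuelist, &plug->list);
	}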

Each io_submit call is sending down about 34K of IO to two different devices.
The latencies were measured just on the process writing the redo
logs, so it is a very specific subset of the overall benchmark.

The patched kernel does about 4x more iops for the redo logs than the
unpatched kernel, so we're talking ~8K ios here.
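
Jens' point about not knowing the device type at this level is the
crux: do_io_submit() deals in kiocbs and files, not request queues.
If the queue were available here, the decision could look like the
hypothetical sketch below -- blk_queue_nonrot() is the real block
layer predicate, the rest is just illustration:

	struct blk_plug plug;
	/*
	 * q would be the target device's request queue (not
	 * actually reachable from do_io_submit()).
	 */
	bool rotational = !blk_queue_nonrot(q);

	if (rotational)
		blk_start_plug(&plug);	/* plug + merge helps spinning disks */
	/* ... submit the individual iocbs ... */
	if (rotational)
		blk_finish_plug(&plug);	/* SSDs skip plugging entirely */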

-chris

