Message-ID: <x49y4lqmq5r.fsf@segfault.boston.devel.redhat.com>
Date: Fri, 17 Apr 2015 16:54:40 -0400
From: Jeff Moyer <jmoyer@...hat.com>
To: Shaohua Li <shli@...com>
Cc: <linux-kernel@...r.kernel.org>, <axboe@...nel.dk>
Subject: Re: [PATCH] blk: clean up plug

Shaohua Li <shli@...com> writes:
> The current code looks like the inner plug gets flushed by
> blk_finish_plug(). Actually it's a nop. All requests/callbacks are added
> to current->plug, while only the outermost plug is assigned to
> current->plug. So an inner plug always has an empty request/callback
> list, which makes blk_flush_plug_list() a nop. This tries to make the
> code clearer.
>
> Signed-off-by: Shaohua Li <shli@...com>

Hi, Shaohua,

I agree that this looks like a clean-up with no behavioral change, and
it looks good to me. However, it does make me scratch my head about the
numbers I was seeing. Here's the table from that other email thread [1]:

device | vanilla   | patch1    | dio-noplug | noflush-nested
-------+-----------+-----------+------------+---------------
rssda  |   701,684 | 1,168,527 |  1,342,177 |      1,297,612
       |      100% |      +66% |       +91% |           +85%
vdb0   |   358,264 |   902,913 |    906,850 |        922,327
       |      100% |     +152% |      +153% |          +157%

Patch1 refers to the first patch in this series, which fixes the merge
logic for single-queue blk-mq devices. Each column after that includes
that first patch. In dio-noplug, I removed the blk_plug from the
direct-io code path (so there is no nesting at all). This is a control,
since it is what I expect the noflush-nested column to end up matching.
The noflush-nested column then leaves the blk_plug in place in the dio
code, but includes the patch that prevents nested blk_plug's from being
flushed.

All numbers are the average of 5 runs. With the exception of the
vanilla run on rssda (where the first run was faster, pulling the
average up), the standard deviation is very small.

For the dio-noplug column, if the inner plug really were a nop, then why
would we see any change in performance? Like I said, I agree with your
reading of the code and the patch. Color me confused. I'll poke at it
more next week. For now, I think your patch is fine.
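
Just to spell out the reading we agree on, here's a throwaway userspace
model of the nesting behaviour (made-up names, obviously nothing like
the real code, just the bookkeeping):

#include <stdio.h>

struct plug {
	int nr_reqs;
};

static struct plug *current_plug;	/* stands in for current->plug */

static void start_plug(struct plug *p)
{
	p->nr_reqs = 0;
	if (current_plug)		/* nested: the outer plug stays in charge */
		return;
	current_plug = p;
}

static void queue_request(void)
{
	if (current_plug)
		current_plug->nr_reqs++;	/* always lands on the outermost plug */
}

static void finish_plug(struct plug *p)
{
	if (p != current_plug)		/* inner plug: its list was never used */
		return;
	printf("flushing %d request(s)\n", p->nr_reqs);
	current_plug = NULL;
}

int main(void)
{
	struct plug outer, inner;

	start_plug(&outer);
	queue_request();

	start_plug(&inner);		/* e.g. the plug in the dio path */
	queue_request();		/* still accounted to 'outer' */
	finish_plug(&inner);		/* flushes nothing */

	finish_plug(&outer);		/* prints "flushing 2 request(s)" */
	return 0;
}

Before your patch the inner finish would still walk an (always empty)
list; with your patch it returns early. Either way nothing is flushed
for the inner plug.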

Reviewed-by: Jeff Moyer <jmoyer@...hat.com>

Also, Jens or Shaohua or anyone, please review my blk-mq plug fix (patch
1/2 of the aforementioned thread). ;)

-Jeff

[1] https://lkml.org/lkml/2015/4/16/366

> ---
> block/blk-core.c | 24 ++++++++++++------------
> 1 file changed, 12 insertions(+), 12 deletions(-)
>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 794c3e7..d3161f3 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -3018,21 +3018,20 @@ void blk_start_plug(struct blk_plug *plug)
> {
> struct task_struct *tsk = current;
>
> + /*
> + * If this is a nested plug, don't actually assign it.
> + */
> + if (tsk->plug)
> + return;
> +
> INIT_LIST_HEAD(&plug->list);
> INIT_LIST_HEAD(&plug->mq_list);
> INIT_LIST_HEAD(&plug->cb_list);
> -
> /*
> - * If this is a nested plug, don't actually assign it. It will be
> - * flushed on its own.
> + * Store ordering should not be needed here, since a potential
> + * preempt will imply a full memory barrier
> */
> - if (!tsk->plug) {
> - /*
> - * Store ordering should not be needed here, since a potential
> - * preempt will imply a full memory barrier
> - */
> - tsk->plug = plug;
> - }
> + tsk->plug = plug;
> }
> EXPORT_SYMBOL(blk_start_plug);
>
> @@ -3179,10 +3178,11 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
>
> void blk_finish_plug(struct blk_plug *plug)
> {
> + if (plug != current->plug)
> + return;
> blk_flush_plug_list(plug, false);
>
> - if (plug == current->plug)
> - current->plug = NULL;
> + current->plug = NULL;
> }
> EXPORT_SYMBOL(blk_finish_plug);