Message-ID: <4DA2E03A.2080607@fusionio.com>
Date: Mon, 11 Apr 2011 13:04:26 +0200
From: Jens Axboe <jaxboe@...ionio.com>
To: NeilBrown <neilb@...e.de>
CC: Mike Snitzer <snitzer@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"hch@...radead.org" <hch@...radead.org>,
"dm-devel@...hat.com" <dm-devel@...hat.com>,
"linux-raid@...r.kernel.org" <linux-raid@...r.kernel.org>
Subject: Re: [PATCH 05/10] block: remove per-queue plugging
On 2011-04-11 12:59, NeilBrown wrote:
> On Mon, 11 Apr 2011 11:19:58 +0200 Jens Axboe <jaxboe@...ionio.com> wrote:
>
>> On 2011-04-11 06:50, NeilBrown wrote:
>
>>> The only explanation I can come up with is that very occasionally schedule on
>>> 2 separate cpus calls blk_flush_plug for the same task. I don't understand
>>> the scheduler nearly well enough to know if or how that can happen.
>>> However with this patch in place I can write to a RAID1 constantly for half
>>> an hour, and without it, the write rarely lasts for 3 minutes.
>>
>> Or perhaps if the request_fn blocks, that would be problematic. So the
>> patch is likely a good idea even for that case.
>>
>> I'll merge it, changing it to list_splice_init() as I think that would
>> be more clear.
>
> OK - though I'm not 100% sure the patch fixes the problem - just that it
> hides the symptom for me.
> I might try instrumenting the code a bit more and see if I can find exactly
> where it is re-entering flush_plug_list - as that seems to be what is
> happening.
It's definitely a good thing to add, to avoid the list fudging on
schedule. Whether it's your exact problem, I can't tell.
> And yeah - list_split_init is probably better. I just never remember exactly
> what list_split means and have to look it up every time, whereas
> list_add/list_del are very clear to me.
splice, no split :-)
>>> From 687b189c02276887dd7d5b87a817da9f67ed3c2c Mon Sep 17 00:00:00 2001
>>> From: NeilBrown <neilb@...e.de>
>>> Date: Thu, 7 Apr 2011 13:16:59 +1000
>>> Subject: [PATCH] Enhance new plugging support to support general callbacks.
>>>
>>> md/raid requires an unplug callback, but as it does not use
>>> requests, the current code cannot provide one.
>>>
>>> So allow arbitrary callbacks to be attached to the blk_plug.
>>>
>>> Cc: Jens Axboe <jaxboe@...ionio.com>
>>> Signed-off-by: NeilBrown <neilb@...e.de>
>>> ---
>>> block/blk-core.c | 13 +++++++++++++
>>> include/linux/blkdev.h | 7 ++++++-
>>> 2 files changed, 19 insertions(+), 1 deletions(-)
>>>
>>> diff --git a/block/blk-core.c b/block/blk-core.c
>>> index 725091d..273d60b 100644
>>> --- a/block/blk-core.c
>>> +++ b/block/blk-core.c
>>> @@ -2644,6 +2644,7 @@ void blk_start_plug(struct blk_plug *plug)
>>>
>>> plug->magic = PLUG_MAGIC;
>>> INIT_LIST_HEAD(&plug->list);
>>> + INIT_LIST_HEAD(&plug->cb_list);
>>> plug->should_sort = 0;
>>>
>>> /*
>>> @@ -2717,9 +2718,21 @@ static void flush_plug_list(struct blk_plug *plug)
>>> local_irq_restore(flags);
>>> }
>>>
>>> +static void flush_plug_callbacks(struct blk_plug *plug)
>>> +{
>>> + while (!list_empty(&plug->cb_list)) {
>>> + struct blk_plug_cb *cb = list_first_entry(&plug->cb_list,
>>> + struct blk_plug_cb,
>>> + list);
>>> + list_del(&cb->list);
>>> + cb->callback(cb);
>>> + }
>>> +}
>>> +
>>> static void __blk_finish_plug(struct task_struct *tsk, struct blk_plug *plug)
>>> {
>>> flush_plug_list(plug);
>>> + flush_plug_callbacks(plug);
>>>
>>> if (plug == tsk->plug)
>>> tsk->plug = NULL;
>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>>> index 32176cc..3e5e604 100644
>>> --- a/include/linux/blkdev.h
>>> +++ b/include/linux/blkdev.h
>>> @@ -857,8 +857,13 @@ extern void blk_put_queue(struct request_queue *);
>>> struct blk_plug {
>>> unsigned long magic;
>>> struct list_head list;
>>> + struct list_head cb_list;
>>> unsigned int should_sort;
>>> };
>>> +struct blk_plug_cb {
>>> + struct list_head list;
>>> + void (*callback)(struct blk_plug_cb *);
>>> +};
>>>
>>> extern void blk_start_plug(struct blk_plug *);
>>> extern void blk_finish_plug(struct blk_plug *);
>>> @@ -876,7 +881,7 @@ static inline bool blk_needs_flush_plug(struct task_struct *tsk)
>>> {
>>> struct blk_plug *plug = tsk->plug;
>>>
>>> - return plug && !list_empty(&plug->list);
>>> + return plug && (!list_empty(&plug->list) || !list_empty(&plug->cb_list));
>>> }
>>>
>>> /*
>>
>> Maybe I'm missing something, but why do you need those callbacks? If
>> it's to use plugging yourself, perhaps we can just ensure that those
>> don't get assigned in the task - so it would have to be used with care.
>>
>> It's not that I disagree to these callbacks, I just want to ensure I
>> understand why you need them.
>>
>
> I'm sure one of us is missing something (probably both) but I'm not
> sure what.
>
> The callback is central.
>
> It is simply to use plugging in md.
> Just like blk-core, md will notice that a blk_plug is active and will put
> requests aside. I then need something to call in to md when blk_finish_plug
But this is done in __make_request(), so md devices should not be
affected at all. This is the part of your explanation that I do not
connect with the code.
If md itself is putting things on the plug list, why is it doing that?
> is called so that put-aside requests can be released.
> As md can be built as a module, that call must be a call-back of some sort.
> blk-core doesn't need to register blk_plug_flush because that is never in a
> module, so it can be called directly. But the md equivalent could be in a
> module, so I need to be able to register a call back.
>
> Does that help?
Not really. Is the problem that _you_ would like to stash things aside,
not the fact that __make_request() puts things on a task plug list?
--
Jens Axboe
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/