Message-ID: <x49638pl9eo.fsf@segfault.boston.devel.redhat.com>
Date: Wed, 02 Dec 2009 09:47:59 -0500
From: Jeff Moyer <jmoyer@...hat.com>
To: Vivek Goyal <vgoyal@...hat.com>
Cc: Corrado Zoccolo <czoccolo@...il.com>,
Linux-Kernel <linux-kernel@...r.kernel.org>,
Jens Axboe <jens.axboe@...cle.com>
Subject: Re: [PATCH 4/4] cfq-iosched: fix corner cases in idling logic

Vivek Goyal <vgoyal@...hat.com> writes:

> On Wed, Dec 02, 2009 at 03:14:22PM +0100, Corrado Zoccolo wrote:
>> Hi Jeff,
>> On Wed, Dec 2, 2009 at 2:42 PM, Jeff Moyer <jmoyer@...hat.com> wrote:
>> > Corrado Zoccolo <czoccolo@...il.com> writes:
>> >
>> >> Idling logic was disabled in some corner cases, leading to an unfair
>> >> share for no-idle queues.
>> >> * the idle timer was not armed if there were other requests in the
>> >> driver. Unfortunately, those requests could come from other workloads,
>> >> or from queues for which we don't enable idling. So we will now check
>> >> only pending requests from the active queue.
>> >> * the rq_noidle check on a no-idle queue could disable the end-of-tree
>> >> idle if the last completed request was rq_noidle. Now, we will disable
>> >> that idle only if all the queues served in the no-idle tree had
>> >> rq_noidle requests.
>> >>
>> >> Reported-by: Vivek Goyal <vgoyal@...hat.com>
>> >> Signed-off-by: Corrado Zoccolo <czoccolo@...il.com>
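
For the first point, I assume the change is in cfq_arm_slice_timer(), along
the lines of checking only the active queue's own dispatched requests rather
than the total driver count (a guess on my part, since that hunk isn't
quoted here):

	/*
	 * Guessing at the actual hunk: only requests still outstanding
	 * from the active queue should block idling, not requests that
	 * other queues/workloads have sitting in the driver.
	 */
	if (cfqq->dispatched)
		return;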
>> >
>> >> @@ -2606,17 +2608,27 @@ static void cfq_completed_request(struct request_queue *q, struct request *rq)
>> >>  			cfq_clear_cfqq_slice_new(cfqq);
>> >>  		}
>> >>  		/*
>> >> -		 * If there are no requests waiting in this queue, and
>> >> -		 * there are other queues ready to issue requests, AND
>> >> -		 * those other queues are issuing requests within our
>> >> -		 * mean seek distance, give them a chance to run instead
>> >> -		 * of idling.
>> >> +		 * Idling is not enabled on:
>> >> +		 * - expired queues
>> >> +		 * - idle-priority queues
>> >> +		 * - async queues
>> >> +		 * - queues with still some requests queued
>> >> +		 * - when there is a close cooperator
>> >>  		 */
>> >
>> > I'm not sure this logic is correct. Is this for the 2.6.33 branch?
>> Yes.
>> > If so, the coop flag now means that multiple processes share the same
>> > cfqq. Are you sure this is the right thing to do for close cooperators?
>> I'm not sure. I didn't change the logic for close cooperators:

Heh, right you are.

>> -		else if (cfqq_empty && !cfq_close_cooperator(cfqd, cfqq) &&
>> -			 sync && !rq_noidle(rq))
>> -			cfq_arm_slice_timer(cfqd);
>> +		else if (sync && cfqq_empty &&
>> +			 !cfq_close_cooperator(cfqd, cfqq)) {
>> +			cfqd->noidle_tree_requires_idle |= !rq_noidle(rq);
>>
>> I changed the rq_noidle part, and rewrote the comment to be aligned
>> with the code.
>> So I don't mind if you improve (or just remove) the close cooperator part.
>> You should probably run a test where close cooperating processes compete
>> with a sequential reader, to see the effect of idling (or not) on them.
>>
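Just to make sure we're reading the same thing, here is how I understand the
new completion-time decision as a whole; the serving_type/SYNC_WORKLOAD part
is my guess at the code surrounding the quoted hunk, since it isn't shown
above:

	else if (sync && cfqq_empty &&
		 !cfq_close_cooperator(cfqd, cfqq)) {
		/* remember if this no-idle tree saw any !rq_noidle request */
		cfqd->noidle_tree_requires_idle |= !rq_noidle(rq);
		/*
		 * My guess at the surrounding code: the sync workload
		 * always idles, while the no-idle workload idles at the
		 * end of the tree only if at least one completed request
		 * was not rq_noidle.
		 */
		if (cfqd->serving_type == SYNC_WORKLOAD ||
		    cfqd->noidle_tree_requires_idle)
			cfq_arm_slice_timer(cfqd);
	}

If that reading is right, the no-idle tree keeps its end-of-tree idle unless
every queue served from it completed only rq_noidle requests, which matches
the changelog.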
>
> I also can't find what's wrong with this. Previously we were not merging
> close cooperators into a single queue. So if we found a close cooperator,
> we chose not to idle and moved to that close cooperator instead. Now we
> try to merge all the close cooperators into a single queue, but that
> merging has not taken place yet and will happen when the next request
> comes.

The coop flag is not set until the merge has taken place.

> A normal sequential reader will not find a close cooperator. Only the
> queues which should be merged will find one. If these queues are going
> to be merged soon anyway, there is probably no point in continuing to
> idle on this queue once we have found a close cooperator.
>
> So, to me, even in the new code by Jeff, it is probably fine to continue
> with the policy of not idling if we found a close cooperator.

That would mean changing the check from cfqq_coop to cfqq->new_queue != NULL.
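Something like this, completely untested (and I am calling the field
new_cfqq here; the exact name in the merge patches may differ):

	/* new_cfqq is my guess at the name of the pending merge target */
	else if (sync && cfqq_empty && !cfqq->new_cfqq) {
		/*
		 * A pending merge means we did find a close cooperator and
		 * this queue is about to be folded into it, so don't idle.
		 */
		cfqd->noidle_tree_requires_idle |= !rq_noidle(rq);
		...
	}

That bases the "don't idle, we have a close cooperator" decision on whether
a merge is actually pending, rather than on the coop flag, which is only set
once the merge has happened.
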
Cheers,
Jeff