Message-ID: <x49fwzsbvga.fsf@segfault.boston.devel.redhat.com>
Date: Fri, 09 Jul 2010 10:07:01 -0400
From: Jeff Moyer <jmoyer@...hat.com>
To: Corrado Zoccolo <czoccolo@...il.com>
Cc: Jens Axboe <axboe@...nel.dk>,
Linux-Kernel <linux-kernel@...r.kernel.org>,
Vivek Goyal <vgoyal@...hat.com>
Subject: Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.

Corrado Zoccolo <czoccolo@...il.com> writes:
> On Wed, Jul 7, 2010 at 10:06 PM, Jeff Moyer <jmoyer@...hat.com> wrote:
>> Corrado Zoccolo <czoccolo@...il.com> writes:
>>
>>> On Wed, Jul 7, 2010 at 7:03 PM, Jeff Moyer <jmoyer@...hat.com> wrote:
>>>> Corrado Zoccolo <czoccolo@...il.com> writes:
>>>>
>>>>> Hi Jens,
>>>>> patch 8e55063 "cfq-iosched: fix corner cases in idling logic" is
>>>>> suspected of causing some regressions on high-end hardware.
>>>>> The two patches from this series:
>>>>> - [PATCH 1/2] cfq-iosched: fix tree-wide handling of rq_noidle
>>>>> - [PATCH 2/2] cfq-iosched: RQ_NOIDLE enabled for SYNC_WORKLOAD
>>>>> fix two issues that I have identified, related to how RQ_NOIDLE is
>>>>> used by the upper layers.
>>>>> The first patch makes sure that an RQ_NOIDLE request arriving after a
>>>>> sequence of possibly idling requests from the same queue on the no-idle
>>>>> tree clears the noidle_tree_requires_idle flag (a rough sketch of the
>>>>> intended behaviour is included below).
>>>>> The second patch enables RQ_NOIDLE for queues on the idling tree,
>>>>> restoring the pre-8e55063 behaviour.
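>>>>>
>>>>> A minimal user-space sketch of the flag behaviour the first patch is
>>>>> after (illustrative only; the struct and variable names below are made
>>>>> up and are not the actual cfq-iosched.c code): with a sticky OR, one
>>>>> idling request keeps noidle_tree_requires_idle set for good, while
>>>>> letting the last completed request decide clears it when the queue
>>>>> ends its burst with an RQ_NOIDLE request.
>>>>>
>>>>> #include <stdbool.h>
>>>>> #include <stdio.h>
>>>>>
>>>>> struct fake_rq { bool noidle; };       /* stand-in for rq_noidle(rq) */
>>>>>
>>>>> static bool requires_idle_old;         /* old logic: sticky OR */
>>>>> static bool requires_idle_new;         /* fixed logic: last request wins */
>>>>>
>>>>> static void complete_request(const struct fake_rq *rq)
>>>>> {
>>>>>         requires_idle_old |= !rq->noidle; /* RQ_NOIDLE can never clear it */
>>>>>         requires_idle_new  = !rq->noidle; /* trailing RQ_NOIDLE clears it */
>>>>> }
>>>>>
>>>>> int main(void)
>>>>> {
>>>>>         /* a queue ending its burst with an RQ_NOIDLE request (e.g. fsync) */
>>>>>         struct fake_rq burst[] = { { false }, { false }, { true } };
>>>>>         unsigned int i;
>>>>>
>>>>>         for (i = 0; i < sizeof(burst) / sizeof(burst[0]); i++)
>>>>>                 complete_request(&burst[i]);
>>>>>
>>>>>         printf("old logic idles: %d, fixed logic idles: %d\n",
>>>>>                requires_idle_old, requires_idle_new);
>>>>>         return 0;
>>>>> }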
>>>>
>>>> Hi, Corrado,
>>>>
>>>> I ran your kernel through my tests. Here are the results, up against
>>>> vanilla, deadline, and the blk_yield patch set:
>>>>
>>> Hi Jeff,
>>> can you also add cfq with 8e55063 reverted to the testing mix?
>>
>> Sure, the results now look like this:
>>
>>                    just      just  |      mixed
>>                 fs_mark       fio  |  fs_mark     fio
>> ------------------------------------+-----------------
>> deadline          529.44     151.4  |    450.0    78.2
>> vanilla cfq       107.88     164.4  |      6.6   137.2
>> blk_yield cfq     530.82     158.7  |    113.2    78.6
>> corrado cfq       110.16     220.6  |      7.0   159.8
>> 8e55063 revert    559.66     198.9  |     16.1   153.3
>>
>> I had accidentally run your patch set (corrado cfq) on ext3, so the
>> numbers were a bit off (everything else was run against ext4). The
>> corrected numbers above reflect the performance on ext4, which is much
>> better for the sequential reader, but still not great for the fs_mark
>> run. Reverting 8e55063 definitely gets us into better shape. However,
>> if we care about the mixed workload, then it won't be enough.
>
> I'm wondering why deadline performs so well in the fs_mark workload. Is
> it because it doesn't distinguish between sync and async writes?
It performs well because it doesn't do any idling.
> Maybe we can achieve something similar by putting all sync writes
> (those marked REQ_NOIDLE) in the no-idle tree? This, coupled with
> making jbd(2) perform sync writes, should make the yield automatic:
> those queues all live in the same tree, for which we don't idle
> between queues, so it should provide fairness with respect to a
> sequential reader (which lives in the other tree). A rough sketch of
> the classification I have in mind is below.
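>
> To illustrate (a user-space sketch only, not the actual cfqq_type() or
> service-tree code; the names here are made up): sync reads stay on the
> idling tree, while sync writes tagged REQ_NOIDLE go to the no-idle tree,
> so CFQ round-robins between their queues instead of idling.
>
> #include <stdbool.h>
> #include <stdio.h>
>
> enum tree { TREE_SYNC_IDLE, TREE_SYNC_NOIDLE, TREE_ASYNC };
>
> struct fake_rq {
>         bool sync;
>         bool write;
>         bool noidle;    /* REQ_NOIDLE, e.g. set for fsync/jbd(2) commits */
> };
>
> static enum tree classify(const struct fake_rq *rq)
> {
>         if (!rq->sync)
>                 return TREE_ASYNC;
>         /* proposed: every sync write marked REQ_NOIDLE goes no-idle */
>         if (rq->write && rq->noidle)
>                 return TREE_SYNC_NOIDLE;
>         return TREE_SYNC_IDLE;
> }
>
> int main(void)
> {
>         struct fake_rq seq_read  = { .sync = true, .write = false, .noidle = false };
>         struct fake_rq jbd_write = { .sync = true, .write = true,  .noidle = true };
>
>         printf("sequential read -> tree %d\n", classify(&seq_read));
>         printf("journal write   -> tree %d\n", classify(&jbd_write));
>         return 0;
> }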
>
> Can you test the attached patch, where I also added your changes to
> make jbd(2) perform sync writes?
I'm not sure what kernel you generated that patch against. I'm working
with 2.6.35-rc3 or later, and your patch does not apply there.
Cheers,
Jeff