Message-ID: <4e5e476b1001131330l75f5d949u13b615d408206c92@mail.gmail.com>
Date:	Wed, 13 Jan 2010 22:30:52 +0100
From:	Corrado Zoccolo <czoccolo@...il.com>
To:	Vivek Goyal <vgoyal@...hat.com>
Cc:	Shaohua Li <shaohua.li@...el.com>, jens.axboe@...cle.com,
	linux-kernel@...r.kernel.org, jmoyer@...hat.com,
	guijianfeng@...fujitsu.com, yanmin_zhang@...ux.intel.com
Subject: Re: [PATCH] cfq-iosched: don't stop async queue with async requests pending

On Wed, Jan 13, 2010 at 12:10 PM, Vivek Goyal <vgoyal@...hat.com> wrote:
> On Wed, Jan 13, 2010 at 03:44:42PM +0800, Shaohua Li wrote:
>> My SSD's direct-write speed is about 80 MB/s, but when I test page writeback,
>> the speed only reaches 68 MB/s. The patch below fixes this.
>> It appears we misused cfq_should_idle() in cfq_may_dispatch(). cfq_should_idle()
>> means a queue should idle because it is a seekless sync queue, or because it is
>> the last queue on its service tree, which maintains the service tree's time
>> slice. So it does not mean the last queue is always a sync queue. If the last
>> queue is an async queue, we definitely shouldn't stop dispatching requests
>> because of pending async requests.
>>
>> Signed-off-by: Shaohua Li <shaohua.li@...el.com>
>>
>> diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
>> index 918c7fd..8198079 100644
>> --- a/block/cfq-iosched.c
>> +++ b/block/cfq-iosched.c
>> @@ -2222,7 +2222,8 @@ static bool cfq_may_dispatch(struct cfq_data *cfqd, struct cfq_queue *cfqq)
>>       /*
>>        * Drain async requests before we start sync IO
>>        */
>> -     if (cfq_should_idle(cfqd, cfqq) && cfqd->rq_in_driver[BLK_RW_ASYNC])
>> +     if (cfq_cfqq_sync(cfqq) && cfq_should_idle(cfqd, cfqq)
>> +             && cfqd->rq_in_driver[BLK_RW_ASYNC])
>>               return false;
>
> So are we driving the queue depth at 1 when pure buffered writes are going
> on? Because in that case service_tree->count == 1, cfq_should_idle() will
> return 1, and it looks like we will not dispatch the next write until the
> previous write has completed?

Yes, it seems so. It has to be fixed.
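
To make the depth-1 case concrete, here is a toy user-space model of the
dispatch gate. Everything below is a simplified stand-in for the kernel's
structures and helpers, not the real in-kernel API; only the condition itself
mirrors the patched hunk above.

#include <stdbool.h>

enum { BLK_RW_SYNC = 0, BLK_RW_ASYNC = 1 };

struct cfq_queue { bool sync; bool seeky; };
struct cfq_data  { int rq_in_driver[2]; int service_tree_count; };

static bool cfq_cfqq_sync(const struct cfq_queue *cfqq)
{
	return cfqq->sync;
}

/* Idle for seekless sync queues, or when this is the last queue on
 * its service tree (service_tree->count == 1). */
static bool cfq_should_idle(const struct cfq_data *cfqd,
			    const struct cfq_queue *cfqq)
{
	return (cfqq->sync && !cfqq->seeky) || cfqd->service_tree_count == 1;
}

static bool cfq_may_dispatch(const struct cfq_data *cfqd,
			     const struct cfq_queue *cfqq)
{
	/* Patched gate: only sync queues wait for async drain. */
	if (cfq_cfqq_sync(cfqq) && cfq_should_idle(cfqd, cfqq) &&
	    cfqd->rq_in_driver[BLK_RW_ASYNC])
		return false;
	return true;
}

int main(void)
{
	/* A lone async queue with one of its own writes in the driver:
	 * service_tree->count == 1, so cfq_should_idle() returns true.
	 * Without the cfq_cfqq_sync() term, dispatch would return false
	 * and the queue would wait on itself, pinning the depth at 1. */
	struct cfq_data cfqd = { .rq_in_driver = { 0, 1 },
				 .service_tree_count = 1 };
	struct cfq_queue async_q = { .sync = false, .seeky = false };

	return cfq_may_dispatch(&cfqd, &async_q) ? 0 : 1;
}

Built as an ordinary C program, main() exits 0 with the patched gate; dropping
the cfq_cfqq_sync() term makes it exit 1, i.e. the lone async queue would be
refused dispatch.
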

>
> A general question. Why do we need to drain async requests before we start
> sync IO? How does that help?
>
> A related question: even if we have to do that, why do we check for
> cfq_should_idle()? Why can't we just do the following?
>
>        if (cfq_cfqq_sync(cfqq) && cfqd->rq_in_driver[BLK_RW_ASYNC])
>
This would also make seeky sync queues wait. I think we drain to avoid
disrupting a sync stream with far-away writes, but that is not needed when
the queues are already seeky.

Thanks,
Corrado
> Thanks
> Vivek