Message-ID: <x49ws2iz3gg.fsf@segfault.boston.devel.redhat.com>
Date:	Mon, 26 Oct 2009 09:31:43 -0400
From:	Jeff Moyer <jmoyer@...hat.com>
To:	Corrado Zoccolo <czoccolo@...il.com>
Cc:	jens.axboe@...cle.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH/RFC 0/4] cfq: implement merging and breaking up of cfq_queues

Corrado Zoccolo <czoccolo@...il.com> writes:

> Hi Jeff,
> this series looks good.

Hi, Corrado.  Thanks again for the review!

> I like in particular the fact that you move seekiness detection into
> the cfqq.  This can help with processes that issue sequential reads
> and seeky writes, or vice versa.
> The think time could probably also be made per-cfqq, so that the
> decision of whether to idle for a given cfqq is more precise.

I'll have to think about that one.  It would be good to know Jens'
opinion on the matter, too.
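
Just to make the suggestion concrete, here is a rough, purely
illustrative sketch of what per-cfqq seek and think-time bookkeeping
could look like.  The struct and function names below are made up for
the example and are not taken from the patch series or the current CFQ
code:

/*
 * Purely illustrative sketch -- keep decaying seek-distance and
 * think-time statistics per cfqq, so that both "is this queue seeky?"
 * and "is it worth idling for it?" are answered from the same
 * per-queue history.
 */
#include <stdbool.h>
#include <stdint.h>

struct cfqq_stats {
	uint64_t last_end_pos;	/* sector just past the last completed request */
	uint64_t last_end_time;	/* completion time of that request, in us */
	uint64_t seek_mean;	/* decaying mean of the absolute seek distance */
	uint64_t ttime_mean;	/* decaying mean of the think time, in us */
};

/* update the decaying means when a new request arrives for this queue */
static void cfqq_update_stats(struct cfqq_stats *st, uint64_t pos, uint64_t now)
{
	uint64_t seek = pos > st->last_end_pos ?
			pos - st->last_end_pos : st->last_end_pos - pos;
	uint64_t think = now - st->last_end_time;

	/* 7/8 old + 1/8 new, similar in spirit to CFQ's existing averaging */
	st->seek_mean  = (st->seek_mean * 7 + seek) / 8;
	st->ttime_mean = (st->ttime_mean * 7 + think) / 8;

	/* simplified: CFQ proper would track these at request completion */
	st->last_end_pos = pos;
	st->last_end_time = now;
}

/* a queue is considered seeky when its mean seek distance is large */
static bool cfqq_seeky(const struct cfqq_stats *st)
{
	return st->seek_mean > 8192;	/* threshold in sectors, arbitrary */
}

/* idle only if this queue is sequential and its own think time is short */
static bool cfqq_should_idle(const struct cfqq_stats *st, uint64_t idle_us)
{
	return !cfqq_seeky(st) && st->ttime_mean < idle_us;
}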

> On Fri, Oct 23, 2009 at 11:14 PM, Jeff Moyer <jmoyer@...hat.com> wrote:
>> Hi,
>>
>> This is a follow-up patch to the original close cooperator support for
>> CFQ.  The problem is that some programs (NFSd, dump(8), iscsi target
>> mode driver, qemu) interleave sequential I/Os between multiple threads
>> or processes.  The result is that CFQ's idling logic introduces large
>> delays, which lead to very low throughput.
>
> You identified the problem in the idling logic, which reduces
> throughput in this particular scenario, where various threads or
> processes issue I/O requests (in random order) with different I/O
> contexts on behalf of a single entity.
> In this case, any idling between those threads is detrimental.
> Ideally, such cases should already be spotted, since the think time
> should be high for such processes, so I wonder whether this indicates
> a problem in the current think-time logic.

For read-test2, the readers are not dependent upon each other.  That is,
each process reads the blocks assigned to it, so there is no "thinking"
(waiting on the other processes) in between I/Os.

> Can you send me your read-test, so I can investigate it?

Sure thing.  I didn't write it; it was provided by Moritoshi Oshiro to
aid in reproducing the dump issue.  You can find it here:

  http://people.redhat.com/jmoyer/read-test2.tar.gz

Cheers,
Jeff
