Date: Wed, 28 Oct 2009 09:27:07 +0100
From: Jens Axboe <jens.axboe@...cle.com>
To: Corrado Zoccolo <czoccolo@...il.com>
Cc: Linux-Kernel <linux-kernel@...r.kernel.org>,
	Jeff Moyer <jmoyer@...hat.com>
Subject: Re: [PATCH 0/5] cfq-iosched: improve latency for no-idle queues (v3)

On Mon, Oct 26 2009, Corrado Zoccolo wrote:
> [rebased on top of Jeff's latest changes for 2.6.33. Various code style
> improvements over v1 & v2]
>
> This patch series is intended to improve I/O latency, addressing an often
> neglected but important subset of workloads: the ones for which cfq
> currently prefers not to do any idling.
>
> Those are the ones that would benefit most from having low latency; in
> fact they are any of:
> * processes with large think times (e.g. interactive ones like file
>   managers)
> * seeky ones (e.g. programs faulting in their code at startup)
> * or those marked as no-idle from upper levels.
>
> The patch series addresses this by:
> * reducing queues' timeslices when many queues have pending I/O
> * separating queues with different priorities and different
>   characteristics into different service trees, each with an allocated
>   time slice
> * enabling idling when switching between service trees, even for queues
>   that would not have idling enabled otherwise.
>
> This provides various benefits:
> * the service tree insertion code is simplified, since it doesn't need to
>   cope with priorities any more
> * high priority no-idle queues are no longer penalized when competing
>   with lower priority, idling queues
> * seeky and no-idle queues get their fair share of disk time without
>   penalizing NCQ drive performance, since they can all dispatch together,
>   filling up the available NCQ slots.
>
> On a non-NCQ capable drive, with a workload of 4 random readers competing
> with a sequential writer, the maximum latency experienced by readers
> decreased from >500ms to about 160ms.

Thanks Corrado, this is indeed good stuff.
The only style issue left was the one in cfq_get_avg_queues(); I just
corrected that manually. I have committed this in a test branch based off
for-2.6.33 and will do some testing with it, then merge it into for-2.6.33
if it looks good.

--
Jens Axboe
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/