Date:	Mon, 9 Nov 2009 09:33:47 -0800
From:	Nauman Rafique <>
To:	Vivek Goyal <>
Cc:	Corrado Zoccolo <>
Subject: Re: [RFC] Workload type Vs Groups (Was: Re: [PATCH 02/20] blkio: 
	Change CFQ to use CFS like queue time stamps)

On Fri, Nov 6, 2009 at 2:22 PM, Vivek Goyal <> wrote:
> On Wed, Nov 04, 2009 at 10:18:15PM +0100, Corrado Zoccolo wrote:
>> Hi Vivek,
>> On Wed, Nov 4, 2009 at 12:43 AM, Vivek Goyal <> wrote:
>> > o Previously CFQ had one service tree where queues of all three prio classes
>> >  were queued. One side effect of this time-stamping approach is that the
>> >  single-tree approach might no longer work, and we need to keep separate
>> >  service trees for the three prio classes.
>> >
>> Single service tree is no longer true in cfq for-2.6.33.
>> Now we have a matrix of service trees, with first dimension being the
>> priority class, and second dimension being the workload type
>> (synchronous idle, synchronous no-idle, async).
>> You can have a look at the series: .
>> It may have other interesting influences on your work, such as the idle
>> introduced at the end of the synchronous no-idle tree, which provides
>> fairness also for seeky or high-think-time queues.
> Hi All,
> I am now rebasing my patches on the for-2.6.33 branch. There are a
> significant number of changes in that branch, and the changes from
> Corrado in particular raise an interesting question.
> Corrado has introduced the functionality of grouping the cfq queues by
> workload type and giving time slots to these sub-groups (sync-idle,
> sync-noidle, async).
> I was thinking of placing groups on top of this model, so that we select
> the group first and then select the type of workload and then finally
> the queue to run.
> Corrado came up with an interesting suggestion (in a private mail): what
> if we implement workload type at the top and divide the share among
> groups within each workload type?
> So one would first select the workload to run, then the group within
> that workload, and then the cfq queue within the group.
> The advantages of this approach are:
> - For the sync-noidle workload, we will not idle per group; we will idle
>  only at the root level. (If we don't idle on a group once it becomes
>  empty, we will not see fairness for that group, so it is a fairness vs.
>  throughput call.)
> - It allows us to limit the system-wide share of a workload type. For
>  example, one can fix the system-wide share of async queues. Generally
>  it is not very prudent to allocate a group 50% of the disk share and
>  then have that group decide to do only async IO while the sync IO in
>  the rest of the groups suffers.
> Disadvantage:
> - The definition of fairness becomes a bit murkier: fairness is now
>  achieved for a group within a workload type. So if one group is doing
>  IO of both the sync-idle and sync-noidle types while another group is
>  doing only sync-noidle IO, the first group will get more overall disk
>  time even if both groups have the same weight.
> Looking for some feedback about which approach makes more sense before I
> write patches.

On first look, the first option did make some sense. But isn't the whole
point of adding cgroups to support fairness and isolation? If we add
cgroup support in a way that does not provide isolation, there is not
much point to the whole effort.

The first approach seems to be directed towards keeping good overall
throughput. Fairness and isolation always come with a possible loss in
overall throughput. The assumption is that once someone is using cgroups,
overall system efficiency is a concern secondary to the performance we
guarantee to each cgroup.

Also, the second approach is a cleaner design. For each cgroup, we will
need one data structure instead of three, one per workload type. And all
the new functionality should still live under a config option, so if
someone does not want cgroups, they can just turn them off and we will be
back to just one set of trees for each workload type.
> Thanks
> Vivek