Date:	Fri, 25 Jul 2014 11:13:17 -0400
From:	Rik van Riel <>
To:	Vincent Guittot <>
CC:	linux-kernel <>,
	Peter Zijlstra <>,
	Michael Neuling <>,
	Ingo Molnar <>, Paul Turner <>,
	Nicolas Pitre <>
Subject: Re: [PATCH] sched: make update_sd_pick_busiest return true on a busier

On 07/25/2014 11:02 AM, Vincent Guittot wrote:
> On 25 July 2014 16:02, Rik van Riel <> wrote:
>> On 07/23/2014 03:41 AM, Vincent Guittot wrote:
>>>> Regarding your issue with "perf bench numa mem" that is not
>>>> spread on all nodes, SD_PREFER_SIBLING flag (of DIE level)
>>>> should do the job by reducing the capacity of  "not local
>>>> DIE" group at NUMA level to 1 task during the load balance
>>>> computation. So you should have 1 task per sched_group at
>>>> NUMA level.
>> Looking at the code some more, it is clear why this does not 
>> happen. If sd->flags & SD_NUMA, then SD_PREFER_SIBLING will never
>> be set.
> I don't have a lot of experience with NUMA systems and how their
> sched_domain topology is described, but IIUC you don't have any
> sched_domain levels other than the NUMA ones? Otherwise the flag
> should be present in one of the non-NUMA levels (SMT, MC or DIE)

The system I am testing on has 3 or 4 sched_domain levels:
one for the HT siblings(?), one for each core, one for each
node/socket, and one parent domain for the whole system.

SD_PREFER_SIBLING should be set at the HT sibling level
and at the core level.

However, it is not set at the levels above that.

That means the SD_PREFER_SIBLING flag does its thing within
each CPU core and between cores on a socket, but not between
NUMA nodes...

>> On a related note, that part of the load balancing code probably 
>> needs to be rewritten to deal with unequal
>> group_capacity_factors anyway.

> AFAICT, sgs->avg_load is weighted by the capacity in
> update_sg_lb_stats

Indeed, I dug into that code after sending the email, and found
that piece of code just before I read Peter's email pointing
it out to me.

> I'm working on a patchset that gets rid of capacity_factor (as
> mentioned by Peter) and directly uses capacity instead. I should
> send the v4 next week.

I am looking forward to anything that will make this code easier
to follow :)

-- 
All rights reversed

