Date:	Fri, 15 Oct 2010 19:05:25 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Nikhil Rao <ncrao@...gle.com>
Cc:	Ingo Molnar <mingo@...e.hu>, Mike Galbraith <efault@....de>,
	Suresh Siddha <suresh.b.siddha@...el.com>,
	Venkatesh Pallipadi <venki@...gle.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/4] sched: drop group_capacity to 1 only if local
 group has extra capacity

On Fri, 2010-10-15 at 09:13 -0700, Nikhil Rao wrote:

> >> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> >> index 0dd1021..da0c688 100644
> >> --- a/kernel/sched_fair.c
> >> +++ b/kernel/sched_fair.c
> >> @@ -2030,6 +2030,7 @@ struct sd_lb_stats {
> >>       unsigned long this_load;
> >>       unsigned long this_load_per_task;
> >>       unsigned long this_nr_running;
> >> +     unsigned long this_group_capacity;
> >>
> >>       /* Statistics of the busiest group */
> >>       unsigned long max_load;
> >> @@ -2546,15 +2547,18 @@ static inline void update_sd_lb_stats(struct sched_domain *sd, int this_cpu,
> >>               /*
> >>                * In case the child domain prefers tasks go to siblings
> >>                * first, lower the sg capacity to one so that we'll try
> >> -              * and move all the excess tasks away.
> >> +              * and move all the excess tasks away. We lower capacity only
> >> +              * if the local group can handle the extra capacity.
> >>                */
> >> -             if (prefer_sibling)
> >> +             if (prefer_sibling && !local_group &&
> >> +                 sds->this_nr_running < sds->this_group_capacity)
> >>                       sgs.group_capacity = min(sgs.group_capacity, 1UL);
> >>
> >>               if (local_group) {
> >>                       sds->this_load = sgs.avg_load;
> >>                       sds->this = sg;
> >>                       sds->this_nr_running = sgs.sum_nr_running;
> >> +                     sds->this_group_capacity = sgs.group_capacity;
> >>                       sds->this_load_per_task = sgs.sum_weighted_load;
> >>               } else if (update_sd_pick_busiest(sd, sds, sg, &sgs, this_cpu)) {
> >>                       sds->max_load = sgs.avg_load;

OK, but then you assume that local_group will always be the first group
served. Nor is there any purpose in adding sds->this_group_capacity;
you could keep that local to this function.
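
Something like the sketch below (completely untested, just to illustrate;
the identifiers are the ones from your hunk, everything else is elided in
comments) would keep it local:

	static inline void update_sd_lb_stats(struct sched_domain *sd, int this_cpu,
			/* ... other args as before ..., */ struct sd_lb_stats *sds)
	{
		struct sched_group *sg = sd->groups;
		struct sg_lb_stats sgs;
		int load_idx, prefer_sibling = 0;
		/* local variable instead of a new sd_lb_stats field */
		unsigned long this_group_capacity = 0;

		/* ... load_idx / prefer_sibling setup as before ... */

		do {
			int local_group;

			/* ... local_group, memset(&sgs), update_sg_lb_stats() as before ... */

			/*
			 * Only meaningful once the local group has been
			 * visited; until then this_group_capacity is still 0
			 * and the condition below can never be true.
			 */
			if (prefer_sibling && !local_group &&
			    sds->this_nr_running < this_group_capacity)
				sgs.group_capacity = min(sgs.group_capacity, 1UL);

			if (local_group) {
				sds->this_load = sgs.avg_load;
				sds->this = sg;
				sds->this_nr_running = sgs.sum_nr_running;
				this_group_capacity = sgs.group_capacity;
				sds->this_load_per_task = sgs.sum_weighted_load;
			} else if (update_sd_pick_busiest(sd, sds, sg, &sgs, this_cpu)) {
				/* ... busiest-group bookkeeping as before ... */
			}

			sg = sg->next;
		} while (sg != sd->groups);
	}

That keeps the field out of sd_lb_stats, and the ordering assumption is at
least spelled out next to the check that depends on it.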

For regular balancing local_group will be the first, since we only
ascend the domain tree on the local groups. But it's not true for nohz
balancing afaict.


