Date:	Thu, 14 Feb 2008 12:16:21 -0700
From:	"Gregory Haskins" <ghaskins@...ell.com>
To:	"Paul Jackson" <pj@....com>
Cc:	<a.p.zijlstra@...llo.nl>, <mingo@...e.hu>, <arjan@...radead.org>,
	<tglx@...utronix.de>, <dhaval@...ux.vnet.ibm.com>,
	<vatsa@...ux.vnet.ibm.com>, <linux-kernel@...r.kernel.org>
Subject: Re: [RFC][PATCH 0/2] reworking load_balance_monitor

>>> On Thu, Feb 14, 2008 at  1:15 PM, in message
<20080214121544.941d91f1.pj@....com>, Paul Jackson <pj@....com> wrote: 
> Peter wrote of:
>> the lack of rd->load_balance.
> 
> Could you explain to me a bit what that means?
> 
> Does this mean that the existing code would, by default (default being
> a single sched domain covering the entire system's CPUs), load balance
> across the entire system, but with your rework would no longer load
> balance there?  That seems unlikely.
> 
> In any event, from my rather cpuset-centric perspective, there are only
> two common cases to consider.
> 
>  1. In the default case, build_sched_domains() gets called once,
>     at init, with a cpu_map of all non-isolated CPUs, and we should
>     forever after load balance across all those non-isolated CPUs.
> 
>  2. In some carefully managed systems using the per-cpuset
>     'sched_load_balance' flags, we tear down that first default
>     sched domain, by calling detach_destroy_domains() on it, and we
>     then setup some number of sched_domains (typically in the range
>     of two to ten, though I suppose we should design to scale to
>     hundreds of sched domains, on systems with thousands of CPUs)
>     by additional calls to build_sched_domains(), such that their
>     CPUs don't overlap (pairwise disjoint) and such that the union
>     of all their CPUs may, or may not, include all non-isolated CPUs
>     (some CPUs might be left 'out in the cold', intentionally, as
>     essentially additional isolated CPUs.)  We would then expect load
>     balancing within each of these pair-wise disjoint sched domains,
>     but not between one of them and another.


Hi Paul,
  I think it will still work as you describe.  We create a new root-domain
object for each pairwise-disjoint sched-domain.  In your case (1) above, we
would have only one instance of a root-domain, which contains (of course) a
single instance of the rd->load_balance object.  This would, in fact, operate
like the global variable that Peter is suggesting it replace (IIUC).  For
case (2), however, we would instantiate a root-domain object per
pairwise-disjoint sched-domain, and therefore each one would have its own
instance of rd->load_balance.
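
To make the two cases concrete, here is a minimal, standalone C sketch.  It
is not the actual scheduler code: the struct layout and the
attach_root_domain() helper are simplified and hypothetical, and CPU sets are
shown as plain bitmasks.  It only illustrates how case (1) yields a single
rd->load_balance while case (2) yields one per disjoint partition:

    /*
     * Conceptual sketch only -- each pairwise-disjoint partition of CPUs
     * gets its own root_domain, and thus its own load_balance flag.
     */
    #include <stdio.h>

    struct root_domain {
            unsigned long span;     /* CPUs covered by this partition */
            int load_balance;       /* per-partition flag, replacing a global */
    };

    struct sched_partition {
            unsigned long cpu_map;  /* pairwise-disjoint set of CPUs */
            struct root_domain *rd; /* the partition's own root-domain */
    };

    /* Hypothetical helper: attach a root_domain to one disjoint partition. */
    static void attach_root_domain(struct sched_partition *p,
                                   struct root_domain *rd)
    {
            rd->span = p->cpu_map;
            rd->load_balance = 1;   /* balance within this partition only */
            p->rd = rd;
    }

    int main(void)
    {
            /* Case (1): one partition of all non-isolated CPUs -> one rd. */
            struct root_domain rd_all;
            struct sched_partition all = { .cpu_map = 0xffUL };
            attach_root_domain(&all, &rd_all);

            /* Case (2): two disjoint partitions -> two independent rds. */
            struct root_domain rd_a, rd_b;
            struct sched_partition a = { .cpu_map = 0x0fUL };
            struct sched_partition b = { .cpu_map = 0xf0UL };
            attach_root_domain(&a, &rd_a);
            attach_root_domain(&b, &rd_b);

            printf("case 1: one rd (%p), load_balance=%d\n",
                   (void *)all.rd, all.rd->load_balance);
            printf("case 2: distinct rds (%p vs %p), each with its own flag\n",
                   (void *)a.rd, (void *)b.rd);
            return 0;
    }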

HTH
-Greg
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
