Date:	Wed, 3 Oct 2007 02:21:08 -0700
From:	Paul Jackson <pj@....com>
To:	Nick Piggin <nickpiggin@...oo.com.au>
Cc:	mingo@...e.hu, akpm@...ux-foundation.org, menage@...gle.com,
	linux-kernel@...r.kernel.org, dino@...ibm.com, cpw@....com
Subject: Re: [patch] sched: fix sched-domains partitioning by cpusets

Nick wrote:
> Sorry for the confusion: I only meant the sched.c part of that
> patch, not the full thing.

Ah - ok.  We're getting closer then.  Good.

Let me be sure I've got this right then.

You prefer the interface from your proposed patch, by which the
cpuset code passes each sched domain request to the scheduler code
as a single cpumask defining that sched domain:

    int partition_sched_domains(cpumask_t *partition)

and I am suggesting instead a new and different interface:

    void partition_sched_domains(int ndoms_new, cpumask_t *doms_new)

In the first API, one cpumask is passed in, and a single sched
domain is formed from those CPUs, removing them from whatever sched
domain they might already have been a member of.

In the second API, the entire flat partitioning is passed in,
as an array of masks, one mask for each desired sched domain.
The passed-in masks do not overlap, but might not cover all CPUs.
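
For concreteness, here is roughly how I picture the cpuset side
driving the second API -- a sketch only, with all names, including
the helper cpuset_build_doms(), made up for illustration rather
than taken from either patch:

    static void cpuset_rebuild_domains(void)
    {
        cpumask_t *doms;
        int ndoms;

        /*
         * Hypothetical helper: walk the cpuset hierarchy and fill
         * doms[] with one non-overlapping cpumask per desired
         * sched domain (CPUs in no mask get no sched domain).
         */
        ndoms = cpuset_build_doms(&doms);

        /* Hand the scheduler the entire flat partitioning at once. */
        partition_sched_domains(ndoms, doms);
    }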

Question -- how does one turn off load balancing on some CPUs
using the first API?

    Does one do this by forming singleton sched domains of one
    CPU each?  Is there any downside to doing this?

    The simplest cpuset code to work with this would end up exposing
    this method of disabling load balancing to user space, forcing
    users to create cpusets of one CPU each in order to disable
    load balancing.

    However, a little bit of additional kernel cpuset code could hide
    this detail from user space, by recognizing when the user had
    asked to turn off load balancing on some larger cpuset, and by
    then calling partition_sched_domains() multiple times, once for
    each CPU in that cpuset.
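
    Roughly like this, as a sketch only -- assuming the single-cpumask
    prototype above, with cs being the cpuset in question:

        int cpu;
        cpumask_t single;

        /*
         * Disable load balancing across cs by handing the scheduler
         * one singleton sched domain per CPU in that cpuset.
         */
        for_each_cpu_mask(cpu, cs->cpus_allowed) {
            cpus_clear(single);
            cpu_set(cpu, single);
            partition_sched_domains(&single);
        }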

There might be an even simpler way.  If the kernel/sched.c routines
detach_destroy_domains() and build_sched_domains() were exposed as
external routines, then the cpuset code could call them directly,
removing the partition_sched_domains() routine from sched.c entirely.
Would this be worth pursuing?
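
That is, something like the following from the cpuset side -- again
a sketch only, with the prototypes being my reading of those routines
as they stand (currently static in kernel/sched.c), and doms[]/ndoms
coming from the cpuset code as in the array-based sketch above:

    /* My guess at the prototypes, if they were made external: */
    extern void detach_destroy_domains(const cpumask_t *cpu_map);
    extern int build_sched_domains(const cpumask_t *cpu_map);

    static void cpuset_apply_doms(const cpumask_t *old_cpus,
                                  cpumask_t *doms, int ndoms)
    {
        int i;

        /* Tear down whatever sched domains covered the old CPUs ... */
        detach_destroy_domains(old_cpus);

        /* ... then build one new sched domain per desired partition. */
        for (i = 0; i < ndoms; i++)
            build_sched_domains(&doms[i]);
    }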

-- 
                  I won't rest till it's the best ...
                  Programmer, Linux Scalability
                  Paul Jackson <pj@....com> 1.925.600.0401
