Date:	Tue, 5 Jun 2007 21:31:33 -0700
From:	"Li, Tong N" <tong.n.li@...el.com>
To:	"Willy Tarreau" <w@....eu>
Cc:	<linux-kernel@...r.kernel.org>, "Ingo Molnar" <mingo@...e.hu>,
	"Con Kolivas" <kernel@...ivas.org>,
	"Linus Torvalds" <torvalds@...ux-foundation.org>,
	"Arjan van de Ven" <arjan@...radead.org>,
	"Siddha, Suresh B" <suresh.b.siddha@...el.com>,
	"Barnes, Jesse" <jesse.barnes@...el.com>,
	"William Lee Irwin III" <wli@...omorphy.com>,
	"Bill Huey \(hui\)" <billh@...ppy.monkey.org>, <vatsa@...ibm.com>,
	<balbir@...ibm.com>, "Nick Piggin" <npiggin@...e.de>,
	"Bill Davidsen" <davidsen@....com>,
	"John Kingman" <kingman@...ragegear.com>,
	"Peter Williams" <pwil3058@...pond.net.au>, <seuler.shi@...il.com>
Subject: RE: [RFC] Extend Linux to support proportional-share scheduling

Willy,

These are all good comments. Regarding the cache penalty, I've done some
measurements using benchmarks like SPEC OMP on an 8-processor SMP, and
the performance with this patch was nearly identical to that with the
mainline. I'm sure some apps may suffer from the potentially larger
number of migrations with this design. In the end, I think what we want
is to balance fairness and performance. This design currently emphasizes
fairness, but it could be changed to relax fairness when performance
does become an issue (this could even be a user-tunable knob, depending
on which aspect the user cares about more).
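
To make the knob idea concrete, here is a rough user-space sketch (not
from the Trio patch; names like fairness_bias and the numbers are
invented for illustration) of how a fairness-vs-migration-cost
tradeoff could be expressed:

#include <stdio.h>

/*
 * Hypothetical illustration only: weigh a task's fairness deficit
 * (how far it has fallen behind its proportional share, in ms of
 * CPU time) against an estimated cost of migrating it to another
 * CPU (cache refill, in ms).  fairness_bias plays the role of the
 * user-tunable knob: 1.0 favors strict fairness, smaller values
 * tolerate more unfairness to avoid migrations.
 */
static int should_migrate(double lag_ms, double migration_cost_ms,
			  double fairness_bias)
{
	return lag_ms * fairness_bias > migration_cost_ms;
}

int main(void)
{
	double lag = 3.0;	/* task is 3 ms behind its fair share */
	double cost = 2.0;	/* cache refill worth roughly 2 ms    */

	printf("strict fairness (bias 1.0): migrate = %d\n",
	       should_migrate(lag, cost, 1.0));
	printf("performance-leaning (bias 0.5): migrate = %d\n",
	       should_migrate(lag, cost, 0.5));
	return 0;
}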

Thanks,

  tong

> -----Original Message-----
> From: Willy Tarreau [mailto:w@....eu]
> Sent: Tuesday, June 05, 2007 8:33 PM
> To: Li, Tong N
> Cc: linux-kernel@...r.kernel.org; Ingo Molnar; Con Kolivas; Linus Torvalds;
> Arjan van de Ven; Siddha, Suresh B; Barnes, Jesse; William Lee Irwin III;
> Bill Huey (hui); vatsa@...ibm.com; balbir@...ibm.com; Nick Piggin; Bill
> Davidsen; John Kingman; Peter Williams; seuler.shi@...il.com
> Subject: Re: [RFC] Extend Linux to support proportional-share scheduling
> 
> Hi Tong,
> 
> On Tue, Jun 05, 2007 at 06:56:17PM -0700, Li, Tong N wrote:
> > Hi all,
> >
> > I've ported my code to mainline 2.6.21.3. You can get it at
> > http://www.cs.duke.edu/~tongli/linux/.
> 
> As much as possible, you should post your patch for others to comment
> on. Posting just a URL is often fine to inform people that there's
> an update to *try*, but at this stage, it may be more important to
> comment on your design and code than to try it.
> 
> [...]
> 
> > Trio has two unique features: (1) it enables users to control shares of
> > CPU time for any thread or group of threads (e.g., a process, an
> > application, etc.), and (2) it enables fair sharing of CPU time across
> > multiple CPUs. For example, with ten tasks running on eight CPUs, Trio
> > allows each task to take an equal fraction of the total CPU time,
> 
> While this looks interesting, doesn't it make threads jump to random
> CPUs all the time, thus reducing cache efficiency? Or maybe it would
> be good to consider two or three criteria to group CPUs:
>   - those which share the same caches (multi-core)
>   - those which share the same local memory on the same mainboard
>     (multi-socket)
>   - those which are so far away from each other that it's really
>     not worth migrating a task
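
As a rough illustration of those three criteria (purely hypothetical
names and costs, not taken from any existing scheduler code), the
grouping could be modeled as a small cost hierarchy:

#include <stdio.h>

/*
 * Hypothetical three-level grouping of CPUs: migrating a task within
 * a shared cache is nearly free, across sockets costs a cache refill,
 * and across boards is rarely worth it.  Costs are made-up numbers.
 */
enum group_level { SAME_CACHE, SAME_SOCKET, REMOTE, NR_LEVELS };

static const double level_cost_ms[NR_LEVELS] = { 0.1, 1.0, 10.0 };
static const char *level_name[NR_LEVELS] = {
	"same cache", "same socket", "remote"
};

/* Farthest boundary a task with the given fairness deficit (ms of CPU
 * time it has fallen behind) could justifiably migrate across; -1
 * means stay on the current CPU. */
static int max_migration_level(double lag_ms)
{
	int best = -1;

	for (int lvl = 0; lvl < NR_LEVELS; lvl++)
		if (lag_ms > level_cost_ms[lvl])
			best = lvl;
	return best;
}

int main(void)
{
	double lags[] = { 0.05, 0.5, 5.0, 50.0 };

	for (unsigned i = 0; i < sizeof(lags) / sizeof(lags[0]); i++) {
		int lvl = max_migration_level(lags[i]);

		printf("lag %5.2f ms -> %s\n", lags[i],
		       lvl < 0 ? "stay put" : level_name[lvl]);
	}
	return 0;
}
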
> 
> > whereas no existing scheduler achieves such fairness. These features
> > enable Trio to complement the mainline scheduler and other proposals
> > such as CFS and SD, providing greater user flexibility and stronger
> > fairness.
> 
> Right now, I think that only benchmarks could tell which design is
> better. I understand that running 10 tasks on 8 CPUs may result in
> the last batch involving only 2 CPUs with 1 task each, thus increasing
> the overall wall time. But maybe cache thrashing between CPUs will
> also increase the wall time.
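
For reference, the arithmetic behind the 10-tasks-on-8-CPUs example
(just the shares and wall times under equal-length tasks; no scheduler
internals assumed):

#include <stdio.h>

int main(void)
{
	int cpus = 8, tasks = 10;

	/* Ideal proportional share: every task gets the same
	 * fraction of total CPU time. */
	double fair_share = (double)cpus / tasks;	/* 0.8 of a CPU */

	/* Batch-style run without migration: 8 tasks run first, then
	 * the remaining 2, leaving 6 CPUs idle in the second batch.
	 * Wall time is measured in task lengths. */
	double ideal_wall = (double)tasks / cpus;	/* 1.25 */
	double batch_wall = 2.0;			/* two rounds */

	printf("fair share per task: %.2f of a CPU\n", fair_share);
	printf("ideal wall time: %.2f, batched wall time: %.2f\n",
	       ideal_wall, batch_wall);
	return 0;
}
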
> 
> Regards,
> Willy
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
