Date:	Mon, 8 Oct 2007 14:39:33 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Andi Kleen <ak@...e.de>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] [3/6] scheduler: Do devirtualization for sched_fair


* Andi Kleen <ak@...e.de> wrote:

> > hm, i'm not convinced about this one. It increases the code size a 
> > bit
> 
> Tiny bit (<200 bytes), and the wait_for/sleep_on refactor patch in the 
> series saves over 1K, so I should have some room for code-size 
> increase. Overall it will still be considerably smaller.

there's no forced dependency between those two patches :-) So for now 
i've applied the one that saves text and skipped the one that bloats it.

> > and it's a sched.c-local hack. If anything, this should be done at a 
> > generic infrastructure level - lots of other code (VFS, networking, 
> > etc.) could benefit from it i suspect - and then it should be 
> > configurable as well.
> 
> Unfortunately not -- for this to work (especially for inlining) you 
> need to #include the files implementing the sub calls. Outside the 
> scheduler that is pretty uncommon, unfortunately. Also, which call 
> target is the common one is typically much less clear than with 
> sched_fair vs. the other scheduling classes.
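
For reference, the pattern under discussion looks roughly like this (a 
sketch, not the actual patch; the wrapper below is made up, though 
fair_sched_class and enqueue_task_fair are real sched_fair.c symbols, 
and sched.c does #include sched_fair.c so the definition is visible):

	/*
	 * Sketch of the devirtualization idea: test for the common
	 * class and call the fair-class function directly so the
	 * compiler can inline it; uncommon classes keep the
	 * indirect call through the ops pointer.
	 */
	static inline void enqueue_task(struct rq *rq, struct task_struct *p,
					int wakeup)
	{
		if (p->sched_class == &fair_sched_class)
			enqueue_task_fair(rq, p, wakeup);	/* direct, inlinable */
		else
			p->sched_class->enqueue_task(rq, p, wakeup);
	}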

some workloads would make sched_fair the uncommon case too. To me this 
seems like a workaround for the lack of a particular hardware feature.

> > Then the benefit might become measurable too.
> 
> It might have been measurable if the context switch itself were 
> measurable at all. Unfortunately the lmbench3 lat_ctx test I tried 
> fluctuated by over 50% on its own. I suppose it would be possible to 
> instrument the kernel itself to measure cycles. Would that convince you?
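
For what it's worth, the instrumentation Andi mentions might look 
roughly like this (a sketch only; the counters and the wrapper are 
hypothetical, get_cycles() is the real TSC-based helper from 
asm/timex.h):

	#include <asm/timex.h>		/* get_cycles(), cycles_t */

	static u64 enqueue_cycles, enqueue_samples;	/* hypothetical counters */

	static inline void timed_enqueue(struct rq *rq, struct task_struct *p,
					 int wakeup)
	{
		cycles_t t0 = get_cycles();

		enqueue_task(rq, p, wakeup);

		enqueue_cycles += get_cycles() - t0;
		enqueue_samples++;	/* report via printk or a /proc file */
	}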

dunno, it would depend on the numbers. But really, in most workloads we 
do a lot more VFS indirect calls than scheduler indirect calls. So if 
this was an issue i'd really suggest to attack it in a generic way.
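
A generic version might be little more than a helper that compares the 
function pointer against the expected common target (a hypothetical 
sketch; nothing like this existed in the tree at the time, and 
do_sync_read as the common ->read implementation is an assumption 
about the typical case):

	/* hypothetical generic helper: direct-call the likely target */
	#define LIKELY_CALL(fp, likely_fn, args...) \
		((fp) == (likely_fn) ? (likely_fn)(args) : (fp)(args))

	/* e.g. in the VFS read path: */
	ret = LIKELY_CALL(file->f_op->read, do_sync_read,
			  file, buf, count, pos);

As it happens, the kernel did grow something along these lines many 
years later (the INDIRECT_CALL_*() wrappers in 
<linux/indirect_call_wrapper.h>, motivated by retpolines), which is in 
the spirit of attacking it as generic infrastructure.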

	Ingo
