Message-Id: <200710081633.52467.ak@suse.de>
Date:	Mon, 8 Oct 2007 16:33:52 +0200
From:	Andi Kleen <ak@...e.de>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] [3/6] scheduler: Do devirtualization for sched_fair

On Monday 08 October 2007 14:39:33 Ingo Molnar wrote:
> 
> * Andi Kleen <ak@...e.de> wrote:
> 
> > > hm, i'm not convinced about this one. It increases the code size a 
> > > bit
> > 
> > Tiny bit (<200 bytes), and the wait_for/sleep_on refactor patch in the
> > series saves over 1K, so I should have some room for code size
> > increase. Overall it will still be considerably smaller.
> 
> there's no forced dependency between those two patches :-) 

I did that .text reduction only to make up for the text increase from
the other patch. Your approach effectively asks me to keep both in a
single patch next time, which surely cannot be what you really want.

> So for now   
> i've applied the one that saves text and skipped the one that bloats it.

Ok, but I trust I have at least a 0.5kb bloat budget for future changes now.

> 
> > > and it's a sched.c local hack. If then this should be done on a 
> > > generic infrastructure level - lots of other code (VFS, networking, 
> > > etc.) could benefit from it i suspect - and then should be 
> > > .configurable as well.
> > 
> > Unfortunately not -- for this to work (especially for inlining) the
> > caller has to #include the files implementing the sub calls. Except
> > for the scheduler that is pretty uncommon unfortunately. Also which
> > call target is the common one is typically much less clear than with
> > sched_fair vs. the other scheduling classes.
> 
> some workloads would call sched_fair uncommon too. To me this seems like 
> a workaround for the lack of a particular hardware feature.

That's like saying all optimizations are a workaround for the lack of
hardware with infinite IPC. Yes, sure. But that doesn't seem like a very
fruitful way to reason.

Besides, even on CPUs with an indirect branch predictor it is probably a
win -- they tend to have much less storage for indirect branch
predictions (which need a full target address) than for conditional
branches (which need just a bit).
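
Roughly what the devirtualized call sites look like -- just a sketch,
not the actual patch, with the names as I remember them from
kernel/sched.c (which already #includes sched_fair.c, so the fair class
functions are static and visible for inlining):

	static inline void enqueue_task(struct rq *rq,
					struct task_struct *p, int wakeup)
	{
		if (likely(p->sched_class == &fair_sched_class))
			/* common case: direct call, inlinable */
			enqueue_task_fair(rq, p, wakeup);
		else
			/* rare classes (rt, idle) keep the indirect call */
			p->sched_class->enqueue_task(rq, p, wakeup);
	}

The indirect call through the class table becomes a compare plus a
conditional direct call, which is exactly the kind of branch the
conditional predictors handle cheaply.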

> > > Then the benefit might become measurable too.
> > 
> > It might have been measurable if the context switch were measurable at
> > all. Unfortunately the lmbench3 lat_ctx test I tried fluctuated by
> > over 50% on its own. Ok, I suppose it would be possible to instrument
> > the kernel itself to measure cycles. Would that convince you?
> 
> dunno, it would depend on the numbers. But really, in most workloads we 
> do a lot more VFS indirect calls than scheduler indirect calls. So if 
> this was an issue i'd really suggest to attack it in a generic way.

The difference is that the VFS always did indirect calls, but the
scheduler didn't. And again, for the VFS it is much less clear in
general what the common paths are. Doing it fully generically would
probably require a JIT and reoptimization at run time -- I don't think
that's a path we should go down.
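
For contrast, a typical VFS call site (hypothetical snippet, just to
illustrate the point):

	/* ->read could resolve to ext3, NFS, a pipe, a char device, ...
	 * -- there is no single implementation common enough to check
	 * for and call directly. */
	ret = file->f_op->read(file, buf, count, &pos);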

But if a generic solution is not possible, doing it for important
special cases where it happens to be possible is certainly a valid
approach.

But given that I couldn't come up with clear numbers, it's reasonable
not to apply it yet. I'll try to come up with a better way to measure
this.
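
Something along these lines perhaps -- only a sketch of the kind of
instrumentation I mean, using get_cycles(), with the counter
declarations and the readout (per-cpu handling, /proc or printk)
hand-waved:

	cycles_t t0, t1;

	t0 = get_cycles();
	enqueue_task(rq, p, wakeup);	/* the call site under test */
	t1 = get_cycles();
	total_cycles += t1 - t0;	/* accumulate for later readout */
	samples++;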

-Andi
