Message-Id: <1225191456.4903.254.camel@marge.simson.net>
Date:	Tue, 28 Oct 2008 11:57:36 +0100
From:	Mike Galbraith <efault@....de>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	David Miller <davem@...emloft.net>, zbr@...emap.net,
	alan@...rguk.ukuu.org.uk, jkosina@...e.cz,
	akpm@...ux-foundation.org, a.p.zijlstra@...llo.nl, rjw@...k.pl,
	s0mbre@...rvice.net.ru, linux-kernel@...r.kernel.org,
	netdev@...r.kernel.org
Subject: Re: [tbench regression fixes]: digging out smelly deadmen.

On Tue, 2008-10-28 at 11:37 +0100, Ingo Molnar wrote:
> * Mike Galbraith <efault@....de> wrote:
> 
> > ..removing the overhead from .27 does not produce the anticipated 
> > result despite a max context switch rate markedly above that of 
> > 2.6.26.
> > 
> > There lies an as yet unaddressed regression IMBHO.  The hrtick has 
> > been addressed.  It sucked at high frequency, and it's gone.  The 
> > added math overhead in .27 hurt some too, and is now history as 
> > well.
> 
> thanks Mike for the _extensive_ testing and bug hunting session you've 
> done in the past couple of weeks! All the relevant fixlets you found 
> are now queued up properly in sched/urgent, correct?

Yeah.
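
Side note on the context switch numbers quoted above: a per-second
rate can be derived from the cumulative "ctxt" counter in /proc/stat
by sampling it twice.  A minimal sketch (illustrative only, not the
actual test harness used here):

/* Sample /proc/stat's cumulative "ctxt" counter twice, one second
 * apart, and report context switches per second. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static unsigned long long read_ctxt(void)
{
	char line[256];
	unsigned long long v = 0;
	FILE *f = fopen("/proc/stat", "r");

	if (!f) {
		perror("fopen /proc/stat");
		exit(1);
	}
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "ctxt ", 5)) {
			v = strtoull(line + 5, NULL, 10);
			break;
		}
	}
	fclose(f);
	return v;
}

int main(void)
{
	unsigned long long before = read_ctxt();

	sleep(1);
	printf("%llu ctxsw/sec\n", read_ctxt() - before);
	return 0;
}

Build with gcc and run it alongside the benchmark to watch the rate.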

> What's your gut feeling, is that remaining small regression scheduler 
> or networking related?

I don't know where it lives.  I'm still looking, and the numbers are
still playing games with my head.

> i'm cutting the ball in half and i'm passing over one half of it to 
> the networking folks, because your numbers show _huge_ sensitivity in 
> this workload, depending on networking settings:

I strongly _suspect_ that the network folks have some things they could
investigate, but given my utter failure at finding the smoking gun, I
can't say one way or the other.  IMHO, sharing the problem with the
network folks is a fair thing to do.
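
As an illustration of the kind of setting that can swing a small
request/response load like tbench: Nagle vs TCP_NODELAY on the test
sockets.  A minimal sketch of flipping that per-socket knob (purely
illustrative; nobody has fingered this particular setting here):

/* Disable Nagle's algorithm on a freshly created TCP socket.
 * Latency-bound ping-pong benchmarks can swing hard on this. */
#include <stdio.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int main(void)
{
	int one = 1;
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0 || setsockopt(fd, IPPROTO_TCP, TCP_NODELAY,
				 &one, sizeof(one)) < 0) {
		perror("socket/setsockopt");
		return 1;
	}
	printf("TCP_NODELAY set on fd %d\n", fd);
	return 0;
}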

Am I waffling?  Me?  You bet your a$$!  My clock is already squeaky
clean, thank you very much :-)

What I can say is that my box is quite certain that there are influences
outside the scheduler which have had more effect on benchmark results
than the scheduler itself over the life of this testing.

	-Mike

