Message-ID: <20081028103741.GA22319@elte.hu>
Date:	Tue, 28 Oct 2008 11:37:41 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Mike Galbraith <efault@....de>
Cc:	David Miller <davem@...emloft.net>, zbr@...emap.net,
	alan@...rguk.ukuu.org.uk, jkosina@...e.cz,
	akpm@...ux-foundation.org, a.p.zijlstra@...llo.nl, rjw@...k.pl,
	s0mbre@...rvice.net.ru, linux-kernel@...r.kernel.org,
	netdev@...r.kernel.org
Subject: Re: [tbench regression fixes]: digging out smelly deadmen.


* Mike Galbraith <efault@....de> wrote:

> ..removing the overhead from .27 does not produce the anticipated 
> result despite a max context switch rate markedly above that of 
> 2.6.26.
> 
> There lies an as yet unaddressed regression IMBHO.  The hrtick has 
> been addressed.  It sucked at high frequency, and it's gone.  The 
> added math overhead in .27 hurt some too, and is now history as 
> well.

Thanks, Mike, for the _extensive_ testing and bug-hunting session you've 
done in the past couple of weeks! All the relevant fixlets you found 
are now queued up properly in sched/urgent, correct?

What's your gut feeling: is the remaining small regression scheduler 
related or networking related?

I'm cutting the ball in half and passing one half of it over to the 
networking folks, because your numbers show _huge_ sensitivity in this 
workload depending on networking settings:

> To really illustrate rockiness, cutting network config down from distro
> lard-ball to something leaner and meaner took SMP throughput from this
> (was only testing netperf at that time) on 19 Aug..
> 
> 2.6.22.19 pinned
> 16384  87380  1        1       300.00   59866.40   

> 2.6.22.19 (also pinned)
> 16384  87380  1        1       60.01    94179.12

> 2.6.22.19 (also pinned)
> 16384  87380  1        1       60.01    111272.55  1.00
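
If you want to machine-check rows like the ones quoted above: assuming 
the columns are send/recv socket buffer sizes, request/response sizes 
in bytes, elapsed seconds and transactions per second (that's my 
reading of the snippet, netperf doesn't label them here, and any 
trailing confidence column is just ignored), a quick C sketch that 
parses one row and turns the rate into per-transaction time:

/*
 * quick sketch: parse one netperf-TCP_RR-style result row.
 * Column meanings are assumed, not labeled in the quoted output.
 */
#include <stdio.h>

struct rr_row {
	int	send_sock;	/* assumed: send socket buffer size, bytes */
	int	recv_sock;	/* assumed: recv socket buffer size, bytes */
	int	req_bytes;	/* assumed: request size, bytes            */
	int	resp_bytes;	/* assumed: response size, bytes           */
	double	secs;		/* assumed: elapsed time, seconds          */
	double	tps;		/* assumed: transactions per second        */
};

static int parse_rr_row(const char *line, struct rr_row *r)
{
	return sscanf(line, "%d %d %d %d %lf %lf",
		      &r->send_sock, &r->recv_sock,
		      &r->req_bytes, &r->resp_bytes,
		      &r->secs, &r->tps) == 6;
}

int main(void)
{
	struct rr_row r;

	if (parse_rr_row("16384  87380  1        1       60.01    111272.55", &r))
		printf("%.2f trans/sec -> %.2f usec per transaction\n",
		       r.tps, 1e6 / r.tps);
	return 0;
}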

Any scheduler micro-overhead detail is going to be a drop in the ocean 
compared to such huge variations. We could change the scheduler back to 
the old O(N) design of the 2.2 kernel, and the impact of that would be 
a blip on the radar compared to the variation shown above.
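
Back-of-envelope, with a deliberately generous assumption of ~1 usec of 
scheduler cost per context switch (my guess for illustration, not a 
measured number): the slowest and fastest runs above work out to 
roughly 16.7 vs 9.0 usecs per transaction, a gap of almost 8 usecs, so 
the assumed scheduler cost is only about an eighth of the gap:

/*
 * back-of-envelope comparison: per-transaction time at the slowest
 * and fastest quoted throughputs, versus an assumed (generous)
 * 1 usec of scheduler overhead per context switch.
 */
#include <stdio.h>

int main(void)
{
	const double slow_tps = 59866.40;	/* slowest quoted run  */
	const double fast_tps = 111272.55;	/* fastest quoted run  */
	const double sched_us = 1.0;		/* assumed per-switch scheduler cost */

	double slow_us = 1e6 / slow_tps;	/* ~16.7 usec/transaction */
	double fast_us = 1e6 / fast_tps;	/* ~9.0 usec/transaction  */
	double gap_us  = slow_us - fast_us;

	printf("per-transaction gap: %.1f usec\n", gap_us);
	printf("assumed scheduler cost: %.1f usec (%.0f%% of the gap)\n",
	       sched_us, 100.0 * sched_us / gap_us);
	return 0;
}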

	Ingo