Date:	Fri, 10 Oct 2008 15:31:00 +0400
From:	Evgeniy Polyakov <s0mbre@...rvice.net.ru>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
	David Miller <davem@...emloft.net>,
	Mike Galbraith <efault@....de>
Subject: Re: [tbench regression fixes]: digging out smelly deadmen.

Hi Ingo.

On Fri, Oct 10, 2008 at 11:15:11AM +0200, Ingo Molnar (mingo@...e.hu) wrote:

> > 
> > I use the tsc clocksource; acpi_pm and jiffies are also available.
> > With acpi_pm, performance is even lower (I stopped the test after it
> > dropped below the 340 MB/s mark), and jiffies does not work at all:
> > sockets appear to get stuck in TIME_WAIT state when that clocksource
> > is used, although that may be a different issue.
> > 
> > So I think hrticks are the culprit, but the result is still not as good
> > as the .25 tree without the mentioned changes (455 MB/s) or as .24 (475 MB/s).
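
(For reference, the clocksource can be checked and switched at runtime via
sysfs; assuming the standard interface, roughly:

  # cat /sys/devices/system/clocksource/clocksource0/available_clocksource
  tsc acpi_pm jiffies
  # echo acpi_pm > /sys/devices/system/clocksource/clocksource0/current_clocksource

and then the tbench run is repeated for each one.)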
> 
> I'm glad that you are looking into this! That is an SMP box, right? If
> so, could you try this sched-domains tuning utility I happened to write
> yesterday:
> 
>   http://redhat.com/~mingo/cfs-scheduler/tune-sched-domains

I've removed SD_BALANCE_NEWIDLE:
# ./tune-sched-domains $[191-2]
changed /proc/sys/kernel/sched_domain/cpu0/domain0/flags: 191 => 189
SD flag: 189
+   1: SD_LOAD_BALANCE:          Do load balancing on this domain
-   2: SD_BALANCE_NEWIDLE:       Balance when about to become idle
+   4: SD_BALANCE_EXEC:          Balance on exec
+   8: SD_BALANCE_FORK:          Balance on fork, clone
+  16: SD_WAKE_IDLE:             Wake to idle CPU on task wakeup
+  32: SD_WAKE_AFFINE:           Wake task to waking CPU
-  64: SD_WAKE_BALANCE:          Perform balancing at task wakeup
+ 128: SD_SHARE_CPUPOWER:        Domain members share cpu power
changed /proc/sys/kernel/sched_domain/cpu0/domain1/flags: 47 => 189
SD flag: 189
+   1: SD_LOAD_BALANCE:          Do load balancing on this domain
-   2: SD_BALANCE_NEWIDLE:       Balance when about to become idle
+   4: SD_BALANCE_EXEC:          Balance on exec
+   8: SD_BALANCE_FORK:          Balance on fork, clone
+  16: SD_WAKE_IDLE:             Wake to idle CPU on task wakeup
+  32: SD_WAKE_AFFINE:           Wake task to waking CPU
-  64: SD_WAKE_BALANCE:          Perform balancing at task wakeup
+ 128: SD_SHARE_CPUPOWER:        Domain members share cpu power
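
(Just for reference, the same change can be done by hand: clearing the
SD_BALANCE_NEWIDLE bit (2) in the flags file boils down to something like

  # cat /proc/sys/kernel/sched_domain/cpu0/domain0/flags
  191
  # echo $((191 & ~2)) > /proc/sys/kernel/sched_domain/cpu0/domain0/flags
  # cat /proc/sys/kernel/sched_domain/cpu0/domain0/flags
  189

repeated for each cpu*/domain* directory, which is what the
tune-sched-domains run above appears to do for every domain.)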

And I got a noticeable improvement (each line includes the fixes from the
previous ones; numbers are tbench throughput in MB/s):

vanilla 27: 347.222
no TSO/GSO: 357.331
no hrticks: 382.983
no balance: 389.802

> and please, when tuning such scheduler bits, could you run the latest
> tip/master:
> 
>    http://people.redhat.com/mingo/tip.git/README
> 
> and you need to have CONFIG_SCHED_DEBUG=y enabled for the tuning knobs.
> 
> so that it's all in sync with upcoming scheduler changes/tunings/fixes.

OK, I've started to pull it down; I'll reply back when things are ready.
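
(Before testing I'll double-check that the debug knobs are there; assuming a
built tree, something like

  # grep SCHED_DEBUG .config
  CONFIG_SCHED_DEBUG=y

or, with CONFIG_IKCONFIG_PROC, zgrep SCHED_DEBUG /proc/config.gz against the
running kernel.)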

-- 
	Evgeniy Polyakov
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
