Message-ID: <2cd4df870901270804p75822471odca6f5d66b9d33fa@mail.gmail.com>
Date:	Tue, 27 Jan 2009 17:04:07 +0100
From:	Pawel Dziekonski <dzieko@...il.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Mike Galbraith <efault@....de>,
	Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org
Subject: Re: CPU scheduler question/problem

2009/1/26 Ingo Molnar <mingo@...e.hu>:
>
> * Pawel Dziekonski <dzieko@...il.com> wrote:
>
>> 2009/1/23 Peter Zijlstra <peterz@...radead.org>:
>>
>> > The pipe workload you mentioned would behave that way because pipes
>> > 'assume' a producer/consumer behaviour, and thus are more likely to
>> > place both tasks on the same cpu -- but will eventually pull them apart
>> > if they want to run concurrently.
>> >
>> > You might enable SCHED_DEBUG=y and try
>> >  echo NO_SYNC_WAKEUPS > /debug/sched_features
>>
>> Hello,
>>
>> that did the trick. Openssl now gets a whole core exclusively and gives
>> full performance.
>>
>> Regarding the quantum chemistry application -- it also uses pipes for
>> communication between worker processes. That app now works OK as well.
>
> Could you please try the fuller fix below as well? Does it still do the
> trick, and does the scheduler still maximize openssl and your quantum
> chemistry app's throughput?
>
> There should be no need for you to tune anything - the scheduler must get
> such workloads right out of the box.

Hello,

After contacting Ingo directly, I downloaded the tip/master kernel tree via
http://people.redhat.com/mingo/tip.git/README.

After rebooting into it, SYNC_WAKEUPS is enabled by default and my openssl
benchmark is still stuck on one core:

# uname -a
Linux MiP 2.6.29-rc2 #1 SMP Tue Jan 27 16:03:29 CET 2009 x86_64 x86_64
x86_64 GNU/Linux

# cat /sys/kernel/debug/sched_features
NEW_FAIR_SLEEPERS NO_NORMALIZED_SLEEPER ADAPTIVE_GRAN WAKEUP_PREEMPT
START_DEBIT AFFINE_WAKEUPS CACHE_HOT_BUDDY SYNC_WAKEUPS NO_HRTICK
NO_DOUBLE_TICK ASYM_GRAN LB_BIAS LB_WAKEUP_UPDATE ASYM_EFF_LOAD
NO_WAKEUP_OVERLAP LAST_BUDDY OWNER_SPIN
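
(Side note: with debugfs mounted at /sys/kernel/debug as on this box, the
toggle suggested earlier would be written the same way, e.g.:

# echo NO_SYNC_WAKEUPS > /sys/kernel/debug/sched_features

but all runs below were done with SYNC_WAKEUPS left enabled, as requested.)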

# pidstat

16:20:43          PID    %usr %system  %guest    %CPU   CPU  Command
16:20:44         4992    0.00    2.00    0.00    2.00     4  dd
16:20:44         4993   81.00    6.00    0.00   87.00     4  openssl
16:20:44         4994    3.00    9.00    0.00   12.00     4  pv
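
For reference, the benchmark itself is just a pipe chain of roughly this
shape (the exact cipher and sizes here are approximate):

# dd if=/dev/zero bs=1M count=4096 | openssl enc -aes-128-cbc -pass pass:test | pv > /dev/null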

> Also, does
> the slowdown go away (if SYNC_WAKEUPS is enabled) if you reduce/increase
> /proc/sys/kernel/sched_migration_cost?

I tested values of 100000, 500000 (the default) and 999999; the exact
commands are sketched after the last pidstat sample below. In all cases the
3 processes stay together on the same core and jump between cores as a group:

# pidstat 1

16:27:07          PID    %usr %system  %guest    %CPU   CPU  Command
16:27:08         4992    0.00    1.00    0.00    1.00     0  dd
16:27:08         4993   78.00    6.00    0.00   84.00     0  openssl
16:27:08         4994    2.00   12.00    0.00   14.00     0  pv

16:27:08          PID    %usr %system  %guest    %CPU   CPU  Command
16:27:09         4992    0.00    2.00    0.00    2.00     4  dd
16:27:09         4993   83.00    3.00    0.00   86.00     4  openssl
16:27:09         4994    3.00    9.00    0.00   12.00     4  pv

However, a value of 1 works:

16:30:08          PID    %usr %system  %guest    %CPU   CPU  Command
16:30:09         6197    0.00    4.00    0.00    4.00     2  dd
16:30:09         6198   94.00    7.00    0.00  101.00     1  openssl
16:30:09         6199    2.00   15.00    0.00   17.00     0  pv
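
As mentioned above, switching values between runs was done simply by writing
to the knob you pointed at and restarting the benchmark, e.g.:

# echo 1 > /proc/sys/kernel/sched_migration_cost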

regards, Pawel
