Message-ID: <53DA6BDA.8080000@redhat.com>
Date:	Thu, 31 Jul 2014 18:16:26 +0200
From:	Jirka Hladky <jhladky@...hat.com>
To:	Peter Zijlstra <peterz@...radead.org>,
	Rik van Riel <riel@...hat.com>
CC:	Aaron Lu <aaron.lu@...el.com>, LKML <linux-kernel@...r.kernel.org>,
	lkp@...org
Subject: Re: [LKP] [sched/numa] a43455a1d57: +94.1% proc-vmstat.numa_hint_faults_local

On 07/31/2014 05:57 PM, Peter Zijlstra wrote:
> On Thu, Jul 31, 2014 at 12:42:41PM +0200, Peter Zijlstra wrote:
>> On Tue, Jul 29, 2014 at 02:39:40AM -0400, Rik van Riel wrote:
>>> On Tue, 29 Jul 2014 13:24:05 +0800
>>> Aaron Lu <aaron.lu@...el.com> wrote:
>>>
>>>> FYI, we noticed the below changes on
>>>>
>>>> git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
>>>> commit a43455a1d572daf7b730fe12eb747d1e17411365 ("sched/numa: Ensure task_numa_migrate() checks the preferred node")
>>>>
>>>> ebe06187bf2aec1  a43455a1d572daf7b730fe12e
>>>> ---------------  -------------------------
>>>>       94500 ~ 3%    +115.6%     203711 ~ 6%  ivb42/hackbench/50%-threads-pipe
>>>>       67745 ~ 4%     +64.1%     111174 ~ 5%  lkp-snb01/hackbench/50%-threads-socket
>>>>      162245 ~ 3%     +94.1%     314885 ~ 6%  TOTAL proc-vmstat.numa_hint_faults_local
>>> Hi Aaron,
>>>
>>> Jirka Hladky has reported a regression with that changeset as
>>> well, and I have already spent some time debugging the issue.
>> Let me see if I can still find my SPECjbb2005 copy to see what that
>> does.
> Jirka, on what kind of setup were you seeing the SPECjbb regressions?
>
> I'm not seeing any on 2 sockets with a single SPECjbb instance; I'll go
> check one instance per socket now.
>
>
Peter, I'm seeing regressions for a SINGLE SPECjbb instance when the number 
of warehouses is the same as the total number of cores in the box.

Example: on a 4 NUMA node box where each CPU has 6 cores, the biggest 
regression is at 24 warehouses.
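
(For reference, a minimal illustrative sketch, not part of the original 
report: it samples the numa_hint_faults_local counter from /proc/vmstat 
around an arbitrary benchmark command so the delta can be compared across 
kernels. The script name and usage line are hypothetical.)

#!/usr/bin/env python3
# Hypothetical helper: print the change in a /proc/vmstat counter
# across one benchmark run.
import subprocess
import sys

def read_counter(name="numa_hint_faults_local"):
    # /proc/vmstat is "key value" per line
    with open("/proc/vmstat") as f:
        for line in f:
            key, value = line.split()
            if key == name:
                return int(value)
    raise KeyError(name)

if __name__ == "__main__":
    if len(sys.argv) < 2:
        sys.exit("usage: vmstat_delta.py <benchmark command...>")
    before = read_counter()
    subprocess.run(sys.argv[1:], check=True)
    after = read_counter()
    print("numa_hint_faults_local delta:", after - before)
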

See the attached snapshot.

Jirka

[Attachment: "SPECjbb2005_-127.el7numafixes9.png", image/png, 91443 bytes]
