Message-ID: <7795f178-37a1-0927-a356-c80d1b174423@oracle.com>
Date:   Thu, 21 Feb 2019 10:44:35 -0800
From:   Subhra Mazumdar <subhra.mazumdar@...cle.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Ingo Molnar <mingo@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Paul Turner <pjt@...gle.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
        Frédéric Weisbecker <fweisbec@...il.com>,
        Kees Cook <keescook@...omium.org>, kerrnel@...gle.com
Subject: Re: [RFC][PATCH 00/16] sched: Core scheduling


On 2/21/19 6:03 AM, Peter Zijlstra wrote:
> On Wed, Feb 20, 2019 at 06:53:08PM -0800, Subhra Mazumdar wrote:
>> On 2/18/19 9:49 AM, Linus Torvalds wrote:
>>> On Mon, Feb 18, 2019 at 9:40 AM Peter Zijlstra <peterz@...radead.org> wrote:
>>>> However; whichever way around you turn this cookie; it is expensive and nasty.
>>> Do you (or anybody else) have numbers for real loads?
>>>
>>> Because performance is all that matters. If performance is bad, then
>>> it's pointless, since just turning off SMT is the answer.
>>>
>>>                     Linus
>> I tested 2 Oracle DB instances running OLTP on a 2-socket, 44-core system.
>> This is on bare metal, no virtualization.
> I'm thinking oracle schedules quite a bit, right? Then you get massive
> overhead (as shown).
Yes. In terms of idleness we have:

Users   baseline   core_sched
16      67%        70%
24      53%        59%
32      41%        49%

So there is more idleness with core sched, which is understandable since
core scheduling can force a sibling idle when the cookies of the two
picks don't match (e.g. at 32 users, busy time drops from 59% to 51% of
the machine). The rest of the regression most likely comes from
scheduling overhead.
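
Just to illustrate the forced-idle condition (a toy model with made-up
names, not the actual patch structures): whenever the cookies of the
two picks differ, the sibling must sit out that slice, and that time
shows up as the extra idleness above.

#include <stdbool.h>

/* Minimal stand-in for the scheduler's per-task cookie. */
struct task {
        unsigned long cookie;   /* 0 = default/untagged */
};

/*
 * A sibling may run its best pick concurrently with ours only if the
 * cookies match; otherwise it is forced idle for this slice.
 */
static bool sibling_forced_idle(const struct task *pick,
                                const struct task *sibling_pick)
{
        return pick->cookie != sibling_pick->cookie;
}
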
>
> The thing with virt workloads is that if they don't VMEXIT lots, they
> also don't schedule lots (the vCPU stays running, nested scheduler
> etc..).
I plan to run some VM workloads.
>
> Also; like I wrote, it is quite possible there is some sibling rivalry
> here, which can cause excessive rescheduling. Someone would have to
> trace a workload and check.
>
> My older patches had a condition that would not preempt a task for a
> little while, such that it might make _some_ progress, these patches
> don't have that (yet).
>
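
A minimum-runtime guard along those lines seems worth trying here too.
A sketch (names like min_run_ns are my assumptions, not from the posted
patches): only allow a cookie-driven cross-sibling preemption once the
current task has run for some minimum time.

#include <stdbool.h>
#include <stdint.h>

/*
 * Let the current task make _some_ progress: permit a forced
 * preemption only after it has run for at least min_run_ns.
 */
static bool may_force_preempt(uint64_t now_ns, uint64_t exec_start_ns,
                              uint64_t min_run_ns)
{
        return now_ns - exec_start_ns >= min_run_ns;
}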
