Date:   Mon, 20 May 2019 10:04:01 -0400
From:   Vineeth Pillai <vpillai@...italocean.com>
To:     Phil Auld <pauld@...hat.com>
Cc:     Aubrey Li <aubrey.intel@...il.com>,
        Nishanth Aravamudan <naravamudan@...italocean.com>,
        Julien Desfossez <jdesfossez@...italocean.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Ingo Molnar <mingo@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Paul Turner <pjt@...gle.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
        Subhra Mazumdar <subhra.mazumdar@...cle.com>,
        Frédéric Weisbecker <fweisbec@...il.com>,
        Kees Cook <keescook@...omium.org>,
        Greg Kerr <kerrnel@...gle.com>, Aaron Lu <aaron.lwe@...il.com>,
        Valentin Schneider <valentin.schneider@....com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
        Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [RFC PATCH v2 13/17] sched: Add core wide task selection and scheduling.

> > The following patch improved my test cases.
> > Welcome any comments.
> >
>
> This is certainly better than violating the point of the core scheduler :)
>
> If I'm understanding this right, what will happen in this case is that instead
> of using the idle process selected by the sibling we do the core scheduling
> again. This may start with a newidle_balance which might bring over something
> to run that matches what we want to put on the sibling. If that works then I
> can see this helping.
>
> But I'd be a little concerned that we could end up thrashing. Once we do core
> scheduling again here we'd force the sibling to resched, and if we got a different
> result which "helped" him pick idle, we'd go around again.
>
> I think inherent in the concept of core scheduling (barring a perfectly aligned set
> of jobs) is some extra idle time on siblings.
>
I was also thinking along the same lines. This change basically always
tries to avoid idle and thereby constantly interrupts the sibling.
While this change might benefit a very small subset of workloads, it
might introduce thrashing more often.
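
To make the thrashing concern concrete, here is a tiny user-space toy
model (plain C, not kernel code; the cookie values, task names and the
core_pick() helper are all made up for illustration). It applies the
"avoid idle" policy blindly: whenever a sibling would be forced idle it
redoes the core-wide pick, and with two runqueues that never share a
cookie the selection never settles, so the siblings just keep
rescheduling each other:

#include <stdio.h>

struct task { int cookie; const char *name; };

/* Two sibling runqueues whose tasks never share a cookie. */
static struct task rq0[] = { { 1, "A1" }, { 1, "A2" } };
static struct task rq1[] = { { 2, "B1" }, { 2, "B2" } };

/* Pick for both siblings; return 1 if either sibling was forced idle. */
static int core_pick(int max_cookie)
{
	struct task *s0 = (rq0[0].cookie == max_cookie) ? &rq0[0] : NULL;
	struct task *s1 = (rq1[0].cookie == max_cookie) ? &rq1[0] : NULL;

	printf("cpu0: %-4s  cpu1: %-4s\n",
	       s0 ? s0->name : "idle", s1 ? s1->name : "idle");
	return !s0 || !s1;
}

int main(void)
{
	int max_cookie = 1;
	int i;

	for (i = 0; i < 6; i++) {
		if (!core_pick(max_cookie))
			break;	/* both siblings found compatible work */
		/* "avoid idle" policy: redo the core-wide pick */
		max_cookie = (max_cookie == 1) ? 2 : 1;
	}
	printf("%d core-wide re-picks and still no stable selection\n", i);
	return 0;
}

The real scheduler obviously has far more state than this, but it shows
why the re-pick only pays off when the balance actually brings over a
task that is compatible with what the sibling wants to run.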

One other reason you might be seeing a performance improvement is the
bugs that caused both siblings to go idle even though there were
runnable and compatible threads in the queue. Most of those issues have
been fixed based on the feedback received on v2. We have a github repo
with the pre-v3 changes here:
https://github.com/digitalocean/linux-coresched/tree/coresched

Please try this and see how it compares with vanilla v2. I think it's
time for a v3 now, and we shall be posting it soon after some more
testing and benchmarking.

Thanks,
