Message-ID: <eec72c2d533b7600c63de3c8001cc6ab9e915afe.camel@suse.com>
Date: Wed, 07 Aug 2019 10:58:40 +0200
From: Dario Faggioli <dfaggioli@...e.com>
To: Julien Desfossez <jdesfossez@...italocean.com>,
"Li, Aubrey" <aubrey.li@...ux.intel.com>
Cc: Aaron Lu <aaron.lu@...ux.alibaba.com>,
Aubrey Li <aubrey.intel@...il.com>,
Subhra Mazumdar <subhra.mazumdar@...cle.com>,
Vineeth Remanan Pillai <vpillai@...italocean.com>,
Nishanth Aravamudan <naravamudan@...italocean.com>,
Peter Zijlstra <peterz@...radead.org>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Ingo Molnar <mingo@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Paul Turner <pjt@...gle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
Frédéric Weisbecker <fweisbec@...il.com>,
Kees Cook <keescook@...omium.org>,
Greg Kerr <kerrnel@...gle.com>, Phil Auld <pauld@...hat.com>,
Valentin Schneider <valentin.schneider@....com>,
Mel Gorman <mgorman@...hsingularity.net>,
Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [RFC PATCH v3 00/16] Core scheduling v3
Hello everyone,

This is Dario, from SUSE. I'm also interested in core-scheduling, and in
using it in virtualization use cases.

Just for context, I've been working in virtualization for a few years,
mostly on Xen, but I've done Linux work before, and I am getting back to it.

So far, I've been looking at the core-scheduling code and running some
benchmarks myself.
On Fri, 2019-08-02 at 11:37 -0400, Julien Desfossez wrote:
> We tested both Aaron's and Tim's patches and here are our results.
>
> Test setup:
> - 2 1-thread sysbench runs, one running the cpu benchmark, the other
> the mem benchmark
> - both started at the same time
> - both are pinned on the same core (2 hardware threads)
> - 10 30-seconds runs
> - test script: https://paste.debian.net/plainh/834cf45c
> - only showing the CPU events/sec (higher is better)
> - tested 4 tag configurations:
> - no tag
> - sysbench mem untagged, sysbench cpu tagged
> - sysbench mem tagged, sysbench cpu untagged
> - both tagged with a different tag
> - "Alone" is the sysbench CPU running alone on the core, no tag
> - "nosmt" is both sysbench pinned on the same hardware thread, no tag
> - "Tim's full patchset + sched" is an experiment with Tim's patchset
> combined with Aaron's "hack patch" to get rid of the remaining deep
> idle cases
> - In all test cases, both tasks can run simultaneously (which was not
> the case without those patches), but the standard deviation is a
> pretty good indicator of the fairness/consistency.
>
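For anyone trying to reproduce a setup like the one above without the
linked script, here is a rough sketch of one of the tag configurations
("sysbench mem untagged, sysbench cpu tagged"). The cgroup mount point,
the `cpu.tag` file exposed by this patch series, and the sibling-CPU
numbers 0 and 4 are all assumptions about the test machine, not details
taken from the script itself:

```shell
#!/bin/sh
# Sketch only: requires root and a kernel with the core-scheduling
# patches applied; cpu.tag is the cgroup tagging interface assumed
# from this series, and CPUs 0 and 4 are assumed to be SMT siblings.
set -e

CG=/sys/fs/cgroup/cpu
mkdir -p "$CG/tagged"
echo 1 > "$CG/tagged/cpu.tag"        # enable the core-scheduling tag

# Pin one sysbench instance on each hardware thread of the same core,
# started at (roughly) the same time, 30-second runs.
taskset -c 0 sysbench cpu --threads=1 --time=30 run &
CPU_PID=$!
taskset -c 4 sysbench memory --threads=1 --time=30 run &
MEM_PID=$!

# Tag only the CPU-bound task; the mem task stays untagged.
echo "$CPU_PID" > "$CG/tagged/tasks"

wait "$CPU_PID"
wait "$MEM_PID"
```

Repeating this ten times and looking at the mean and standard deviation
of the cpu events/sec would correspond to the numbers reported above.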
This, and of course the numbers below, is all very interesting.
So, here comes my question: I've done a benchmarking campaign (yes,
I'll post numbers soon) using this branch:
https://github.com/digitalocean/linux-coresched.git vpillai/coresched-v3-v5.1.5-test
https://github.com/digitalocean/linux-coresched/tree/vpillai/coresched-v3-v5.1.5-test
Last commit:
7feb1007f274 "Fix stalling of untagged processes competing with tagged
processes"
Since I see that, in this thread, there are various patches being
proposed and discussed... should I rerun my benchmarks with them
applied? If yes, which ones? And is there, by any chance, one (or maybe
more than one) updated git branch(es)?
Thanks in advance, and regards,
--
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)