Message-ID: <20190426101947.GZ18914@techsingularity.net>
Date: Fri, 26 Apr 2019 11:19:47 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: Ingo Molnar <mingo@...nel.org>
Cc: Aubrey Li <aubrey.intel@...il.com>,
Julien Desfossez <jdesfossez@...italocean.com>,
Vineeth Remanan Pillai <vpillai@...italocean.com>,
Nishanth Aravamudan <naravamudan@...italocean.com>,
Peter Zijlstra <peterz@...radead.org>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Paul Turner <pjt@...gle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
Subhra Mazumdar <subhra.mazumdar@...cle.com>,
Frédéric Weisbecker <fweisbec@...il.com>,
Kees Cook <keescook@...omium.org>,
Greg Kerr <kerrnel@...gle.com>, Phil Auld <pauld@...hat.com>,
Aaron Lu <aaron.lwe@...il.com>,
Valentin Schneider <valentin.schneider@....com>,
Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Jiri Kosina <jkosina@...e.cz>
Subject: Re: [RFC PATCH v2 00/17] Core scheduling v2

On Fri, Apr 26, 2019 at 11:45:45AM +0200, Ingo Molnar wrote:
>
> * Mel Gorman <mgorman@...hsingularity.net> wrote:
>
> > > > I can show a comparison with equal levels of parallelisation but with
> > > > HT off, it is a completely broken configuration and I do not think a
> > > > comparison like that makes any sense.
> > >
> > > I would still be interested in that comparison, because I'd like
> > > to learn whether there's any true *inherent* performance advantage to
> > > HyperThreading for that particular workload, for exactly tuned
> > > parallelism.
> > >
> >
> > It really isn't a fair comparison. MPI seems to behave very differently
> > when a machine is saturated. It's documented as changing its behaviour
> > as it tries to avoid the worst consequences of saturation.
> >
> > Curiously, the results on the 2-socket machine were not as bad as I
> > feared when the HT configuration runs with twice as many threads as
> > there are CPUs.
> >
> > Amean bt 771.15 ( 0.00%) 1086.74 * -40.93%*
> > Amean cg 445.92 ( 0.00%) 543.41 * -21.86%*
> > Amean ep 70.01 ( 0.00%) 96.29 * -37.53%*
> > Amean is 16.75 ( 0.00%) 21.19 * -26.51%*
> > Amean lu 882.84 ( 0.00%) 595.14 * 32.59%*
> > Amean mg 84.10 ( 0.00%) 80.02 * 4.84%*
> > Amean sp 1353.88 ( 0.00%) 1384.10 * -2.23%*
>
> Yeah, so what I wanted to suggest is a parallel numeric throughput test
> with few inter-process data dependencies, and see whether HT actually
> improves total throughput versus the no-HT case.
>
> No over-saturation - but exactly as many threads as logical CPUs.
>
> I.e. with 20 physical cores and 40 logical CPUs the numbers to compare
> would be a 'nosmt' benchmark running 20 threads, versus a SMT test
> running 40 threads.
>
> I.e. how much does SMT improve total throughput when the workload's
> parallelism is tuned to utilize 100% of the available CPUs?
>
> Does this make sense?
>
Yes. Here is the comparison.

Amean bt 678.75 ( 0.00%) 789.13 * -16.26%*
Amean cg 261.22 ( 0.00%) 428.82 * -64.16%*
Amean ep 55.36 ( 0.00%) 84.41 * -52.48%*
Amean is 13.25 ( 0.00%) 17.82 * -34.47%*
Amean lu 1065.08 ( 0.00%) 1090.44 ( -2.38%)
Amean mg 89.96 ( 0.00%) 84.28 * 6.31%*
Amean sp 1579.52 ( 0.00%) 1506.16 * 4.64%*
Amean ua 611.87 ( 0.00%) 663.26 * -8.40%*

This is the 2-socket machine: with HT On there are 80 logical CPUs
versus 40 logical CPUs with HT Off. Negative percentages indicate a
regression (higher runtime) relative to the baseline in the first
column.
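
For anyone wanting to reproduce the shape of this comparison without
the full mmtests/NPB setup, a toy OpenMP kernel is enough. The sketch
below is illustrative only and is not what was run for the figures
above; the file name and iteration count are arbitrary. Each thread
executes an independent dependent FMA chain, so there is almost no
cross-thread communication and each core is latency-bound, which is
close to a best case for SMT:

/*
 * ht_throughput.c: toy per-thread independent FLOP kernel.
 * Illustrative sketch only -- not the NPB/mmtests harness.
 *
 * Build: gcc -O2 -fopenmp ht_throughput.c -o ht_throughput
 * Run:   OMP_NUM_THREADS=80 ./ht_throughput   (HT On, all logical CPUs)
 *        OMP_NUM_THREADS=40 ./ht_throughput   (after booting with nosmt)
 */
#include <stdio.h>
#include <omp.h>

#define ITERS 200000000L

int main(void)
{
	double total = 0.0;
	int nthreads = 1;
	double start = omp_get_wtime();

#pragma omp parallel reduction(+:total)
	{
		long i;
		/* Independent per-thread work: no shared data, so any
		 * throughput difference between the two boots reflects
		 * SMT itself rather than communication or locking. */
		double x = 1.0 + omp_get_thread_num();

#pragma omp single
		nthreads = omp_get_num_threads();

		for (i = 0; i < ITERS; i++)
			x = x * 1.0000001 + 0.0000001; /* dependent FMA chain */
		total += x;
	}

	double elapsed = omp_get_wtime() - start;

	/* Aggregate iterations completed per second across all threads;
	 * compare this figure between the HT On and nosmt boots. The
	 * checksum only exists to stop the loop being optimised out. */
	printf("threads=%d elapsed=%.2fs iters/sec=%.3e checksum=%f\n",
	       nthreads, elapsed,
	       (double)ITERS * nthreads / elapsed, total);
	return 0;
}

Because each chain is latency-bound, the SMT siblings can interleave
almost perfectly and HT should come close to doubling iters/sec on a
kernel like this. A cache- or memory-bandwidth-bound workload would be
expected to show far less benefit, which fits the regressions in the
table above.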
--
Mel Gorman
SUSE Labs