Message-ID: <275d5112-eaa8-1158-b26d-4e18c8bf79e1@redhat.com>
Date: Fri, 3 Feb 2017 13:42:46 -0500
From: Waiman Long <longman@...hat.com>
To: valdis.kletnieks@...edu
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Jonathan Corbet <corbet@....net>, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Davidlohr Bueso <dave@...olabs.net>,
Mike Galbraith <umgwanakikbuti@...il.com>,
Scott J Norton <scott.norton@....com>
Subject: Re: [PATCH-tip v5 17/21] TP-futex: Group readers together in wait
queue
On 02/03/2017 01:23 PM, valdis.kletnieks@...edu wrote:
> On Fri, 03 Feb 2017 13:03:50 -0500, Waiman Long said:
>
>> On a 2-socket 36-core E5-2699 v3 system (HT off) running on a 4.10
>>
>>                        WW futex     TP futex        Glibc
>>                        --------     --------        -----
>> Total locking ops    35,707,234   58,645,434   10,930,422
>> Per-thread avg/sec       99,149      162,887       30,362
>> Per-thread min/sec       93,190       38,641       29,872
>> Per-thread max/sec      104,213      225,983       30,708
> Do we understand where the 38K number came from? I'm a bit concerned that the
> min-to-max range has such a large dispersion compared to all the other numbers.
> Was that a worst-case issue, and is the worst case something likely to happen
> in production, or does it require special effort to trigger?
>
Because the lock isn't fair, and depending on where the lock and its waiters
are placed, some CPUs have a higher likelihood of getting the lock than
others. This is reflected in the different per-thread locking rates reported
by the microbenchmark. As the microbenchmark is included in this patch set,
you can play around with it if you want.
This patch set does guarantee some minimum performance level, but it
can't guarantee fairness for all the lock waiters.
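
For illustration, here is a minimal user-space sketch showing the same kind
of dispersion. This is not the perf-bench microbenchmark shipped with this
series; it is just an unfair test-and-set spinlock hammered by a few threads,
with the thread count and run time picked arbitrarily. On a multi-socket box
the per-thread min/max counts it prints tend to spread out the same way the
min/sec and max/sec numbers above do.

/*
 * Sketch only: N threads contend on an unfair test-and-set spinlock
 * for a fixed interval and report per-thread acquisition counts.
 * Build with: gcc -O2 -pthread unfair.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define NTHREADS	8	/* assumption: adjust to the machine */
#define RUN_SECS	2	/* assumption: arbitrary run time    */

static atomic_int lock;			/* 0 = free, 1 = held   */
static atomic_int stop;
static unsigned long counts[NTHREADS];	/* per-thread lock ops  */

static void *worker(void *arg)
{
	unsigned long *cnt = arg;

	while (!stop) {
		/* unfair acquire: whoever's cache line wins, wins */
		while (atomic_exchange_explicit(&lock, 1,
						memory_order_acquire))
			while (atomic_load_explicit(&lock,
						    memory_order_relaxed))
				;		/* spin until it looks free */
		(*cnt)++;			/* "critical section"       */
		atomic_store_explicit(&lock, 0, memory_order_release);
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	unsigned long min = ~0UL, max = 0, total = 0;
	int i;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, worker, &counts[i]);

	sleep(RUN_SECS);
	stop = 1;

	for (i = 0; i < NTHREADS; i++) {
		pthread_join(tid[i], NULL);
		total += counts[i];
		if (counts[i] < min)
			min = counts[i];
		if (counts[i] > max)
			max = counts[i];
	}

	printf("total %lu  avg/thread %lu  min %lu  max %lu\n",
	       total, total / NTHREADS, min, max);
	return 0;
}

Pinning the threads to CPUs on different sockets (e.g. with taskset) makes
the min/max spread much more visible, since waiters sharing a socket with
the lock holder win the cache line, and hence the lock, more often.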
Regards,
Longman