Message-ID: <20150610073512.GA17226@gmail.com>
Date: Wed, 10 Jun 2015 09:35:12 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Waiman Long <Waiman.Long@...com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Arnd Bergmann <arnd@...db.de>,
linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
Scott J Norton <scott.norton@...com>,
Douglas Hatch <doug.hatch@...com>
Subject: Re: [PATCH v2 2/2] locking/qrwlock: Don't contend with readers when setting _QW_WAITING

* Waiman Long <Waiman.Long@...com> wrote:
> The current cmpxchg() loop used to set the _QW_WAITING flag for writers
> in queue_write_lock_slowpath() contends with incoming readers, which can
> cause extra, wasted cmpxchg() operations. This patch changes the code to
> do a byte cmpxchg() instead, eliminating contention with new readers.
>
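For reference, here is a rough, self-contained user-space sketch of the
before/after cmpxchg pattern described above. It is illustrative only: it
uses GCC __atomic builtins rather than the kernel's cmpxchg()/READ_ONCE()
primitives, assumes a little-endian lock-word layout, and the type and
function names are made up for the example.

#include <stdint.h>

#define _QW_WAITING	1U	/* writer-waiting flag, lives in the low byte */

/* Low byte holds the writer state, the upper bytes hold the reader count. */
union qrw_sketch {
	uint32_t cnts;
	struct {
		uint8_t wmode;		/* writer byte (little-endian layout) */
		uint8_t rcnts[3];	/* reader count bytes */
	};
};

/*
 * Before: a full-word cmpxchg.  The expected value includes the reader
 * count, so every reader that arrives in between invalidates the compare
 * and forces another round.
 */
static void set_waiting_fullword(union qrw_sketch *l)
{
	for (;;) {
		uint32_t cnts = __atomic_load_n(&l->cnts, __ATOMIC_RELAXED);

		if (!(cnts & 0xffU) &&
		    __atomic_compare_exchange_n(&l->cnts, &cnts,
						cnts | _QW_WAITING, 0,
						__ATOMIC_ACQUIRE,
						__ATOMIC_RELAXED))
			return;
	}
}

/*
 * After: a byte-sized cmpxchg on the writer byte only.  Readers touch the
 * upper bytes, so their arrival no longer makes the compare fail.
 */
static void set_waiting_byte(union qrw_sketch *l)
{
	for (;;) {
		uint8_t zero = 0;

		if (!__atomic_load_n(&l->wmode, __ATOMIC_RELAXED) &&
		    __atomic_compare_exchange_n(&l->wmode, &zero,
						(uint8_t)_QW_WAITING, 0,
						__ATOMIC_ACQUIRE,
						__ATOMIC_RELAXED))
			return;
	}
}
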
> A multithreaded microbenchmark running a 5M read_lock/write_lock loop
> on an 8-socket, 80-core Westmere-EX machine running a 4.0-based kernel
> with the qspinlock patch has the following execution times (in ms)
> with and without the patch:
>
> With R:W ratio = 5:1
>
> Threads    w/o patch    with patch    % change
> -------    ---------    ----------    --------
>    2           990           895        -9.6%
>    3          2136          1912       -10.5%
>    4          3166          2830       -10.6%
>    5          3953          3629        -8.2%
>    6          4628          4405        -4.8%
>    7          5344          5197        -2.8%
>    8          6065          6004        -1.0%
>    9          6826          6811        -0.2%
>   10          7599          7599         0.0%
>   15          9757          9766        +0.1%
>   20         13767         13817        +0.4%
>
> With a small number of contending threads, this patch can improve
> locking performance by up to 10%. With more contending threads,
> however, the gain diminishes.
Mind posting the microbenchmark?

Thanks,

	Ingo