Message-ID: <20150612084543.GA24472@gmail.com>
Date: Fri, 12 Jun 2015 10:45:43 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Waiman Long <waiman.long@...com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Arnd Bergmann <arnd@...db.de>,
linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
Scott J Norton <scott.norton@...com>,
Douglas Hatch <doug.hatch@...com>
Subject: Re: [PATCH v2 2/2] locking/qrwlock: Don't contend with readers when
setting _QW_WAITING
* Waiman Long <waiman.long@...com> wrote:
> > Mind posting the microbenchmark?
>
> I have attached the tool that I used for testing.
Thanks, that's interesting!
Btw., we could also do something like this in user-space, in tools/perf/bench/; we
have no 'perf bench locking' subcommand yet.
We already build and measure simple x86 kernel methods there such as memset() and
memcpy():
triton:~/tip> perf bench mem memcpy -r all
# Running 'mem/memcpy' benchmark:
Routine default (Default memcpy() provided by glibc)
# Copying 1MB Bytes ...
1.385195 GB/Sec
4.982462 GB/Sec (with prefault)
Routine x86-64-unrolled (unrolled memcpy() in arch/x86/lib/memcpy_64.S)
# Copying 1MB Bytes ...
1.627604 GB/Sec
5.336407 GB/Sec (with prefault)
Routine x86-64-movsq (movsq-based memcpy() in arch/x86/lib/memcpy_64.S)
# Copying 1MB Bytes ...
2.132233 GB/Sec
4.264465 GB/Sec (with prefault)
Routine x86-64-movsb (movsb-based memcpy() in arch/x86/lib/memcpy_64.S)
# Copying 1MB Bytes ...
1.490935 GB/Sec
7.128193 GB/Sec (with prefault)
Locking primitives would certainly be more complex to build in user-space - but we
could shuffle things around in kernel headers as well to make it easier to test them
in user-space.
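
Just to illustrate the kind of workload a 'perf bench locking' subcommand could run,
here is a rough stand-alone sketch using plain pthreads (it is not tied to the
qrwlock code at all, and the thread/iteration counts and output format are made up
for the example - a real bench would take those from the command line and use perf's
own timing and output helpers):

/*
 * Hypothetical sketch of a 'perf bench locking rwlock' style workload:
 * NR_READERS reader threads and one writer thread hammer a
 * pthread_rwlock_t for a fixed number of iterations, and we report
 * the total wall-clock time.
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define NR_READERS	4
#define NR_LOOPS	1000000

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
static unsigned long shared_counter;

static void *reader_fn(void *arg)
{
	volatile unsigned long sink = 0;

	for (long i = 0; i < NR_LOOPS; i++) {
		pthread_rwlock_rdlock(&lock);
		sink += shared_counter;		/* read-side critical section */
		pthread_rwlock_unlock(&lock);
	}
	return NULL;
}

static void *writer_fn(void *arg)
{
	for (long i = 0; i < NR_LOOPS; i++) {
		pthread_rwlock_wrlock(&lock);
		shared_counter++;		/* write-side critical section */
		pthread_rwlock_unlock(&lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t readers[NR_READERS], writer;
	struct timespec t0, t1;
	double secs;

	clock_gettime(CLOCK_MONOTONIC, &t0);

	for (int i = 0; i < NR_READERS; i++)
		pthread_create(&readers[i], NULL, reader_fn, NULL);
	pthread_create(&writer, NULL, writer_fn, NULL);

	for (int i = 0; i < NR_READERS; i++)
		pthread_join(readers[i], NULL);
	pthread_join(writer, NULL);

	clock_gettime(CLOCK_MONOTONIC, &t1);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("# %d readers + 1 writer, %d loops each: %.3f secs\n",
	       NR_READERS, NR_LOOPS, secs);
	return 0;
}

(Builds with: gcc -O2 -o bench-rwlock bench-rwlock.c -lpthread.)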
That's how we can build lockdep in user-space, for example - see tools/lib/lockdep.
Just a thought.
Thanks,
Ingo