Message-Id: <1454095846-19628-3-git-send-email-Waiman.Long@hpe.com>
Date: Fri, 29 Jan 2016 14:30:45 -0500
From: Waiman Long <Waiman.Long@....com>
To: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>,
Alexander Viro <viro@...iv.linux.org.uk>
Cc: linux-fsdevel@...r.kernel.org, x86@...nel.org,
linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Andi Kleen <andi@...stfloor.org>,
Scott J Norton <scott.norton@...com>,
Douglas Hatch <doug.hatch@...com>,
Waiman Long <Waiman.Long@....com>
Subject: [PATCH v2 2/3] lib/list_batch, x86: Enable list insertion/deletion batching for x86

Systems with a large number of CPUs will benefit from the list batching
facility when the list lock is contended. If the lock isn't contended,
the performance of the list batching function should be similar to that
of the "lock; listop; unlock;" sequence it replaces.

This patch enables the list batching facility for the x86 architecture.
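
For illustration, a converted call site would look roughly like the
sketch below. Note that do_list_batch(), the lb_cmd_add command and
struct list_batch are assumed here from the list batching facility
introduced in patch 1/3 of this series; they are not part of this
patch, which only selects the Kconfig option.

/*
 * Illustrative sketch only -- do_list_batch(), lb_cmd_add and
 * struct list_batch are assumed from lib/list_batch (patch 1/3)
 * and may not match the final API exactly.
 */
#include <linux/list.h>
#include <linux/spinlock.h>
/* #include <linux/list_batch.h>	(assumed header from patch 1/3) */

/* Before: each task takes the list lock for a single operation. */
static void add_entry_locked(spinlock_t *lock, struct list_head *head,
			     struct list_head *entry)
{
	spin_lock(lock);	/* lock   */
	list_add(entry, head);	/* listop */
	spin_unlock(lock);	/* unlock */
}

/*
 * After: the current lock holder is expected to also carry out list
 * operations queued by the waiters, so waiters no longer all spin on
 * the lock itself for a single list operation each.
 */
static void add_entry_batched(struct list_batch *batch,
			      struct list_head *entry)
{
	do_list_batch(lb_cmd_add, batch, entry);
}
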
Signed-off-by: Waiman Long <Waiman.Long@....com>
---
 arch/x86/Kconfig |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 330e738..5b35be7 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -42,6 +42,7 @@ config X86
 	select ARCH_SUPPORTS_NUMA_BALANCING if X86_64
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF if X86_64
+	select ARCH_USE_LIST_BATCHING
 	select ARCH_USE_QUEUED_RWLOCKS
 	select ARCH_USE_QUEUED_SPINLOCKS
 	select ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH if SMP
--
1.7.1