Message-Id: <1473328647-33116-17-git-send-email-fenghua.yu@intel.com>
Date: Thu, 8 Sep 2016 02:57:10 -0700
From: "Fenghua Yu" <fenghua.yu@...el.com>
To: "Thomas Gleixner" <tglx@...utronix.de>,
"H. Peter Anvin" <h.peter.anvin@...el.com>,
"Ingo Molnar" <mingo@...e.hu>, "Tony Luck" <tony.luck@...el.com>,
"Peter Zijlstra" <peterz@...radead.org>,
"Tejun Heo" <tj@...nel.org>, "Borislav Petkov" <bp@...e.de>,
"Stephane Eranian" <eranian@...gle.com>,
"Marcelo Tosatti" <mtosatti@...hat.com>,
"David Carrillo-Cisneros" <davidcc@...gle.com>,
"Shaohua Li" <shli@...com>,
"Ravi V Shankar" <ravi.v.shankar@...el.com>,
"Vikas Shivappa" <vikas.shivappa@...ux.intel.com>,
"Sai Prakhya" <sai.praneeth.prakhya@...el.com>
Cc: "linux-kernel" <linux-kernel@...r.kernel.org>,
"x86" <x86@...nel.org>, Fenghua Yu <fenghua.yu@...el.com>
Subject: [PATCH v2 16/33] x86/intel_rdt: Class of service and capacity bitmask management for CDP
From: Vikas Shivappa <vikas.shivappa@...ux.intel.com>
Add support for managing the CLOSid (Class Of Service id) and capacity
bitmask (CBM) for Code and Data Prioritization (CDP).

CLOSid management includes changes to closid allocation and freeing, to
closid_get() and closid_put(), and to the closid availability map during
CDP setup. CDP has a separate CBM for code and data: when CDP mode is
enabled, each closid is mapped to a (dcache_cbm, icache_cbm) pair.
Signed-off-by: Vikas Shivappa <vikas.shivappa@...ux.intel.com>
Signed-off-by: Fenghua Yu <fenghua.yu@...el.com>
Reviewed-by: Tony Luck <tony.luck@...el.com>
---
arch/x86/kernel/cpu/intel_rdt.c | 24 ++++++++++++++++++++----
1 file changed, 20 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index e0f23b6..9cee3fe 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -27,7 +27,13 @@
#include <asm/intel_rdt.h>
/*
- * cctable maintains 1:1 mapping between CLOSid and cache bitmask.
+ * During cache alloc mode, cctable maintains a 1:1 mapping between
+ * CLOSid and cache bitmask.
+ *
+ * During CDP mode, the cctable maintains a 1:2 mapping between the closid
+ * and (dcache_cbm, icache_cbm) pair.
+ * index of the dcache_cbm for CLOSid 'n' = (n << 1).
+ * index of the icache_cbm for CLOSid 'n' = (n << 1) + 1.
*/
static struct clos_cbm_table *cctable;
/*
@@ -50,6 +56,13 @@ bool cdp_enabled;
#define __DCBM_TABLE_INDEX(x) (x << 1)
#define __ICBM_TABLE_INDEX(x) ((x << 1) + 1)
+#define __DCBM_MSR_INDEX(x) \
+ CBM_FROM_INDEX(__DCBM_TABLE_INDEX(x))
+#define __ICBM_MSR_INDEX(x) \
+ CBM_FROM_INDEX(__ICBM_TABLE_INDEX(x))
+
+#define DCBM_TABLE_INDEX(x) (x << cdp_enabled)
+#define ICBM_TABLE_INDEX(x) ((x << cdp_enabled) + cdp_enabled)
struct rdt_remote_data {
int msr;
@@ -107,9 +120,12 @@ void __intel_rdt_sched_in(void *dummy)
state->closid = 0;
}
+/*
+ * When cdp mode is enabled, refcnt is maintained in the dcache_cbm entry.
+ */
static inline void closid_get(u32 closid)
{
- struct clos_cbm_table *cct = &cctable[closid];
+ struct clos_cbm_table *cct = &cctable[DCBM_TABLE_INDEX(closid)];
lockdep_assert_held(&rdtgroup_mutex);
@@ -139,7 +155,7 @@ static int closid_alloc(u32 *closid)
static inline void closid_free(u32 closid)
{
clear_bit(closid, cconfig.closmap);
- cctable[closid].cbm = 0;
+ cctable[DCBM_TABLE_INDEX(closid)].cbm = 0;
if (WARN_ON(!cconfig.closids_used))
return;
@@ -149,7 +165,7 @@ static inline void closid_free(u32 closid)
static void closid_put(u32 closid)
{
- struct clos_cbm_table *cct = &cctable[closid];
+ struct clos_cbm_table *cct = &cctable[DCBM_TABLE_INDEX(closid)];
lockdep_assert_held(&rdtgroup_mutex);
if (WARN_ON(!cct->clos_refcnt))
--
2.5.0