Message-Id: <1e53b953147bca171814305ff764931d71eec09a.1561569068.git.reinette.chatre@intel.com>
Date: Wed, 26 Jun 2019 10:48:49 -0700
From: Reinette Chatre <reinette.chatre@...el.com>
To: tglx@...utronix.de, fenghua.yu@...el.com, bp@...en8.de,
tony.luck@...el.com
Cc: mingo@...hat.com, hpa@...or.com, x86@...nel.org,
linux-kernel@...r.kernel.org,
Reinette Chatre <reinette.chatre@...el.com>
Subject: [PATCH 10/10] x86/resctrl: Only pseudo-lock L3 cache when inclusive

Cache pseudo-locking is a model-specific feature. Platforms that support
it are added by including their x86 model data in the source code once
cache pseudo-locking has been validated on that particular platform.

Indicating support for cache pseudo-locking for an entire platform is
sufficient when the cache characteristics of the platform are the same
for all instances of that platform. If this is not the case, an
additional check is needed. In particular, it is currently only possible
to pseudo-lock an L3 cache region if the L3 cache is inclusive of the
lower level caches. If the L3 cache is not inclusive, any pseudo-locked
data would be evicted from the pseudo-locked region when it is moved to
the L2 cache.

When some SKUs of a platform have an inclusive cache while other SKUs
have a non-inclusive cache, it is therefore necessary to check not only
whether the platform supports cache pseudo-locking, but also whether the
cache being pseudo-locked is inclusive.

Signed-off-by: Reinette Chatre <reinette.chatre@...el.com>
---
arch/x86/kernel/cpu/resctrl/pseudo_lock.c | 35 +++++++++++++++++++++++
1 file changed, 35 insertions(+)
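
Background note, not part of the patch: on Intel CPUs the inclusiveness
of a cache level is reported by the "Cache Inclusiveness" bit (EDX bit 1)
of CPUID leaf 0x4, alongside the cache type in EAX[4:0] and the cache
level in EAX[7:5]. The sketch below is a minimal, illustrative userspace
program for querying that bit directly; the helper name
cache_level_inclusive() is invented for the example, and since CPUID
describes the CPU it executes on, the program should be pinned (e.g.
with taskset) to the CPU of interest. The in-kernel helper added by this
patch consults the cacheinfo list instead of issuing CPUID itself.

#include <stdio.h>
#include <cpuid.h>

/*
 * Walk CPUID leaf 0x4 subleaves until the requested cache level is
 * found. Returns the "Cache Inclusiveness" bit (EDX bit 1) for that
 * level, or 0 if the level is not found or CPUID leaf 0x4 is
 * unsupported.
 */
static int cache_level_inclusive(unsigned int level)
{
	unsigned int eax, ebx, ecx, edx;
	unsigned int subleaf;

	for (subleaf = 0; subleaf < 32; subleaf++) {
		if (!__get_cpuid_count(4, subleaf, &eax, &ebx, &ecx, &edx))
			return 0;
		if ((eax & 0x1f) == 0)		/* type 0: no more caches */
			return 0;
		if (((eax >> 5) & 0x7) == level)
			return (edx >> 1) & 1;	/* Cache Inclusiveness */
	}
	return 0;
}

int main(void)
{
	printf("L3 inclusive: %d\n", cache_level_inclusive(3));
	return 0;
}
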
diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
index 4e47ad582db6..e79f555d5226 100644
--- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
+++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
@@ -125,6 +125,30 @@ static unsigned int get_cache_line_size(unsigned int cpu, int level)
return 0;
}
+/**
+ * get_cache_inclusive - Determine if cache is inclusive of lower levels
+ * @cpu: CPU with which cache is associated
+ * @level: Cache level
+ *
+ * Context: @cpu has to be online.
+ * Return: 1 if cache is inclusive of lower cache levels, 0 if cache is not
+ * inclusive of lower cache levels or on failure.
+ */
+static unsigned int get_cache_inclusive(unsigned int cpu, int level)
+{
+ struct cpu_cacheinfo *ci;
+ int i;
+
+ ci = get_cpu_cacheinfo(cpu);
+
+ for (i = 0; i < ci->num_leaves; i++) {
+ if (ci->info_list[i].level == level)
+ return ci->info_list[i].inclusive;
+ }
+
+ return 0;
+}
+
/**
* pseudo_lock_minor_get - Obtain available minor number
* @minor: Pointer to where new minor number will be stored
@@ -317,6 +341,12 @@ static int pseudo_lock_single_portion_valid(struct pseudo_lock_region *plr,
goto err_cpu;
}
+ if (p->r->cache_level == 3 &&
+ !get_cache_inclusive(plr->cpu, p->r->cache_level)) {
+ rdt_last_cmd_puts("L3 cache not inclusive\n");
+ goto err_cpu;
+ }
+
plr->line_size = get_cache_line_size(plr->cpu, p->r->cache_level);
if (plr->line_size == 0) {
rdt_last_cmd_puts("Unable to compute cache line length\n");
@@ -418,6 +448,11 @@ static int pseudo_lock_l2_l3_portions_valid(struct pseudo_lock_region *plr,
goto err_cpu;
}
+ if (!get_cache_inclusive(plr->cpu, l3_p->r->cache_level)) {
+ rdt_last_cmd_puts("L3 cache not inclusive\n");
+ goto err_cpu;
+ }
+
return 0;
err_cpu:
--
2.17.2