Message-Id: <1425249564-28347-1-git-send-email-andre.przywara@arm.com>
Date: Sun, 1 Mar 2015 22:39:24 +0000
From: Andre Przywara <andre.przywara@....com>
To: Borislav Petkov <bp@...e.de>, Sudeep Holla <sudeep.holla@....com>
Cc: linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org
Subject: Re: [PATCH RFT v2] x86: move cacheinfo sysfs to generic cacheinfo infrastructure
On Tue, 24 Feb 2015 18:57:49 +0100, Borislav Petkov wrote:
> On Mon, Feb 23, 2015 at 06:14:25PM +0000, Sudeep Holla wrote:
>> - Rebased on v4.0-rc1
>> - Fixed lockdep warning reported by Borislav
> You probably have fixed the lockdep splat but not the NULL pointer
> dereference which was there in the first mail I sent you. I'd suggest
> you find an AMD box with an L3 and do some testing yourself before
> sending out another version.
So I gave that a spin on my old Phenom II X6 box, and indeed there is a
NULL pointer dereference. I tracked it down to
__cache_amd_cpumap_setup(), where the sibling map for the L3 is
populated. When the function is called for the first CPU, the subsequent
CPUs' cacheinfo structures obviously haven't been initialized yet, so
the struct cpu_cacheinfo we obtain for the other CPUs is empty
(num_levels and num_leaves are 0). Dereferencing info_list + index is
then a bad idea.
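To make the mechanism concrete, here is a minimal userspace model (with
made-up, simplified stand-ins for the kernel structures, not the actual
code): the not-yet-initialized CPU still has num_levels/num_leaves == 0
and info_list == NULL, so forming info_list + index and writing through
it is exactly the NULL pointer dereference:

#include <stdio.h>

struct cacheinfo { unsigned long shared_cpu_map; };
struct cpu_cacheinfo {
	unsigned int num_levels;
	unsigned int num_leaves;
	struct cacheinfo *info_list;
};

int main(void)
{
	struct cacheinfo cpu0_leaves[4] = { { 0 } };
	struct cpu_cacheinfo ci[2] = {
		/* CPU 0: cacheinfo already populated (L1d/L1i/L2/L3) */
		{ .num_levels = 3, .num_leaves = 4, .info_list = cpu0_leaves },
		/* CPU 1: not initialized yet, everything still zero/NULL */
		{ 0 },
	};
	int index = 3;	/* the L3 leaf */
	int i;

	for (i = 0; i < 2; i++) {
		if (!ci[i].info_list) {
			/* without a check the code computes info_list + index
			 * (i.e. NULL + 3) and writes through it -> NULL
			 * pointer dereference */
			printf("cpu%d: cacheinfo not populated yet, skipping\n", i);
			continue;
		}
		struct cacheinfo *leaf = ci[i].info_list + index;
		leaf->shared_cpu_map |= 1UL << i;
		printf("cpu%d: leaf %d updated\n", i, index);
	}
	return 0;
}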
I fixed it by simply skipping the sibling map setup in this case (see
the patch below). We revisit this code path later anyway, because the
outermost loop iterates over all CPUs, so by then the fields are valid
and the sibling map is built up properly. Not sure if that is the
proper fix (should we check for info_list being NULL instead? Is
num_levels the right value to compare against?), but I don't feel like
digging into this code any deeper
than I already did (Sudeep should be decorated for doing that!).
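For reference, the info_list NULL-check variant I am wondering about
would look something like this in the same hunk (completely untested,
just a sketch):

 		for_each_cpu(i, cpu_llc_shared_mask(cpu)) {
 			this_cpu_ci = get_cpu_cacheinfo(i);
+			/* skip CPUs whose cacheinfo is not populated yet */
+			if (!this_cpu_ci->info_list)
+				continue;
 			this_leaf = this_cpu_ci->info_list + index;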
Boris, can you try the patch below on a Bulldozer-class machine?
Sudeep, if Boris acknowledges this, can you fold those two lines (or
something similar) into your patch and repost it? I can then give it
another try.
>
> Also, when testing, do a snapshot of the sysfs cache hierarchy by
> doing:
>
> grep . -EriIn /sys/devices/system/cpu/cpu?/cache/*
> before your changes and after and check for differences.
>
> We can't allow ourselves any differences because this is an API.
But as Sudeep mentioned, this is an orthogonal issue not directly
related to this patch.
Indeed I see a regression in this respect between 3.19.0 and a clean
4.0.0-rc1; the diff looks like:
-cpu5/cache/index3/shared_cpu_map:00000000,0000003f
+cpu5/cache/index3/shared_cpu_map:3f
(for each cpu and index)
Can someone confirm (and investigate) this?
Cheers,
Andre.
---
arch/x86/kernel/cpu/intel_cacheinfo.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/x86/kernel/cpu/intel_cacheinfo.c b/arch/x86/kernel/cpu/intel_cacheinfo.c
index 5f3059c..1f69ce7 100644
--- a/arch/x86/kernel/cpu/intel_cacheinfo.c
+++ b/arch/x86/kernel/cpu/intel_cacheinfo.c
@@ -836,6 +836,8 @@ static int __cache_amd_cpumap_setup(unsigned int cpu, int index,
} else if (index == 3) {
for_each_cpu(i, cpu_llc_shared_mask(cpu)) {
this_cpu_ci = get_cpu_cacheinfo(i);
+ if (index >= this_cpu_ci->num_levels)
+ continue;
this_leaf = this_cpu_ci->info_list + index;
for_each_cpu(sibling, cpu_llc_shared_mask(cpu)) {
if (!cpu_online(sibling))
--
1.8.4