Message-Id: <1271945222-5283-1-git-send-email-bp@amd64.org>
Date: Thu, 22 Apr 2010 16:06:57 +0200
From: Borislav Petkov <bp@...64.org>
To: <hpa@...or.com>, <mingo@...e.hu>, <tglx@...utronix.de>
Cc: <x86@...nel.org>, <linux-kernel@...r.kernel.org>,
Frank Arnold <frank.arnold@....com>,
Borislav Petkov <borislav.petkov@....com>
Subject: [PATCH -v2 0/5] AMD L3 cache index disable fixes for .35
From: Borislav Petkov <borislav.petkov@....com>
Hi,

here's the dynamic allocation version in 4/5. The small amount of
NUM_NODES * 8 bytes is not being freed because we don't have an exit
callback, but I guess this is ok since we would want to free it only
when shutting down anyway.
-v2:
Allocate l3_caches descriptor array dynamically.
-v1:
this is a small patchset of fixes for L3 cache index disable which have
accumulated over the last couple of weeks. They serve as preparation
for disabling an L3 cache index whenever an L3 MCE triggers: the error
is evaluated, the offending index is thresholded and, if the error rate
is excessively high, the index is disabled. Those patches will be
coming up later, though.
Patches 1, 3 and 4 are cleanups and unifications which save us a little
bit of percpu memory in favor of dynamic allocation. Also, we now have
one L3 cache descriptor per node instead of keeping this information
per CPU.
I triggered a lockdep warning in lockdep_trace_alloc() during testing
because we may run with interrupts disabled that early in the boot
process. Therefore, patch 3 uses GFP_ATOMIC when allocating the cache
descriptors there. I'm open to suggestions in case this is undesired.
Patch 2 fixes a problem which triggers when we run as a guest on Xen,
because Xen does not export CPU PCI config space to its guests.
And finally #5 is a required fix.
The patchset is also available at
git://git.kernel.org/pub/scm/linux/kernel/git/bp/bp.git l3-for-35
Please queue for .35,
thanks.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/