Message-ID: <3908561D78D1C84285E8C5FCA982C28F7F4A5F08@ORSMSX115.amr.corp.intel.com>
Date: Thu, 17 Oct 2019 23:28:12 +0000
From: "Luck, Tony" <tony.luck@...el.com>
To: Thomas Gleixner <tglx@...utronix.de>,
Paolo Bonzini <pbonzini@...hat.com>
CC: "Li, Xiaoyao" <xiaoyao.li@...el.com>,
"Christopherson, Sean J" <sean.j.christopherson@...el.com>,
"Yu, Fenghua" <fenghua.yu@...el.com>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
"Hansen, Dave" <dave.hansen@...el.com>,
"Radim Krcmar" <rkrcmar@...hat.com>,
"Raj, Ashok" <ashok.raj@...el.com>,
"Williams, Dan J" <dan.j.williams@...el.com>,
"Prakhya, Sai Praneeth" <sai.praneeth.prakhya@...el.com>,
"Shankar, Ravi V" <ravi.v.shankar@...el.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
x86 <x86@...nel.org>, "kvm@...r.kernel.org" <kvm@...r.kernel.org>
Subject: RE: [RFD] x86/split_lock: Request to Intel
> If that's not going to happen, then we just bury the whole thing and put it
> on hold until a sane implementation of that functionality surfaces in
> silicon some day in the not so foreseeable future.
We will drop the patches to flip the MSR bits to enable checking.
But we can fix the split lock issues that have already been found in the kernel.
Two strategies:
1) Adjust the alignment of data passed to set_bit() et al.
2) Fix set_bit() et al. to not issue atomic operations that cross cacheline
   boundaries (a sketch of how such a crossing access arises is below).
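To make the failure mode concrete, here is a sketch (struct and field
names invented) of how a crossing access arises. On x86-64 the atomic
bitops perform a long-sized (8-byte) LOCKed access:

#include <linux/types.h>

struct example {
	char	pad[60];	/* pushes 'flags' to offset 60 */
	u32	flags;		/* bytes 60..63 of a 64-byte cacheline */
};

/*
 * Assuming 'e' is allocated on a cacheline boundary and 'nr' is not a
 * compile-time constant, this issues an 8-byte LOCKed access covering
 * bytes 60..67 -- it crosses into the next cacheline, so the CPU
 * asserts a bus lock (a "split lock"):
 *
 *	set_bit(nr, (unsigned long *)&e->flags);
 */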
Fenghua had been pursuing strategy #1 in previous iterations. He found a few
more places with the help of the "grep" patterns suggested by David Laight.
So that path is up to ~8 patches now, each doing one of:
+ Change from u32 to u64
+ Force alignment with a union with a u64 (sketched below)
+ Change to the non-atomic variants (in places that didn't need atomicity)
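A minimal sketch of the union trick (names invented again): the union
widens the storage to 8 bytes and forces 8-byte alignment, so the
long-sized access can never cross a cacheline:

struct example {
	char	pad[56];
	union {
		u32 flags;	/* still usable as a 32-bit field */
		u64 __align;	/* forces 8-byte size and alignment */
	};
};

For the non-atomic conversions: a plain __set_bit()/__clear_bit() has no
LOCK prefix, and only LOCKed accesses that cross a cacheline take the
split-lock #AC, so those call sites are safe even if misaligned.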
The downside of strategy #1 is that people will keep adding new misaligned
cases in the future, so this process has no defined end point.
Strategy #2 began when, looking at the split-lock issue, I saw that with a
constant bit argument set_bit() just does an "ORB" on the affected byte (a
one-byte access cannot cross a cacheline, so no split lock). The same is true
for clear_bit() and change_bit(). Changing the code to also do that for the
variable bit case is easy.
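For illustration (this is a sketch, not the actual patches), the
variable-bit case can compute the byte holding the bit and do the LOCKed
OR on that byte alone:

static inline void set_bit_bytewise(long nr, volatile unsigned long *addr)
{
	volatile unsigned char *byte = (volatile unsigned char *)addr + (nr >> 3);
	unsigned char mask = 1U << (nr & 7);

	/* A one-byte access cannot span a cacheline: no split lock. */
	asm volatile("lock orb %1, %0"
		     : "+m" (*byte)
		     : "iq" (mask)
		     : "memory");
}

clear_bit() is the same with "lock andb" and ~mask, and change_bit()
with "lock xorb".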
test_and_clear_bit() needs more care, but luckily we had Peter Anvin nearby
to give us a neat solution.
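Not Peter's actual solution (that will be in the posted patches), but as
an illustration of one byte-granular approach, a cmpxchg loop on the
byte works (shown with GCC builtins rather than kernel primitives):

static inline bool
test_and_clear_bit_bytewise(long nr, volatile unsigned long *addr)
{
	volatile unsigned char *byte = (volatile unsigned char *)addr + (nr >> 3);
	unsigned char mask = 1U << (nr & 7);
	unsigned char old = *byte;

	/* Byte-wide cmpxchg: the RMW never touches more than one byte. */
	while (!__atomic_compare_exchange_n(byte, &old, old & ~mask, false,
					    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST))
		;	/* 'old' is refreshed on failure; retry */

	return old & mask;
}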
So strategy #2 is being tried now (and Fenghua will post some patches
soon).
Strategy #2 does increase code size when the bit number argument isn't
a constant. But that isn't the common case (Fenghua is counting and will
give numbers when patches are ready).
So take a look at the strategy #2 patches when they are posted. If the code
size increase is unacceptable, we can go back to fixing each of the callers
to get the alignment right.
-Tony