Date:   Thu, 4 Apr 2019 19:29:01 +0200
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Thomas Gleixner <tglx@...utronix.de>,
        David Laight <David.Laight@...LAB.COM>
Cc:     'Fenghua Yu' <fenghua.yu@...el.com>,
        'Ingo Molnar' <mingo@...hat.com>,
        'Borislav Petkov' <bp@...en8.de>,
        'H Peter Anvin' <hpa@...or.com>,
        'Dave Hansen' <dave.hansen@...el.com>,
        'Ashok Raj' <ashok.raj@...el.com>,
        'Peter Zijlstra' <peterz@...radead.org>,
        'Kalle Valo' <kvalo@...eaurora.org>,
        'Xiaoyao Li ' <xiaoyao.li@...el.com>,
        'Michael Chan' <michael.chan@...adcom.com>,
        'Ravi V Shankar' <ravi.v.shankar@...el.com>,
        'linux-kernel' <linux-kernel@...r.kernel.org>,
        'x86' <x86@...nel.org>,
        "'linux-wireless@...r.kernel.org'" <linux-wireless@...r.kernel.org>,
        "'netdev@...r.kernel.org'" <netdev@...r.kernel.org>,
        "'kvm@...r.kernel.org'" <kvm@...r.kernel.org>
Subject: Re: [PATCH v6 04/20] x86/split_lock: Align x86_capability to unsigned
 long to avoid split locked access

On 04/04/19 18:52, Thomas Gleixner wrote:
> On Thu, 4 Apr 2019, David Laight wrote:
>> From: David Laight Sent: 04 April 2019 15:45
>>> From: Fenghua Yu Sent: 03 April 2019 22:22
>>> That is not true.
>>> The BTS/BTR instructions access the memory word that contains the
>>> expected bit.
>>> The 'operand size' only affects the size of the register use for the
>>> bit offset.
>>> If the 'operand size' is 16 bits wide (+/- 32k bit offset) the cpu might
>>> do an aligned 16bit memory access, otherwise (32 or 64bit bit offset) it
>>> might do an aligned 32 bit access.
>>> It should never do a 64bit access and never a misaligned one (even if
>>> the base address is misaligned).
>>
>> Hmmm... I may have misread things slightly.
>> The accessed address is 'Effective Address + (4 * (BitOffset DIV 32))'.
>> However nothing suggests that it ever does 64bit accesses.
>>
>> If it does do 64bit accesses when the operand size is 64 bits then the
>> asm stubs ought to be changed to only specify 32bit operand size.
> 
> bitops operate on unsigned long arrays, so the RMW on the affected array
> member has to be atomic vs. other RMW operations on the same array
> member. If we make the bitops 32bit wide on x86/64 we break that.
> 
> So selecting 64bit access (REX.W prefix) is correct and has to stay.

Aren't bitops always atomic with respect to the whole cache line(s)?  We
regularly rely on cmpxchgl being atomic with respect to movb.

Paolo
