Date:   Fri, 14 Jul 2023 09:56:27 +0800
From:   "Aiqun(Maria) Yu" <quic_aiquny@...cinc.com>
To:     Mark Rutland <mark.rutland@....com>
CC:     Will Deacon <will@...nel.org>, <corbet@....net>,
        <catalin.marinas@....com>, <maz@...nel.org>,
        <quic_pkondeti@...cinc.com>, <quic_kaushalk@...cinc.com>,
        <quic_satyap@...cinc.com>, <quic_shashim@...cinc.com>,
        <quic_songxue@...cinc.com>, <linux-doc@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>,
        <linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH] arm64: Add the arm64.nolse_atomics command line option

On 7/14/2023 3:08 AM, Mark Rutland wrote:
> On Thu, Jul 13, 2023 at 10:08:34PM +0800, Aiqun(Maria) Yu wrote:
>> On 7/13/2023 7:20 PM, Mark Rutland wrote:
>>> Are you saying that LSE atomics to *cacheable* mappings do not work on your
>>> system?
>>>
>>> Specifically, when using a Normal Inner-Shareable Inner-Writeback
>>> Outer-Writeback mapping, do the LSE atomics work or not work?
>> *cacheable* mapping have the LSE atomic is not working if far atomic is
>> performed.
> 
> Thanks for confirming; the fact that this doesn't work on *cacheable* memory is
> definitely a major issue. I think everyone is confused here because of the
> earlier mention of non-cachable accesses (which don't matter).
> 
Maybe it will help if I collect the information into a summary below.
> I know that some CPU implementations have EL3 control bits to force LSE atomics
> to be performed near (e.g. in Cortex-A55, the CPUECTLR.ATOM control bits),
> which would avoid the issue while still allowing the LSE atomics to be used.
> 
> If those can be configured in EL3 firmware, that would be a preferable
> workaround.
> 
> Can you say which CPUs are integrated in this system? and/or can you check if
> such control bits exist?

We have a CPUECTLR_EL1.ATOM bit that can force LSE atomics to be performed 
near. CPUECTLR_EL1 is also configurable from EL1 kernel drivers.
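As a minimal sketch of what such a driver-side tweak might look like: the helper below computes the new register value that forces near atomics. Note the field position and encoding here are hypothetical placeholders (CPUECTLR_EL1 is IMPLEMENTATION DEFINED; the real field layout and system-register encoding must come from the core's TRM), and the actual MSR write is only indicated in a comment.

```c
#include <stdint.h>

/*
 * HYPOTHETICAL field layout -- CPUECTLR_EL1 is IMPLEMENTATION DEFINED,
 * so the real shift/width/encoding must be taken from the CPU's TRM.
 */
#define CPUECTLR_ATOM_SHIFT 38ULL
#define CPUECTLR_ATOM_MASK  (0x7ULL << CPUECTLR_ATOM_SHIFT)
#define CPUECTLR_ATOM_NEAR  2ULL   /* hypothetical "force near" value */

/*
 * Compute the CPUECTLR_EL1 value that forces LSE atomics near, leaving
 * all other bits untouched. A driver would read the register with MRS,
 * pass it through this helper, and write it back with MSR to the
 * implementation-defined encoding (e.g. S3_0_C15_C1_4 on some cores).
 */
static uint64_t cpuectlr_force_near(uint64_t cpuectlr)
{
	cpuectlr &= ~CPUECTLR_ATOM_MASK;                      /* clear ATOM   */
	cpuectlr |= CPUECTLR_ATOM_NEAR << CPUECTLR_ATOM_SHIFT; /* force near  */
	return cpuectlr;
}
```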

Let me try a detailed summary of the whole discussion; feel free to skip 
any part you already know.

* Part 1: Solution for this issue.
We still want third parties and end users to have these options:
   1. Disable the LSE atomic capability.
   2. *Disallow* far atomics, by having "CPUECTLR_EL1.ATOM force near 
atomics" and using non-cacheable mappings for LSE atomics only.


* Part 2: Why we need the solution
1. In some cases far atomics give better performance than near atomics, 
so end users may still want the option to allow far atomics.
Since drivers use the kernel's LSE atomic macros, the same driver can 
already run both on CPUs that support LSE atomics and on CPUs that do not.
What is not yet supported is the current system, where the CPU's feature 
register says LSE atomics are supported but the memory controller does 
not support them.
2. The LSE atomic CPU feature can be controlled via an option while 
keeping the same image.
A GKI (Generic Kernel Image) plus the same third-party driver images can 
then support multiple systems:
-- *New systems* that fully support LSE atomics.
-- *Intermediate systems* whose CPUs support LSE atomics but whose 
memory controller/bus does not (the main issue discussed in this thread).
-- *Old systems* whose CPUs do not have this CPU feature at all.
3. It is better for debugging: it becomes easier to verify whether an 
issue is related to this feature or not.
4. *Disallowing* far atomics on the developer side is not easy to 
control, especially when the same code works on *old systems* and *new 
systems* but fails on the current *intermediate systems*.
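The three system classes above can be modeled with a toy decision function, sketched below. This is not the patch's actual code: the names are invented, and the bus-support parameter is hypothetical (a real kernel cannot generally probe the interconnect, which is exactly why an arm64.nolse_atomics command line override is being proposed).

```c
#include <stdbool.h>

/* Toy model of which atomic implementation one kernel image would pick. */
enum atomic_impl { IMPL_LLSC, IMPL_LSE };

static enum atomic_impl pick_atomics(bool cpu_has_lse,
				     bool bus_supports_far_atomics,
				     bool cmdline_nolse)
{
	/* Old systems: the CPU has no LSE atomics at all. */
	if (!cpu_has_lse)
		return IMPL_LLSC;

	/*
	 * Intermediate systems: the CPU advertises LSE, but the memory
	 * controller/bus cannot handle far atomics; since the kernel
	 * cannot detect this, the user opts out via the command line.
	 */
	if (cmdline_nolse || !bus_supports_far_atomics)
		return IMPL_LLSC;

	/* New systems: full LSE support end to end. */
	return IMPL_LSE;
}
```

The point of the model is that a single image covers all three classes, with the command line option standing in for the undetectable bus limitation.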

> 
> Thanks,
> Mark.
>   

Thx for the detailed discussion.
-- 
Thx and BRs,
Aiqun(Maria) Yu
