Message-ID: <5b7f3700-aeee-15af-59a7-8e271a89c850@easystack.cn>
Date: Wed, 8 May 2024 19:39:23 +0800
From: Dongsheng Yang <dongsheng.yang@...ystack.cn>
To: Jonathan Cameron <Jonathan.Cameron@...wei.com>,
 John Groves <John@...ves.net>, Dan Williams <dan.j.williams@...el.com>,
 Gregory Price <gregory.price@...verge.com>
Cc: Gregory Price <gregory.price@...verge.com>,
 Dan Williams <dan.j.williams@...el.com>, axboe@...nel.dk,
 linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
 linux-cxl@...r.kernel.org, nvdimm@...ts.linux.dev
Subject: Re: [PATCH RFC 0/7] block: Introduce CBD (CXL Block Device)



On Fri, May 3, 2024 at 5:52 PM, Jonathan Cameron wrote:
> On Sun, 28 Apr 2024 11:55:10 -0500
> John Groves <John@...ves.net> wrote:
> 
>> On 24/04/28 01:47PM, Dongsheng Yang wrote:
>>>
>>>
>>> On Sat, Apr 27, 2024 at 12:14 AM, Gregory Price wrote:
>>>> On Fri, Apr 26, 2024 at 10:53:43PM +0800, Dongsheng Yang wrote:
>>>>>
>>>>>
>>>>> On Fri, Apr 26, 2024 at 9:48 PM, Gregory Price wrote:
>>>>>>    
>>>>>

..
>>
>> Just to make things slightly gnarlier, the MESI cache coherency protocol
>> allows a CPU to speculatively convert a line from exclusive to modified,
>> meaning it's not clear as of now whether "occasional" clean write-backs
>> can be avoided. Meaning those read-only mappings may be more important
>> than one might think. (Clean write-backs basically make it
>> impossible for software to manage cache coherency.)
> 
> My understanding is that clean write-backs are an implementation-specific
> issue that came as a surprise to some CPU arch folk I spoke to; we will
> need some path for a host to say whether it can ever do that.
> 
> Given this definitely affects one CPU vendor, maybe solutions that
> rely on this not happening are not suitable for upstream.
> 
> Maybe this market will be important enough for that CPU vendor to stop
> doing it but if they do it will take a while...
> 
> Flushing in general is a CPU architecture problem, where each of the
> architectures needs to be clear about what they do / specify what their
> licensees do.
> 
> I'm with Dan on encouraging all memory vendors to do hardware coherence!

Hi Gregory, John, Jonathan and Dan:
	Thanx for the information, it helps a lot, and sorry for the late reply.

After some internal discussions, I think we can design it as follows:

(1) If the hardware implements cache coherence, then the software layer 
doesn't need to consider this issue, and can perform read and write 
operations directly.

(2) If the hardware doesn't implement cache coherence, we can consider a 
DMA-like approach: check architectural features to determine whether 
coherence can be managed in software, similar to what 
`dev_is_dma_coherent` does for DMA.

If the architecture supports flushing and invalidating CPU caches 
(`CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE`, 
`CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU`, 
`CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL`), then we can handle cache 
coherence at the software layer.
(As for the clean write-back issue, I think it also needs clarification 
from the architecture folks, along with how DMA handles clean 
write-backs; I haven't looked into that further.)
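
To make (2) concrete, here is a rough sketch of how the software layer
could mirror the non-coherent DMA path. The cbd_* names are illustrative,
not from the patch set, and it is an assumption that the kernel-internal
arch_sync_dma_* helpers could be called from this context:

#include <linux/dma-direction.h>
#include <linux/dma-map-ops.h>

/* Before the remote side reads a region: write any dirty CPU cache
 * lines back to the shared media, as the non-coherent DMA code does
 * for dma_sync_*_for_device(). */
static void cbd_clean_for_peer(phys_addr_t paddr, size_t size)
{
	arch_sync_dma_for_device(paddr, size, DMA_TO_DEVICE);
}

/* Before the CPU reads data the remote side wrote: invalidate any
 * stale cached lines so the reads hit the media. */
static void cbd_invalidate_for_cpu(phys_addr_t paddr, size_t size)
{
	arch_sync_dma_for_cpu(paddr, size, DMA_FROM_DEVICE);
}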

(3) If the hardware doesn't implement cache coherence and the CPU 
doesn't support the required cache maintenance operations, then we can 
run in nocache mode.
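
A minimal sketch of what (3) could look like; mapping the region through
memremap() is an assumption here, and the function name is hypothetical:

#include <linux/io.h>

/* Map the shared memory uncached/write-combining so that no CPU cache
 * line ever holds its data; the coherence question then never arises,
 * at the cost of slower access. Undo with memunmap(). */
static void *cbd_map_shared_nocache(phys_addr_t base, size_t size)
{
	return memremap(base, size, MEMREMAP_WC);
}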

CBD can initially support (3), and then transition to (1) when hardware 
supports cache-coherency. If there's sufficient market demand, we can 
also consider supporting (2).
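
Put together, the mode selection could look roughly like this (again an
illustrative sketch, assuming a struct device for the backing memory is
available):

#include <linux/dma-map-ops.h>

enum cbd_coherence_mode {
	CBD_COHERENT_HW,	/* (1) hardware cache coherence */
	CBD_COHERENT_SW,	/* (2) DMA-like clean/invalidate */
	CBD_NOCACHE,		/* (3) uncached mapping */
};

static enum cbd_coherence_mode cbd_pick_mode(struct device *dev)
{
	if (dev_is_dma_coherent(dev))
		return CBD_COHERENT_HW;

	if (IS_ENABLED(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) &&
	    IS_ENABLED(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU))
		return CBD_COHERENT_SW;

	return CBD_NOCACHE;
}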

How does this approach sound?

Thanx
> 
> J
> 
>>
>> Keep in mind that I don't think anybody has CXL 3 devices or CPUs yet, and
>> shared memory is not explicitly legal in CXL 2, so there are things a CPU
>> could do (or not do) in a CXL 2 environment that are not illegal because
>> they should not be observable in a no-shared-memory environment.
>>
>> CBD is interesting work, though for some of the reasons above I'm somewhat
>> skeptical of shared memory as an IPC mechanism.
>>
>> Regards,
>> John
>>
>>
>>
> 
> .
> 
