Message-ID: <8f161b2d-eacd-ad35-8959-0f44c8d132b3@easystack.cn>
Date: Wed, 22 May 2024 14:17:38 +0800
From: Dongsheng Yang <dongsheng.yang@...ystack.cn>
To: Dan Williams <dan.j.williams@...el.com>,
 Jonathan Cameron <Jonathan.Cameron@...wei.com>
Cc: John Groves <John@...ves.net>, Gregory Price
 <gregory.price@...verge.com>, axboe@...nel.dk, linux-block@...r.kernel.org,
 linux-kernel@...r.kernel.org, linux-cxl@...r.kernel.org,
 nvdimm@...ts.linux.dev
Subject: Re: [PATCH RFC 0/7] block: Introduce CBD (CXL Block Device)



On Wed, 2024/5/22 02:41, Dan Williams wrote:
> Dongsheng Yang wrote:
>> On Thu, 2024/5/9 20:21, Jonathan Cameron wrote:
> [..]
>>>> If we check and find that the "No clean writeback" bit in both CSDS and
>>>> DVSEC is set, can we then assume that software cache-coherency is
>>>> feasible, as outlined below:
>>>>
>>>> (1) Both the writer and reader ensure cache flushes. Since there are no
>>>> clean writebacks, there will be no background data writes.
>>>>
>>>> (2) The writer writes data to shared memory and then executes a cache
>>>> flush. If we trust the "No clean writeback" bit, we can assume that the
>>>> data in shared memory is coherent.
>>>>
>>>> (3) Before reading the data, the reader performs cache invalidation.
>>>> Since there are no clean writebacks, this invalidation operation will
>>>> not destroy the data written by the writer. Therefore, the data read by
>>>> the reader should be the data written by the writer, and since the
>>>> writer's cache is clean, it will not write data to shared memory during
>>>> the reader's reading process. Additionally, data integrity can be ensured.
> 
> What guarantees this property? How does the reader know that its local
> cache invalidation is sufficient for reading data that has only reached
> global visibility on the remote peer? As far as I can see, there is
> nothing that guarantees that local global visibility translates to
> remote visibility. In fact, the GPF feature is counter-evidence of the
> fact that writes can be pending in buffers that are only flushed on a
> GPF event.

Sounds correct. From what I have learned about GPF, ADR, and eADR, data 
could still be sitting in the WPQ even after the OS performs a CPU cache 
line flush.

This means we have no explicit way to push written data through all 
caches and buffers so that it lands in the media, and it also seems there 
is no explicit way to invalidate all caches along the entire path.
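
To make that concrete, below is a minimal user-space sketch (my own 
illustration, not taken from the CBD patch set) of what steps (1)-(3) 
above would look like with x86 cache-maintenance intrinsics; the 
writer_publish()/reader_fetch() helpers and the `shm` mapping are 
hypothetical names. It also makes the limitation visible: clwb/clflushopt 
only act on the local CPU caches, so nothing here forces data past 
host-side buffers such as the WPQ, or invalidates caches along the rest 
of the path to the peer host.

/*
 * Build (hypothetical file name):
 *   gcc -O2 -mclwb -mclflushopt -c cbd_coherency_sketch.c
 *
 * Only the local CPU caches are touched here; as discussed in this
 * thread, nothing guarantees the data has left host-side buffers
 * (e.g. the WPQ) or become visible to the remote peer.
 */
#include <stddef.h>
#include <stdint.h>
#include <immintrin.h>

#define CACHELINE	64

/* Step (2): the writer stores data, then writes back every line it touched. */
static void writer_publish(volatile uint8_t *shm, const uint8_t *src, size_t len)
{
	for (size_t i = 0; i < len; i++)
		shm[i] = src[i];

	for (size_t off = 0; off < len; off += CACHELINE)
		_mm_clwb((void *)(shm + off));	/* write back the dirty line */
	_mm_sfence();				/* order the write-backs */
}

/* Step (3): the reader invalidates its (clean) local lines before loading. */
static void reader_fetch(volatile uint8_t *shm, uint8_t *dst, size_t len)
{
	for (size_t off = 0; off < len; off += CACHELINE)
		_mm_clflushopt((void *)(shm + off));	/* drop any stale copy */
	_mm_mfence();				/* complete before the loads */

	for (size_t i = 0; i < len; i++)
		dst[i] = shm[i];
}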

> 
> I remain skeptical that a software managed inter-host cache-coherency
> scheme can be made reliable with current CXL defined mechanisms.


I get your point now: according to the current CXL spec, it seems that 
software-managed cache coherency for inter-host shared memory is not 
workable. Will the next version of the CXL spec consider it?
> 
