Message-ID: <ZldIzp0ncsRX5BZE@memverge.com>
Date: Wed, 29 May 2024 11:25:02 -0400
From: Gregory Price <gregory.price@...verge.com>
To: Dongsheng Yang <dongsheng.yang@...ystack.cn>
Cc: Dan Williams <dan.j.williams@...el.com>,
Jonathan Cameron <Jonathan.Cameron@...wei.com>,
John Groves <John@...ves.net>, axboe@...nel.dk,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-cxl@...r.kernel.org, nvdimm@...ts.linux.dev
Subject: Re: [PATCH RFC 0/7] block: Introduce CBD (CXL Block Device)
On Wed, May 22, 2024 at 02:17:38PM +0800, Dongsheng Yang wrote:
>
>
> On Wednesday, 2024/5/22 at 2:41 AM, Dan Williams wrote:
> > Dongsheng Yang wrote:
> >
> > What guarantees this property? How does the reader know that its local
> > cache invalidation is sufficient for reading data that has only reached
> > global visibility on the remote peer? As far as I can see, there is
> > nothing that guarantees that local global visibility translates to
> > remote visibility. In fact, the GPF feature is counter-evidence of the
> > fact that writes can be pending in buffers that are only flushed on a
> > GPF event.
>
> Sounds correct. From what I learned about GPF, ADR, and eADR, there would
> still be data in the WPQ even though we perform a CPU cache line flush in
> the OS.
>
> This means we don't have an explicit method to make data puncture all
> caches and land in the media after writing. It also seems there isn't an
> explicit method to invalidate all caches along the entire path.
>
> >
> > I remain skeptical that a software managed inter-host cache-coherency
> > scheme can be made reliable with current CXL defined mechanisms.
>
>
> I get your point now: according to the current CXL spec, it seems
> software-managed cache-coherency for inter-host shared memory does not
> work. Will the next version of the CXL spec consider it?
> >
Sorry for missing the conversation; I've been out of office for a bit.

It's not just a CXL spec issue, though that is part of it. I think the
CXL spec would have to expose some form of puncturing flush, and this
assumes that such a flush doesn't cause some kind of race/deadlock issue.
Certainly this needs to be discussed.

However, consider that the upstream processor actually has to generate
this flush. This means adding the flush to existing coherence protocols,
or at the very least a new instruction to generate the flush explicitly.
The latter seems more likely than the former.

This flush would need to ensure the data is forced out of the local WPQ
AND all WPQs south of the PCIe complex, because what you really want to
know is that the data has actually made it back to a place where remote
viewers are capable of perceiving the change.
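
To make the gap concrete, this is roughly all a writer can do today on
x86 (a sketch; publish_line is just a name I made up, and the final
"puncturing" step is hypothetical, no such instruction exists):

#include <immintrin.h>

/*
 * Sketch only: assumes 'line' points at a 64-byte-aligned cacheline in a
 * CXL shared-memory mapping and that CLWB is available (-mclwb).
 */
static void publish_line(const void *line)
{
        _mm_clwb(line);         /* write back the local CPU cacheline    */
        _mm_sfence();           /* order the write-back locally          */

        /*
         * The data may still sit in a WPQ in the host bridge, or in
         * buffers south of the PCIe/CXL complex, where a remote host's
         * reads cannot yet observe it.  The missing piece is something
         * like:
         *
         *      cxl_puncture_flush(line);   // hypothetical, does not exist
         */
}
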
So this means:
1) Spec revision with puncturing flush
2) Buy-in from CPU vendors to generate such a flush
3) A new instruction added to the architecture.

Call me in a decade or so.

But really, I think it likely we will see hardware coherence well before
this.
For this reason, I have become skeptical of all but a few memory sharing
use cases that depend on software-controlled cache-coherency.
There are some (FAMFS, for example). The coherence state of these
systems tends to be less volatile (e.g. mappings are read-only), or
they have inherent design limitations (cacheline-sized message passing
via write-ahead logging only).
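
To illustrate what I mean by that last pattern, here is a rough sketch of
cacheline-sized message passing over a shared mapping (x86 userspace; the
struct layout and names are made up for illustration, not taken from FAMFS
or CBD):

#include <immintrin.h>
#include <stdint.h>
#include <string.h>

#define compiler_barrier()      __asm__ __volatile__("" ::: "memory")

/* One message per 64-byte line: a sequence count plus payload. */
struct log_slot {
        uint64_t seq;                   /* odd = in progress, even = valid */
        uint8_t  payload[56];
} __attribute__((aligned(64)));

/* Writer: mark in progress, fill payload, mark valid, write the line back. */
static void slot_write(struct log_slot *s, const void *msg, size_t len)
{
        size_t n = len < sizeof(s->payload) ? len : sizeof(s->payload);

        s->seq++;                               /* now odd                */
        compiler_barrier();
        memcpy(s->payload, msg, n);
        compiler_barrier();
        s->seq++;                               /* now even               */
        _mm_clwb(s);                            /* write back the line    */
        _mm_sfence();
        /* Still subject to the WPQ caveat above. */
}

/* Reader: drop any stale local copy, then take one fresh snapshot. */
static int slot_read(struct log_slot *s, void *msg)
{
        _mm_clflush(s);                         /* invalidate local copy  */
        _mm_mfence();
        if (s->seq & 1)
                return 0;                       /* caught a mid-update line */
        memcpy(msg, s->payload, sizeof(s->payload));
        return 1;
}

The point is only that the unit of coherence and the unit of messaging are
the same 64 bytes, so a reader sees the whole message or none of it; it
does not solve the visibility problem discussed above.
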
~Gregory