Date: Mon, 29 Apr 2024 17:10:31 -0700
From: Dan Williams <dan.j.williams@...el.com>
To: Dongsheng Yang <dongsheng.yang@...ystack.cn>, Dan Williams
	<dan.j.williams@...el.com>, <axboe@...nel.dk>, John Groves <John@...ves.net>
CC: <linux-block@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
	<linux-cxl@...r.kernel.org>, Dongsheng Yang <dongsheng.yang.linux@...il.com>
Subject: Re: [PATCH RFC 0/7] block: Introduce CBD (CXL Block Device)

Dongsheng Yang wrote:
> 
> 
> > On Thu, 2024/4/25 at 2:08 AM, Dan Williams wrote:
> > Dongsheng Yang wrote:
> >>
> >>
> >> On Wed, 2024/4/24 at 12:29 PM, Dan Williams wrote:
> >>> Dongsheng Yang wrote:
> >>>> From: Dongsheng Yang <dongsheng.yang.linux@...il.com>
> >>>>
> >>>> Hi all,
> >>>> 	This patchset introduces cbd (CXL block device). It is based on Linux 6.8 and is available at:
> >>>> 	https://github.com/DataTravelGuide/linux
> >>>>
> >>> [..]
> >>>> (4) dax is not supported yet:
> >>>> 	As with famfs, dax devices are not supported here, because dax devices do not
> >>>> support dev_dax_iomap so far. Once dev_dax_iomap is supported, CBD can easily support DAX mode.
> >>>
> >>> I am glad that famfs is mentioned here; it demonstrates you know about
> >>> it. However, this cover letter unfortunately does not offer any analysis
> >>> of *why* the Linux project should consider this additional approach to
> >>> the inter-host shared-memory enabling problem.
> >>>
> >>> To be clear I am neutral at best on some of the initiatives around CXL
> >>> memory sharing vs pooling, but famfs at least jettisons block-devices
> >>> and gets closer to a purpose-built memory semantic.
> >>>
> >>> So my primary question is why would Linux need both famfs and cbd? I am
> >>> sure famfs would love feedback and help vs developing competing efforts.
> >>
> >> Hi,
> >> 	Thanks for your reply. IIUC, in famfs the data is stored in shared
> >> memory, and the related nodes can share the data inside that file
> >> system. cbd, by contrast, does not store data in shared memory; it uses
> >> shared memory as a channel for data transmission, and the actual data is
> >> stored in the backend block devices of the remote nodes. In cbd, shared
> >> memory works more like a network connecting the different hosts.
> >>
> >> That is to say, in my view, famfs and cbd do not conflict at all; they
> >> meet different scenario requirements. cbd simply uses shared memory to
> >> transmit data: shared memory plays the role of a data transmission
> >> channel, while in famfs shared memory serves as the data store itself.
> > 
> > If shared memory is just a communication transport then a block-device
> > abstraction does not seem a proper fit. From the above description this
> > sounds similar to what CONFIG_NTB_TRANSPORT offers, which is a way for
> > two hosts to communicate over a shared memory channel.
> > 
> > So, I am not really looking for an analysis of famfs vs CBD; I am looking
> > for CBD to clarify why Linux should consider it, and why the
> > architecture is fit for purpose.
> 
> Let me explain why we need cbd:
> 
> In cloud storage scenarios, we often need to expose the block devices of
> storage nodes to compute nodes. We have options like nbd, iscsi, nvmeof,
> etc., but these all communicate over the network. cbd addresses the same
> scenario but uses shared memory for data transfer instead of the network,
> aiming for better performance and lower latency.
> 
> Furthermore, shared memory can not only transfer data but also implement
> features like write-ahead logging (WAL) or a read/write cache, further
> improving performance, especially in latency-sensitive business scenarios.
> (If I understand correctly, this might not be achievable with the
> previously mentioned ntb.)
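> 
> To make the "transmission channel" idea more concrete, here is a rough
> user-space sketch. This is not actual cbd code; the structures and names
> (demo_request, demo_channel, etc.) are invented purely for illustration,
> and an anonymous mmap() stands in for a CXL shared-memory mapping:
> 
> #define _GNU_SOURCE
> #include <stdint.h>
> #include <stdio.h>
> #include <string.h>
> #include <sys/mman.h>
> 
> #define RING_SLOTS	8
> #define SECTOR_SIZE	512
> 
> enum demo_op { DEMO_OP_READ, DEMO_OP_WRITE };
> 
> /* One request descriptor travelling through the shared-memory channel. */
> struct demo_request {
> 	uint8_t  op;
> 	uint64_t lba;			/* sector on the *remote* backend disk */
> 	uint32_t len;			/* bytes */
> 	uint8_t  data[SECTOR_SIZE];	/* payload also crosses shared memory */
> };
> 
> /* The channel itself lives entirely in the shared region. */
> struct demo_channel {
> 	uint32_t head;			/* advanced by the compute-node side */
> 	uint32_t tail;			/* advanced by the storage-node side */
> 	struct demo_request ring[RING_SLOTS];
> };
> 
> int main(void)
> {
> 	/* Stand-in for a CXL shared-memory mapping. */
> 	struct demo_channel *ch = mmap(NULL, sizeof(*ch),
> 				       PROT_READ | PROT_WRITE,
> 				       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
> 	if (ch == MAP_FAILED)
> 		return 1;
> 
> 	/* "Compute node" side: submit a write aimed at the remote backend. */
> 	struct demo_request *req = &ch->ring[ch->head % RING_SLOTS];
> 	req->op  = DEMO_OP_WRITE;
> 	req->lba = 2048;
> 	req->len = SECTOR_SIZE;
> 	memset(req->data, 0xab, SECTOR_SIZE);
> 	ch->head++;
> 
> 	/* "Storage node" side: consume the request; a real backend would
> 	 * write the payload to its local block device (e.g. /dev/sda). */
> 	while (ch->tail != ch->head) {
> 		struct demo_request *r = &ch->ring[ch->tail % RING_SLOTS];
> 		printf("op=%u lba=%llu len=%u\n", (unsigned)r->op,
> 		       (unsigned long long)r->lba, r->len);
> 		ch->tail++;
> 	}
> 
> 	munmap(ch, sizeof(*ch));
> 	return 0;
> }
> 
> In the real design the two sides would of course be different hosts
> mapping the same shared region; the point here is only that the shared
> memory carries in-flight requests, while the data itself lives on the
> backend block device of the storage node.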
> 
> To ensure we have a common understanding, I'd like to clarify one point: 
> the /dev/cbdX block device is not an abstraction of shared memory; it is 
> a mapping of a block device (such as /dev/sda) on the remote host. 
> Reading/writing to /dev/cbdX is equivalent to reading/writing to 
> /dev/sda on the remote host.
> 
> This is the design intention of cbd. I hope this clarifies things.

It does, thanks for the clarification. Let me go back and take another
look now that I understand that this is a "remote storage target over CXL
memory" solution.
