Message-ID: <20240827161825.0000146b@Huawei.com>
Date: Tue, 27 Aug 2024 16:18:25 +0100
From: Jonathan Cameron <Jonathan.Cameron@...wei.com>
To: Alejandro Lucero Palau <alucerop@....com>
CC: <alejandro.lucero-palau@....com>, <linux-cxl@...r.kernel.org>,
	<netdev@...r.kernel.org>, <dan.j.williams@...el.com>,
	<martin.habets@...inx.com>, <edward.cree@....com>, <davem@...emloft.net>,
	<kuba@...nel.org>, <pabeni@...hat.com>, <edumazet@...gle.com>,
	<richard.hughes@....com>
Subject: Re: [PATCH v2 09/15] cxl: define a driver interface for HPA free
 space enumeration

On Mon, 19 Aug 2024 15:47:48 +0100
Alejandro Lucero Palau <alucerop@....com> wrote:

> On 8/4/24 18:57, Jonathan Cameron wrote:
> > On Mon, 15 Jul 2024 18:28:29 +0100
> > alejandro.lucero-palau@....com wrote:
> >  
> >> From: Alejandro Lucero <alucerop@....com>
> >>
> >> CXL region creation involves allocating capacity from device DPA
> >> (device-physical-address space) and assigning it to decode a given HPA
> >> (host-physical-address space). Before determining how much DPA to
> >> allocate the amount of available HPA must be determined. Also, not all
> >> HPA is created equal: some specifically targets RAM, some targets PMEM,
> >> some is prepared for device-memory flows like HDM-D and HDM-DB, and some
> >> is host-only (HDM-H).
> >>
> >> Wrap all of those concerns into an API that retrieves a root decoder
> >> (platform CXL window) that fits the specified constraints and the
> >> capacity available for a new region.
> >>
> >> Based on https://lore.kernel.org/linux-cxl/168592149709.1948938.8663425987110396027.stgit@dwillia2-xfh.jf.intel.com/T/#m6fbe775541da3cd477d65fa95c8acdc347345b4f
> >>
> >> Signed-off-by: Alejandro Lucero <alucerop@....com>
> >> Co-developed-by: Dan Williams <dan.j.williams@...el.com>  
> > Hi.
> >
> > This seems a lot more complex than an accelerator would need.
> > If plan is to use this in the type3 driver as well, I'd like to
> > see that done as a precursor to the main series.
> > If it only matters to accelerator drivers (as in type 3 I think
> > we make this a userspace problem), then limit the code to handle
> > interleave ways == 1 only.  Maybe we will care about higher interleave
> > in the long run, but do you have a multihead accelerator today?  
> 
> 
> I would say this is needed for Type3 as well, but current support relies 
> on user space requests. I think Type3 support uses the legacy 
> implementation for memory devices, where the requirements are initially 
> quite similar, but I think where CXL is going requires less manual 
> intervention, or more automated assistance for manual intervention. I'll 
> wait until Dan can comment on whether to send this as a precursor or as 
> part of the type2 support.
> 
> 
> Regarding the interleave, I know you are joking ... but who knows what 
> the future will bring. Or maybe I'm misunderstanding your comment, 
> because in my view multi-head devices and interleave are not directly 
> related. Are they? I think you can have a single head and support 
> interleaving, with multi-head implying different hosts and therefore 
> different HPAs.

Nothing says the heads are connected to different hosts.

For the type 3 version, the reason you'd do this is to spread load across
multiple root ports.  So it's just a bandwidth play, and as far
as the host is concerned they might as well be separate devices.

For accelerators you can in theory do stuff like that, but it gets
fiddly fast, and you might care that the heads belong to the same
device for reasons beyond RAS etc. and might interleave access to
device memory across the two heads.

Don't think we care today though, so for now I'd just reject any
interleaving.
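
To make the suggestion concrete, a minimal sketch of what "reject any
interleaving" could look like in the constraint check (all names here are
hypothetical, not the actual API from the series; error values are
userspace stand-ins for -EOPNOTSUPP / -ENXIO so the sketch compiles on
its own):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for a root decoder / platform CXL window. */
struct cxld_sketch {
	int interleave_ways;
	uint64_t hpa_free;	/* free HPA capacity in this window */
	unsigned long flags;	/* window type: RAM vs PMEM, HDM-H vs HDM-D */
};

/*
 * Return free HPA capacity matching the requested constraints, or 0
 * with *err set.  Per the suggestion above, anything other than
 * interleave_ways == 1 is rejected outright for now.
 */
static uint64_t get_hpa_freespace(const struct cxld_sketch *cxld,
				  int interleave_ways,
				  unsigned long flags, int *err)
{
	*err = 0;
	if (interleave_ways != 1) {
		*err = -1;	/* stand-in for -EOPNOTSUPP */
		return 0;
	}
	if ((cxld->flags & flags) != flags) {
		*err = -2;	/* stand-in for -ENXIO: wrong window type */
		return 0;
	}
	return cxld->hpa_free;
}
```

If multi-head interleave ever matters for accelerators, the early
rejection is the only branch that needs revisiting; the type matching
stays the same.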

Jonathan
