Message-ID: <adcc692e-8819-3741-31d3-d1202cc1b619@amd.com>
Date: Mon, 19 Aug 2024 15:47:48 +0100
From: Alejandro Lucero Palau <alucerop@....com>
To: Jonathan Cameron <Jonathan.Cameron@...wei.com>,
 alejandro.lucero-palau@....com
Cc: linux-cxl@...r.kernel.org, netdev@...r.kernel.org,
 dan.j.williams@...el.com, martin.habets@...inx.com, edward.cree@....com,
 davem@...emloft.net, kuba@...nel.org, pabeni@...hat.com,
 edumazet@...gle.com, richard.hughes@....com
Subject: Re: [PATCH v2 09/15] cxl: define a driver interface for HPA free
 space enumeration


On 8/4/24 18:57, Jonathan Cameron wrote:
> On Mon, 15 Jul 2024 18:28:29 +0100
> alejandro.lucero-palau@....com wrote:
>
>> From: Alejandro Lucero <alucerop@....com>
>>
>> CXL region creation involves allocating capacity from device DPA
>> (device-physical-address space) and assigning it to decode a given HPA
>> (host-physical-address space). Before determining how much DPA to
>> allocate, the amount of available HPA must be determined. Also, not all
>> HPA is created equal: some specifically targets RAM, some targets PMEM,
>> some is prepared for device-memory flows like HDM-D and HDM-DB, and some
>> is host-only (HDM-H).
>>
>> Wrap all of those concerns into an API that retrieves a root decoder
>> (platform CXL window) that fits the specified constraints and the
>> capacity available for a new region.
>>
>> Based on https://lore.kernel.org/linux-cxl/168592149709.1948938.8663425987110396027.stgit@dwillia2-xfh.jf.intel.com/T/#m6fbe775541da3cd477d65fa95c8acdc347345b4f
>>
>> Signed-off-by: Alejandro Lucero <alucerop@....com>
>> Co-developed-by: Dan Williams <dan.j.williams@...el.com>
> Hi.
>
> This seems a lot more complex than an accelerator would need.
> If the plan is to use this in the type3 driver as well, I'd like to
> see that done as a precursor to the main series.
> If it only matters to accelerator drivers (as in type 3 I think
> we make this a userspace problem), then limit the code to handle
> interleave ways == 1 only.  Maybe we will care about higher interleave
> in the long run, but do you have a multihead accelerator today?


I would say this is needed for Type3 as well, but current support relies 
on user-space requests. I think Type3 support uses the legacy 
implementation for memory devices, where the initial requirements are 
quite similar, but where CXL is heading will require less manual 
intervention, or at least more automated assistance for it. I'll wait 
for Dan to comment before deciding whether to send this as a precursor 
or as part of the Type2 support.


Regarding the interleave, I know you are joking ... but who knows what 
the future will bring. Or maybe I'm misunderstanding your comment, 
because in my view a multi-head device and interleave are not directly 
related. Are they? I think you can have a single head and still support 
interleaving, with multi-head implying different hosts and therefore 
different HPAs.
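
To make the single-head case concrete, this is roughly how I expect an 
accelerator driver to end up calling the new API. It is a simplified 
sketch only: the probe context and error unwinding are elided, 
efx_cxl_get_hpa() is an illustrative name rather than the exact function 
in the sfc patch, and CXL_DECODER_F_RAM stands in for whatever flag set 
the accelerator actually needs:

static int efx_cxl_get_hpa(struct cxl_port *endpoint,
			   struct cxl_root_decoder **cxlrd_out,
			   resource_size_t *avail)
{
	struct cxl_root_decoder *cxlrd;

	/* Single-head accelerator: one host bridge, so interleave_ways == 1 */
	cxlrd = cxl_get_hpa_freespace(endpoint, 1,
				      CXL_DECODER_F_RAM, /* illustrative flag set */
				      avail);
	if (IS_ERR(cxlrd))
		return PTR_ERR(cxlrd);

	/* Caller holds an elevated cxlrd reference until region creation is done */
	*cxlrd_out = cxlrd;
	return 0;
}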


> Jonathan
>
>> ---
>>   drivers/cxl/core/region.c          | 161 +++++++++++++++++++++++++++++
>>   drivers/cxl/cxl.h                  |   3 +
>>   drivers/cxl/cxlmem.h               |   5 +
>>   drivers/net/ethernet/sfc/efx_cxl.c |  14 +++
>>   include/linux/cxl_accel_mem.h      |   9 ++
>>   5 files changed, 192 insertions(+)
>>
>> diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
>> index 538ebd5a64fd..ca464bfef77b 100644
>> --- a/drivers/cxl/core/region.c
>> +++ b/drivers/cxl/core/region.c
>> @@ -702,6 +702,167 @@ static int free_hpa(struct cxl_region *cxlr)
>>   	return 0;
>>   }
>>   
>> +
>> +struct cxlrd_max_context {
>> +	struct device * const *host_bridges;
>> +	int interleave_ways;
>> +	unsigned long flags;
>> +	resource_size_t max_hpa;
>> +	struct cxl_root_decoder *cxlrd;
>> +};
>> +
>> +static int find_max_hpa(struct device *dev, void *data)
>> +{
>> +	struct cxlrd_max_context *ctx = data;
>> +	struct cxl_switch_decoder *cxlsd;
>> +	struct cxl_root_decoder *cxlrd;
>> +	struct resource *res, *prev;
>> +	struct cxl_decoder *cxld;
>> +	resource_size_t max;
>> +	int found;
>> +
>> +	if (!is_root_decoder(dev))
>> +		return 0;
>> +
>> +	cxlrd = to_cxl_root_decoder(dev);
>> +	cxld = &cxlrd->cxlsd.cxld;
>> +	if ((cxld->flags & ctx->flags) != ctx->flags) {
>> +		dev_dbg(dev, "find_max_hpa, flags not matching: %08lx vs %08lx\n",
>> +			      cxld->flags, ctx->flags);
>> +		return 0;
>> +	}
>> +
>> +	/* A Host bridge could have more interleave ways than an
>> +	 * endpoint, couldn't it?
> EP interleave ways is about working out how the full HPA address (it's
> all sent over the wire) is modified to get to the DPA.  So it needs
> to know what the overall interleave is.  Host bridge can't interleave
> and then have the EP not know about it.  If there are switch HDM decoders
> in the path, the host bridge interleave may be less than what the EP needs
> to deal with.
>
> Does an accelerator actually cope with interleave? Is the aim here to ensure
> that IW is never anything other than 1?  Or is this meant to have
> more general use? I guess it is meant to. In which case, I'd like to
> see this used in the type3 driver as well.
>
>> +	 *
>> +	 * What does interleave ways mean here in terms of the requestor?
>> +	 * Why the FFMWS has 0 interleave ways but root port has 1?
> FFMWS?
>
>> +	 */
>> +	if (cxld->interleave_ways != ctx->interleave_ways) {
>> +		dev_dbg(dev, "find_max_hpa, interleave_ways  not matching\n");
>> +		return 0;
>> +	}
>> +
>> +	cxlsd = &cxlrd->cxlsd;
>> +
>> +	guard(rwsem_read)(&cxl_region_rwsem);
>> +	found = 0;
>> +	for (int i = 0; i < ctx->interleave_ways; i++)
>> +		for (int j = 0; j < ctx->interleave_ways; j++)
>> +			if (ctx->host_bridges[i] ==
>> +					cxlsd->target[j]->dport_dev) {
>> +				found++;
>> +				break;
>> +			}
>> +
>> +	if (found != ctx->interleave_ways) {
>> +		dev_dbg(dev, "find_max_hpa, no interleave_ways found\n");
>> +		return 0;
>> +	}
>> +
>> +	/*
>> +	 * Walk the root decoder resource range relying on cxl_region_rwsem to
>> +	 * preclude sibling arrival/departure and find the largest free space
>> +	 * gap.
>> +	 */
>> +	lockdep_assert_held_read(&cxl_region_rwsem);
>> +	max = 0;
>> +	res = cxlrd->res->child;
>> +	if (!res)
>> +		max = resource_size(cxlrd->res);
>> +	else
>> +		max = 0;
>> +
>> +	for (prev = NULL; res; prev = res, res = res->sibling) {
>> +		struct resource *next = res->sibling;
>> +		resource_size_t free = 0;
>> +
>> +		if (!prev && res->start > cxlrd->res->start) {
>> +			free = res->start - cxlrd->res->start;
>> +			max = max(free, max);
>> +		}
>> +		if (prev && res->start > prev->end + 1) {
>> +			free = res->start - prev->end - 1;
>> +			max = max(free, max);
>> +		}
>> +		if (next && res->end + 1 < next->start) {
>> +			free = next->start - res->end - 1;
>> +			max = max(free, max);
>> +		}
>> +		if (!next && res->end + 1 < cxlrd->res->end + 1) {
>> +			free = cxlrd->res->end - res->end;
>> +			max = max(free, max);
>> +		}
>> +	}
>> +
>> +	if (max > ctx->max_hpa) {
>> +		if (ctx->cxlrd)
>> +			put_device(CXLRD_DEV(ctx->cxlrd));
>> +		get_device(CXLRD_DEV(cxlrd));
>> +		ctx->cxlrd = cxlrd;
>> +		ctx->max_hpa = max;
>> +		dev_info(CXLRD_DEV(cxlrd), "found %pa bytes of free space\n", &max);
> dev_dbg()
>
>> +	}
>> +	return 0;
>> +}
>> +
>> +/**
>> + * cxl_get_hpa_freespace - find a root decoder with free capacity per constraints
>> + * @endpoint: an endpoint that is mapped by the returned decoder
>> + * @interleave_ways: number of entries in @host_bridges
>> + * @flags: CXL_DECODER_F flags for selecting RAM vs PMEM, and HDM-H vs HDM-D[B]
>> + * @max: output parameter of bytes available in the returned decoder
> @available_size
> or something along those lines. I'd expect max to be the end address of the available
> region
>
>> + *
>> + * The return tuple of a 'struct cxl_root_decoder' and 'bytes available (@max)'
>> + * is a point in time snapshot. If by the time the caller goes to use this root
>> + * decoder's capacity the capacity is reduced then caller needs to loop and
>> + * retry.
>> + *
>> + * The returned root decoder has an elevated reference count that needs to be
>> + * put with put_device(cxlrd_dev(cxlrd)). Locking context is with
>> + * cxl_{acquire,release}_endpoint(), that ensures removal of the root decoder
>> + * does not race.
>> + */
>> +struct cxl_root_decoder *cxl_get_hpa_freespace(struct cxl_port *endpoint,
>> +					       int interleave_ways,
>> +					       unsigned long flags,
>> +					       resource_size_t *max)
>> +{
>> +
>> +	struct cxlrd_max_context ctx = {
>> +		.host_bridges = &endpoint->host_bridge,
>> +		.interleave_ways = interleave_ways,
>> +		.flags = flags,
>> +	};
>> +	struct cxl_port *root_port;
>> +	struct cxl_root *root;
>> +
>> +	if (!is_cxl_endpoint(endpoint)) {
>> +		dev_dbg(&endpoint->dev, "hpa requestor is not an endpoint\n");
>> +		return ERR_PTR(-EINVAL);
>> +	}
>> +
>> +	root = find_cxl_root(endpoint);
>> +	if (!root) {
>> +		dev_dbg(&endpoint->dev, "endpoint can not be related to a root port\n");
>> +		return ERR_PTR(-ENXIO);
>> +	}
>> +
>> +	root_port = &root->port;
>> +	down_read(&cxl_region_rwsem);
>> +	device_for_each_child(&root_port->dev, &ctx, find_max_hpa);
>> +	up_read(&cxl_region_rwsem);
>> +	put_device(&root_port->dev);
>> +
>> +	if (!ctx.cxlrd)
>> +		return ERR_PTR(-ENOMEM);
>> +
>> +	*max = ctx.max_hpa;
> Rename max_hpa to available_hpa.
>
>> +	return ctx.cxlrd;
>> +}
>> +EXPORT_SYMBOL_NS_GPL(cxl_get_hpa_freespace, CXL);
>> +
>> +
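
As a closing note on the kerneldoc above: the intended caller pattern 
around the point-in-time snapshot is a plain retry loop, roughly as in 
the sketch below, where cxl_accel_create_region() is a hypothetical 
stand-in for the region creation step added later in the series:

	struct cxl_root_decoder *cxlrd;
	resource_size_t avail;
	int rc;

	do {
		cxlrd = cxl_get_hpa_freespace(endpoint, 1, flags, &avail);
		if (IS_ERR(cxlrd))
			return PTR_ERR(cxlrd);

		/*
		 * Hypothetical helper; -EBUSY stands for "the snapshotted
		 * capacity was consumed by a sibling before we used it".
		 */
		rc = cxl_accel_create_region(cxlrd, avail);
		put_device(CXLRD_DEV(cxlrd));
	} while (rc == -EBUSY);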
