Message-ID: <20250924212912.GP2617119@nvidia.com>
Date: Wed, 24 Sep 2025 18:29:12 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Nicolin Chen <nicolinc@...dia.com>
Cc: will@...nel.org, robin.murphy@....com, joro@...tes.org,
	jean-philippe@...aro.org, miko.lenczewski@....com,
	balbirs@...dia.com, peterz@...radead.org, smostafa@...gle.com,
	kevin.tian@...el.com, praan@...gle.com,
	linux-arm-kernel@...ts.infradead.org, iommu@...ts.linux.dev,
	linux-kernel@...r.kernel.org, patches@...ts.linux.dev
Subject: Re: [PATCH rfcv2 4/8] iommu/arm-smmu-v3: Introduce a per-domain
 arm_smmu_invs array

On Mon, Sep 08, 2025 at 04:26:58PM -0700, Nicolin Chen wrote:
> +/**
> + * arm_smmu_invs_merge() - Merge @to_merge into @invs and generate a new array
> + * @invs: the base invalidation array
> + * @to_merge: an array of invalidations to merge
> + *
> + * Return: a newly allocated array on success, or ERR_PTR
> + *
> + * This function must be locked and serialized with arm_smmu_invs_unref() and
> + * arm_smmu_invs_purge(), but do not lockdep on any lock for KUNIT test.
> + *
> + * Either @invs or @to_merge must be sorted itself. This ensures the returned

s/Either/Both/

A merge sort like this requires both lists to be sorted.
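For example, running the two cursors over a sorted {1, 3} and an
unsorted {2, 1} would emit {1, 2, 1, 3}, which is out of order and
contains a duplicate, breaking the invariant the comment documents.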

> +struct arm_smmu_invs *arm_smmu_invs_merge(struct arm_smmu_invs *invs,
> +					  struct arm_smmu_invs *to_merge)
> +{
> +	struct arm_smmu_invs *new_invs;
> +	struct arm_smmu_inv *new;
> +	size_t num_adds = 0;
> +	size_t num_dels = 0;
> +	size_t i, j;
> +
> +	for (i = j = 0; i != invs->num_invs || j != to_merge->num_invs;) {
> +		int cmp = arm_smmu_invs_merge_cmp(invs, i, to_merge, j);
> +
> +		if (cmp < 0) {
> +			/* no found in to_merge, leave alone but delete trash */

s/no/not/

> +			if (!refcount_read(&invs->inv[i].users))
> +				num_dels++;
> +			i++;

The sequence dealing with users should be consistent across all of
these merge loops. The one below, in unref, is the best one:

 +		int cmp;
 +
 +		if (!refcount_read(&invs->inv[i].users)) {
 +			num_dels++;
 +			i++;
 +			continue;
 +		}
 +
 +		cmp = arm_smmu_invs_merge_cmp(invs, i, to_unref, j);

Make all of these loops look like that
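
Something like this for the merge loop above (just a sketch, untested,
and assuming an entry whose users count has dropped to zero never also
appears in @to_merge):

	for (i = j = 0; i != invs->num_invs || j != to_merge->num_invs;) {
		int cmp;

		/*
		 * Drop trash entries before comparing anything.
		 * (Assumes a trash entry is never also in @to_merge.)
		 */
		if (i != invs->num_invs &&
		    !refcount_read(&invs->inv[i].users)) {
			num_dels++;
			i++;
			continue;
		}

		cmp = arm_smmu_invs_merge_cmp(invs, i, to_merge, j);
		/* ... existing cmp < 0 / == 0 / > 0 handling ... */
	}

The i != invs->num_invs bounds check is needed here because, unlike in
unref, this loop keeps running while only @to_merge has entries left.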

> +
> +	WARN_ON(new != new_invs->inv + new_invs->num_invs);
> +
> +	return new_invs;

A debugging check that the output list is sorted would be a nice touch
for robustness.
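
E.g. something like the below, where arm_smmu_inv_cmp() stands in for
whatever single-entry comparator backs the sort (a sketch, not the
actual helper name):

	/*
	 * Debug aid: the merged result must still be sorted.
	 * (arm_smmu_inv_cmp() is a placeholder name for the
	 * single-entry comparator.)
	 */
	for (i = 1; i < new_invs->num_invs; i++)
		WARN_ON(arm_smmu_inv_cmp(&new_invs->inv[i - 1],
					 &new_invs->inv[i]) > 0);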

I think this looks OK and has turned out to be pretty simple.

I've been thinking about generalizing it to core code, and I think it
would hold up well there too?

Jason
