Message-ID: <dc99bc7b-b6bc-1b82-3d8e-8e385596070b@arm.com>
Date:   Tue, 29 Aug 2023 23:40:29 +0100
From:   Robin Murphy <robin.murphy@....com>
To:     Nicolin Chen <nicolinc@...dia.com>
Cc:     will@...nel.org, jgg@...dia.com, joro@...tes.org,
        jean-philippe@...aro.org, apopple@...dia.com,
        linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
        iommu@...ts.linux.dev
Subject: Re: [PATCH 3/3] iommu/arm-smmu-v3: Add a max_tlbi_ops for
 __arm_smmu_tlb_inv_range()

On 2023-08-23 00:04, Nicolin Chen wrote:
> Hi Robin,
> 
> On Tue, Aug 22, 2023 at 09:32:26AM -0700, Nicolin Chen wrote:
>> On Tue, Aug 22, 2023 at 10:30:35AM +0100, Robin Murphy wrote:
>>
>>>> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
>>>> index d6c647e1eb01..3f0db30932bd 100644
>>>> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
>>>> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
>>>> @@ -1897,7 +1897,14 @@ static void __arm_smmu_tlb_inv_range(struct arm_smmu_cmdq_ent *cmd,
>>>>        if (!size)
>>>>                return;
>>>>
>>>> -     if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
>>>> +     if (!(smmu->features & ARM_SMMU_FEAT_RANGE_INV)) {
>>>> +             /*
>>>> +              * When the size reaches a threshold, replace per-granule TLBI
>>>> +              * commands with one single per-asid or per-vmid TLBI command.
>>>> +              */
>>>> +             if (size >= granule * smmu_domain->max_tlbi_ops)
>>>> +                     return arm_smmu_tlb_inv_domain(smmu_domain);
>>>
>>> This looks like it's at the wrong level - we should have figured this
>>> out before we got as far as low-level command-building. I'd have thought
>>> it would be a case of short-circuiting directly from
>>> arm_smmu_tlb_inv_range_domain() to arm_smmu_tlb_inv_context().
>>
>> OK, I could do that. We would have copies of this same routine
>> though. Also, the shortcut applies to !ARM_SMMU_FEAT_RANGE_INV
>> cases only, so this function feels convenient to me.
> 
> I was trying to say that we would need the same piece in both
> arm_smmu_tlb_inv_range_domain() and arm_smmu_tlb_inv_range_asid(),
> though the latter one only needs to call arm_smmu_tlb_inv_asid().

It's not like arm_smmu_tlb_inv_range_asid() doesn't already massively 
overlap with arm_smmu_tlb_inv_range_domain() anyway, so a little further 
duplication hardly seems like it would hurt. Checking 
ARM_SMMU_FEAT_RANGE_INV should be cheap (otherwise we'd really want to 
split __arm_smmu_tlb_inv_range() into separate RIL vs. non-RIL versions 
to avoid having it in the loop), and it makes the intent clear. What I 
just really don't like is a flow where we construct a specific command, 
then call the low-level function to issue it, only that function then 
actually jumps back out into another high-level function which 
constructs a *different* command. This code is already a maze of twisty 
little passages...

> Also, arm_smmu_tlb_inv_context() does a full range ATC invalidation
> instead of a given range like what arm_smmu_tlb_inv_range_domain()
> currently does. So, it might be a bit overkill.
> 
> Combining all your comments, we'd have something like this:

TBH I'd be inclined to refactor a bit harder, maybe break out some 
VMID-based helpers for orthogonality, and aim for a flow like:

	if (over threshold)
		tlb_inv_domain()
	else if (stage 1)
		tlb_inv_range_asid()
	else
		tlb_inv_range_vmid()
	atc_inv_range()

or possibly if you prefer:

	if (stage 1) {
		if (over threshold)
			tlb_inv_asid()
		else
			tlb_inv_range_asid()
	} else {
		if (over threshold)
			tlb_inv_vmid()
		else
			tlb_inv_range_vmid()
	}
	atc_inv_range()

where the latter maybe trades more verbosity for less duplication 
overall - I'd probably have to try both to see which looks nicer in the 
end. And obviously if there's any chance of inventing a clear and 
consistent naming scheme in the process, that would be lovely.

Thanks,
Robin.

> -------------------------------------------------------------------
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> index 7614739ea2c1..2967a6634c7c 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> @@ -1937,12 +1937,22 @@ static void arm_smmu_tlb_inv_range_domain(unsigned long iova, size_t size,
>   					  size_t granule, bool leaf,
>   					  struct arm_smmu_domain *smmu_domain)
>   {
> +	struct io_pgtable_cfg *cfg =
> +		&io_pgtable_ops_to_pgtable(smmu_domain->pgtbl_ops)->cfg;
>   	struct arm_smmu_cmdq_ent cmd = {
>   		.tlbi = {
>   			.leaf	= leaf,
>   		},
>   	};
>   
> +	/*
> +	 * If the given size is so large that it would end up with too many
> +	 * TLBI commands in CMDQ, short-circuit directly to a full
> +	 * invalidation.
> +	 */
> +	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_RANGE_INV) &&
> +	    size >= granule * (1UL << cfg->bits_per_level))
> +		return arm_smmu_tlb_inv_context(smmu_domain);
> +
>   	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
>   		cmd.opcode	= smmu_domain->smmu->features & ARM_SMMU_FEAT_E2H ?
>   				  CMDQ_OP_TLBI_EL2_VA : CMDQ_OP_TLBI_NH_VA;
> @@ -1964,6 +1974,8 @@ void arm_smmu_tlb_inv_range_asid(unsigned long iova, size_t size, int asid,
>   				 size_t granule, bool leaf,
>   				 struct arm_smmu_domain *smmu_domain)
>   {
> +	struct io_pgtable_cfg *cfg =
> +		&io_pgtable_ops_to_pgtable(smmu_domain->pgtbl_ops)->cfg;
>   	struct arm_smmu_cmdq_ent cmd = {
>   		.opcode	= smmu_domain->smmu->features & ARM_SMMU_FEAT_E2H ?
>   			  CMDQ_OP_TLBI_EL2_VA : CMDQ_OP_TLBI_NH_VA,
> @@ -1973,6 +1985,14 @@ void arm_smmu_tlb_inv_range_asid(unsigned long iova, size_t size, int asid,
>   		},
>   	};
>   
> +	/*
> +	 * If the given size is so large that it would end up with too many
> +	 * TLBI commands in CMDQ, short-circuit directly to a full
> +	 * invalidation.
> +	 */
> +	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_RANGE_INV) &&
> +	    size >= granule * (1UL << cfg->bits_per_level))
> +		return arm_smmu_tlb_inv_asid(smmu_domain->smmu, asid);
> +
>   	__arm_smmu_tlb_inv_range(&cmd, iova, size, granule, smmu_domain);
>   }
>   
> -------------------------------------------------------------------
> 
> You're sure that you prefer this, right?
> 
> Thanks
> Nicolin
