Date:   Mon, 21 Jun 2021 13:08:51 +0100
From:   John Garry <john.garry@...wei.com>
To:     Robin Murphy <robin.murphy@....com>,
        Lu Baolu <baolu.lu@...ux.intel.com>,
        "joro@...tes.org" <joro@...tes.org>,
        "will@...nel.org" <will@...nel.org>,
        "dwmw2@...radead.org" <dwmw2@...radead.org>,
        "corbet@....net" <corbet@....net>
CC:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
        Linuxarm <linuxarm@...wei.com>,
        "Leizhen (ThunderTown)" <thunder.leizhen@...wei.com>,
        "chenxiang (M)" <chenxiang66@...ilicon.com>,
        "linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
        <nadav.amit@...il.com>
Subject: Re: [PATCH v14 6/6] iommu: Remove mode argument from
 iommu_set_dma_strict()

On 21/06/2021 12:59, Robin Murphy wrote:

+ Nadav

>> On a personal level I would be happy with that approach, but I think
>> it's better to not start changing things right away in a follow-up series.
>>
>> So how about we add this patch (which replaces 6/6 "iommu: Remove mode
>> argument from iommu_set_dma_strict()")?
>>
>> Robin, any opinion?
> For me it boils down to whether there are any realistic workloads where
> non-strict mode *would* still perform better under virtualisation. The
> only reason for the user to explicitly pass "iommu.strict=0" is because
> they expect it to increase unmap performance; if it's only ever going to
> lead to an unexpected performance loss, I don't see any value in
> overriding the kernel's decision purely for the sake of subservience.
> 
> If there *are* certain valid cases for allowing it for people who really
> know what they're doing, then we should arguably also log a counterpart
> message to say "we're honouring your override but beware it may have the
> opposite effect to what you expect" for the benefit of other users who
> assume it's a generic go-faster knob. At that point it starts getting
> non-trivial enough that I'd want to know for sure it's worthwhile.
> 
> The other reason this might be better to revisit later is that an AMD
> equivalent is still in flight[1], and there might be more that can
> eventually be factored out. I think both series are pretty much good to
> merge for 5.14, but time's already tight to sort out the conflicts which
> exist as-is, without making them any worse.

ok, fine. Can revisit.

As for getting these merged, I'll dry-run merging both of those series 
to see the conflicts. It doesn't look too problematic at a glance.
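
On the counterpart-message idea: if we do end up keeping the override, 
something along the lines of the below could live in 
iommu_set_dma_strict() itself, where iommu_dma_strict is visible. This 
is only a sketch of the shape it might take, not part of this patch, 
and the exact wording is made up:

	bool iommu_set_dma_strict(bool force)
	{
		if (force || !(iommu_cmd_line & IOMMU_CMD_LINE_STRICT)) {
			iommu_dma_strict = true;
			return true;
		}
		/*
		 * We are honouring an explicit iommu.strict=0 from the
		 * cmdline, so warn that lazy mode may actually be slower
		 * in this configuration.
		 */
		if (!iommu_dma_strict)
			pr_warn("iommu.strict=0 honoured, but it may reduce unmap performance on this system\n");
		return false;
	}

That way someone passing iommu.strict=1 sees no extra noise, and only 
the non-strict override triggers the warning.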

Cheers,
John

> 
> Robin.
> 
> [1]
> https://lore.kernel.org/linux-iommu/20210616100500.174507-3-namit@vmware.com/
> 
>> ------->8---------
>>
>> [PATCH] iommu/vt-d: Make "iommu.strict" override batching due to
>>    virtualization
>>
>> As a change in policy, make the iommu.strict cmdline argument override
>> whether we disable batching due to virtualization.
>>
>> The API of iommu_set_dma_strict() is changed to accept a "force"
>> argument, which means that we always set iommu_dma_strict true,
>> regardless of whether it was already set via the cmdline. Also return a
>> boolean to tell whether iommu_dma_strict was set.
>>
>> Note that in all pre-existing callsites of iommu_set_dma_strict(), the
>> strict argument was true, so that argument is dropped.
>>
>> Signed-off-by: John Garry <john.garry@...wei.com>
>>
>> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
>> index 06666f9d8116..e8d65239b359 100644
>> --- a/drivers/iommu/intel/iommu.c
>> +++ b/drivers/iommu/intel/iommu.c
>> @@ -4380,10 +4380,8 @@ int __init intel_iommu_init(void)
>>             * is likely to be much lower than the overhead of synchronizing
>>             * the virtual and physical IOMMU page-tables.
>>             */
>> -        if (cap_caching_mode(iommu->cap)) {
>> +        if (cap_caching_mode(iommu->cap) && iommu_set_dma_strict(false))
>>                pr_info_once("IOMMU batching disallowed due to virtualization\n");
>> -            iommu_set_dma_strict(true);
>> -        }
>>            iommu_device_sysfs_add(&iommu->iommu, NULL,
>>                           intel_iommu_groups,
>>                           "%s", iommu->name);
>> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
>> index 60b1ec42e73b..1434bee64af3 100644
>> --- a/drivers/iommu/iommu.c
>> +++ b/drivers/iommu/iommu.c
>> @@ -349,10 +349,14 @@ static int __init iommu_dma_setup(char *str)
>>    }
>>    early_param("iommu.strict", iommu_dma_setup);
>>
>> -void iommu_set_dma_strict(bool strict)
>> +/* Return true if we set iommu_dma_strict */
>> +bool iommu_set_dma_strict(bool force)
>>    {
>> -    if (strict || !(iommu_cmd_line & IOMMU_CMD_LINE_STRICT))
>> -        iommu_dma_strict = strict;
>> +    if (force || !(iommu_cmd_line & IOMMU_CMD_LINE_STRICT)) {
>> +        iommu_dma_strict = true;
>> +        return true;
>> +    }
>> +    return false;
>>    }
>>
>>    bool iommu_get_dma_strict(struct iommu_domain *domain)
>> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
>> index 32d448050bf7..f17b20234296 100644
>> --- a/include/linux/iommu.h
>> +++ b/include/linux/iommu.h
>> @@ -476,7 +476,7 @@ int iommu_enable_nesting(struct iommu_domain *domain);
>>    int iommu_set_pgtable_quirks(struct iommu_domain *domain,
>>            unsigned long quirks);
>>
>> -void iommu_set_dma_strict(bool val);
>> +bool iommu_set_dma_strict(bool force);
>>    bool iommu_get_dma_strict(struct iommu_domain *domain);
>>
>>    extern int report_iommu_fault(struct iommu_domain *domain, struct device *dev,
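
PS: for anyone reading the iommu.c hunk above without the file open, 
the cmdline parser it keys off looks roughly like this in my tree 
(reproduced from memory for context only, so check the actual source):

	static int __init iommu_dma_setup(char *str)
	{
		int ret = kstrtobool(str, &iommu_dma_strict);

		if (!ret)
			iommu_cmd_line |= IOMMU_CMD_LINE_STRICT;
		return ret;
	}
	early_param("iommu.strict", iommu_dma_setup);

So IOMMU_CMD_LINE_STRICT is only set when the user explicitly passes 
iommu.strict, which is exactly what the force/override check in 
iommu_set_dma_strict() tests for.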
