Date:   Thu, 22 Jul 2021 18:40:18 +0100
From:   Robin Murphy <robin.murphy@....com>
To:     Ming Lei <ming.lei@...hat.com>, John Garry <john.garry@...wei.com>
Cc:     linux-kernel@...r.kernel.org, linux-nvme@...ts.infradead.org,
        iommu@...ts.linux-foundation.org, Will Deacon <will@...nel.org>,
        linux-arm-kernel@...ts.infradead.org
Subject: Re: [bug report] iommu_dma_unmap_sg() is very slow when running IO
 from remote numa node

On 2021-07-22 16:54, Ming Lei wrote:
[...]
>> If you are still keen to investigate more, then you can try either of these:
>>
>> - add iommu.strict=0 to the cmdline
>>
>> - use perf record+annotate to find the hotspot
>>    - For this you need to enable pseudo-NMI with two steps:
>>      CONFIG_ARM64_PSEUDO_NMI=y in defconfig
>>      Add irqchip.gicv3_pseudo_nmi=1 to the cmdline
>>
>>      See https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/Kconfig#n1745
>>      Your kernel log should show:
>>      [    0.000000] GICv3: Pseudo-NMIs enabled using forced ICC_PMR_EL1
>> synchronisation
> 
> OK, will try the above tomorrow.

Thanks, I was also going to suggest the latter, since what
arm_smmu_cmdq_issue_cmdlist() does with IRQs masked should be most
indicative of where the slowness stems from.
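
For reference, once pseudo-NMI is working, something along these lines
ought to catch the hotspot (an untested sketch; the event options and
workload placeholder are just examples):

    # kernel built with CONFIG_ARM64_PSEUDO_NMI=y and booted with
    # irqchip.gicv3_pseudo_nmi=1
    perf record -g -e cycles -a -- <your IO workload>
    perf report --stdio
    perf annotate arm_smmu_cmdq_issue_cmdlist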

FWIW I would expect iommu.strict=0 to give a proportional reduction in
SMMU overhead for both cases, since it should effectively mean only 1/256
as many invalidations are issued (freed IOVAs are batched in a flush
queue and invalidated 256 at a time).
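
If it's easier, a one-shot way to flip that (assuming grubby is
available; otherwise edit your bootloader config by hand):

    grubby --update-kernel=ALL --args="iommu.strict=0"
    reboot
    cat /proc/cmdline    # confirm the option is present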

Could you also check whether the SMMU platform devices have "numa_node" 
properties exposed in sysfs (and if so whether the values look right), 
and share all the SMMU output from the boot log?
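
In case it helps, something along these lines should show them (the
platform device name glob is a guess; adjust to match your system):

    for d in /sys/devices/platform/*smmu*; do
            echo "$d -> numa_node=$(cat "$d"/numa_node 2>/dev/null)"
    done
    # -1 means no NUMA affinity was described to the kernel
    dmesg | grep -i smmu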

I still suspect that the most significant bottleneck is likely to be
MMIO access across chips, incurring the CML/CCIX latency twice for every
single read; but it's also possible that the performance of the SMMU
itself is reduced if its NUMA affinity is not described and we end up
allocating things like pagetables on the wrong node as well.
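
On an ACPI system that affinity would come from the proximity domain in
the IORT, so if you have acpica-tools handy, something like this should
show whether it's described at all (file names are the acpidump
defaults):

    acpidump -b          # dumps each table to a file, e.g. iort.dat
    iasl -d iort.dat     # disassembles to iort.dsl
    grep -i -B2 -A2 proximity iort.dsl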

>> But my impression is that this may be a HW implementation issue, considering
>> we don't see such a huge drop off on our HW.
> 
> Besides mpere-mtjade, we saw bad NVMe performance on ThunderX2® CN99XX too,
> but I don't have a CN99XX system to check whether the issue is the same as
> this one.

I know Cavium's SMMU implementation didn't support MSIs, so that case 
would quite possibly lean towards the MMIO polling angle as well (albeit 
with a very different interconnect).

Robin.
