Message-ID: <1598018062-175608-1-git-send-email-john.garry@huawei.com>
Date: Fri, 21 Aug 2020 21:54:20 +0800
From: John Garry <john.garry@...wei.com>
To: <will@...nel.org>, <robin.murphy@....com>
CC: <joro@...tes.org>, <linux-arm-kernel@...ts.infradead.org>,
<iommu@...ts.linux-foundation.org>, <maz@...nel.org>,
<linuxarm@...wei.com>, <linux-kernel@...r.kernel.org>,
John Garry <john.garry@...wei.com>
Subject: [PATCH v2 0/2] iommu/arm-smmu-v3: Improve cmdq lock efficiency
As mentioned in [0], the CPU may consume many cycles processing
arm_smmu_cmdq_issue_cmdlist(). One issue we find is that the cmpxchg()
loop used to get space on the queue takes a lot of time once many CPUs
start contending - from experiment, with 64 CPUs contending for the cmdq,
the cmpxchg() success rate is ~1 in 12, which is poor, but not totally
awful.

This series removes that cmpxchg() and replaces it with an atomic_add,
the same way the actual cmdq maintains its prod pointer.
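
To make the contrast concrete, here is a rough sketch of the two ways of
claiming space on a shared prod index. This is illustrative only, not the
driver's code: the function names are made up, and queue-full checks, wrap
handling and the cons pointer are all omitted.

#include <linux/atomic.h>
#include <linux/types.h>

static atomic_t prod = ATOMIC_INIT(0);

/* cmpxchg-style: retries whenever another CPU updated prod first */
static u32 claim_cmpxchg(u32 n)
{
	u32 old, cur;

	cur = atomic_read(&prod);
	do {
		old = cur;
		cur = atomic_cmpxchg(&prod, old, old + n);
	} while (cur != old);

	return old;
}

/* atomic-add style: one unconditional RMW per CPU, no retry loop */
static u32 claim_add(u32 n)
{
	return atomic_fetch_add(n, &prod);
}

Under heavy contention every failed cmpxchg() is wasted work, whereas the
fetch-add always makes forward progress in a single operation - that is
where the gains in the numbers below come from.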
For my NVMe test with 3x NVMe SSDs, I'm getting a ~24% throughput
increase:
Before: 1250K IOPs
After: 1550K IOPs
I also have a test harness to check the rate of DMA map+unmaps we can
achieve:
CPU count:     8      16     32     64
Before:      282K    115K    36K    11K
After:       302K    193K    80K    30K

(unit is map+unmaps per CPU per second)
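
For reference, this is not the actual harness, just a hedged sketch of the
kind of per-CPU loop it times (dev, buf and iters are assumed to be set up
elsewhere; the size and direction are arbitrary):

#include <linux/dma-mapping.h>
#include <linux/sizes.h>

static void map_unmap_loop(struct device *dev, void *buf, unsigned long iters)
{
	unsigned long i;

	for (i = 0; i < iters; i++) {
		dma_addr_t addr = dma_map_single(dev, buf, SZ_4K,
						 DMA_TO_DEVICE);

		if (dma_mapping_error(dev, addr))
			break;

		dma_unmap_single(dev, addr, SZ_4K, DMA_TO_DEVICE);
	}
}

The unmap side is what typically ends up issuing invalidation commands on
the cmdq, which is the path under test here.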
[0] https://lore.kernel.org/linux-iommu/B926444035E5E2439431908E3842AFD24B86DB@DGGEMI525-MBS.china.huawei.com/T/#ma02e301c38c3e94b7725e685757c27e39c7cbde3
Differences to v1:
- Simplify by dropping patch to always issue a CMD_SYNC
- Use 64b atomic add, keeping prod in a separate 32b field
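
A minimal sketch of that second point, purely to illustrate the idea of a
single 64b atomic add when prod sits in its own 32b field (the layout,
field meanings and names here are assumptions for the example, not the
actual structures in the patch; carry out of the low half is ignored):

#include <linux/atomic.h>
#include <linux/kernel.h>

/* assumed layout: prod in bits 0-31, some other 32b count in bits 32-63 */
static atomic64_t llq = ATOMIC64_INIT(0);

static void claim_space(u32 n_cmds, u32 *prod, u32 *count)
{
	/* one 64b RMW: +n_cmds to the prod half, +1 to the upper half */
	u64 old = atomic64_fetch_add((1ULL << 32) | n_cmds, &llq);

	*prod = lower_32_bits(old);
	*count = upper_32_bits(old);
}

Both halves are read back from the single returned value, so no separate
load or retry loop is needed.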
John Garry (2):
iommu/arm-smmu-v3: Calculate max commands per batch
iommu/arm-smmu-v3: Remove cmpxchg() in arm_smmu_cmdq_issue_cmdlist()
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 166 ++++++++++++++------
1 file changed, 114 insertions(+), 52 deletions(-)
--
2.26.2