Message-ID: <1592315993-164290-1-git-send-email-john.garry@huawei.com>
Date: Tue, 16 Jun 2020 21:59:49 +0800
From: John Garry <john.garry@...wei.com>
To: <will@...nel.org>, <robin.murphy@....com>
CC: <joro@...tes.org>, <trivial@...nel.org>,
<linux-arm-kernel@...ts.infradead.org>,
<iommu@...ts.linux-foundation.org>, <linux-kernel@...r.kernel.org>,
<linuxarm@...wei.com>, <maz@...nel.org>,
"John Garry" <john.garry@...wei.com>
Subject: [PATCH RFC v2 0/4] iommu/arm-smmu-v3: Improve cmdq lock efficiency
As mentioned in [0], the CPU may consume many cycles processing
arm_smmu_cmdq_issue_cmdlist(). One issue we found is that the cmpxchg()
loop used to get space on the queue takes approx 25% of the cycles in
this function. This series removes that cmpxchg().
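To illustrate the general idea (this is not the patch code itself): a
producer which claims queue space with a cmpxchg() retry loop makes
every losing CPU re-read and retry, whereas a single atomic
fetch-and-add lets every claim succeed first time. The sketch below
uses userspace C11 atomics standing in for the kernel's atomic ops,
and all names in it are invented:

#include <stdatomic.h>
#include <stdint.h>

struct sketch_queue {
	_Atomic uint32_t prod;		/* producer index */
};

/* Old scheme: spin until our compare-and-swap wins. */
static uint32_t claim_space_cmpxchg(struct sketch_queue *q, uint32_t n)
{
	uint32_t old = atomic_load(&q->prod);

	/* On failure, 'old' is refreshed with the current value. */
	while (!atomic_compare_exchange_weak(&q->prod, &old, old + n))
		;	/* a real queue would also re-check for free space */

	return old;			/* start of the claimed slots */
}

/* New scheme: one fetch-and-add, no retry loop. */
static uint32_t claim_space_fetch_add(struct sketch_queue *q, uint32_t n)
{
	/* Full-queue handling must be dealt with separately. */
	return atomic_fetch_add(&q->prod, n);
}

Under contention the cmpxchg() loop scales poorly because only one CPU
makes progress per bounce of the cache line; the fetch-and-add version
serialises in the memory system instead of in a software loop.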
For my NVMe test with 3x NVMe SSDs, I'm getting a ~24% throughput
increase:
Before: 1310 IOPs
After: 1630 IOPs
I also have a test harness to check the rate of DMA map+unmaps we can
achieve (a rough sketch of the harness loop follows the results below):
CPU count:      32      64     128
Before:      63187   19418   10169
After:       93287   44789   15862

(unit is map+unmaps per CPU per second)
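The harness itself is not part of this posting; as a rough sketch of
what each per-CPU worker could look like (NR_ITERS and the device
pointer are assumptions, and error handling is minimal):

#include <linux/dma-mapping.h>
#include <linux/ktime.h>
#include <linux/math64.h>
#include <linux/slab.h>

#define NR_ITERS	(1 << 20)	/* arbitrary for the sketch */

/* Time a tight loop of streaming DMA map+unmap pairs on one CPU. */
static u64 measure_map_unmap_rate(struct device *dev)
{
	void *buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
	dma_addr_t dma;
	ktime_t start;
	u64 ns;
	int i;

	if (!buf)
		return 0;

	start = ktime_get();
	for (i = 0; i < NR_ITERS; i++) {
		dma = dma_map_single(dev, buf, PAGE_SIZE, DMA_TO_DEVICE);
		if (dma_mapping_error(dev, dma))
			break;
		dma_unmap_single(dev, dma, PAGE_SIZE, DMA_TO_DEVICE);
	}
	ns = ktime_to_ns(ktime_sub(ktime_get(), start));

	kfree(buf);
	/* map+unmaps per second for this CPU */
	return ns ? div64_u64((u64)i * NSEC_PER_SEC, ns) : 0;
}

With strict invalidation, every unmap issues TLB invalidation commands
on the SMMU command queue, so the per-CPU rate above is increasingly
dominated by cmdq contention as the CPU count grows.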
There's no specific problem that I know of with this series, as
previous issues should now be fixed, but I'm a bit nervous about how we
deal with the queue becoming full and wrapping, and I want to do more
testing.
Thanks
[0] https://lore.kernel.org/linux-iommu/B926444035E5E2439431908E3842AFD24B86DB@DGGEMI525-MBS.china.huawei.com/T/#ma02e301c38c3e94b7725e685757c27e39c7cbde3
John Garry (4):
iommu/arm-smmu-v3: Fix trivial typo
iommu/arm-smmu-v3: Calculate bits for prod and owner
iommu/arm-smmu-v3: Always issue a CMD_SYNC per batch
iommu/arm-smmu-v3: Remove cmpxchg() in arm_smmu_cmdq_issue_cmdlist()
drivers/iommu/arm-smmu-v3.c | 210 ++++++++++++++++++++++--------------
1 file changed, 131 insertions(+), 79 deletions(-)
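As an aside, my reading of the "Calculate bits for prod and owner"
patch, sketched with invented names (not taken from the patch, and
using the same C11 atomics as the earlier sketch): with a queue of
2^log2size entries, the producer index only needs the low log2size
bits of a 32-bit word, leaving the bits above it free for an owner
count, so one atomic add can advance prod and register ownership in a
single step:

#define Q_LOG2SIZE	8			/* 256-entry queue, assumed */
#define Q_PROD_MASK	((1U << Q_LOG2SIZE) - 1)
#define Q_OWNER_SHIFT	Q_LOG2SIZE
#define Q_PROD(v)	((v) & Q_PROD_MASK)	/* low bits: prod */
#define Q_OWNER(v)	((v) >> Q_OWNER_SHIFT)	/* upper bits: owner count */

/* Claim n slots and bump the owner count in one atomic operation. */
static inline uint32_t q_claim(_Atomic uint32_t *prod_owner, uint32_t n)
{
	return atomic_fetch_add(prod_owner, n + (1U << Q_OWNER_SHIFT));
}

Wrap handling is deliberately ignored here; the real code has to stop
prod from overflowing into the owner bits.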
--
2.26.2