Message-ID: <a5f98ff2-2d93-7306-9af9-a7bfc347757e@huawei.com>
Date: Wed, 8 Jul 2020 14:00:54 +0100
From: John Garry <john.garry@...wei.com>
To: <will@...nel.org>, <robin.murphy@....com>
CC: <joro@...tes.org>, <trivial@...nel.org>,
<linux-arm-kernel@...ts.infradead.org>,
<iommu@...ts.linux-foundation.org>, <linux-kernel@...r.kernel.org>,
<linuxarm@...wei.com>, <maz@...nel.org>
Subject: Re: [PATCH 0/4] iommu/arm-smmu-v3: Improve cmdq lock efficiency
On 22/06/2020 18:28, John Garry wrote:
Hi, can you guys let me know if this is on the radar at all?
I have been raising this performance issue since January and haven't
really had any response.
thanks
> As mentioned in [0], the CPU may consume many cycles processing
> arm_smmu_cmdq_issue_cmdlist(). One issue we found is that the cmpxchg()
> loop used to claim space on the queue takes approximately 25% of the
> cycles spent in this function.
>
> This series removes that cmpxchg().
>
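For reference, the loop in question follows the pattern below. This is
a minimal userspace analogue for illustration only, not the driver
code: the packing of prod/cons into one 64-bit word reflects the
driver, but Q_SLOTS, q_state and claim_slots() are made-up names and
the owner/wrap handling is elided. Each CPU snapshots the combined
queue state and retries the CAS until it owns its slots, so with many
CPUs the failed attempts and cacheline bouncing are where the cycles
go:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define Q_SLOTS	1024u	/* illustrative queue depth */

/* low 32 bits: prod index, high 32 bits: cons index (simplified) */
static _Atomic uint64_t q_state;

static inline uint32_t q_prod(uint64_t v) { return (uint32_t)v; }
static inline uint32_t q_cons(uint64_t v) { return (uint32_t)(v >> 32); }

/* Claim @n slots on the queue; returns the prod index now owned. */
static uint32_t claim_slots(uint32_t n)
{
	uint64_t old = atomic_load_explicit(&q_state, memory_order_relaxed);

	for (;;) {
		uint64_t next;

		/* Wait until the queue has space for @n entries. */
		while (q_prod(old) - q_cons(old) + n > Q_SLOTS)
			old = atomic_load_explicit(&q_state,
						   memory_order_relaxed);

		/* Bump prod only; cons is advanced by the consumer. */
		next = (old & 0xffffffff00000000ull) |
		       (uint32_t)(q_prod(old) + n);

		/*
		 * One winner per round; each loser is handed the fresh
		 * value back in @old and retries.  This retry loop is
		 * the hotspot described above.
		 */
		if (atomic_compare_exchange_weak_explicit(&q_state, &old,
				next, memory_order_relaxed,
				memory_order_relaxed))
			return q_prod(old);
	}
}

int main(void)
{
	printf("claimed prod index %u\n", claim_slots(16));
	return 0;
}
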
> For my NVMe test with 3x NVMe SSDs, I'm getting a ~24% throughput
> increase:
> Before: 1310 IOPS
> After:  1630 IOPS
>
> I also have a test harness to check the rate of DMA map+unmaps we can
> achieve:
>
> CPU count      32     64    128
> Before:     63187  19418  10169
> After:      93287  44789  15862
>
> (unit is map+unmaps per CPU per second)
>
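The harness itself isn't included in this post; per CPU, the operation
being timed is essentially the pattern below (a hypothetical sketch:
map_unmap_loop(), @dev, @buf and @iters are placeholders, not harness
code):

#include <linux/dma-mapping.h>
#include <linux/sizes.h>

/* Run @iters map+unmap pairs of a 4K buffer on the calling CPU. */
static void map_unmap_loop(struct device *dev, void *buf,
			   unsigned long iters)
{
	unsigned long i;

	for (i = 0; i < iters; i++) {
		dma_addr_t dma = dma_map_single(dev, buf, SZ_4K,
						DMA_TO_DEVICE);

		if (dma_mapping_error(dev, dma))
			break;	/* mapping failed; abandon the run */

		dma_unmap_single(dev, dma, SZ_4K, DMA_TO_DEVICE);
	}
}

With the SMMU enabled, every unmap (in strict mode) issues TLB
invalidation commands through arm_smmu_cmdq_issue_cmdlist(), which is
why the command-queue contention dominates as the CPU count grows.
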
> [0] https://lore.kernel.org/linux-iommu/B926444035E5E2439431908E3842AFD24B86DB@DGGEMI525-MBS.china.huawei.com/T/#ma02e301c38c3e94b7725e685757c27e39c7cbde3
>
> John Garry (4):
> iommu/arm-smmu-v3: Fix trivial typo
> iommu/arm-smmu-v3: Calculate bits for prod and owner
> iommu/arm-smmu-v3: Always issue a CMD_SYNC per batch
> iommu/arm-smmu-v3: Remove cmpxchg() in arm_smmu_cmdq_issue_cmdlist()
>
> drivers/iommu/arm-smmu-v3.c | 233 +++++++++++++++++++++++-------------
> 1 file changed, 151 insertions(+), 82 deletions(-)
>