Message-ID: <ychdxbopmzkdpebjeyyegkq2xmhknmggtwppc7adman5qxxdn2@kgv4fiok4p2j>
Date: Thu, 17 Aug 2023 09:39:54 -0700
From: Jerry Snitselaar <jsnitsel@...hat.com>
To: Robin Murphy <robin.murphy@....com>
Cc: joro@...tes.org, will@...nel.org, iommu@...ts.linux.dev,
linux-kernel@...r.kernel.org, john.g.garry@...cle.com,
zhangzekun11@...wei.com
Subject: Re: [PATCH 0/2] iommu/iova: Make the rcache depot properly flexible
On Mon, Aug 14, 2023 at 06:53:32PM +0100, Robin Murphy wrote:
> Hi all,
>
> Prompted by [1], which reminded me I started this a while ago, I've now
> finished off my own attempt at sorting out the horrid lack of rcache
> scalability. It's become quite clear that given the vast range of system
> sizes and workloads there is no right size for a fixed depot array, so I
> reckon we're better off not having one at all.
>
> Note that the reclaim threshold and rate are chosen fairly arbitrarily -
> it's enough of a challenge to get my 4-core dev board with spinning disk
> and gigabit ethernet to push anything into a depot at all :)
>
> Thanks,
> Robin.
>
> [1] https://lore.kernel.org/linux-iommu/20230811130246.42719-1-zhangzekun11@huawei.com
>
>
> Robin Murphy (2):
> iommu/iova: Make the rcache depot scale better
> iommu/iova: Manage the depot list size
>
> drivers/iommu/iova.c | 94 ++++++++++++++++++++++++++++++--------------
> 1 file changed, 65 insertions(+), 29 deletions(-)
>
> --
> 2.39.2.101.g768bb238c484.dirty
>
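To make the depot change above concrete: the series replaces the
fixed-size depot array with a dynamically sized list of magazines plus
periodic reclaim. A rough userspace C sketch of that shape follows;
the names (magazine, depot_push, depot_trim) and the threshold value
are illustrative, not the kernel code:

#include <stdlib.h>
#include <stddef.h>

#define DEPOT_LOW_WATER 4	/* illustrative reclaim threshold */

struct magazine {
	struct magazine *next;
	/* cached IOVA entries would live here */
};

struct depot {
	struct magazine *head;	/* unbounded list replaces the fixed array */
	size_t depth;
};

static void depot_push(struct depot *d, struct magazine *mag)
{
	mag->next = d->head;
	d->head = mag;
	d->depth++;
}

static struct magazine *depot_pop(struct depot *d)
{
	struct magazine *mag = d->head;

	if (mag) {
		d->head = mag->next;
		d->depth--;
	}
	return mag;
}

/* Free surplus magazines until the depot is back under the mark; the
 * actual series drives this from delayed work (which is where
 * IOVA_DEPOT_DELAY comes in) rather than from a direct call. */
static void depot_trim(struct depot *d)
{
	while (d->depth > DEPOT_LOW_WATER)
		free(depot_pop(d));
}

int main(void)
{
	struct depot d = { 0 };
	int i;

	/* Fill well past the low-water mark, then reclaim the surplus. */
	for (i = 0; i < 10; i++)
		depot_push(&d, calloc(1, sizeof(struct magazine)));
	depot_trim(&d);		/* depth drops from 10 back to 4 */

	while (d.head)
		free(depot_pop(&d));
	return 0;
}
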
I'm trying to hunt down one of the systems where we've seen issues in
this area before, but most of those cases involved NVMe drives, and
commit 3710e2b056cb ("nvme-pci: clamp max_hw_sectors based on DMA
optimized limitation") has helped them. I ran the patches overnight,
with IOVA_DEPOT_DELAY fixed up, on a couple of Genoa-based systems
(384 cores) without issue.
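
For reference, the gist of that nvme-pci clamp, paraphrased rather
than quoted from the commit (the wrapper function here is made up;
dma_opt_mapping_size() is the real DMA API it keys off):

#include <linux/types.h>
#include <linux/minmax.h>
#include <linux/dma-mapping.h>

/* Cap the controller's max transfer size at the DMA layer's preferred
 * mapping size, so large I/O keeps fitting the IOVA rcaches instead of
 * falling back to slow-path allocations. */
static u32 clamp_max_hw_sectors(struct device *dev, u32 max_hw_sectors)
{
	/* dma_opt_mapping_size() returns bytes; >> 9 gives 512B sectors */
	return min_t(u32, max_hw_sectors,
		     dma_opt_mapping_size(dev) >> 9);
}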
Regards,
Jerry