Message-ID: <ZGuDGftmxsF35C9P@8bytes.org>
Date: Mon, 22 May 2023 16:58:33 +0200
From: Joerg Roedel <joro@...tes.org>
To: Vasant Hegde <vasant.hegde@....com>
Cc: Jerry Snitselaar <jsnitsel@...hat.com>,
Peng Zhang <zhangpeng.00@...edance.com>,
Robin Murphy <robin.murphy@....com>, will@...nel.org,
iommu@...ts.linux.dev, linux-kernel@...r.kernel.org,
Li Bin <huawei.libin@...wei.com>,
Xie XiuQi <xiexiuqi@...wei.com>,
Yang Yingliang <yangyingliang@...wei.com>,
Suravee Suthikulpanit <suravee.suthikulpanit@....com>
Subject: Re: [PATCH] iommu: Avoid softlockup and rcu stall in fq_flush_timeout().

Hi,

On Fri, Apr 28, 2023 at 11:14:54AM +0530, Vasant Hegde wrote:
> Ping. Any suggestion on below proposal (schedule work on each CPU to free iova)?

Optimizing the flush-timeout path seems to be treating the symptoms
rather than the cause. The main question to look into first is why so
many CPUs are competing for the IOVA allocator lock.

That is exactly the situation the flush-queue code is there to avoid;
obviously it does not scale to the workloads tested here. Any chance
you could check why?
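
For context, the allocation fast path is meant to stay off that lock
entirely. A condensed sketch of alloc_iova_fast() from
drivers/iommu/iova.c (paraphrased and abridged; the power-of-two
roundup and the retry logic are left out, and details vary by kernel
version):

	unsigned long alloc_iova_fast(struct iova_domain *iovad,
			unsigned long size, unsigned long limit_pfn,
			bool flush_rcache)
	{
		unsigned long iova_pfn;
		struct iova *new_iova;

		/* Fast path: take a cached range from the per-CPU
		 * rcache; no global lock is taken here. */
		iova_pfn = iova_rcache_get(iovad, size, limit_pfn + 1);
		if (iova_pfn)
			return iova_pfn;

		/* Slow path: the rbtree allocator, where all CPUs
		 * serialize on iovad->iova_rbtree_lock. */
		new_iova = alloc_iova(iovad, size, limit_pfn, true);
		if (!new_iova)
			return 0;

		return new_iova->pfn_lo;
	}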

My guess is that the allocations are too big to be covered by the
allocation sizes the flush-queue code supports. But maybe that is
something that can be fixed, or the flush-queue code could even be
changed to auto-adapt to the allocation patterns of the device driver?
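
The cutoff in question lives in the rcache code; roughly, from
drivers/iommu/iova.c (again paraphrased, exact values and names may
differ between kernel versions):

	/* Log of the max cached IOVA range size, in pages: anything
	 * above 2^5 = 32 pages (128K with 4K pages) bypasses the
	 * per-CPU caches. */
	#define IOVA_RANGE_CACHE_MAX_SIZE 6

	static bool iova_rcache_insert(struct iova_domain *iovad,
			unsigned long pfn, unsigned long size)
	{
		unsigned int log_size = order_base_2(size);

		/* Too big for any size class: the caller falls back
		 * to free_iova() and thus to the rbtree lock. */
		if (log_size >= IOVA_RANGE_CACHE_MAX_SIZE)
			return false;

		return __iova_rcache_insert(iovad,
				&iovad->rcaches[log_size], pfn);
	}

Since fq_ring_free() hands entries back via free_iova_fast(), every
range above that cutoff ends up contending on the rbtree lock from the
flush-queue timer as well.
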
Regards,
Joerg