Message-ID: <6e09d847-fb7f-1ec1-02bf-f0c8b315845f@huawei.com>
Date: Thu, 3 Dec 2020 14:54:27 +0000
From: John Garry <john.garry@...wei.com>
To: Dmitry Safonov <0x7f454c46@...il.com>,
Will Deacon <will@...nel.org>
CC: Joerg Roedel <joro@...tes.org>, <robin.murphy@....com>,
Catalin Marinas <catalin.marinas@....com>,
<kernel-team@...roid.com>, <xiyou.wangcong@...il.com>,
<linuxarm@...wei.com>, <iommu@...ts.linux-foundation.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [RESEND PATCH v3 0/4] iommu/iova: Solve longterm IOVA issue
On 03/12/2020 06:04, Dmitry Safonov wrote:
> On Tue, 1 Dec 2020 at 21:50, Will Deacon <will@...nel.org> wrote:
>> On Tue, 17 Nov 2020 18:25:30 +0800, John Garry wrote:
>>> This series contains a patch to solve the longterm IOVA issue which
>>> leizhen originally tried to address at [0].
>>>
>>> A sieved kernel log is at the following link, showing periodic dumps
>>> of IOVA sizes, per CPU and per depot bin, per IOVA size granule:
>>> https://raw.githubusercontent.com/hisilicon/kernel-dev/topic-iommu-5.10-iova-debug-v3/aging_test
>>>
>>> [...]
>> Applied the final patch to arm64 (for-next/iommu/iova), thanks!
>>
>> [4/4] iommu: avoid taking iova_rbtree_lock twice
>> https://git.kernel.org/arm64/c/3a651b3a27a1
> Glad it made it into -next; 2 years ago I couldn't convince the iommu
> maintainer it was worth it (but with a different justification):
> https://lore.kernel.org/linux-iommu/20180621180823.805-3-dima@arista.com/
Hi Dmitry,
I was unaware of your series, and it's unfortunate that your
optimization never made it. However, I had a quick look there and, in
case you did not notice, the code which you were proposing to change in
patch #1 for intel-iommu.c was removed in commit e70b081c6f37
("iommu/vt-d: Remove IOVA handling code from the non-dma_ops path").
BTW, split_and_remove_iova() has no in-tree users anymore, so I can
send a patch to delete it if nobody else wants to.
BTW2, there are some more patches in my series which could use a review,
if you're feeling very helpful :)
Cheers,
John