Message-ID: <23f4a64d-5977-1816-8faa-fe7691ace2ff@gmail.com>
Date: Mon, 21 Feb 2022 23:14:58 +0800
From: Tianyu Lan <ltykernel@...il.com>
To: Christoph Hellwig <hch@....de>
Cc: kys@...rosoft.com, haiyangz@...rosoft.com, sthemmin@...rosoft.com,
wei.liu@...nel.org, decui@...rosoft.com, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com,
x86@...nel.org, hpa@...or.com, hch@...radead.org,
m.szyprowski@...sung.com, robin.murphy@....com,
michael.h.kelley@...rosoft.com,
Tianyu Lan <Tianyu.Lan@...rosoft.com>,
iommu@...ts.linux-foundation.org, linux-hyperv@...r.kernel.org,
linux-kernel@...r.kernel.org, vkuznets@...hat.com,
brijesh.singh@....com, konrad.wilk@...cle.com,
parri.andrea@...il.com, thomas.lendacky@....com
Subject: Re: [PATCH V2 1/2] Swiotlb: Add swiotlb_alloc_from_low_pages switch
On 2/15/2022 11:32 PM, Tianyu Lan wrote:
> On 2/14/2022 9:58 PM, Christoph Hellwig wrote:
>> On Mon, Feb 14, 2022 at 07:28:40PM +0800, Tianyu Lan wrote:
>>> On 2/14/2022 4:19 PM, Christoph Hellwig wrote:
>>>> Adding a function to set the flag doesn't really change much. As Robin
>>>> pointed out last time you should fine a way to just call
>>>> swiotlb_init_with_tbl directly with the memory allocated the way you
>>>> like it. Or given that we have quite a few of these trusted hypervisor
>>>> schemes maybe add an argument to swiotlb_init that specifies how to
>>>> allocate the memory.
>>>
>>> Thanks for your suggestion. I will try the first approach.
>>
>> Take a look at the SWIOTLB_ANY flag in this WIP branch:
>>
>>
>> http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/swiotlb-init-cleanup
>>
>>
>> That being said I'm not sure that either this flag or the existing powerpc
>> code is actually the right thing to do. We still need the 4G limited
>> buffer to support devices with addressing limitations. So I think we need
>> an additional io_tlb_mem instance for the devices without addressing
>> limitations instead.
>>
>
> Hi Christoph:
> Thanks for your patches. I tested these patches in a Hyper-V trusted
> VM and the system can't boot up. I am debugging and will report back.
Sorry, the boot failure is not related to these patches; the issue
has been fixed in the latest upstream code.
There is a performance bottleneck due to the io_tlb_mem spin lock during
performance testing. All devices' IO queues use the same io_tlb_mem
instance, and its spin lock introduces overhead. There is a fix
patch from Andi Kleen on GitHub. Could you have a look?
https://github.com/intel/tdx/commit/4529b5784c141782c72ec9bd9a92df2b68cb7d45
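For reference, the contention pattern can be sketched in plain userspace
C: instead of one pool guarded by a single lock, the slots are split into
independently locked areas and each CPU starts its search in a different
area, so concurrent allocators rarely touch the same lock. All names here
are hypothetical and do not match the real swiotlb implementation; this
only illustrates the locking idea behind the fix.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <string.h>

#define NAREAS 4
#define SLOTS_PER_AREA 8

/* One independently locked chunk of the bounce-buffer pool. */
struct tlb_area {
	pthread_mutex_t lock;	/* per-area lock instead of one global lock */
	bool used[SLOTS_PER_AREA];
};

static struct tlb_area areas[NAREAS];

static void pool_init(void)
{
	for (int i = 0; i < NAREAS; i++) {
		pthread_mutex_init(&areas[i].lock, NULL);
		memset(areas[i].used, 0, sizeof(areas[i].used));
	}
}

/* Allocate one slot, starting from the area hinted by the cpu id so
 * that different CPUs contend on different locks in the common case. */
static int pool_alloc(int cpu)
{
	for (int n = 0; n < NAREAS; n++) {
		int idx = (cpu + n) % NAREAS;
		struct tlb_area *a = &areas[idx];

		pthread_mutex_lock(&a->lock);
		for (int s = 0; s < SLOTS_PER_AREA; s++) {
			if (!a->used[s]) {
				a->used[s] = true;
				pthread_mutex_unlock(&a->lock);
				return idx * SLOTS_PER_AREA + s;
			}
		}
		pthread_mutex_unlock(&a->lock);
	}
	return -1;	/* whole pool exhausted */
}

static void pool_free(int slot)
{
	struct tlb_area *a = &areas[slot / SLOTS_PER_AREA];

	pthread_mutex_lock(&a->lock);
	a->used[slot % SLOTS_PER_AREA] = false;
	pthread_mutex_unlock(&a->lock);
}
```

With this layout an allocation from CPU 0 and one from CPU 1 land in
different areas and never serialize on the same mutex, which is the
effect the linked patch is after.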
Thanks.