Message-ID: <f76c15c5-9ee4-a825-73c8-223564a26d74@gmail.com>
Date: Tue, 1 Mar 2022 22:01:27 +0800
From: Tianyu Lan <ltykernel@...il.com>
To: Christoph Hellwig <hch@....de>
Cc: kys@...rosoft.com, haiyangz@...rosoft.com, sthemmin@...rosoft.com,
wei.liu@...nel.org, decui@...rosoft.com, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com,
x86@...nel.org, hpa@...or.com, m.szyprowski@...sung.com,
robin.murphy@....com, michael.h.kelley@...rosoft.com,
Tianyu Lan <Tianyu.Lan@...rosoft.com>,
iommu@...ts.linux-foundation.org, linux-hyperv@...r.kernel.org,
linux-kernel@...r.kernel.org, vkuznets@...hat.com,
brijesh.singh@....com, konrad.wilk@...cle.com,
parri.andrea@...il.com, thomas.lendacky@....com,
"krish.sadhukhan@...cle.com" <krish.sadhukhan@...cle.com>,
"kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH V2 1/2] Swiotlb: Add swiotlb_alloc_from_low_pages switch
On 3/1/2022 7:53 PM, Christoph Hellwig wrote:
> On Fri, Feb 25, 2022 at 10:28:54PM +0800, Tianyu Lan wrote:
>> One more perspective is that one device may have multiple queues, and
>> each queue should have an independent swiotlb bounce buffer to avoid
>> spinlock overhead. The number of queues is only known to the device
>> driver, so the new API needs to be called in the device driver
>> according to the queue count.
>
> Well, given how hell bent people are on bounce buffering we might
> need some scalability work there anyway.
According to my test on a local machine with two VMs, a Linux guest
without the swiotlb bounce buffer, or with the fix patch from Andi
Kleen, achieves about 40G/s throughput, but only 24-25G/s with the
current swiotlb code. On top of the throughput loss, the spinlock
contention also increases CPU usage.
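
To make the per-queue idea discussed above concrete, here is a minimal
sketch (my illustration, not the API proposed in this series) of how a
driver might set up one bounce pool per hardware queue at probe time,
once the queue count is known. struct queue_bounce_pool and
my_driver_alloc_bounce_pools() are made-up names, and the plain
kzalloc() backing is only a placeholder; a real implementation would
have to allocate memory shared with the hypervisor through a new
swiotlb helper.

#include <linux/device.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct queue_bounce_pool {
	spinlock_t lock;	/* contended only by this queue */
	void *vaddr;		/* bounce buffer backing this queue */
	size_t size;		/* bytes available in this pool */
};

struct my_queue {
	struct queue_bounce_pool *pool;
	/* ... real per-queue state ... */
};

/* Called from the device driver once nqueues is known (probe time). */
static int my_driver_alloc_bounce_pools(struct device *dev,
					struct my_queue *queues,
					unsigned int nqueues,
					size_t pool_bytes)
{
	unsigned int i;

	for (i = 0; i < nqueues; i++) {
		struct queue_bounce_pool *pool;

		pool = kzalloc(sizeof(*pool), GFP_KERNEL);
		if (!pool)
			goto err;

		spin_lock_init(&pool->lock);
		/*
		 * Placeholder: a real pool must come from memory shared
		 * with the hypervisor (e.g. via set_memory_decrypted()
		 * or a new swiotlb helper), not ordinary kernel memory.
		 */
		pool->vaddr = kzalloc(pool_bytes, GFP_KERNEL);
		if (!pool->vaddr) {
			kfree(pool);
			goto err;
		}
		pool->size = pool_bytes;
		queues[i].pool = pool;
	}
	return 0;

err:
	dev_err(dev, "failed to allocate per-queue bounce pool %u\n", i);
	while (i--) {
		kfree(queues[i].pool->vaddr);
		kfree(queues[i].pool);
	}
	return -ENOMEM;
}

With a layout like this, each queue's map/unmap path takes only its own
pool->lock, so queues no longer serialize on the single global swiotlb
spinlock that the 24-25G/s measurement above points to.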