Message-Id: <20220428141429.1637028-1-ltykernel@gmail.com>
Date: Thu, 28 Apr 2022 10:14:27 -0400
From: Tianyu Lan <ltykernel@...il.com>
To: hch@...radead.org, m.szyprowski@...sung.com, robin.murphy@....com,
michael.h.kelley@...rosoft.com, kys@...rosoft.com
Cc: Tianyu Lan <Tianyu.Lan@...rosoft.com>,
iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org,
vkuznets@...hat.com, brijesh.singh@....com, konrad.wilk@...cle.com,
hch@....de, wei.liu@...nel.org, parri.andrea@...il.com,
thomas.lendacky@....com, linux-hyperv@...r.kernel.org,
andi.kleen@...el.com, kirill.shutemov@...el.com
Subject: [RFC PATCH 0/2] swiotlb: Introduce swiotlb device allocation function
From: Tianyu Lan <Tianyu.Lan@...rosoft.com>

Traditionally swiotlb was not performance critical because it was only
used for slow devices. But in some setups, like TDX/SEV confidential
guests, all IO has to go through swiotlb. Currently swiotlb only has a
single lock. Under high IO load with multiple CPUs this can lead to
significant lock contention on the swiotlb lock.
This patchset splits the swiotlb into individual areas, each with its own
lock. Swiotlb map/allocate requests are spread evenly across the areas, and
each allocation is freed back to the area it was taken from.
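
As a rough illustration only (the structure and field names below are
hypothetical, not the patch's actual code), the split amounts to giving each
area its own lock and picking an area per request, e.g. by CPU number:

#include <linux/spinlock.h>
#include <linux/smp.h>

/* Hypothetical per-area bookkeeping: each area owns its own lock, so
 * mappings issued on different CPUs need not serialize on one global
 * swiotlb lock. */
struct io_tlb_area {
	unsigned long used;	/* slots currently in use in this area */
	unsigned int index;	/* where the next free-slot search starts */
	spinlock_t lock;	/* protects only this area's slots */
};

/* Simplified stand-in for the real struct io_tlb_mem. */
struct io_tlb_mem_sketch {
	unsigned int nareas;		/* how many areas the pool is split into */
	struct io_tlb_area *areas;	/* one entry per area */
};

/* Spread map requests evenly over the areas, e.g. by CPU number, so
 * that concurrent CPUs tend to take different locks. */
static unsigned int swiotlb_area_index(struct io_tlb_mem_sketch *mem)
{
	return raw_smp_processor_id() % mem->nareas;
}
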
Patch 2 introduces a helper function that allocates bounce buffer memory
from the default IO TLB pool in the new IO TLB block unit and sets up
per-device IO TLB areas for the device's queues, so the queues do not
contend on the same spinlock. The number of areas is chosen by the device
driver according to its queue count.
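
For illustration, a driver with N queues could request a device bounce pool
split into one area per queue. The function name, signature, and size below
are assumptions made for this sketch and may not match the interface added
by patch 2:

#include <linux/device.h>
#include <linux/types.h>

/* Hypothetical helper: allocate a bounce buffer pool of 'size' bytes for
 * 'dev' from the default IO TLB pool, split into 'area_num' areas. */
int swiotlb_device_allocate(struct device *dev, unsigned int area_num,
			    unsigned long size);

/* Hypothetical driver-side usage: one IO TLB area per device queue.
 * The 64 MB pool size is an arbitrary example. */
static int netvsc_setup_bounce(struct device *dev, unsigned int nr_queues)
{
	return swiotlb_device_allocate(dev, nr_queues, 64UL << 20);
}
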
In a network test between a traditional VM and a Confidential VM, throughput
improves from ~20 Gb/s to ~34 Gb/s with this patchset.

Tianyu Lan (2):
  swiotlb: Split up single swiotlb lock
  Swiotlb: Add device bounce buffer allocation interface

 include/linux/swiotlb.h |  58 +++++++
 kernel/dma/swiotlb.c    | 340 +++++++++++++++++++++++++++++++++++-----
 2 files changed, 362 insertions(+), 36 deletions(-)
--
2.25.1