Message-Id: <20220718012818.107051-1-chao.gao@intel.com>
Date: Mon, 18 Jul 2022 09:28:16 +0800
From: Chao Gao <chao.gao@...el.com>
To: linux-kernel@...r.kernel.org, iommu@...ts.linux.dev
Cc: dave.hansen@...el.com, len.brown@...el.com, tony.luck@...el.com,
rafael.j.wysocki@...el.com, reinette.chatre@...el.com,
dan.j.williams@...el.com, kirill.shutemov@...ux.intel.com,
sathyanarayanan.kuppuswamy@...ux.intel.com,
ilpo.jarvinen@...ux.intel.com, ak@...ux.intel.com,
alexander.shishkin@...ux.intel.com, Chao Gao <chao.gao@...el.com>
Subject: [RFC v2 0/2] swiotlb performance optimizations

Intent of this post:

Seek reviews from Intel reviewers and anyone else on the list who is
interested in IO performance in confidential VMs. Some Acked-by/Reviewed-by
tags are needed before I add the swiotlb maintainers to the to/cc list and
ask them for a review.

Changes from v1 to v2:
- rebase to the latest dma-mapping tree.
- drop the duplicate patch for mitigating lock contention.
- re-collect perf data.

swiotlb is now widely used by confidential VMs. This series optimizes
swiotlb to reduce cache misses and lock contention during bounce buffer
allocation/freeing and during memory bouncing, improving IO workload
performance in confidential VMs.
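
To give a rough feel for the direction, patch 1/2 tracks free slots with a
bitmap, so finding a run of contiguous free slots becomes a scan over densely
packed bits. The code below is only a simplified sketch of that idea, not the
patch itself: the structure, field names and locking are assumptions made for
the example, and initialisation (bitmap_zalloc(), spin_lock_init()) is
omitted; see the actual patches for the real data structures.

#include <linux/bitmap.h>
#include <linux/spinlock.h>

/* Simplified stand-in for the swiotlb pool; see kernel/dma/swiotlb.c. */
struct io_tlb_sketch {
	unsigned long *busy;	/* one bit per IO TLB slot, set = in use */
	unsigned long nslabs;	/* total number of slots in the pool */
	spinlock_t lock;	/* protects the bitmap */
};

/* Find and claim 'nslots' contiguous free slots; returns index or -1. */
static long sketch_alloc_slots(struct io_tlb_sketch *mem,
			       unsigned int nslots, unsigned long align_mask)
{
	unsigned long flags, index;

	spin_lock_irqsave(&mem->lock, flags);
	index = bitmap_find_next_zero_area(mem->busy, mem->nslabs, 0,
					   nslots, align_mask);
	if (index >= mem->nslabs) {
		spin_unlock_irqrestore(&mem->lock, flags);
		return -1;	/* no contiguous run large enough */
	}
	bitmap_set(mem->busy, index, nslots);
	spin_unlock_irqrestore(&mem->lock, flags);
	return index;
}

/* Release a previously allocated run of slots. */
static void sketch_free_slots(struct io_tlb_sketch *mem,
			      unsigned long index, unsigned int nslots)
{
	unsigned long flags;

	spin_lock_irqsave(&mem->lock, flags);
	bitmap_clear(mem->busy, index, nslots);
	spin_unlock_irqrestore(&mem->lock, flags);
}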

Here are some FIO tests we did to demonstrate the improvement.

Test setup
----------
A normal VM with 8 vCPUs and 32G memory; swiotlb is enabled by swiotlb=force.
FIO block size is 4K and iodepth is 256. Note that a normal VM is used so
that people who lack the hardware needed to host confidential VMs can
reproduce the results below.
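
For reference, a run along these lines looks roughly like the following. The
block size, iodepth and job count match the description above; the target
device, ioengine and runtime are illustrative assumptions rather than the
exact options used for the numbers below.

  # Guest kernel command line: force streaming DMA through swiotlb.
  swiotlb=force

  # 1-job sequential read, 4K block size, iodepth 256.
  fio --name=seqread --filename=/dev/vdb --rw=read --bs=4k --iodepth=256 \
      --numjobs=1 --ioengine=libaio --direct=1 --runtime=60 --time_based \
      --group_reporting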

Results
-------
1 FIO job      read/write    IOPS (k)
vanilla        read               216
               write              251
optimized      read               250
               write              270

1-job FIO sequential read/write performance increases by about 16% and 8%,
respectively.

Chao Gao (2):
  swiotlb: use bitmap to track free slots
  swiotlb: Allocate memory in a cache-friendly way

 include/linux/swiotlb.h |   8 ++-
 kernel/dma/swiotlb.c    | 127 +++++++++++++++++-----------------------
 2 files changed, 60 insertions(+), 75 deletions(-)
--
2.25.1