Message-ID: <fe761ea8-650a-4118-bd53-e1e4408fea9c@oracle.com>
Date: Wed, 23 Apr 2025 12:21:15 -0700
From: jane.chu@...cle.com
To: logane@...tatee.com, hch@....de, gregkh@...uxfoundation.org, jgg@...pe.ca,
willy@...radead.org, kch@...dia.com, axboe@...nel.dk,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-pci@...r.kernel.org, linux-nvme@...ts.infradead.org,
linux-block@...r.kernel.org
Cc: jane.chu@...cle.com
Subject: Report: Performance regression from ib_umem_get on zone device pages
Hi,
I recently looked into an mr cache registration regression that shows up
with device-dax backed mr memory, but not with system RAM backed mr memory.
It boils down to commit
1567b49d1a40 lib/scatterlist: add check when merging zone device pages
posted as
[PATCH v11 5/9] lib/scatterlist: add check when merging zone device pages
https://lore.kernel.org/all/20221021174116.7200-6-logang@deltatee.com/
which went into v6.2-rc1.
The line that introduced the regression is reached via:

ib_uverbs_reg_mr
  mlx5_ib_reg_user_mr
    ib_umem_get
      sg_alloc_append_table_from_pages
        pages_are_mergeable
          zone_device_pages_have_same_pgmap(a, b)
            return a->pgmap == b->pgmap    <-------
Sub "return a->pgmap == b->pgmap" with "return true" purely as an
experiment and the regression reliably went away.
So this looks like a case of CPU cache thrashing, but I don't know to
fix it. Could someone help address the issue? I'd be happy to help
verifying.
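For reference, the check the patch added looks roughly like this
(paraphrased from the lore posting linked above, not copied verbatim from
the tree I tested on):

/* include/linux/memremap.h */
static inline bool zone_device_pages_have_same_pgmap(const struct page *a,
						     const struct page *b)
{
	if (is_zone_device_page(a) != is_zone_device_page(b))
		return false;
	if (!is_zone_device_page(a))
		return true;
	/* the line I replaced with "return true" in the experiment */
	return a->pgmap == b->pgmap;
}

/* lib/scatterlist.c */
static bool pages_are_mergeable(struct page *a, struct page *b)
{
	if (page_to_pfn(a) != page_to_pfn(b) + 1)
		return false;
	if (!zone_device_pages_have_same_pgmap(a, b))
		return false;
	return true;
}

In the regression case every merge attempt dereferences two struct pages
to compare their pgmap pointers, which is where I suspect the extra cache
pressure comes from.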
My test system is a two-socket bare metal Intel(R) Xeon(R) Platinum
8352Y with 12 Intel NVDIMMs installed.
# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Model name: Intel(R) Xeon(R) Platinum 8352Y CPU @ 2.20GHz
L1d cache: 48K <----
L1i cache: 32K
L2 cache: 1280K
L3 cache: 49152K
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
# cat /proc/meminfo
MemTotal: 263744088 kB
MemFree: 252151828 kB
MemAvailable: 251806008 kB
There are 12 device-dax instances configured exactly the same -
# ndctl list -m devdax | egrep -m 1 'map'
"map":"mem",
# ndctl list -m devdax | egrep -c 'map'
12
# ndctl list -m devdax
[
  {
    "dev":"namespace1.0",
    "mode":"devdax",
    "map":"mem",
    "size":135289372672,
    "uuid":"a67deda8-e5b3-4a6e-bea2-c1ebdc0fd996",
    "chardev":"dax1.0",
    "align":2097152
  },
[..]
The system is idle except when running the mr registration test. The test
registers 61440 mrs from 64 threads in parallel; each mr is 2MB and is
backed by device-dax memory.
The flow of a single test run:
1. reserve virtual address space for (61440 * 2MB) via mmap with
   PROT_NONE and MAP_ANONYMOUS | MAP_NORESERVE | MAP_PRIVATE
2. mmap ((61440 * 2MB) / 12) from each of the 12 device-dax instances into
   the reserved virtual address space sequentially, to form a contiguous
   VA space
3. touch the entire mapped memory page by page
4. take a timestamp,
   create 40 pthreads, each thread registers (61440 / 40) mrs via
   ibv_reg_mr(),
   take another timestamp after pthread_join()
5. wait 10 seconds
6. repeat step 4, except deregistering via ibv_dereg_mr()
7. tear down everything
I hope the above description is helpful as I am not at liberty to share
the test code.
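That said, here is a minimal, untested sketch of the flow (single
device-dax instance, single thread, error handling omitted - not the
actual test code):

#include <fcntl.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>
#include <infiniband/verbs.h>

#define MR_SIZE (2UL << 20)	/* 2MB per mr */
#define NR_MRS  61440UL

int main(void)
{
	size_t total = NR_MRS * MR_SIZE;

	/* step 1: reserve the virtual address range */
	char *base = mmap(NULL, total, PROT_NONE,
			  MAP_ANONYMOUS | MAP_NORESERVE | MAP_PRIVATE, -1, 0);

	/* step 2: map device-dax memory over the reservation (the real
	 * test maps 1/12 of the range from each of the 12 instances,
	 * back to back, and keeps the mappings 2MB aligned) */
	int fd = open("/dev/dax1.0", O_RDWR);
	mmap(base, total, PROT_READ | PROT_WRITE,
	     MAP_SHARED | MAP_FIXED, fd, 0);

	/* step 3: touch every page */
	for (size_t off = 0; off < total; off += 4096)
		base[off] = 0;

	/* step 4: register the mrs (the real test spreads this loop over
	 * 40 pthreads and timestamps around it) */
	struct ibv_device **devs = ibv_get_device_list(NULL);
	struct ibv_context *ctx = ibv_open_device(devs[0]);
	struct ibv_pd *pd = ibv_alloc_pd(ctx);
	struct ibv_mr **mrs = calloc(NR_MRS, sizeof(*mrs));

	for (size_t i = 0; i < NR_MRS; i++)
		mrs[i] = ibv_reg_mr(pd, base + i * MR_SIZE, MR_SIZE,
				    IBV_ACCESS_LOCAL_WRITE);

	/* step 5: wait */
	sleep(10);

	/* step 6: deregister */
	for (size_t i = 0; i < NR_MRS; i++)
		ibv_dereg_mr(mrs[i]);

	/* step 7: tear down */
	ibv_dealloc_pd(pd);
	ibv_close_device(ctx);
	ibv_free_device_list(devs);
	free(mrs);
	munmap(base, total);
	close(fd);
	return 0;
}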
Here is the highlight from perf diff, comparing the culprit (PATCH 5/9)
against the baseline (PATCH 4/9):
baseline = 49580e690755 block: add check when merging zone device pages
culprit  = 1567b49d1a40 lib/scatterlist: add check when merging zone
device pages
# Baseline  Delta Abs  Shared Object      Symbol
# ........  .........  .................  ....................................
#
  26.53%    -19.46%    [kernel.kallsyms]  [k] follow_page_mask
  49.15%    +11.56%    [kernel.kallsyms]  [k] native_queued_spin_lock_slowpath
            +1.38%     [kernel.kallsyms]  [k] pages_are_mergeable   <----
            +0.82%     [kernel.kallsyms]  [k] __rdma_block_iter_next
   0.74%    +0.68%     [kernel.kallsyms]  [k] osq_lock
            +0.56%     [kernel.kallsyms]  [k] mlx5r_umr_update_mr_pas
   2.25%    +0.49%     [kernel.kallsyms]  [k] follow_pmd_mask.isra.0
   1.92%    +0.37%     [kernel.kallsyms]  [k] _raw_spin_lock
   1.13%    +0.35%     [kernel.kallsyms]  [k] __get_user_pages
With the baseline, each mr registration takes ~2950 nanoseconds, +- 50ns;
with the culprit, each mr registration takes ~6850 nanoseconds, +- 50ns.
Regards,
-jane