Message-ID: <Z29yxfeZMowr27ZZ@gourry-fedora-PF4VCD3F>
Date: Fri, 27 Dec 2024 22:38:45 -0500
From: Gregory Price <gourry@...rry.net>
To: "Huang, Ying" <ying.huang@...ux.alibaba.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org, nehagholkar@...a.com,
abhishekd@...a.com, kernel-team@...a.com, david@...hat.com,
nphamcs@...il.com, akpm@...ux-foundation.org, hannes@...xchg.org,
kbusch@...a.com
Subject: Re: [RFC v2 PATCH 0/5] Promotion of Unmapped Page Cache Folios.
On Fri, Dec 27, 2024 at 02:09:50PM -0500, Gregory Price wrote:
> On Fri, Dec 27, 2024 at 10:40:36AM -0500, Gregory Price wrote:
Just adding some follow-up data.

The test is essentially the following (a rough C sketch of the loop is
included further down):
  membind(1)                  - node1 is cxl
  read()                      - filecache is initialized on cxl
  set_mempolicy(MPOL_DEFAULT) - allow migrations

  while true:
      start = time()
      read()
      print(time() - start)

  // external events cause migration/drop cache while running
baseline: .93-1s/read()
from cxl: ~1.15-1.2s/read()
So we are seeing anywhere from 20-25% overhead from the filecache living
on CXL right out of the box. At least we have a good, clear signal, right?
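For reference, here is a minimal C sketch of that measurement loop. It
is an illustration of the steps above, not the exact harness; the file
path, node number, and buffer size are assumptions.

/*
 * Rough sketch of the loop described above -- not the exact harness.
 * File path, node number, and buffer size are illustrative assumptions.
 *
 * Build: gcc -O2 -o readbench readbench.c -lnuma
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <numaif.h>     /* set_mempolicy(), MPOL_BIND, MPOL_DEFAULT */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BUF_SZ (1 << 20)

/* Read the whole file once from the start, returning seconds elapsed. */
static double read_all(int fd, char *buf)
{
        struct timespec t0, t1;

        lseek(fd, 0, SEEK_SET);
        clock_gettime(CLOCK_MONOTONIC, &t0);
        while (read(fd, buf, BUF_SZ) > 0)
                ;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(int argc, char **argv)
{
        const char *path = argc > 1 ? argv[1] : "testfile";  /* assumption */
        unsigned long nodemask = 1UL << 1;  /* node1 == cxl in this setup */
        char *buf = malloc(BUF_SZ);
        int fd = open(path, O_RDONLY);

        if (fd < 0 || !buf) {
                perror("setup");
                return 1;
        }

        /* membind(1): page cache for this task allocates on the cxl node */
        if (set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8 + 1)) {
                perror("set_mempolicy(MPOL_BIND)");
                return 1;
        }

        /* read(): populate the filecache on cxl */
        read_all(fd, buf);

        /* set_mempolicy(MPOL_DEFAULT): allow migrations from here on */
        if (set_mempolicy(MPOL_DEFAULT, NULL, 0)) {
                perror("set_mempolicy(MPOL_DEFAULT)");
                return 1;
        }

        /* external events (promotion, drop_caches) happen while this runs */
        for (;;)
                printf("%.3f s/read()\n", read_all(fd, buf));
}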
tests:
echo 3 > /proc/sys/vm/drop_caches - filecache refills into node 1
result => ~.95-1s/read()
We return to the baseline, which is expected.
enable promotion - numactl shows promotion occurs
result => ~1.15-1.2s/read()
No effect?! Even offlining the dax devices does nothing.
enable promotion, wait for it to complete, drop cache
after promotion => 1.15-1.2s/read()
after drop cache => .95-1s/read()
Back to baseline!
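A quick way to sanity-check where the cache folios actually sit at each
of these steps is to watch the per-node FilePages counters in sysfs.
A minimal sketch, assuming node0 is DRAM and node1 is cxl as above:

#include <stdio.h>
#include <string.h>

/* Print the "FilePages" line from a node's sysfs meminfo. */
static void show_filepages(int node)
{
        char path[64], line[256];
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/node/node%d/meminfo", node);
        f = fopen(path, "r");
        if (!f) {
                perror(path);
                return;
        }
        while (fgets(line, sizeof(line), f))
                if (strstr(line, "FilePages"))
                        fputs(line, stdout);
        fclose(f);
}

int main(void)
{
        show_filepages(0);      /* DRAM node */
        show_filepages(1);      /* cxl node  */
        return 0;
}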
This seems to imply that the overhead we're seeing from read(), even
when the filecache is on the remote node, isn't actually related to the
memory speed, but is instead likely related to some kind of stale
metadata in the filesystem or filecache layers.
This is going to take me a bit to figure out. I need to isolate the
filesystem influence (we are using btrfs; I want to make sure this
behavior is consistent on other filesystems).
~Gregory