Message-ID: <c4e4480d-bed3-4a4a-b07f-496006c5785f@huawei.com>
Date: Wed, 16 Oct 2024 19:47:14 +0800
From: Kefeng Wang <wangkefeng.wang@...wei.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>, <akpm@...ux-foundation.org>,
<hughd@...gle.com>
CC: <willy@...radead.org>, <david@...hat.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/2] Improve the tmpfs large folio read performance
On 2024/10/16 18:09, Baolin Wang wrote:
> tmpfs already supports PMD-sized large folios, but the tmpfs read
> operation still copies data at PAGE_SIZE granularity, which is
> suboptimal. This series changes the copy to folio granularity, which
> improves read performance.
>
> Using 'fio bs=64k' to read a 1G tmpfs file populated with 2M THPs, I
> see about a 20% performance improvement, and no regression with
> bs=4k. I also ran functional tests with the xfstests suite and found
> no regressions with the following xfstests config:
> FSTYP=tmpfs
> export TEST_DIR=/mnt/tempfs_mnt
> export TEST_DEV=/mnt/tempfs_mnt
> export SCRATCH_MNT=/mnt/scratchdir
> export SCRATCH_DEV=/mnt/scratchdir
>
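For reference, a fio invocation matching the description above might
look like the following; this is a sketch, and the job name, target
directory, and ioengine are assumptions (the cover letter only gives
bs=64k and a 1G file on tmpfs):

  fio --name=tmpfs-read --directory=/mnt/tmpfs --ioengine=psync \
      --rw=read --bs=64k --size=1g --numjobs=1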
Oh, we made the same changes; my bonnie test (./bonnie -d /tmp -s 1024)
sees a similar improvement (19.2% with huge=always) with our internal
changes :)
> Baolin Wang (2):
> mm: shmem: update iocb->ki_pos directly to simplify tmpfs read logic
> mm: shmem: improve the tmpfs large folio read performance
>
> mm/shmem.c | 54 ++++++++++++++++++++++--------------------------------
> 1 file changed, 22 insertions(+), 32 deletions(-)
>
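For readers of the archive: below is a minimal sketch (not the actual
patch) of the folio-granularity copy the cover letter describes. It
assumes the upstream copy_folio_to_iter() helper from linux/uio.h; the
function name shmem_copy_folio_sketch and its parameters are
hypothetical:

/*
 * Copy as much of the folio as the iov allows in a single call,
 * instead of looping over it in PAGE_SIZE chunks with
 * copy_page_to_iter().
 */
static size_t shmem_copy_folio_sketch(struct folio *folio, loff_t pos,
				      size_t remaining, struct iov_iter *to)
{
	size_t offset = offset_in_folio(folio, pos);
	/* Bytes available in this folio from the current position. */
	size_t n = min(folio_size(folio) - offset, remaining);

	flush_dcache_folio(folio);
	folio_mark_accessed(folio);
	/* One copy can span a whole (possibly 2M) folio. */
	return copy_folio_to_iter(folio, offset, n, to);
}

A caller in the read path would advance iocb->ki_pos by the returned
byte count and loop until the iov_iter is exhausted.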