Message-ID: <20220424113543.456342-1-guoxuenan@huawei.com>
Date:   Sun, 24 Apr 2022 19:35:43 +0800
From:   Guo Xuenan <guoxuenan@...wei.com>
To:     <willy@...radead.org>
CC:     <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
        <houtao1@...wei.com>, <fangwei1@...wei.com>, <guoxuenan@...wei.com>
Subject: Questions about folio allocation.

Hi Matthew,

You have done a lot of work on folios, and many folio-related patches have
been incorporated into mainline. I'm very interested in your excellent work,
so I ran some sequential read tests (using a fixed read length on a 10G file)
and noticed two things:
1. Different read lengths can affect the folio order.
   When using a 100KB read length for sequential reads, readahead ends up
   allocating only order-0 or order-2 folios (see the sketch after this list).
2. The folio order cannot reach MAX_PAGECACHE_ORDER when the read length is
   small (e.g. less than 32KB).
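
For reference, here is a small user-space model of how I understand the order
selection in page_cache_ra_order() (mm/readahead.c). The constant value and
the details are my own paraphrase of the code, so please correct me if this is
not how it actually behaves:

/*
 * Stand-alone model of the folio order selection I believe
 * page_cache_ra_order() performs; a simplified paraphrase, not the
 * kernel code itself.
 */
#include <stdio.h>

#define MAX_PAGECACHE_ORDER	9	/* assuming HPAGE_PMD_ORDER on x86-64 */

/* Print the folio orders used to fill one readahead window. */
static void model_ra_window(unsigned long index, unsigned long ra_size,
			    unsigned int new_order)
{
	unsigned long limit = index + ra_size - 1;

	/* Bump the order, but never beyond what the window can hold. */
	if (new_order < MAX_PAGECACHE_ORDER) {
		new_order += 2;
		if (new_order > MAX_PAGECACHE_ORDER)
			new_order = MAX_PAGECACHE_ORDER;
		while ((1UL << new_order) > ra_size)
			new_order--;
	}

	while (index <= limit) {
		unsigned int order = new_order;

		/* Shrink the order so the folio is naturally aligned. */
		if (index & ((1UL << order) - 1)) {
			order = __builtin_ctzl(index);
			if (order == 1)		/* order-1 folios are not used */
				order = 0;
		}
		/* Don't let the folio extend past the window. */
		while (index + (1UL << order) - 1 > limit) {
			if (--order == 1)
				order = 0;
		}

		printf("index %lu: order %u\n", index, order);
		index += 1UL << order;
	}
}

int main(void)
{
	/*
	 * A 100KB read is 25 pages; model one window starting at index 0.
	 * Later windows depend on how ra->size and the passed-in order
	 * evolve, so this only illustrates the alignment and size capping.
	 */
	model_ra_window(0, 25, 0);
	return 0;
}

With a 25-page window this prints six order-2 folios followed by one order-0
folio, which looks consistent with the orders I observed for 100KB reads.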

As you mentioned in [1],
"The heuristic for choosing which folio sizes will surely need some tuning"
I wonder:
(1) Why does the folio order need to be aligned with the page index? Is this
necessary, or are there certain restrictions?
(2) For the page cache, using large folios saves loops when allocating pages.
I also ran some tests with dropping caches, and it takes much less time: there
is a twenty-fold performance improvement when dropping the 10G file's cache.
Can I conclude that the page cache should tend to use large-order folios?
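
(For reference, the reduction in entry count alone is large. Assuming 10G
means 10GiB, 4KB base pages and order-9 (2MB) folios, a fully cached file is

    10 * 1024 * 1024 / 4 = 2,621,440 order-0 entries, versus
    2,621,440 / 512      =     5,120 order-9 entries,

so anything that walks the page cache entry by entry has up to 512 times
fewer entries to visit when dropping the cache.)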

[1] https://lore.kernel.org/linux-mm/20220204195852.1751729-72-willy@infradead.org/

Thanks,
Guo Xuenan
