Message-Id: <1588812129-8596-44-git-send-email-anthony.yznaga@oracle.com>
Date: Wed, 6 May 2020 17:42:09 -0700
From: Anthony Yznaga <anthony.yznaga@...cle.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: willy@...radead.org, corbet@....net, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, x86@...nel.org, hpa@...or.com,
dave.hansen@...ux.intel.com, luto@...nel.org, peterz@...radead.org,
rppt@...ux.ibm.com, akpm@...ux-foundation.org, hughd@...gle.com,
ebiederm@...ssion.com, masahiroy@...nel.org, ardb@...nel.org,
ndesaulniers@...gle.com, dima@...ovin.in, daniel.kiper@...cle.com,
nivedita@...m.mit.edu, rafael.j.wysocki@...el.com,
dan.j.williams@...el.com, zhenzhong.duan@...cle.com,
jroedel@...e.de, bhe@...hat.com, guro@...com,
Thomas.Lendacky@....com, andriy.shevchenko@...ux.intel.com,
keescook@...omium.org, hannes@...xchg.org, minchan@...nel.org,
mhocko@...nel.org, ying.huang@...el.com,
yang.shi@...ux.alibaba.com, gustavo@...eddedor.com,
ziqian.lzq@...fin.com, vdavydov.dev@...il.com,
jason.zeng@...el.com, kevin.tian@...el.com, zhiyuan.lv@...el.com,
lei.l.li@...el.com, paul.c.lai@...el.com, ashok.raj@...el.com,
linux-fsdevel@...r.kernel.org, linux-doc@...r.kernel.org,
kexec@...ts.infradead.org
Subject: [RFC 43/43] PKRAM: improve index alignment of pkram_link entries

To take advantage of optimizations when adding pages to the page cache
via shmem_insert_pages(), improve the likelihood that the pages array
passed to shmem_insert_pages() starts on an aligned index.  Do this by
starting a new pkram_link page when the index of the page being
preserved is aligned but the remaining entries in the current
pkram_link page cannot hold all of the pages up to the next aligned
index.
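For illustration, here is a minimal standalone sketch of the alignment
arithmetic used below, assuming the common values XA_CHUNK_SHIFT == 6
(64-slot XArray nodes) and HPAGE_PMD_ORDER == 9 (x86-64 with 4K base
pages); the real values are config-dependent and this is not kernel code:

  /* Standalone illustration only; values are assumptions, not kernel code. */
  #include <stdio.h>

  #define XA_CHUNK_SHIFT  6                       /* assumed: 64 slots per XArray node */
  #define XA_CHUNK_SIZE   (1UL << XA_CHUNK_SHIFT)
  #define HPAGE_PMD_ORDER 9                       /* assumed: 512 base pages per THP */

  int main(void)
  {
          unsigned long align, align_cnt;

          /* Transparent huge page case */
          align = 1UL << (HPAGE_PMD_ORDER + XA_CHUNK_SHIFT -
                          (HPAGE_PMD_ORDER % XA_CHUNK_SHIFT));
          align_cnt = align >> HPAGE_PMD_ORDER;
          printf("THP:  align=%lu align_cnt=%lu\n", align, align_cnt);  /* 4096, 8 */

          /* Base page case */
          align = XA_CHUNK_SIZE;
          align_cnt = XA_CHUNK_SIZE;
          printf("base: align=%lu align_cnt=%lu\n", align, align_cnt);  /* 64, 64 */

          return 0;
  }

With these example values a new pkram_link page is started when the index
is a multiple of align and fewer than align_cnt entries remain in the
current pkram_link page, so the whole aligned run of pages can later be
handed to shmem_insert_pages() as a single aligned batch.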
Signed-off-by: Anthony Yznaga <anthony.yznaga@...cle.com>
---
 mm/pkram.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/mm/pkram.c b/mm/pkram.c
index ef092aa5ce7a..416c3ca4411b 100644
--- a/mm/pkram.c
+++ b/mm/pkram.c
@@ -913,11 +913,21 @@ static int __pkram_save_page(struct pkram_stream *ps,
 {
 	struct pkram_link *link = ps->link;
 	struct pkram_obj *obj = ps->obj;
+	int order, align, align_cnt;
 	pkram_entry_t p;
-	int order;
+
+	if (PageTransHuge(page)) {
+		align = 1 << (HPAGE_PMD_ORDER + XA_CHUNK_SHIFT - (HPAGE_PMD_ORDER % XA_CHUNK_SHIFT));
+		align_cnt = align >> HPAGE_PMD_ORDER;
+	} else {
+		align = XA_CHUNK_SIZE;
+		align_cnt = XA_CHUNK_SIZE;
+	}
 
 	if (!link || ps->entry_idx >= PKRAM_LINK_ENTRIES_MAX ||
-	    index != ps->next_index) {
+	    index != ps->next_index ||
+	    (IS_ALIGNED(index, align) &&
+	    (ps->entry_idx + align_cnt > PKRAM_LINK_ENTRIES_MAX))) {
 		struct page *link_page;
 
 		link_page = pkram_alloc_page((ps->gfp_mask & GFP_RECLAIM_MASK) |
--
2.13.3