Message-Id: <1617140178-8773-44-git-send-email-anthony.yznaga@oracle.com>
Date: Tue, 30 Mar 2021 14:36:18 -0700
From: Anthony Yznaga <anthony.yznaga@...cle.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: willy@...radead.org, corbet@....net, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, x86@...nel.org, hpa@...or.com,
dave.hansen@...ux.intel.com, luto@...nel.org, peterz@...radead.org,
rppt@...nel.org, akpm@...ux-foundation.org, hughd@...gle.com,
ebiederm@...ssion.com, keescook@...omium.org, ardb@...nel.org,
nivedita@...m.mit.edu, jroedel@...e.de, masahiroy@...nel.org,
nathan@...nel.org, terrelln@...com, vincenzo.frascino@....com,
martin.b.radev@...il.com, andreyknvl@...gle.com,
daniel.kiper@...cle.com, rafael.j.wysocki@...el.com,
dan.j.williams@...el.com, Jonathan.Cameron@...wei.com,
bhe@...hat.com, rminnich@...il.com, ashish.kalra@....com,
guro@...com, hannes@...xchg.org, mhocko@...nel.org,
iamjoonsoo.kim@....com, vbabka@...e.cz, alex.shi@...ux.alibaba.com,
david@...hat.com, richard.weiyang@...il.com,
vdavydov.dev@...il.com, graf@...zon.com, jason.zeng@...el.com,
lei.l.li@...el.com, daniel.m.jordan@...cle.com,
steven.sistare@...cle.com, linux-fsdevel@...r.kernel.org,
linux-doc@...r.kernel.org, kexec@...ts.infradead.org
Subject: [RFC v2 43/43] PKRAM: improve index alignment of pkram_link entries

To take advantage of optimizations when adding pages to the page cache
via shmem_insert_pages(), improve the likelihood that the pages array
passed to shmem_insert_pages() starts on an aligned index.  Do this
when preserving pages by starting a new pkram_link page whenever the
current index is aligned but the remaining entries in the current
pkram_link page cannot hold a full aligned block of pages.
Signed-off-by: Anthony Yznaga <anthony.yznaga@...cle.com>
---
mm/pkram.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/mm/pkram.c b/mm/pkram.c
index b63b2a3958e7..3f43809c8a85 100644
--- a/mm/pkram.c
+++ b/mm/pkram.c
@@ -911,9 +911,20 @@ static int __pkram_save_page(struct pkram_access *pa, struct page *page,
{
struct pkram_data_stream *pds = &pa->pds;
struct pkram_link *link = pds->link;
+ int align, align_cnt;
+
+ if (PageTransHuge(page)) {
+ align = 1 << (HPAGE_PMD_ORDER + XA_CHUNK_SHIFT - (HPAGE_PMD_ORDER % XA_CHUNK_SHIFT));
+ align_cnt = align >> HPAGE_PMD_ORDER;
+ } else {
+ align = XA_CHUNK_SIZE;
+ align_cnt = XA_CHUNK_SIZE;
+ }
if (!link || pds->entry_idx >= PKRAM_LINK_ENTRIES_MAX ||
- index != pa->pages.next_index) {
+ index != pa->pages.next_index ||
+ (IS_ALIGNED(index, align) &&
+ (pds->entry_idx + align_cnt > PKRAM_LINK_ENTRIES_MAX))) {
link = pkram_new_link(pds, pa->ps->gfp_mask);
if (!link)
return -ENOMEM;
--
1.8.3.1