Message-Id: <1588812129-8596-27-git-send-email-anthony.yznaga@oracle.com>
Date: Wed, 6 May 2020 17:41:52 -0700
From: Anthony Yznaga <anthony.yznaga@...cle.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: willy@...radead.org, corbet@....net, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, x86@...nel.org, hpa@...or.com,
dave.hansen@...ux.intel.com, luto@...nel.org, peterz@...radead.org,
rppt@...ux.ibm.com, akpm@...ux-foundation.org, hughd@...gle.com,
ebiederm@...ssion.com, masahiroy@...nel.org, ardb@...nel.org,
ndesaulniers@...gle.com, dima@...ovin.in, daniel.kiper@...cle.com,
nivedita@...m.mit.edu, rafael.j.wysocki@...el.com,
dan.j.williams@...el.com, zhenzhong.duan@...cle.com,
jroedel@...e.de, bhe@...hat.com, guro@...com,
Thomas.Lendacky@....com, andriy.shevchenko@...ux.intel.com,
keescook@...omium.org, hannes@...xchg.org, minchan@...nel.org,
mhocko@...nel.org, ying.huang@...el.com,
yang.shi@...ux.alibaba.com, gustavo@...eddedor.com,
ziqian.lzq@...fin.com, vdavydov.dev@...il.com,
jason.zeng@...el.com, kevin.tian@...el.com, zhiyuan.lv@...el.com,
lei.l.li@...el.com, paul.c.lai@...el.com, ashok.raj@...el.com,
linux-fsdevel@...r.kernel.org, linux-doc@...r.kernel.org,
kexec@...ts.infradead.org
Subject: [RFC 26/43] mm: shmem: when inserting, handle pages already charged to a memcg
If shmem_insert_page() is called to insert a page that was preserved
using PKRAM on the current boot (i.e. preserved page is restored without
an intervening kexec boot), the page will still be charged to a memory
cgroup because it is never freed. Don't try to charge it again.
Signed-off-by: Anthony Yznaga <anthony.yznaga@...cle.com>
---
mm/shmem.c | 21 +++++++++++++--------
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 13475073fb52..1f3b43b8fa34 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -693,6 +693,7 @@ int shmem_insert_page(struct mm_struct *mm, struct inode *inode, pgoff_t index,
struct mem_cgroup *memcg;
pgoff_t hindex = index;
bool on_lru = PageLRU(page);
+ bool has_memcg = page->mem_cgroup ? true : false;
if (index > (MAX_LFS_FILESIZE >> PAGE_SHIFT))
return -EFBIG;
@@ -738,20 +739,24 @@ int shmem_insert_page(struct mm_struct *mm, struct inode *inode, pgoff_t index,
__SetPageReferenced(page);
- err = mem_cgroup_try_charge_delay(page, mm, gfp, &memcg,
- PageTransHuge(page));
- if (err)
- goto out_unlock;
+ if (!has_memcg) {
+ err = mem_cgroup_try_charge_delay(page, mm, gfp, &memcg,
+ PageTransHuge(page));
+ if (err)
+ goto out_unlock;
+ }
err = shmem_add_to_page_cache(page, mapping, hindex,
NULL, gfp & GFP_RECLAIM_MASK);
if (err) {
- mem_cgroup_cancel_charge(page, memcg,
- PageTransHuge(page));
+ if (!has_memcg)
+ mem_cgroup_cancel_charge(page, memcg,
+ PageTransHuge(page));
goto out_unlock;
}
- mem_cgroup_commit_charge(page, memcg, on_lru,
- PageTransHuge(page));
+ if (!has_memcg)
+ mem_cgroup_commit_charge(page, memcg, on_lru,
+ PageTransHuge(page));
if (!on_lru)
lru_cache_add_anon(page);
--
2.13.3