Message-ID: <alpine.LSU.2.11.2008301358020.5954@eggly.anvils>
Date: Sun, 30 Aug 2020 13:59:35 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
cc: Alex Shi <alex.shi@...ux.alibaba.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Shakeel Butt <shakeelb@...gle.com>,
Matthew Wilcox <willy@...radead.org>, Qian Cai <cai@....pw>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: [PATCH 1/5] ksm: reinstate memcg charge on copied pages

In 5.8 some instances of memcg charging in do_swap_page() and unuse_pte()
were removed, on the understanding that swap cache is now already charged
at those points; but a case was missed, when ksm_might_need_to_copy() has
decided it must allocate a substitute page: such pages were never charged.
Fix it inside ksm_might_need_to_copy().
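
For context, here is a minimal sketch of the unuse_pte() caller side
after that 5.8 change, paraphrased from memory of the 5.9-era
mm/swapfile.c (not part of this patch; details approximate):

	swapcache = page;
	page = ksm_might_need_to_copy(page, vma, addr);
	if (unlikely(!page))
		return -ENOMEM;
	/*
	 * The memcg charge that used to be taken at this point was
	 * removed in 5.8, on the understanding that the swap cache
	 * page is already charged; but a substitute page freshly
	 * allocated inside ksm_might_need_to_copy() carried no such
	 * charge until this patch.
	 */
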
This was discovered by Alex Shi's prospective commit "mm/memcg: warning
on !memcg after readahead page charged".

But there is another surprise: this also fixes some rarer uncharged
PageAnon cases, when KSM is configured in, but has never been activated.
ksm_might_need_to_copy()'s anon_vma->root and linear_page_index() check
sometimes catches a case which would need to have been copied if KSM
were turned on. Or that's my optimistic interpretation (of my own old
code), but it leaves some doubt as to whether everything is working as
intended there - might it hint at rare anon ptes which rmap cannot find?
A question not easily answered: put in the fix for missed memcg charges.
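
For reference, the check being described looks roughly like this in the
5.9-era ksm_might_need_to_copy() (quoted from memory; it is not changed
by this patch):

	} else if (!anon_vma) {
		return page;		/* no need to copy it */
	} else if (anon_vma->root == vma->anon_vma->root &&
		 page->index == linear_page_index(vma, address)) {
		return page;		/* still no need to copy it */
	}

When that condition fails for a swapped-in anon page, a substitute page
is allocated even though KSM has never merged anything, and until this
patch that copy was likewise left uncharged.
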
Fixes: 4c6355b25e8b ("mm: memcontrol: charge swapin pages on instantiation")
Signed-off-by: Hugh Dickins <hughd@...gle.com>
Cc: stable@...r.kernel.org # v5.8
---
mm/ksm.c | 4 ++++
1 file changed, 4 insertions(+)

--- 5.9-rc2/mm/ksm.c	2020-08-16 17:32:50.645506940 -0700
+++ linux/mm/ksm.c	2020-08-28 17:42:07.967278385 -0700
@@ -2582,6 +2582,10 @@ struct page *ksm_might_need_to_copy(stru
 		return page;		/* let do_swap_page report the error */
 
 	new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
+	if (new_page && mem_cgroup_charge(new_page, vma->vm_mm, GFP_KERNEL)) {
+		put_page(new_page);
+		new_page = NULL;
+	}
 	if (new_page) {
 		copy_user_highpage(new_page, page, address, vma);
 