Message-Id: <20201109125033.223213816@linuxfoundation.org>
Date: Mon, 9 Nov 2020 13:55:14 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Michal Privoznik <mprivozn@...hat.com>,
David Hildenbrand <david@...hat.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mina Almasry <almasrymina@...gle.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Michal Hocko <mhocko@...nel.org>,
Muchun Song <songmuchun@...edance.com>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
Tejun Heo <tj@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCH 5.9 052/133] hugetlb_cgroup: fix reservation accounting
From: Mike Kravetz <mike.kravetz@...cle.com>
commit 79aa925bf239c234be8586780e482872dc4690dd upstream.
Michal Privoznik was using "free page reporting" in QEMU/virtio-balloon
with hugetlbfs and hit the warning below. QEMU with free page reporting
uses fallocate(FALLOC_FL_PUNCH_HOLE) to discard pages that are reported
as free by a VM. Reporting is done at pageblock granularity, so when the
guest reports a free 2M chunk, QEMU punches out one huge page with
fallocate(FALLOC_FL_PUNCH_HOLE).
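
The QEMU-side discard boils down to a punch-hole fallocate() on the
hugetlbfs-backed guest memory file. A minimal userspace sketch of that
operation (the file path, offset, and huge page size are made up for
illustration; this is not QEMU code):

#define _GNU_SOURCE		/* for fallocate() and FALLOC_FL_* */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define HPAGE_SIZE	(2UL * 1024 * 1024)	/* one 2M huge page */

int main(void)
{
	/* hypothetical hugetlbfs file backing guest memory */
	int fd = open("/dev/hugepages/guest-mem", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* discard one reported huge page at offset 0 */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      0, HPAGE_SIZE) < 0)
		perror("fallocate");

	close(fd);
	return 0;
}

Each such call releases one huge page from the file and drives the
hugetlbfs_fallocate() -> remove_inode_hugepages() -> region_del() path
seen in the trace below.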
WARNING: CPU: 7 PID: 6636 at mm/page_counter.c:57 page_counter_uncharge+0x4b/0x50
Modules linked in: ...
CPU: 7 PID: 6636 Comm: qemu-system-x86 Not tainted 5.9.0 #137
Hardware name: Gigabyte Technology Co., Ltd. X570 AORUS PRO/X570 AORUS PRO, BIOS F21 07/31/2020
RIP: 0010:page_counter_uncharge+0x4b/0x50
...
Call Trace:
hugetlb_cgroup_uncharge_file_region+0x4b/0x80
region_del+0x1d3/0x300
hugetlb_unreserve_pages+0x39/0xb0
remove_inode_hugepages+0x1a8/0x3d0
hugetlbfs_fallocate+0x3c4/0x5c0
vfs_fallocate+0x146/0x290
__x64_sys_fallocate+0x3e/0x70
do_syscall_64+0x33/0x40
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Investigation of the issue uncovered bugs in hugetlb cgroup reservation
accounting: region_del() uncharged the wrong number of pages when
splitting or trimming an existing region, and alloc_huge_page() missed a
reservation uncharge on one unwind path. This patch addresses the found
issues.
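
The region_del() part is an ordering bug: the old code updated the
region bounds before computing how many pages to uncharge. A standalone
sketch of that arithmetic (the values and the simplified struct are
hypothetical, not kernel code):

#include <stdio.h>

/* simplified stand-in for the kernel's struct file_region */
struct file_region {
	long from;
	long to;
};

int main(void)
{
	struct file_region rg = { .from = 0, .to = 512 };
	long t = 512;	/* trim away [rg.from, t) */

	/* fixed ordering: compute the uncharge amount first */
	long uncharge = t - rg.from;	/* 512 */

	/* old (buggy) ordering: the bound was updated first ... */
	rg.from = t;
	/* ... so the amount computed afterwards is t - t == 0 */
	long buggy = t - rg.from;

	printf("fixed: %ld pages, buggy: %ld pages\n", uncharge, buggy);
	return 0;
}

Depending on the case, too much or too little was uncharged; uncharging
too much underflows the page counter and triggers the warning above.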
Fixes: 075a61d07a8e ("hugetlb_cgroup: add accounting for shared mappings")
Reported-by: Michal Privoznik <mprivozn@...hat.com>
Co-developed-by: David Hildenbrand <david@...hat.com>
Signed-off-by: David Hildenbrand <david@...hat.com>
Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Tested-by: Michal Privoznik <mprivozn@...hat.com>
Reviewed-by: Mina Almasry <almasrymina@...gle.com>
Acked-by: Michael S. Tsirkin <mst@...hat.com>
Cc: <stable@...r.kernel.org>
Cc: David Hildenbrand <david@...hat.com>
Cc: Michal Hocko <mhocko@...nel.org>
Cc: Muchun Song <songmuchun@...edance.com>
Cc: "Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>
Cc: Tejun Heo <tj@...nel.org>
Link: https://lkml.kernel.org/r/20201021204426.36069-1-mike.kravetz@oracle.com
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
mm/hugetlb.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -655,6 +655,8 @@ retry:
 			}
 
 			del += t - f;
+			hugetlb_cgroup_uncharge_file_region(
+				resv, rg, t - f);
 
 			/* New entry for end of split region */
 			nrg->from = t;
@@ -667,9 +669,6 @@ retry:
 			/* Original entry is trimmed */
 			rg->to = f;
 
-			hugetlb_cgroup_uncharge_file_region(
-				resv, rg, nrg->to - nrg->from);
-
 			list_add(&nrg->link, &rg->link);
 			nrg = NULL;
 			break;
@@ -685,17 +684,17 @@ retry:
 		}
 
 		if (f <= rg->from) {	/* Trim beginning of region */
-			del += t - rg->from;
-			rg->from = t;
-
 			hugetlb_cgroup_uncharge_file_region(resv, rg,
 							    t - rg->from);
-		} else {		/* Trim end of region */
-			del += rg->to - f;
-			rg->to = f;
+			del += t - rg->from;
+			rg->from = t;
+		} else {		/* Trim end of region */
 
 			hugetlb_cgroup_uncharge_file_region(resv, rg,
 							    rg->to - f);
+
+			del += rg->to - f;
+			rg->to = f;
 		}
 	}
@@ -2454,6 +2453,9 @@ struct page *alloc_huge_page(struct vm_a
 		 */
 		rsv_adjust = hugepage_subpool_put_pages(spool, 1);
 		hugetlb_acct_memory(h, -rsv_adjust);
+		if (deferred_reserve)
+			hugetlb_cgroup_uncharge_page_rsvd(hstate_index(h),
+					pages_per_huge_page(h), page);
 	}
 	return page;