Message-ID: <1E25514A-202B-48E6-97F2-1E02B0980A96@nvidia.com>
Date: Tue, 2 Mar 2021 10:37:13 -0500
From: Zi Yan <ziy@...dia.com>
To: "Zhouguanghui (OS Kernel)" <zhouguanghui1@...wei.com>
CC: <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
<akpm@...ux-foundation.org>, <npiggin@...e.de>,
"Wangkefeng (OS Kernel Lab)" <wangkefeng.wang@...wei.com>,
"Guohanjun (Hanjun Guo)" <guohanjun@...wei.com>,
Dingtianhong <dingtianhong@...wei.com>,
Chenweilong <chenweilong@...wei.com>,
"Xiangrui (Euler)" <rui.xiang@...wei.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Vladimir Davydov <vdavydov.dev@...il.com>
Subject: Re: [PATCH] mm/memcg: set memcg when split pages
On 2 Mar 2021, at 2:05, Zhouguanghui (OS Kernel) wrote:
> 在 2021/3/2 10:00, Zi Yan 写道:
>> On 1 Mar 2021, at 20:34, Zhou Guanghui wrote:
>>
>>> When splitting a page, the memory cgroup info recorded in the
>>> first page is not copied to the tail pages. When those tail pages
>>> are later freed, no uncharge is performed, so the usage of the
>>> memcg keeps growing and may eventually trigger an OOM.
>>>
>>> Therefore, copy the first page's memory cgroup info to the tail
>>> pages when splitting a page.
>>>
>>> Signed-off-by: Zhou Guanghui <zhouguanghui1@...wei.com>
>>> ---
>>> include/linux/memcontrol.h | 10 ++++++++++
>>> mm/page_alloc.c | 4 +++-
>>> 2 files changed, 13 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
>>> index e6dc793d587d..c7e2b4421dc1 100644
>>> --- a/include/linux/memcontrol.h
>>> +++ b/include/linux/memcontrol.h
>>> @@ -867,6 +867,12 @@ void mem_cgroup_print_oom_group(struct mem_cgroup *memcg);
>>> extern bool cgroup_memory_noswap;
>>> #endif
>>>
>>> +static inline void copy_page_memcg(struct page *dst, struct page *src)
>>> +{
>>> + if (src->memcg_data)
>>> + dst->memcg_data = src->memcg_data;
>>> +}
>>> +
>>> struct mem_cgroup *lock_page_memcg(struct page *page);
>>> void __unlock_page_memcg(struct mem_cgroup *memcg);
>>> void unlock_page_memcg(struct page *page);
>>> @@ -1291,6 +1297,10 @@ mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg)
>>> {
>>> }
>>>
>>> +static inline void copy_page_memcg(struct page *dst, struct page *src)
>>> +{
>>> +}
>>> +
>>> static inline struct mem_cgroup *lock_page_memcg(struct page *page)
>>> {
>>> return NULL;
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index 3e4b29ee2b1e..ee0a63dc1c9b 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -3307,8 +3307,10 @@ void split_page(struct page *page, unsigned int order)
>>> VM_BUG_ON_PAGE(PageCompound(page), page);
>>> VM_BUG_ON_PAGE(!page_count(page), page);
>>>
>>> - for (i = 1; i < (1 << order); i++)
>>> + for (i = 1; i < (1 << order); i++) {
>>> set_page_refcounted(page + i);
>>> + copy_page_memcg(page + i, page);
>>> + }
>>> split_page_owner(page, 1 << order);
>>> }
>>> EXPORT_SYMBOL_GPL(split_page);
>>> --
>>> 2.25.0
>>
>> +memcg maintainers
>>
>> split_page() is used for non-compound higher-order pages. I am not sure
>> whether any such pages are tracked by memcg. Please let me know
>> if I am missing anything.
>
> Thank you for taking time for this.
>
> This should be handled under kmemcg; I will modify the patch accordingly.
>
> When kmemcg is enabled and __GFP_ACCOUNT is set, the charged and
> uncharged sizes do not match when alloc_pages_exact()/free_pages_exact()
> are used to allocate or free a block of an exact size. This is because
> the memcg data of the tail pages is not set when the page is split.
Thanks for your clarification. I missed kmemcg.
I have a question about copy_page_memcg above. Reading __memcg_kmem_charge_page
and __memcg_kmem_uncharge_page, it seems to me that every charged page requires
a css_get(&memcg->css) at charge time and a css_put(&memcg->css) at uncharge time.
But copy_page_memcg does not do a css_get for the split subpages. Will this cause
the memcg->css reference count to underflow when the subpages are uncharged?
--
Best Regards,
Yan Zi