Message-ID: <8444f6e3-e628-3d64-fd20-4ae26f1c761b@virtuozzo.com>
Date: Tue, 20 Aug 2019 13:53:44 +0300
From: Kirill Tkhai <ktkhai@...tuozzo.com>
To: Yang Shi <yang.shi@...ux.alibaba.com>,
kirill.shutemov@...ux.intel.com, hannes@...xchg.org,
mhocko@...e.com, hughd@...gle.com, shakeelb@...gle.com,
rientjes@...gle.com, cai@....pw, akpm@...ux-foundation.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [v5 PATCH 2/4] mm: move mem_cgroup_uncharge out of
__page_cache_release()
On 07.08.2019 05:17, Yang Shi wrote:
> A later patch will make the THP deferred split shrinker memcg aware,
> but it needs the page->mem_cgroup information in the THP destructor,
> which is currently called after mem_cgroup_uncharge().
>
> So, move mem_cgroup_uncharge() from __page_cache_release() to the
> compound page destructor, which is used for both THP and other
> compound pages except HugeTLB. And call it in __put_single_page()
> for order-0 pages.
>
> Suggested-by: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
> Cc: Kirill Tkhai <ktkhai@...tuozzo.com>
> Cc: Johannes Weiner <hannes@...xchg.org>
> Cc: Michal Hocko <mhocko@...e.com>
> Cc: Hugh Dickins <hughd@...gle.com>
> Cc: Shakeel Butt <shakeelb@...gle.com>
> Cc: David Rientjes <rientjes@...gle.com>
> Cc: Qian Cai <cai@....pw>
> Acked-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> Signed-off-by: Yang Shi <yang.shi@...ux.alibaba.com>
Reviewed-by: Kirill Tkhai <ktkhai@...tuozzo.com>
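
For anyone following along, below is a minimal userspace sketch of the
ordering this patch establishes. It is illustration only, not kernel
code: the struct layout, the order-9 example and the memcg pointer
value are made up, and the real kernel dispatches through
get_compound_page_dtor()/compound_dtor rather than a plain function
pointer in struct page.

/*
 * Userspace model (illustration only): after this patch the memcg
 * uncharge happens inside the compound page destructor, so
 * page->mem_cgroup is still valid when the destructor runs.
 */
#include <stdio.h>

struct page;
typedef void compound_page_dtor(struct page *);

struct page {
	unsigned int order;		/* compound order, e.g. 9 for a THP */
	compound_page_dtor *dtor;	/* picked at allocation time */
	void *mem_cgroup;		/* must still be readable in dtor */
};

static void mem_cgroup_uncharge(struct page *page)
{
	printf("uncharge: memcg=%p\n", page->mem_cgroup);
	page->mem_cgroup = NULL;
}

/* The destructor itself does the uncharge, so any work that needs
 * page->mem_cgroup (like the memcg-aware deferred split in the next
 * patch) can run before this point. */
static void free_compound_page(struct page *page)
{
	mem_cgroup_uncharge(page);
	printf("free order-%u compound page\n", page->order);
}

int main(void)
{
	/* A fake THP; the memcg pointer value is arbitrary. */
	struct page thp = {
		.order = 9,
		.dtor = free_compound_page,
		.mem_cgroup = (void *)0x1,
	};

	/* Final put: callers just invoke the destructor and no longer
	 * uncharge themselves, as in the vmscan hunks below. */
	(*thp.dtor)(&thp);
	return 0;
}
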
> ---
> mm/page_alloc.c | 1 +
> mm/swap.c | 2 +-
> mm/vmscan.c | 6 ++----
> 3 files changed, 4 insertions(+), 5 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index df02a88..1d1c5d3 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -670,6 +670,7 @@ static void bad_page(struct page *page, const char *reason,
>
> void free_compound_page(struct page *page)
> {
> + mem_cgroup_uncharge(page);
> __free_pages_ok(page, compound_order(page));
> }
>
> diff --git a/mm/swap.c b/mm/swap.c
> index ae30039..d4242c8 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -71,12 +71,12 @@ static void __page_cache_release(struct page *page)
> spin_unlock_irqrestore(&pgdat->lru_lock, flags);
> }
> __ClearPageWaiters(page);
> - mem_cgroup_uncharge(page);
> }
>
> static void __put_single_page(struct page *page)
> {
> __page_cache_release(page);
> + mem_cgroup_uncharge(page);
> free_unref_page(page);
> }
>
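
The order-0 path keeps a sensible ordering here too: LRU and state
teardown in __page_cache_release(), then the uncharge while
page->mem_cgroup is still set, then free_unref_page() returns the
page to the allocator.
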
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index dbdc46a..b1b5e5f 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1490,10 +1490,9 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> * Is there need to periodically free_page_list? It would
> * appear not as the counts should be low
> */
> - if (unlikely(PageTransHuge(page))) {
> - mem_cgroup_uncharge(page);
> + if (unlikely(PageTransHuge(page)))
> (*get_compound_page_dtor(page))(page);
> - } else
> + else
> list_add(&page->lru, &free_pages);
> continue;
>
> @@ -1914,7 +1913,6 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>
> if (unlikely(PageCompound(page))) {
> spin_unlock_irq(&pgdat->lru_lock);
> - mem_cgroup_uncharge(page);
> (*get_compound_page_dtor(page))(page);
> spin_lock_irq(&pgdat->lru_lock);
> } else
>
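One note for other reviewers: unless I'm misreading the surrounding
code, the order-0 pages batched on &free_pages in shrink_page_list()
(and the pages handed back by move_pages_to_lru()) are still uncharged
in bulk via mem_cgroup_uncharge_list() right before
free_unref_page_list(), so these two hunks only move the compound case
into the destructor; the base page path is untouched.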