Message-ID: <20200416161133.GA178028@cmpxchg.org>
Date: Thu, 16 Apr 2020 12:11:33 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: js1304@...il.com
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Michal Hocko <mhocko@...nel.org>,
Hugh Dickins <hughd@...gle.com>,
Minchan Kim <minchan@...nel.org>,
Vlastimil Babka <vbabka@...e.cz>,
Mel Gorman <mgorman@...hsingularity.net>, kernel-team@....com,
Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH v5 05/10] mm/swap: charge the page when adding to the
swap cache

Hello Joonsoo,

On Fri, Apr 03, 2020 at 02:40:43PM +0900, js1304@...il.com wrote:
> @@ -112,7 +112,7 @@ void show_swap_cache_info(void)
>   * but sets SwapCache flag and private instead of mapping and index.
>   */
>  int add_to_swap_cache(struct page *page, swp_entry_t entry,
> -			gfp_t gfp, void **shadowp)
> +			struct vm_area_struct *vma, gfp_t gfp, void **shadowp)
>  {
>  	struct address_space *address_space = swap_address_space(entry);
>  	pgoff_t idx = swp_offset(entry);
> @@ -120,14 +120,26 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry,
>  	unsigned long i, nr = compound_nr(page);
>  	unsigned long nrexceptional = 0;
>  	void *old;
> +	bool compound = !!compound_order(page);
> +	int error;
> +	struct mm_struct *mm = vma ? vma->vm_mm : current->mm;
> +	struct mem_cgroup *memcg;
> 
>  	VM_BUG_ON_PAGE(!PageLocked(page), page);
>  	VM_BUG_ON_PAGE(PageSwapCache(page), page);
>  	VM_BUG_ON_PAGE(!PageSwapBacked(page), page);
> 
>  	page_ref_add(page, nr);
> +	/* PageSwapCache() prevent the page from being re-charged */
>  	SetPageSwapCache(page);
> 
> +	error = mem_cgroup_try_charge(page, mm, gfp, &memcg, compound);
> +	if (error) {
> +		ClearPageSwapCache(page);
> +		page_ref_sub(page, nr);
> +		return error;
> +	}
> +
>  	do {
>  		xas_lock_irq(&xas);
>  		xas_create_range(&xas);
> @@ -153,11 +165,16 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry,
>  		xas_unlock_irq(&xas);
>  	} while (xas_nomem(&xas, gfp));
> 
> -	if (!xas_error(&xas))
> +	if (!xas_error(&xas)) {
> +		mem_cgroup_commit_charge(page, memcg, false, compound);

Unfortunately, you cannot commit the charge here yet: the rmap isn't
set up at this point, and that will cause memcg_charge_statistics() to
account the page incorrectly as file. The rmap is only set up later,
when the page is faulted in.
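
For reference, the misaccounting comes from the anon/file dispatch in
the charge statistics path, which keys off PageAnon(). Roughly, as a
simplified sketch (not the verbatim code; THP/compound accounting and
the event counters are elided):

static void mem_cgroup_charge_statistics(struct mem_cgroup *memcg,
					 struct page *page,
					 bool compound, int nr_pages)
{
	/* compound (THP) accounting elided for brevity */
	if (PageAnon(page)) {
		/* anon rmap already set up: accounted as RSS */
		__mod_memcg_state(memcg, MEMCG_RSS, nr_pages);
	} else {
		/*
		 * No anon rmap yet, so a swapcache page charged before
		 * the fault maps it lands here and is accounted as page
		 * cache (and shmem, since it is swap-backed).
		 */
		__mod_memcg_state(memcg, MEMCG_CACHE, nr_pages);
		if (PageSwapBacked(page))
			__mod_memcg_state(memcg, NR_SHMEM, nr_pages);
	}
}

A readahead page added to the swap cache has no anon_vma association
until do_swap_page() maps it, so PageAnon() is still false at the point
where you commit the charge above.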

This needs a bit of a rework of the memcg charging code that appears
out of scope for your patches. I'm preparing a series right now to do
just that. It'll also fix the swap ownership tracking problem when the
swap controller is disabled, so we don't have to choose between
charging the wrong cgroup and hampering swap readahead efficiency.

The patches also unblock Alex Shi's "per lruvec lru_lock for memcg"
series, which is all the more indication that memcg needs fixing in
that area.