Message-ID: <CAAmzW4M5EBXDarx-Xkryrf+BKgdOJqBr7aBaB=4P3gBRN-v30Q@mail.gmail.com>
Date: Fri, 17 Apr 2020 10:38:53 +0900
From: Joonsoo Kim <js1304@...il.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Linux Memory Management List <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Michal Hocko <mhocko@...nel.org>,
Hugh Dickins <hughd@...gle.com>,
Minchan Kim <minchan@...nel.org>,
Vlastimil Babka <vbabka@...e.cz>,
Mel Gorman <mgorman@...hsingularity.net>, kernel-team@....com,
Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH v5 05/10] mm/swap: charge the page when adding to the swap cache
On Fri, Apr 17, 2020 at 1:11 AM Johannes Weiner <hannes@...xchg.org> wrote:
>
> Hello Joonsoo,
>
> On Fri, Apr 03, 2020 at 02:40:43PM +0900, js1304@...il.com wrote:
> > @@ -112,7 +112,7 @@ void show_swap_cache_info(void)
> > * but sets SwapCache flag and private instead of mapping and index.
> > */
> > int add_to_swap_cache(struct page *page, swp_entry_t entry,
> > - gfp_t gfp, void **shadowp)
> > + struct vm_area_struct *vma, gfp_t gfp, void **shadowp)
> > {
> > struct address_space *address_space = swap_address_space(entry);
> > pgoff_t idx = swp_offset(entry);
> > @@ -120,14 +120,26 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry,
> > unsigned long i, nr = compound_nr(page);
> > unsigned long nrexceptional = 0;
> > void *old;
> > + bool compound = !!compound_order(page);
> > + int error;
> > + struct mm_struct *mm = vma ? vma->vm_mm : current->mm;
> > + struct mem_cgroup *memcg;
> >
> > VM_BUG_ON_PAGE(!PageLocked(page), page);
> > VM_BUG_ON_PAGE(PageSwapCache(page), page);
> > VM_BUG_ON_PAGE(!PageSwapBacked(page), page);
> >
> > page_ref_add(page, nr);
> > + /* PageSwapCache() prevent the page from being re-charged */
> > SetPageSwapCache(page);
> >
> > + error = mem_cgroup_try_charge(page, mm, gfp, &memcg, compound);
> > + if (error) {
> > + ClearPageSwapCache(page);
> > + page_ref_sub(page, nr);
> > + return error;
> > + }
> > +
> > do {
> > xas_lock_irq(&xas);
> > xas_create_range(&xas);
> > @@ -153,11 +165,16 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry,
> > xas_unlock_irq(&xas);
> > } while (xas_nomem(&xas, gfp));
> >
> > - if (!xas_error(&xas))
> > + if (!xas_error(&xas)) {
> > + mem_cgroup_commit_charge(page, memcg, false, compound);
>
> Unfortunately you cannot commit here yet because the rmap isn't set up
> and that will cause memcg_charge_statistics() to account the page
> incorrectly as file. And rmap is only set up during a page fault.
I also found this problem a few days ago. From my investigation, what
we need to make the accounting correct for an anonymous page is a way
to tell the type of the page, file or anon, since nothing in this path
actually uses the rmap itself. I think that could be done by checking
for a NULL ->mapping, or something similar. Is there anything I missed?
Also, I cannot find the function memcg_charge_statistics(); could you
let me know which file it is in? This is just curiosity on my part, and
I agree with what you commented below.
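
To illustrate the mapping check I mean: in the swapin readahead path,
add_to_swap_cache() is called from __read_swap_cache_async(), well
before do_swap_page() sets up the anon rmap at fault time, so
page->mapping is still NULL at the point where this patch would commit
the charge. A minimal sketch of the resulting misaccounting (my
paraphrase of the PageAnon()/charge-statistics logic, not a literal
excerpt; the _sketch names are mine):

/*
 * Anon vs. file is decided from page->mapping, which is still NULL
 * at add_to_swap_cache() time.
 */
static inline bool page_is_anon_sketch(struct page *page)
{
        /* Anon pages tag ->mapping with the PAGE_MAPPING_ANON bit. */
        return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
}

static void charge_statistics_sketch(struct mem_cgroup *memcg,
                                     struct page *page, int nr_pages)
{
        if (page_is_anon_sketch(page))
                __mod_memcg_state(memcg, MEMCG_RSS, nr_pages);
        else
                /* A NULL ->mapping lands here: counted as file/cache. */
                __mod_memcg_state(memcg, MEMCG_CACHE, nr_pages);
}
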
> This needs a bit of a rework of the memcg charging code that appears
> out of scope for your patches. I'm preparing a series right now to do
> just that. It'll also fix the swap ownership tracking problem when the
> swap controller is disabled, so we don't have to choose between
> charging the wrong cgroup or hampering swap readahead efficiency.
Sounds good! I also think those patches are out of scope for my series,
so I will wait for yours. Could you let me know when your series is
submitted? I'd like to plan my work schedule around it.
> The patches also unblock Alex Shi's "per lruvec lru_lock for memcg"
> series, which is all the more indication that memcg needs fixing in
> that area.
Okay.
Thanks.