Message-ID: <2c24afdf-5103-4c1b-a649-2eeed185f3fb@arm.com>
Date: Tue, 16 Jan 2024 14:35:54 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: Kefeng Wang <wangkefeng.wang@...wei.com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Cc: Matthew Wilcox <willy@...radead.org>, David Hildenbrand <david@...hat.com>
Subject: Re: [PATCH] mm: memory: move mem_cgroup_charge() into
alloc_anon_folio()
On 16/01/2024 07:13, Kefeng Wang wrote:
> In order to allocate as large a folio as possible, move the memcg
> charge into alloc_anon_folio() and try the next smaller order if
> mem_cgroup_charge() fails. Also change GFP_KERNEL to gfp to be
> consistent with PMD-sized THP.
I agree that changing gfp gives you consistency. But it's not entirely clear to
me why THP should use one set of flags for this case, and single pages another.
Why does this difference exist?
Otherwise, LGTM!
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@...wei.com>
> ---
>  mm/memory.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 5e88d5379127..2e31a407e6f9 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4206,15 +4206,21 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
>  		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
>  		folio = vma_alloc_folio(gfp, order, vma, addr, true);
>  		if (folio) {
> +			if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
> +				folio_put(folio);
> +				goto next;
> +			}
> +			folio_throttle_swaprate(folio, gfp);
>  			clear_huge_page(&folio->page, vmf->address, 1 << order);
>  			return folio;
>  		}
> +next:
>  		order = next_order(&orders, order);
>  	}
>  
>  fallback:
>  #endif
> -	return vma_alloc_zeroed_movable_folio(vmf->vma, vmf->address);
> +	return folio_prealloc(vma->vm_mm, vma, vmf->address, true);
>  }
>  
>  /*
> @@ -4281,10 +4287,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>  	nr_pages = folio_nr_pages(folio);
>  	addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE);
>  
> -	if (mem_cgroup_charge(folio, vma->vm_mm, GFP_KERNEL))
> -		goto oom_free_page;
> -	folio_throttle_swaprate(folio, GFP_KERNEL);
> -
>  	/*
>  	 * The memory barrier inside __folio_mark_uptodate makes sure that
>  	 * preceding stores to the page contents become visible before
> @@ -4338,8 +4340,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>  release:
>  	folio_put(folio);
>  	goto unlock;
> -oom_free_page:
> -	folio_put(folio);
>  oom:
>  	return VM_FAULT_OOM;
>  }
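
To summarise for anyone skimming the thread, the allocation flow after this
patch becomes roughly the following (a simplified sketch, not the exact
kernel code; the #ifdef plumbing and the order-bitmap setup are omitted):

	/*
	 * Walk the candidate orders from largest to smallest. A
	 * mem_cgroup_charge() failure is now treated like an allocation
	 * failure: drop the folio and retry with the next smaller order.
	 */
	while (orders) {
		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
		folio = vma_alloc_folio(gfp, order, vma, addr, true);
		if (folio) {
			if (!mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
				folio_throttle_swaprate(folio, gfp);
				clear_huge_page(&folio->page, vmf->address,
						1 << order);
				return folio;
			}
			/* charge failed; free and fall back */
			folio_put(folio);
		}
		order = next_order(&orders, order);
	}

	/* order-0 fallback; folio_prealloc() does the charge itself */
	return folio_prealloc(vma->vm_mm, vma, vmf->address, true);

So a charge failure gets the same "try a smaller order" treatment as an
allocation failure, which seems like the right behaviour to me.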