Message-ID: <2f19c2ff-66b2-4860-a870-a1bffe73320c@redhat.com>
Date: Mon, 26 Aug 2024 19:24:45 +0200
From: David Hildenbrand <david@...hat.com>
To: Zhiguo Jiang <justinjiang@...o.com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, oe-lkp@...ts.linux.dev, oliver.sang@...el.com
Cc: opensource.kernel@...o.com
Subject: Re: [PATCH v2] vma remove the unneeded avc bound with non-CoWed folio
On 23.08.24 16:01, Zhiguo Jiang wrote:
> After CoW via do_wp_page, the vma establishes a new mapping relationship
> with the CoWed folio instead of the non-CoWed folio. If vma->anon_vma and
> the non-CoWed folio's anon_vma are not the same, the avc binding between
> them is no longer needed, so it is a problem that this stale avc binding
> continues to exist.
>
> This patch removes the avc binding between the vma and the non-CoWed
> folio's anon_vma when each has its own independent anon_vma. This also
> reduces rmap overhead.
>
> Signed-off-by: Zhiguo Jiang <justinjiang@...o.com>
> ---
> -v2:
> * Fix the "WARNING" noticed by the kernel test robot
> Reported-by: kernel test robot <oliver.sang@...el.com>
> Closes: https://lore.kernel.org/oe-lkp/202408230938.43f55b4-lkp@intel.com
> * Update comments to more accurately describe this patch.
>
> -v1:
> https://lore.kernel.org/linux-mm/20240820143359.199-1-justinjiang@vivo.com/
>
> include/linux/rmap.h | 1 +
> mm/memory.c | 8 +++++++
> mm/rmap.c | 53 ++++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 62 insertions(+)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index 91b5935e8485..8607d28a3146
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -257,6 +257,7 @@ void folio_remove_rmap_ptes(struct folio *, struct page *, int nr_pages,
> folio_remove_rmap_ptes(folio, page, 1, vma)
> void folio_remove_rmap_pmd(struct folio *, struct page *,
> struct vm_area_struct *);
> +void folio_remove_anon_avc(struct folio *, struct vm_area_struct *);
>
> void hugetlb_add_anon_rmap(struct folio *, struct vm_area_struct *,
> unsigned long address, rmap_t flags);
> diff --git a/mm/memory.c b/mm/memory.c
> index 93c0c25433d0..4c89cb1cb73e
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3428,6 +3428,14 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
> * old page will be flushed before it can be reused.
> */
> folio_remove_rmap_pte(old_folio, vmf->page, vma);
> +
> + /*
> + * If the new_folio's anon_vma differs from the
> + * old_folio's anon_vma, remove the avc binding
> + * between the vma and the old_folio's anon_vma,
> + * avoiding redundant rmap overhead.
> + */
> + folio_remove_anon_avc(old_folio, vma);
... by increasing write fault latency and introducing an RMAP walk (!)? Hmm?
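
To make the cost concrete: the "avc" here is a struct anon_vma_chain, the
bridge object that links a vma into an anon_vma's interval tree so rmap
walks can find the vma. Roughly as defined in include/linux/rmap.h (field
comments paraphrased; details vary by kernel version):

struct anon_vma_chain {
	struct vm_area_struct *vma;	/* the vma this chain links */
	struct anon_vma *anon_vma;	/* the anon_vma it is bound to */
	struct list_head same_vma;	/* list of one vma's chains;
					 * protected by mmap_lock */
	struct rb_node rb;		/* node in the anon_vma's interval
					 * tree; anon_vma->rwsem */
	unsigned long rb_subtree_last;
};

Unlinking one safely means taking the anon_vma lock and erasing the rb
node, and presumably first proving that no other folio mapped by this vma
still belongs to the old anon_vma; that is where the extra walk on the
write-fault path comes from.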
On the reuse path, we do a folio_move_anon_rmap() to optimize that.
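
For reference, that reuse-side optimization is a single pointer update,
not an avc operation. Sketching folio_move_anon_rmap() roughly as it
appears in mm/rmap.c (paraphrased; exact code differs across kernel
versions):

void folio_move_anon_rmap(struct folio *folio, struct vm_area_struct *vma)
{
	void *anon_vma = vma->anon_vma;

	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
	VM_BUG_ON_VMA(!anon_vma, vma);

	/* Tag the pointer so folio->mapping is recognized as anon. */
	anon_vma += PAGE_MAPPING_ANON;

	/*
	 * Write the pointer and the PAGE_MAPPING_ANON bit in one go,
	 * so the folio can never be observed with the "wrong" anon_vma.
	 */
	WRITE_ONCE(folio->mapping, anon_vma);
}

No locks beyond the folio lock, no tree surgery: the exclusively owned
folio is simply repointed at the vma's own anon_vma, which is why adding
avc unlinking to the CoW path needs a strong justification.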
--
Cheers,
David / dhildenb