Message-ID: <875ygam213.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Mon, 24 Oct 2022 09:41:12 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>
Cc: <akpm@...ux-foundation.org>, <david@...hat.com>, <ziy@...dia.com>,
<shy828301@...il.com>, <apopple@...dia.com>,
<jingshan@...ux.alibaba.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 2/2] mm: migrate: Try again if THP split is failed
due to page refcnt
Baolin Wang <baolin.wang@...ux.alibaba.com> writes:
> When creating a virtual machine, we use memfd_create() to get a file
> descriptor that can be used to create shared memory mappings with
> mmap(); the mmap() call also sets the MAP_POPULATE flag to allocate
> physical pages for the virtual machine.
>
> When allocating physical pages for the guest, the host can fall back
> to allocating some CMA pages for the guest when over half of the
> zone's free memory is in the CMA area.
>
> In the guest OS, when an application wants to do DMA data transfers,
> our QEMU calls the VFIO_IOMMU_MAP_DMA ioctl to longterm-pin the pages
> and create IOMMU mappings for them. However, when calling the
> VFIO_IOMMU_MAP_DMA ioctl to pin the physical pages, we found that the
> longterm-pin sometimes fails.
>
> After some investigation, we found that the pages used for the DMA
> mapping can contain some CMA pages, and those CMA pages can cause the
> longterm-pin to fail when they cannot be migrated. The migration
> failure can be caused by a temporary reference count or by a memory
> allocation failure. In that case the VFIO_IOMMU_MAP_DMA ioctl returns
> an error, which prevents the application from starting.
>
> I observed one migration failure case (which is not easy to reproduce)
> where the 'thp_migration_fail' count was 1 and the
> 'thp_split_page_failed' count was also 1.
>
> That means that, when migrating a THP located in the CMA area, a new
> THP could not be allocated due to memory fragmentation, so the THP was
> split instead. However, the THP split also failed, probably because of
> a temporary reference count on the THP. Such a temporary reference can
> be taken while dropping page caches (I observed a drop-caches
> operation on the system), but the shmem page caches themselves cannot
> be dropped because they are already dirty at that time.
>
> Especially for a THP split failure caused by a temporary reference
> count, we can try again to mitigate the migration failure in this
> case, according to the previous discussion [1].
>
> [1] https://lore.kernel.org/all/470dc638-a300-f261-94b4-e27250e42f96@redhat.com/
> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
Thanks!
Reviewed-by: "Huang, Ying" <ying.huang@...el.com>
Best Regards,
Huang, Ying
> ---
> Changes from v1:
> - Use another variable to save the return value of THP split.
> ---
> mm/huge_memory.c | 4 ++--
> mm/migrate.c | 19 ++++++++++++++++---
> 2 files changed, 18 insertions(+), 5 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index ad17c8d..a79f03b 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2666,7 +2666,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> * split PMDs
> */
> if (!can_split_folio(folio, &extra_pins)) {
> - ret = -EBUSY;
> + ret = -EAGAIN;
> goto out_unlock;
> }
>
> @@ -2716,7 +2716,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> xas_unlock(&xas);
> local_irq_enable();
> remap_page(folio, folio_nr_pages(folio));
> - ret = -EBUSY;
> + ret = -EAGAIN;
> }
>
> out_unlock:
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 1da0dbc..6d49a3e 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1506,9 +1506,22 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
> if (is_thp) {
> nr_thp_failed++;
> /* THP NUMA faulting doesn't split THP to retry. */
> - if (!nosplit && !try_split_thp(page, &thp_split_pages)) {
> - nr_thp_split++;
> - break;
> + if (!nosplit) {
> + int ret = try_split_thp(page, &thp_split_pages);
> +
> + if (!ret) {
> + nr_thp_split++;
> + break;
> + } else if (reason == MR_LONGTERM_PIN &&
> + ret == -EAGAIN) {
> + /*
> + * Try again to split THP to mitigate
> + * the failure of longterm pinning.
> + */
> + thp_retry++;
> + nr_retry_pages += nr_subpages;
> + break;
> + }
> }
> } else if (!no_subpage_counting) {
> nr_failed++;