Message-ID: <a8da9ff8-b612-c5e1-54cd-a975f9075dae@nvidia.com>
Date: Sun, 17 Oct 2021 23:47:26 -0700
From: John Hubbard <jhubbard@...dia.com>
To: Alistair Popple <apopple@...dia.com>, linux-mm@...ck.org,
akpm@...ux-foundation.org
Cc: linux-kernel@...r.kernel.org, rcampbell@...dia.com,
jglisse@...hat.com
Subject: Re: [PATCH] mm/rmap.c: Avoid double faults migrating device private
pages
On 10/17/21 21:52, Alistair Popple wrote:
> During migration, special page table entries are installed for each page
> being migrated. These entries store the pfn and associated permissions
> of ptes mapping the page being migarted.
s/migarted/migrated/
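
For anyone reading along: conceptually, installing one of these
migration entries looks roughly like the sketch below. This is
simplified pseudo-kernel code rather than the actual rmap walk
(the pte_write()/set_pte_at() usage is schematic), but the swapops.h
helpers are the real ones:

	swp_entry_t entry;

	/*
	 * Record the pfn plus the original write permission in a
	 * swap-style pte, so both can be restored after migration.
	 */
	if (pte_write(pteval))
		entry = make_writable_migration_entry(page_to_pfn(page));
	else
		entry = make_readable_migration_entry(page_to_pfn(page));
	set_pte_at(mm, address, ptep, swp_entry_to_pte(entry));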
>
> Device-private pages use special swap pte entries to distinguish
> read-only vs. writeable pages, which the migration code checks when
> creating migration entries. Normally this follows a fast path in
> migrate_vma_collect_pmd() which correctly copies the permissions of
> device-private pages over to migration entries when migrating pages back
> to the CPU.
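
That fast path does approximately the following (a trimmed sketch of
the migrate_vma_collect_pmd() device-private branch, from memory, so
treat the details as approximate):

	/*
	 * A device-private pte is a swap-style entry, so the write
	 * permission lives in the entry type, not in pte_write().
	 */
	entry = pte_to_swp_entry(pte);
	page = pfn_swap_entry_to_page(entry);
	mpfn = migrate_pfn(page_to_pfn(page)) | MIGRATE_PFN_MIGRATE;
	if (is_writable_device_private_entry(entry))
		mpfn |= MIGRATE_PFN_WRITE;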
>
> However, the slow path falls back to using try_to_migrate(), which
> unconditionally creates read-only migration entries for device-private
> pages. This leads to unnecessary double faults on the CPU as the new
> pages are always mapped read-only even when they could be mapped
> writeable. Fix this by correctly copying device-private permissions in
> try_to_migrate_one().
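
And the consumer side shows why this matters: when migration
completes, remove_migration_pte() only restores write permission if
the migration entry recorded it, roughly like this sketch (again
approximate, not verbatim):

	/*
	 * Rebuild the pte from the migration entry; mark it writable
	 * only if the entry says the old mapping was writable. A
	 * read-only entry here means a second, write fault later.
	 */
	entry = pte_to_swp_entry(*pvmw.pte);
	pte = mk_pte(new, READ_ONCE(vma->vm_page_prot));
	if (is_writable_migration_entry(entry))
		pte = maybe_mkwrite(pte, vma);

So with the old code's unconditional make_readable_migration_entry(),
a CPU write to the migrated page always took that extra fault.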
>
> Signed-off-by: Alistair Popple <apopple@...dia.com>
> Reported-by: Ralph Campbell <rcampbell@...dia.com>
> ---
> mm/rmap.c | 8 ++++++--
> 1 file changed, 6 insertions(+), 2 deletions(-)
Looks very clearly correct to me.
Reviewed-by: John Hubbard <jhubbard@...dia.com>
thanks,
--
John Hubbard
NVIDIA
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index b9eb5c12f3fe..271de8118cdd 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1804,6 +1804,7 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
> update_hiwater_rss(mm);
>
> if (is_zone_device_page(page)) {
> + unsigned long pfn = page_to_pfn(page);
> swp_entry_t entry;
> pte_t swp_pte;
>
> @@ -1812,8 +1813,11 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
> * pte. do_swap_page() will wait until the migration
> * pte is removed and then restart fault handling.
> */
> - entry = make_readable_migration_entry(
> - page_to_pfn(page));
> + entry = pte_to_swp_entry(pteval);
> + if (is_writable_device_private_entry(entry))
> + entry = make_writable_migration_entry(pfn);
> + else
> + entry = make_readable_migration_entry(pfn);
> swp_pte = swp_entry_to_pte(entry);
>
> /*
>