Message-ID: <1c87dc74-335e-c9e2-2ae8-1ec7e0cb44c4@oracle.com>
Date: Wed, 24 Mar 2021 19:00:58 +0000
From: Joao Martins <joao.m.martins@...cle.com>
To: Dan Williams <dan.j.williams@...el.com>
Cc: Jason Gunthorpe <jgg@...pe.ca>, Christoph Hellwig <hch@....de>,
Shiyang Ruan <ruansy.fnst@...itsu.com>,
Vishal Verma <vishal.l.verma@...el.com>,
Dave Jiang <dave.jiang@...el.com>,
Ira Weiny <ira.weiny@...el.com>,
Matthew Wilcox <willy@...radead.org>, Jan Kara <jack@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
david <david@...morbit.com>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux MM <linux-mm@...ck.org>,
linux-nvdimm <linux-nvdimm@...ts.01.org>
Subject: Re: [PATCH 3/3] mm/devmap: Remove pgmap accounting in the
get_user_pages_fast() path
On 3/24/21 5:45 PM, Dan Williams wrote:
> On Thu, Mar 18, 2021 at 3:02 AM Joao Martins <joao.m.martins@...cle.com> wrote:
>> On 3/18/21 4:08 AM, Dan Williams wrote:
>>> Now that device-dax and filesystem-dax are guaranteed to unmap all user
>>> mappings of devmap / DAX pages before tearing down the 'struct page'
>>> array, get_user_pages_fast() can rely on its traditional synchronization
>>> method "validate_pte(); get_page(); revalidate_pte()" to catch races with
>>> device shutdown. Specifically the unmap guarantee ensures that gup-fast
>>> either succeeds in taking a page reference (lock-less), or it detects a
>>> need to fall back to the slow path where the device presence can be
>>> revalidated with locks held.
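(To spell out the "validate; grab; revalidate" pattern the changelog is
relying on, here is a purely illustrative sketch -- the helper names mirror
mm/gup.c but the structure is simplified and this is not the exact upstream
code:)

	static int gup_fast_pte_sketch(pte_t *ptep, unsigned int flags,
				       struct page **pages, int *nr)
	{
		pte_t pte = READ_ONCE(*ptep);	/* validate_pte(): snapshot */
		struct page *page;

		if (!pte_present(pte))
			return 0;		/* punt to the slow path */

		page = pte_page(pte);
		if (unlikely(!try_grab_page(page, flags)))	/* get_page() */
			return 0;

		/* revalidate_pte(): did the mapping change under us? */
		if (unlikely(pte_val(pte) != pte_val(READ_ONCE(*ptep)))) {
			/* e.g. racing device shutdown unmapped the pte */
			if (flags & FOLL_PIN)
				unpin_user_page(page);
			else
				put_page(page);
			return 0;		/* gup-fast falls back */
		}

		pages[(*nr)++] = page;
		return 1;
	}
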
>>
>> [...]
>>
>>> @@ -2087,21 +2078,26 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>>> #endif /* CONFIG_ARCH_HAS_PTE_SPECIAL */
>>>
>>> #if defined(CONFIG_ARCH_HAS_PTE_DEVMAP) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
>>> +
>>> static int __gup_device_huge(unsigned long pfn, unsigned long addr,
>>> unsigned long end, unsigned int flags,
>>> struct page **pages, int *nr)
>>> {
>>> int nr_start = *nr;
>>> - struct dev_pagemap *pgmap = NULL;
>>>
>>> do {
>>> - struct page *page = pfn_to_page(pfn);
>>> + struct page *page;
>>> +
>>> + /*
>>> + * Typically pfn_to_page() on a devmap pfn is not safe
>>> + * without holding a live reference on the hosting
>>> + * pgmap. In the gup-fast path it is safe because any
>>> + * races will be resolved by either gup-fast taking a
>>> + * reference or the shutdown path unmapping the pte to
>>> + * trigger gup-fast to fall back to the slow path.
>>> + */
>>> + page = pfn_to_page(pfn);
>>>
>>> - pgmap = get_dev_pagemap(pfn, pgmap);
>>> - if (unlikely(!pgmap)) {
>>> - undo_dev_pagemap(nr, nr_start, flags, pages);
>>> - return 0;
>>> - }
>>> SetPageReferenced(page);
>>> pages[*nr] = page;
>>> if (unlikely(!try_grab_page(page, flags))) {
>>
>> So, to allow FOLL_LONGTERM[0], would it be OK to look at page->pgmap after
>> try_grab_page() and check the pgmap type to tell whether this is a device-dax
>> longterm pin?
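
(Spelling my question out a bit: the check would sit in __gup_device_huge()'s
loop and look roughly like the sketch below. Purely illustrative; it assumes
the device-dax pgmap type MEMORY_DEVICE_GENERIC and relies on the reference
taken by try_grab_page() keeping page->pgmap stable:)

		SetPageReferenced(page);
		pages[*nr] = page;
		if (unlikely(!try_grab_page(page, flags))) {
			undo_dev_pagemap(nr, nr_start, flags, pages);
			return 0;
		}
		(*nr)++;
		/* hypothetical FOLL_LONGTERM filter on the pgmap type */
		if (unlikely((flags & FOLL_LONGTERM) &&
			     page->pgmap->type != MEMORY_DEVICE_GENERIC)) {
			undo_dev_pagemap(nr, nr_start, flags, pages);
			return 0;
		}
		pfn++;
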
>
> So, there is an effort to add a new pte bit p{m,u}d_special to disable
> gup-fast for huge pages [1]. I'd like to investigate whether we could
> use devmap + special as an encoding for "no longterm" and never
> consult the pgmap in the gup-fast path.
>
> [1]: https://lore.kernel.org/linux-mm/a1fa7fa2-914b-366d-9902-e5b784e8428c@shipmail.org/
>
Oh, nice! That would be ideal indeed, as we would skip the pgmap lookup entirely.
I suppose device-dax would use a pfn_t of just PFN_MAP, while the fs-dax memory device
would set PFN_MAP | PFN_DEV (provided vmf_insert_pfn_{pmd,pud} calls p{m,u}d_mkspecial
for PFN_DEV entries); a rough sketch of the gup-fast side is at the end of this mail.
I haven't been following that thread, but for PMD/PUD special in vmf_* these might be useful:
https://lore.kernel.org/linux-mm/20200110190313.17144-2-joao.m.martins@oracle.com/
https://lore.kernel.org/linux-mm/20200110190313.17144-4-joao.m.martins@oracle.com/
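
To make the encoding concrete, the gup_huge_pmd() side could then end up
looking roughly like the sketch below ('orig' being gup_huge_pmd()'s pmd
argument). Purely hypothetical, since pmd_special() is only the helper
proposed in that thread and does not exist upstream:

	if (pmd_devmap(orig)) {
		/*
		 * Only fs-dax entries would carry the proposed special bit,
		 * so device-dax longterm pins can proceed without consulting
		 * the pgmap, while fs-dax still falls back to the slow path.
		 */
		if (unlikely((flags & FOLL_LONGTERM) && pmd_special(orig)))
			return 0;
		return __gup_device_huge_pmd(orig, pmdp, addr, end, flags,
					     pages, nr);
	}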