Message-ID: <632cd9a2a023_3496294da@dwillia2-xfh.jf.intel.com.notmuch>
Date:   Thu, 22 Sep 2022 14:54:42 -0700
From:   Dan Williams <dan.j.williams@...el.com>
To:     Jason Gunthorpe <jgg@...dia.com>,
        Dan Williams <dan.j.williams@...el.com>
CC:     <akpm@...ux-foundation.org>, Matthew Wilcox <willy@...radead.org>,
        "Jan Kara" <jack@...e.cz>, "Darrick J. Wong" <djwong@...nel.org>,
        Christoph Hellwig <hch@....de>,
        John Hubbard <jhubbard@...dia.com>,
        <linux-fsdevel@...r.kernel.org>, <nvdimm@...ts.linux.dev>,
        <linux-xfs@...r.kernel.org>, <linux-mm@...ck.org>,
        <linux-ext4@...r.kernel.org>
Subject: Re: [PATCH v2 10/18] fsdax: Manage pgmap references at entry
 insertion and deletion

Jason Gunthorpe wrote:
> On Wed, Sep 21, 2022 at 07:17:40PM -0700, Dan Williams wrote:
> > Jason Gunthorpe wrote:
> > > On Wed, Sep 21, 2022 at 05:14:34PM -0700, Dan Williams wrote:
> > > 
> > > > > Indeed, you could reasonably put such a liveness test at the moment
> > > > > every driver takes a 0 refcount struct page and turns it into a 1
> > > > > refcount struct page.
> > > > 
> > > > I could do it with a flag, but the reason to have pgmap->ref managed at
> > > > the page->_refcount 0 -> 1 and 1 -> 0 transitions is so that, at the end
> > > > of time, memunmap_pages() can look at the one counter rather than scanning
> > > > and rescanning all the pages to see when they go to final idle.
> > > 
> > > That makes some sense too, but the logical way to do that is to put some
> > > counter along the page_free() path, and establish a 'make a page not
> > > free' path that does the other side.
> > > 
> > > ie it should not be in DAX code, it should be all in common pgmap
> > > code. The pgmap should never be freed while any page->refcount != 0
> > > and that should be an intrinsic property of pgmap, not relying on
> > > external parties.
> > 
> > I just do not know where to put such intrinsics since there is nothing
> > today that requires going through the pgmap object to discover the pfn
> > and 'allocate' the page.
> 
> I think that is just a new API that wraps setting the refcount to 1, the
> percpu refcount, and maybe building appropriate compound pages too.
> 
> Eg maybe something like:
> 
>   struct folio *pgmap_alloc_folios(pgmap, start, length)
> 
> And you get back maximally sized allocated folios with refcount = 1
> that span the requested range.
> 
> > In other words, make dax_direct_access() the 'allocation' event that pins
> > the pgmap? I might be speaking a foreign language if you're not familiar
> > with the relationship of 'struct dax_device' to 'struct dev_pagemap'
> > instances. This is not the first time I have considered making them one
> > and the same.
> 
> I don't know enough about dax, so yes very foreign :)
> 
> I'm thinking broadly about how to make pgmap usable to all the other
> drivers in a safe and robust way that makes some kind of logical sense.
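
As a strawman reading of that proposal: nothing below exists in the kernel
today (pgmap_alloc_folio() and the open-coded cmpxchg on page->_refcount are
assumptions), and it is simplified to a single order-0 page rather than
maximally sized folios over a range, but the idea is to pin the pgmap on
every 0 -> 1 page->_refcount transition so memunmap_pages() only has to
wait on pgmap->ref:

struct folio *pgmap_alloc_folio(struct dev_pagemap *pgmap, unsigned long pfn)
{
        struct page *page = pfn_to_page(pfn);

        /* a dying pgmap must not hand out new pages */
        if (!percpu_ref_tryget_live(&pgmap->ref))
                return NULL;

        /* the 0 -> 1 refcount transition is the 'allocation' event */
        if (atomic_cmpxchg(&page->_refcount, 0, 1) != 0) {
                /* page already live, drop the pgmap pin */
                percpu_ref_put(&pgmap->ref);
                return NULL;
        }

        return page_folio(page);
}

The matching percpu_ref_put() would then sit in the free_zone_device_page() /
->page_free() path at the 1 -> 0 transition.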

I think the API should be pgmap_get_folio() because, at least for DAX,
the memory is already allocated. The 'allocator' for fsdax is the
filesystem block allocator, and pgmap_get_folio() grants access to a
folio in the pgmap by a pfn that the block allocator knows about. If the
GPU use case wants to wrap an allocator around that, it can, but the
fundamental requirement is to check whether the pgmap is dead and, if
not, elevate the page reference.

So something like:

/**
 * pgmap_get_folio() - reference a folio in a live @pgmap by @pfn
 * @pgmap: live pgmap instance, caller ensures this does not race @pgmap death
 * @pfn: page frame number covered by @pgmap
 */
struct folio *pgmap_get_folio(struct dev_pagemap *pgmap, unsigned long pfn)
{
        struct folio *folio;

        /* @pfn must fall in a range that @pgmap was registered for */
        VM_WARN_ONCE(pgmap != xa_load(&pgmap_array, pfn),
                     "pfn not covered by @pgmap\n");

        if (WARN_ONCE(percpu_ref_is_dying(&pgmap->ref),
                      "%s: pgmap is dying\n", __func__))
                return NULL;

        folio = page_folio(pfn_to_page(pfn));
        /* elevate; per the above this may be the 0 -> 1 'allocation' */
        folio_ref_inc(folio);
        return folio;
}
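
A caller, e.g. at fsdax entry insertion, would then do something like the
below (illustrative flow only, not code from this patch set):

        struct folio *folio = pgmap_get_folio(pgmap, pfn);

        if (!folio)
                return VM_FAULT_SIGBUS;
        /* ... install the entry; entry deletion does the folio_put() */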

This does not create compound folios; that needs to be coordinated with
the caller and likely needs an explicit

    pgmap_construct_folio(pgmap, pfn, order)

...call that can be done while holding locks against operations that
will cause the folio to be broken down.
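
For illustration, such a constructor might reduce to something like the
below. pgmap_construct_folio() itself is hypothetical; prep_compound_page()
is the existing mm-internal helper that links the tail pages and records
the order:

struct folio *pgmap_construct_folio(struct dev_pagemap *pgmap,
                                    unsigned long pfn, unsigned int order)
{
        struct page *page = pfn_to_page(pfn);

        if (WARN_ONCE(percpu_ref_is_dying(&pgmap->ref),
                      "%s: pgmap is dying\n", __func__))
                return NULL;

        /* caller holds the locks that prevent concurrent break-down */
        prep_compound_page(page, order);
        return page_folio(page);
}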
