Message-ID: <20190129193123.GF3176@redhat.com>
Date:   Tue, 29 Jan 2019 14:31:24 -0500
From:   Jerome Glisse <jglisse@...hat.com>
To:     Dan Williams <dan.j.williams@...el.com>
Cc:     Linux MM <linux-mm@...ck.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Ralph Campbell <rcampbell@...dia.com>,
        John Hubbard <jhubbard@...dia.com>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH 09/10] mm/hmm: allow to mirror vma of a file on a DAX
 backed filesystem

On Tue, Jan 29, 2019 at 10:41:23AM -0800, Dan Williams wrote:
> On Tue, Jan 29, 2019 at 8:54 AM <jglisse@...hat.com> wrote:
> >
> > From: Jérôme Glisse <jglisse@...hat.com>
> >
> > This adds support to mirror a vma which is an mmap of a file on a
> > filesystem that uses a DAX block device. There is no reason not to
> > support that case.
> >
> 
> The reason not to support it would be if it gets in the way of future
> DAX development. How does this interact with MAP_SYNC? I'm also
> concerned that this may complicate DAX reflink support. In general I'd
> rather prioritize fixing the places where DAX is broken today before
> adding more cross-subsystem entanglements. The unit tests for
> filesystems (xfstests) are readily accessible. How would I go about
> regression testing DAX + HMM interactions?

HMM mirrors the CPU page table, so anything you do to the CPU page
table will be reflected to all HMM mirror users. So MAP_SYNC has no
bearing here whatsoever: all HMM mirror users must do cache coherent
access to the range they mirror, so from the DAX point of view this is
just _exactly_ the same as CPU access.
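
To make that concrete, here is a rough sketch (not from any real
driver; range registration, locking and the retry-on-invalidation dance
are all elided, and mirror_map_one_page() is a made up helper) of what
a mirror user does with a range that happens to be DAX backed:

struct hmm_range range = {
        .vma    = vma,
        .start  = start,
        .end    = end,
        .pfns   = pfns,         /* one uint64_t per page in the range */
        .flags  = hmm_flags,    /* driver provided HMM_PFN_* flag values */
        .values = hmm_values,   /* driver provided special pfn values */
};
unsigned long i, npages = (end - start) >> PAGE_SHIFT;
long ret;

/*
 * Snapshot the CPU page table. A DAX page comes back as a plain
 * valid (and possibly writable) pfn, nothing special about it.
 */
ret = hmm_range_snapshot(&range);
if (ret < 0)
        return ret;

for (i = 0; i < npages; i++) {
        if (!(pfns[i] & hmm_flags[HMM_PFN_VALID]))
                continue;
        /* made up helper: program the device page table entry */
        mirror_map_one_page(mirror, start + (i << PAGE_SHIFT), pfns[i]);
}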

Note that you can not migrate DAX memory to GPU memory, and thus for
an mmap of a file on a filesystem that uses a DAX block device you can
not migrate to device memory. Also, at this time migration of file
backed pages is only supported for cache coherent device memory, for
instance on an OpenCAPI platform.

Bottom line, you just have to worry about the CPU page table. Whatever
you do there will be reflected properly. It does not add any burden to
people working on DAX, unless you want to modify the CPU page table
without calling the mmu notifiers, but in that case you would break not
only HMM mirror users but other things like KVM ...
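
For reference, keeping a mirror in sync is just a matter of registering
a struct hmm_mirror and invalidating the device page table from its
callback. Rough sketch only: the exact sync_cpu_device_pagetables()
prototype has changed between kernel versions, and my_device_invalidate()
is a made up helper:

static int my_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
                                         const struct hmm_update *update)
{
        /* tear down device mappings for [update->start, update->end) */
        my_device_invalidate(mirror, update->start, update->end);
        return 0;
}

static const struct hmm_mirror_ops my_mirror_ops = {
        .sync_cpu_device_pagetables = my_sync_cpu_device_pagetables,
};

/* once per process the device mirrors, with a live mm */
my_mirror.ops = &my_mirror_ops;
ret = hmm_mirror_register(&my_mirror, current->mm);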


For testing, the issue is: what do you want to test? Do you want to
test that a device properly mirrors an mmap of a file backed by DAX,
ie that device drivers which use HMM mirror keep working after changes
made to DAX?

Or do you want to run the filesystem test suite using the GPU to
access the mmap of the file (read or write) instead of the CPU? In that
case any such test suite would need to be updated to be able to use
something like OpenCL for it. At this time I do not see much need for
that, but maybe this is something people would like to see.
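
If someone did want that, the xfstests side would look something like
the sketch below (hypothetical helper, assuming an OpenCL 2.x device
with fine grain system SVM so the GPU can dereference the file mmap
directly; error handling and the usual platform/context/kernel setup
are elided):

/* write the file through the GPU instead of the CPU */
fd = open("/mnt/dax/testfile", O_RDWR);
buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

/*
 * With fine grain system SVM the GPU can dereference any host
 * pointer, including this file mmap, no copy involved.
 */
clSetKernelArgSVMPointer(kernel, 0, buf);
clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
clFinish(queue);

/* then verify from the CPU, fsync, unmount/remount, ... as usual */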

Cheers,
Jérôme


> 
> > Note that unlike the GUP code we do not take a page reference, hence
> > when we back off we have nothing to undo.
> >
> > Signed-off-by: Jérôme Glisse <jglisse@...hat.com>
> > Cc: Andrew Morton <akpm@...ux-foundation.org>
> > Cc: Dan Williams <dan.j.williams@...el.com>
> > Cc: Ralph Campbell <rcampbell@...dia.com>
> > Cc: John Hubbard <jhubbard@...dia.com>
> > ---
> >  mm/hmm.c | 133 ++++++++++++++++++++++++++++++++++++++++++++++---------
> >  1 file changed, 112 insertions(+), 21 deletions(-)
> >
> > diff --git a/mm/hmm.c b/mm/hmm.c
> > index 8b87e1813313..1a444885404e 100644
> > --- a/mm/hmm.c
> > +++ b/mm/hmm.c
> > @@ -334,6 +334,7 @@ EXPORT_SYMBOL(hmm_mirror_unregister);
> >
> >  struct hmm_vma_walk {
> >         struct hmm_range        *range;
> > +       struct dev_pagemap      *pgmap;
> >         unsigned long           last;
> >         bool                    fault;
> >         bool                    block;
> > @@ -508,6 +509,15 @@ static inline uint64_t pmd_to_hmm_pfn_flags(struct hmm_range *range, pmd_t pmd)
> >                                 range->flags[HMM_PFN_VALID];
> >  }
> >
> > +static inline uint64_t pud_to_hmm_pfn_flags(struct hmm_range *range, pud_t pud)
> > +{
> > +       if (!pud_present(pud))
> > +               return 0;
> > +       return pud_write(pud) ? range->flags[HMM_PFN_VALID] |
> > +                               range->flags[HMM_PFN_WRITE] :
> > +                               range->flags[HMM_PFN_VALID];
> > +}
> > +
> >  static int hmm_vma_handle_pmd(struct mm_walk *walk,
> >                               unsigned long addr,
> >                               unsigned long end,
> > @@ -529,8 +539,19 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk,
> >                 return hmm_vma_walk_hole_(addr, end, fault, write_fault, walk);
> >
> >         pfn = pmd_pfn(pmd) + pte_index(addr);
> > -       for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++)
> > +       for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) {
> > +               if (pmd_devmap(pmd)) {
> > +                       hmm_vma_walk->pgmap = get_dev_pagemap(pfn,
> > +                                             hmm_vma_walk->pgmap);
> > +                       if (unlikely(!hmm_vma_walk->pgmap))
> > +                               return -EBUSY;
> > +               }
> >                 pfns[i] = hmm_pfn_from_pfn(range, pfn) | cpu_flags;
> > +       }
> > +       if (hmm_vma_walk->pgmap) {
> > +               put_dev_pagemap(hmm_vma_walk->pgmap);
> > +               hmm_vma_walk->pgmap = NULL;
> > +       }
> >         hmm_vma_walk->last = end;
> >         return 0;
> >  }
> > @@ -617,10 +638,24 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
> >         if (fault || write_fault)
> >                 goto fault;
> >
> > +       if (pte_devmap(pte)) {
> > +               hmm_vma_walk->pgmap = get_dev_pagemap(pte_pfn(pte),
> > +                                             hmm_vma_walk->pgmap);
> > +               if (unlikely(!hmm_vma_walk->pgmap))
> > +                       return -EBUSY;
> > +       } else if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) && pte_special(pte)) {
> > +               *pfn = range->values[HMM_PFN_SPECIAL];
> > +               return -EFAULT;
> > +       }
> > +
> >         *pfn = hmm_pfn_from_pfn(range, pte_pfn(pte)) | cpu_flags;
> >         return 0;
> >
> >  fault:
> > +       if (hmm_vma_walk->pgmap) {
> > +               put_dev_pagemap(hmm_vma_walk->pgmap);
> > +               hmm_vma_walk->pgmap = NULL;
> > +       }
> >         pte_unmap(ptep);
> >         /* Fault any virtual address we were asked to fault */
> >         return hmm_vma_walk_hole_(addr, end, fault, write_fault, walk);
> > @@ -708,12 +743,84 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
> >                         return r;
> >                 }
> >         }
> > +       if (hmm_vma_walk->pgmap) {
> > +               put_dev_pagemap(hmm_vma_walk->pgmap);
> > +               hmm_vma_walk->pgmap = NULL;
> > +       }
> >         pte_unmap(ptep - 1);
> >
> >         hmm_vma_walk->last = addr;
> >         return 0;
> >  }
> >
> > +static int hmm_vma_walk_pud(pud_t *pudp,
> > +                           unsigned long start,
> > +                           unsigned long end,
> > +                           struct mm_walk *walk)
> > +{
> > +       struct hmm_vma_walk *hmm_vma_walk = walk->private;
> > +       struct hmm_range *range = hmm_vma_walk->range;
> > +       struct vm_area_struct *vma = walk->vma;
> > +       unsigned long addr = start, next;
> > +       pmd_t *pmdp;
> > +       pud_t pud;
> > +       int ret;
> > +
> > +again:
> > +       pud = READ_ONCE(*pudp);
> > +       if (pud_none(pud))
> > +               return hmm_vma_walk_hole(start, end, walk);
> > +
> > +       if (pud_huge(pud) && pud_devmap(pud)) {
> > +               unsigned long i, npages, pfn;
> > +               uint64_t *pfns, cpu_flags;
> > +               bool fault, write_fault;
> > +
> > +               if (!pud_present(pud))
> > +                       return hmm_vma_walk_hole(start, end, walk);
> > +
> > +               i = (addr - range->start) >> PAGE_SHIFT;
> > +               npages = (end - addr) >> PAGE_SHIFT;
> > +               pfns = &range->pfns[i];
> > +
> > +               cpu_flags = pud_to_hmm_pfn_flags(range, pud);
> > +               hmm_range_need_fault(hmm_vma_walk, pfns, npages,
> > +                                    cpu_flags, &fault, &write_fault);
> > +               if (fault || write_fault)
> > +                       return hmm_vma_walk_hole_(addr, end, fault,
> > +                                               write_fault, walk);
> > +
> > +               pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
> > +               for (i = 0; i < npages; ++i, ++pfn) {
> > +                       hmm_vma_walk->pgmap = get_dev_pagemap(pfn,
> > +                                             hmm_vma_walk->pgmap);
> > +                       if (unlikely(!hmm_vma_walk->pgmap))
> > +                               return -EBUSY;
> > +                       pfns[i] = hmm_pfn_from_pfn(range, pfn) | cpu_flags;
> > +               }
> > +               if (hmm_vma_walk->pgmap) {
> > +                       put_dev_pagemap(hmm_vma_walk->pgmap);
> > +                       hmm_vma_walk->pgmap = NULL;
> > +               }
> > +               hmm_vma_walk->last = end;
> > +               return 0;
> > +       }
> > +
> > +       split_huge_pud(vma, pudp, addr);
> > +       if (pud_none(*pudp))
> > +               goto again;
> > +
> > +       pmdp = pmd_offset(pudp, addr);
> > +       do {
> > +               next = pmd_addr_end(addr, end);
> > +               ret = hmm_vma_walk_pmd(pmdp, addr, next, walk);
> > +               if (ret)
> > +                       return ret;
> > +       } while (pmdp++, addr = next, addr != end);
> > +
> > +       return 0;
> > +}
> > +
> >  static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
> >                                       unsigned long start, unsigned long end,
> >                                       struct mm_walk *walk)
> > @@ -786,14 +893,6 @@ static void hmm_pfns_clear(struct hmm_range *range,
> >                 *pfns = range->values[HMM_PFN_NONE];
> >  }
> >
> > -static void hmm_pfns_special(struct hmm_range *range)
> > -{
> > -       unsigned long addr = range->start, i = 0;
> > -
> > -       for (; addr < range->end; addr += PAGE_SIZE, i++)
> > -               range->pfns[i] = range->values[HMM_PFN_SPECIAL];
> > -}
> > -
> >  /*
> >   * hmm_range_register() - start tracking change to CPU page table over a range
> >   * @range: range
> > @@ -911,12 +1010,6 @@ long hmm_range_snapshot(struct hmm_range *range)
> >                 if (vma == NULL || (vma->vm_flags & device_vma))
> >                         return -EFAULT;
> >
> > -               /* FIXME support dax */
> > -               if (vma_is_dax(vma)) {
> > -                       hmm_pfns_special(range);
> > -                       return -EINVAL;
> > -               }
> > -
> >                 if (is_vm_hugetlb_page(vma)) {
> >                         struct hstate *h = hstate_vma(vma);
> >
> > @@ -940,6 +1033,7 @@ long hmm_range_snapshot(struct hmm_range *range)
> >                 }
> >
> >                 range->vma = vma;
> > +               hmm_vma_walk.pgmap = NULL;
> >                 hmm_vma_walk.last = start;
> >                 hmm_vma_walk.fault = false;
> >                 hmm_vma_walk.range = range;
> > @@ -951,6 +1045,7 @@ long hmm_range_snapshot(struct hmm_range *range)
> >                 mm_walk.pte_entry = NULL;
> >                 mm_walk.test_walk = NULL;
> >                 mm_walk.hugetlb_entry = NULL;
> > +               mm_walk.pud_entry = hmm_vma_walk_pud;
> >                 mm_walk.pmd_entry = hmm_vma_walk_pmd;
> >                 mm_walk.pte_hole = hmm_vma_walk_hole;
> >                 mm_walk.hugetlb_entry = hmm_vma_walk_hugetlb_entry;
> > @@ -1018,12 +1113,6 @@ long hmm_range_fault(struct hmm_range *range, bool block)
> >                 if (vma == NULL || (vma->vm_flags & device_vma))
> >                         return -EFAULT;
> >
> > -               /* FIXME support dax */
> > -               if (vma_is_dax(vma)) {
> > -                       hmm_pfns_special(range);
> > -                       return -EINVAL;
> > -               }
> > -
> >                 if (is_vm_hugetlb_page(vma)) {
> >                         struct hstate *h = hstate_vma(vma);
> >
> > @@ -1047,6 +1136,7 @@ long hmm_range_fault(struct hmm_range *range, bool block)
> >                 }
> >
> >                 range->vma = vma;
> > +               hmm_vma_walk.pgmap = NULL;
> >                 hmm_vma_walk.last = start;
> >                 hmm_vma_walk.fault = true;
> >                 hmm_vma_walk.block = block;
> > @@ -1059,6 +1149,7 @@ long hmm_range_fault(struct hmm_range *range, bool block)
> >                 mm_walk.pte_entry = NULL;
> >                 mm_walk.test_walk = NULL;
> >                 mm_walk.hugetlb_entry = NULL;
> > +               mm_walk.pud_entry = hmm_vma_walk_pud;
> >                 mm_walk.pmd_entry = hmm_vma_walk_pmd;
> >                 mm_walk.pte_hole = hmm_vma_walk_hole;
> >                 mm_walk.hugetlb_entry = hmm_vma_walk_hugetlb_entry;
> > --
> > 2.17.2
> >
