Message-ID: <db10735e74d5a89aed73ad3268e0be40394efc31.camel@linux.ibm.com>
Date: Tue, 11 Jun 2024 15:23:24 +0200
From: Niklas Schnelle <schnelle@...ux.ibm.com>
To: Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
	Heiko Carstens <hca@...ux.ibm.com>,
	Vasily Gorbik <gor@...ux.ibm.com>,
	Alexander Gordeev <agordeev@...ux.ibm.com>,
	Christian Borntraeger <borntraeger@...ux.ibm.com>,
	Sven Schnelle <svens@...ux.ibm.com>,
	Alex Williamson <alex.williamson@...hat.com>,
	Gerd Bayer <gbayer@...ux.ibm.com>,
	Matthew Rosato <mjrosato@...ux.ibm.com>,
	Jason Gunthorpe <jgg@...pe.ca>
Cc: linux-s390@...r.kernel.org, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, David Hildenbrand <david@...hat.com>
Subject: Re: [PATCH v3 1/3] s390/pci: Fix s390_mmio_read/write syscall page
fault handling
On Tue, 2024-06-11 at 14:08 +0200, Niklas Schnelle wrote:
> On Tue, 2024-06-11 at 13:21 +0200, Niklas Schnelle wrote:
> > On Wed, 2024-05-29 at 13:36 +0200, Niklas Schnelle wrote:
> > > The s390 MMIO syscalls when using the classic PCI instructions do not
> > > cause a page fault when follow_pte() fails due to the page not being
> > > present. Besides being a general deficiency, this breaks vfio-pci's mmap()
> > > handling once VFIO_PCI_MMAP gets enabled, as this lazily maps on first
> > > access. Fix this by following a failed follow_pte() with
> > > fixup_user_fault() and retrying the follow_pte().
> > >
> > > Reviewed-by: Jason Gunthorpe <jgg@...dia.com>
> > > Reviewed-by: Matthew Rosato <mjrosato@...ux.ibm.com>
> > > Signed-off-by: Niklas Schnelle <schnelle@...ux.ibm.com>
> > > ---
> > > arch/s390/pci/pci_mmio.c | 18 +++++++++++++-----
> > > 1 file changed, 13 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/arch/s390/pci/pci_mmio.c b/arch/s390/pci/pci_mmio.c
> > > index 5398729bfe1b..80c21b1a101c 100644
> > > --- a/arch/s390/pci/pci_mmio.c
> > > +++ b/arch/s390/pci/pci_mmio.c
> > > @@ -170,8 +170,12 @@ SYSCALL_DEFINE3(s390_pci_mmio_write, unsigned long, mmio_addr,
> > >  		goto out_unlock_mmap;
> > >  
> > >  	ret = follow_pte(vma, mmio_addr, &ptep, &ptl);
> > > -	if (ret)
> > > -		goto out_unlock_mmap;
> > > +	if (ret) {
> > > +		fixup_user_fault(current->mm, mmio_addr, FAULT_FLAG_WRITE, NULL);
> > > +		ret = follow_pte(vma, mmio_addr, &ptep, &ptl);
> > > +		if (ret)
> > > +			goto out_unlock_mmap;
> > > +	}
> > >  
> > >  	io_addr = (void __iomem *)((pte_pfn(*ptep) << PAGE_SHIFT) |
> > >  				   (mmio_addr & ~PAGE_MASK));
> > > @@ -305,12 +309,16 @@ SYSCALL_DEFINE3(s390_pci_mmio_read, unsigned long, mmio_addr,
> > >  	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
> > >  		goto out_unlock_mmap;
> > >  	ret = -EACCES;
> > > -	if (!(vma->vm_flags & VM_WRITE))
> > > +	if (!(vma->vm_flags & VM_READ))
> > >  		goto out_unlock_mmap;
> > >  
> > >  	ret = follow_pte(vma, mmio_addr, &ptep, &ptl);
> > > -	if (ret)
> > > -		goto out_unlock_mmap;
> > > +	if (ret) {
> > > +		fixup_user_fault(current->mm, mmio_addr, 0, NULL);
> > > +		ret = follow_pte(vma, mmio_addr, &ptep, &ptl);
> > > +		if (ret)
> > > +			goto out_unlock_mmap;
> > > +	}
> > >  
> > >  	io_addr = (void __iomem *)((pte_pfn(*ptep) << PAGE_SHIFT) |
> > >  				   (mmio_addr & ~PAGE_MASK));
> > >
> >
> > Ughh, I think I just stumbled over a problem with this. This is a
> > failing lock-held assertion via __is_vma_write_locked() in
> > remap_pfn_range_notrack(), but I'm not sure yet what exactly causes it:
> >
> > [ 67.338855] ------------[ cut here ]------------
> > [ 67.338865] WARNING: CPU: 15 PID: 2056 at include/linux/rwsem.h:85 remap_pfn_range_notrack+0x596/0x5b0
> > [ 67.338874] Modules linked in: <--- 8< --->
> > [ 67.338931] CPU: 15 PID: 2056 Comm: vfio-test Not tainted 6.10.0-rc1-pci-pfault-00004-g193e3a513cee #5
> > [ 67.338934] Hardware name: IBM 3931 A01 701 (LPAR)
> > [ 67.338935] Krnl PSW : 0704c00180000000 000003e54c9730ea (remap_pfn_range_notrack+0x59a/0x5b0)
> > [ 67.338940] R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
> > [ 67.338944] Krnl GPRS: 0000000000000100 000003655915fb78 000002d80b9a5928 000003ff7fa00000
> > [ 67.338946] 0004008000000000 0000000000004000 0000000000000711 000003ff7fa04000
> > [ 67.338948] 000002d80c533f00 000002d800000100 000002d81bbe6c28 000002d80b9a5928
> > [ 67.338950] 000003ff7fa00000 000002d80c533f00 000003e54c973120 000003655915fab0
> > [ 67.338956] Krnl Code: 000003e54c9730de: a708ffea lhi %r0,-22
> > 000003e54c9730e2: a7f4fff6 brc 15,000003e54c9730ce
> > #000003e54c9730e6: af000000 mc 0,0
> > >000003e54c9730ea: a7f4fd6e brc 15,000003e54c972bc6
> > 000003e54c9730ee: af000000 mc 0,0
> > 000003e54c9730f2: af000000 mc 0,0
> > 000003e54c9730f6: 0707 bcr 0,%r7
> > 000003e54c9730f8: 0707 bcr 0,%r7
> > [ 67.339025] Call Trace:
> > [ 67.339027] [<000003e54c9730ea>] remap_pfn_range_notrack+0x59a/0x5b0
> > [ 67.339032] [<000003e54c973120>] remap_pfn_range+0x20/0x30
> > [ 67.339035] [<000003e4cce5396c>] vfio_pci_mmap_fault+0xec/0x1d0 [vfio_pci_core]
> > [ 67.339043] [<000003e54c977240>] handle_mm_fault+0x6b0/0x25a0
> > [ 67.339046] [<000003e54c966328>] fixup_user_fault+0x138/0x310
> > [ 67.339048] [<000003e54c63a91c>] __s390x_sys_s390_pci_mmio_read+0x28c/0x3a0
> > [ 67.339051] [<000003e54c5e200a>] do_syscall+0xea/0x120
> > [ 67.339055] [<000003e54d5f9954>] __do_syscall+0x94/0x140
> > [ 67.339059] [<000003e54d611020>] system_call+0x70/0xa0
> > [ 67.339063] Last Breaking-Event-Address:
> > [ 67.339065] [<000003e54c972bc2>] remap_pfn_range_notrack+0x72/0x5b0
> > [ 67.339067] ---[ end trace 0000000000000000 ]---
> >
>
> This has me a bit confused so far, as __is_vma_write_locked() checks
> mmap_assert_write_locked(vma->vm_mm), but most other users of
> fixup_user_fault() hold mmap_read_lock() just like this code does, and
> clearly in the non-page-fault case we only need the read lock.
>
And it gets weirder: I could have sworn that I properly tested this on
v1, but when I retested with v1 (tags/sent/vfio_pci_mmap-v1 on my
git.kernel.org/niks, based on v6.9) I don't get the above warning. I
also made sure that it's not caused by my change to "current->mm" for
v2. But I'm also not hitting the checks David moved into follow_pte(),
so I'm not sure yet what's going on here.
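
For reference, here is a minimal sketch of the retry pattern the patch
adds, factored out of the two syscall bodies. The helper name
follow_pte_fixup() is made up for illustration only; it assumes the
caller already holds mmap_read_lock(current->mm) and has looked up @vma
for @addr, as both syscalls do before this point:

static int follow_pte_fixup(struct vm_area_struct *vma, unsigned long addr,
			    unsigned int fault_flags,
			    pte_t **ptep, spinlock_t **ptl)
{
	int ret;

	/* Fast path: a PTE is already present for @addr. */
	ret = follow_pte(vma, addr, ptep, ptl);
	if (!ret)
		return 0;

	/*
	 * No PTE present; fault the page in the same way a regular CPU
	 * access would (fault_flags is FAULT_FLAG_WRITE for the write
	 * syscall, 0 for the read syscall) and retry exactly once. If
	 * the PTE is still not present, return the error to the caller.
	 */
	fixup_user_fault(current->mm, addr, fault_flags, NULL);
	return follow_pte(vma, addr, ptep, ptl);
}

Per the call trace above, the WARN fires when that fixup_user_fault()
call ends up in vfio_pci_mmap_fault() -> remap_pfn_range() with only
mmap_read_lock() held, which appears to be what trips the
__is_vma_write_locked() check in remap_pfn_range_notrack().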