Message-ID: <20230314141144.6a0892e6.alex.williamson@redhat.com>
Date:   Tue, 14 Mar 2023 14:11:44 -0600
From:   Alex Williamson <alex.williamson@...hat.com>
To:     Suren Baghdasaryan <surenb@...gle.com>
Cc:     akpm@...ux-foundation.org, michel@...pinasse.org,
        jglisse@...gle.com, mhocko@...e.com, vbabka@...e.cz,
        hannes@...xchg.org, mgorman@...hsingularity.net, dave@...olabs.net,
        willy@...radead.org, liam.howlett@...cle.com, peterz@...radead.org,
        ldufour@...ux.ibm.com, paulmck@...nel.org, mingo@...hat.com,
        will@...nel.org, luto@...nel.org, songliubraving@...com,
        peterx@...hat.com, david@...hat.com, dhowells@...hat.com,
        hughd@...gle.com, bigeasy@...utronix.de, kent.overstreet@...ux.dev,
        punit.agrawal@...edance.com, lstoakes@...il.com,
        peterjung1337@...il.com, rientjes@...gle.com,
        axelrasmussen@...gle.com, joelaf@...gle.com, minchan@...gle.com,
        rppt@...nel.org, jannh@...gle.com, shakeelb@...gle.com,
        tatashin@...gle.com, edumazet@...gle.com, gthelen@...gle.com,
        gurua@...gle.com, arjunroy@...gle.com, soheil@...gle.com,
        leewalsh@...gle.com, posk@...gle.com, linux-mm@...ck.org,
        linux-arm-kernel@...ts.infradead.org,
        linuxppc-dev@...ts.ozlabs.org, x86@...nel.org,
        linux-kernel@...r.kernel.org, kernel-team@...roid.com,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Jason Wang <jasowang@...hat.com>,
        Jason Gunthorpe <jgg@...dia.com>,
        Cédric Le Goater <clg@...hat.com>,
        dimitri.sivanich@....com
Subject: Re: [PATCH v4 0/7] introduce vm_flags modifier functions

On Thu, 26 Jan 2023 11:37:45 -0800
Suren Baghdasaryan <surenb@...gle.com> wrote:

> This patchset was originally published as a part of per-VMA locking [1] and
> was split off after a suggestion that it's viable on its own and to
> facilitate the review process. It is now a prerequisite for the next version
> of the per-VMA lock patchset, which reuses vm_flags modifier functions to
> lock the VMA when vm_flags are being updated.
> 
> VMA vm_flags modifications are usually done under exclusive mmap_lock
> protection because this attribute affects other decisions like VMA merging
> or splitting, and races should be prevented. Introduce vm_flags modifier
> functions to enforce correct locking.
> 
> The patchset applies cleanly over mm-unstable branch of mm tree.
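
For reference, the conversion the series applies to drivers is roughly the
following (my own before/after sketch, not lifted from the patches):

	/* before: open-coded update, no locking enforced */
	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;

	/* after: helper which, as I read the series, asserts that
	 * mmap_lock is held for write before touching vm_flags */
	vm_flags_set(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);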

With this series, vfio-pci developed a bunch of warnings around not
holding the mmap_lock write semaphore while calling
io_remap_pfn_range() from our fault handler, vfio_pci_mmap_fault().
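
The call path that trips the assertion looks roughly like this (heavily
trimmed from drivers/vfio/pci/vfio_pci_core.c, so treat it as a sketch
rather than the exact code):

	static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
	{
		struct vm_area_struct *vma = vmf->vma;
		vm_fault_t ret = VM_FAULT_NOPAGE;
		...
		/*
		 * The fault path holds mmap_lock only for read, but
		 * remap_pfn_range() now goes through the vm_flags
		 * helpers, which expect it to be held for write.
		 */
		if (io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
				       vma->vm_end - vma->vm_start,
				       vma->vm_page_prot)) {
			ret = VM_FAULT_SIGBUS;
			zap_vma_ptes(vma, vma->vm_start,
				     vma->vm_end - vma->vm_start);
		}
		...
		return ret;
	}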

I suspect vdpa has the same issue for their use of remap_pfn_range()
from their fault handler, JasonW, MST, FYI.

It also looks like gru_fault() would have the same issue, Dimitri.

In all cases, we're preemptively setting vm_flags to what
remap_pfn_range_notrack() uses, so I thought we were safe here as I
specifically remember trying to avoid changing vm_flags from the
fault handler.  But apparently that doesn't take into account
track_pfn_remap() where VM_PAT comes into play.
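
Concretely, at mmap time (with mmap_lock held for write) we already do
roughly the following, matching the flags remap_pfn_range_notrack() would
set (sketch from memory, the exact vfio line may differ):

	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;

so the expectation was that the remap done from the fault handler never
needs to modify vm_flags; VM_PAT, which track_pfn_remap() can still set
when the remap covers the whole VMA, is what defeats that.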

The reason for using remap_pfn_range() on fault in vfio-pci is that
we're mapping device MMIO to userspace, where that MMIO can be disabled
and we'd rather zap the mapping when that occurs so that we can sigbus
the user rather than allow the user to trigger potentially fatal bus
errors on the host.

Peter Xu has suggested offline that a non-lazy approach to reinsert the
mappings might be more in line with mm expectations than touching
vm_flags during fault.  What's the right solution here?  Can the fault
handling be salvaged, is proactive remapping the right approach, or is
there something better?  Thanks,

Alex
