Message-ID: <20210312135844.5e97aac7@omen.home.shazbot.org>
Date:   Fri, 12 Mar 2021 13:58:44 -0700
From:   Alex Williamson <alex.williamson@...hat.com>
To:     Jason Gunthorpe <jgg@...dia.com>
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        peterx@...hat.com, prime.zeng@...ilicon.com, cohuck@...hat.com
Subject: Re: [PATCH] vfio/pci: Handle concurrent vma faults

On Fri, 12 Mar 2021 13:09:38 -0700
Alex Williamson <alex.williamson@...hat.com> wrote:

> On Fri, 12 Mar 2021 15:41:47 -0400
> Jason Gunthorpe <jgg@...dia.com> wrote:
> 
> 
> ======================================================
> WARNING: possible circular locking dependency detected
> 5.12.0-rc1+ #18 Not tainted
> ------------------------------------------------------
> CPU 0/KVM/1406 is trying to acquire lock:
> ffffffffa5a58d60 (fs_reclaim){+.+.}-{0:0}, at: fs_reclaim_acquire+0x83/0xd0
> 
> but task is already holding lock:
> ffff94c0f3e8fb08 (&mapping->i_mmap_rwsem){++++}-{3:3}, at: vfio_device_io_remap_mapping_range+0x31/0x120 [vfio]
> 
> which lock already depends on the new lock.
> 
> 
> the existing dependency chain (in reverse order) is:
> 
> -> #1 (&mapping->i_mmap_rwsem){++++}-{3:3}:  
>        down_write+0x3d/0x70
>        dma_resv_lockdep+0x1b0/0x298
>        do_one_initcall+0x5b/0x2d0
>        kernel_init_freeable+0x251/0x298
>        kernel_init+0xa/0x111
>        ret_from_fork+0x22/0x30
> 
> -> #0 (fs_reclaim){+.+.}-{0:0}:  
>        __lock_acquire+0x111f/0x1e10
>        lock_acquire+0xb5/0x380
>        fs_reclaim_acquire+0xa3/0xd0
>        kmem_cache_alloc_trace+0x30/0x2c0
>        memtype_reserve+0xc3/0x280
>        reserve_pfn_range+0x86/0x160
>        track_pfn_remap+0xa6/0xe0
>        remap_pfn_range+0xa8/0x610
>        vfio_device_io_remap_mapping_range+0x93/0x120 [vfio]
>        vfio_pci_test_and_up_write_memory_lock+0x34/0x40 [vfio_pci]
>        vfio_basic_config_write+0x12d/0x230 [vfio_pci]
>        vfio_pci_config_rw+0x1b7/0x3a0 [vfio_pci]
>        vfs_write+0xea/0x390
>        __x64_sys_pwrite64+0x72/0xb0
>        do_syscall_64+0x33/0x40
>        entry_SYSCALL_64_after_hwframe+0x44/0xae
> 
..
> > Does current_gfp_context()/memalloc_nofs_save()/etc solve it?  

Yeah, we can indeed use memalloc_nofs_save/restore().  It seems we're
trying to allocate something for pfnmap tracking, and that allocation
triggers a number of lockdep-specific checks.  Is it valid to wrap
io_remap_pfn_range() in memalloc_nofs_save()/restore() to clear this
flag, or am I just masking a bug?
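
Concretely, I was thinking of something like the below (untested
sketch only; the helper name is made up and the surrounding locking
context from the zap/remap path is elided):

/*
 * Untested sketch: enter a NOFS allocation scope around
 * io_remap_pfn_range() so the memtype allocation done under
 * track_pfn_remap() implicitly drops __GFP_FS, and lockdep's
 * fs_reclaim_acquire() no longer records a reclaim dependency
 * while we hold i_mmap_rwsem.
 */
#include <linux/mm.h>		/* io_remap_pfn_range() */
#include <linux/sched/mm.h>	/* memalloc_nofs_save/restore() */

static int vfio_io_remap_nofs(struct vm_area_struct *vma,
			      unsigned long vaddr, unsigned long pfn,
			      unsigned long size)
{
	unsigned int flags;
	int ret;

	flags = memalloc_nofs_save();
	ret = io_remap_pfn_range(vma, vaddr, pfn, size,
				 vma->vm_page_prot);
	memalloc_nofs_restore(flags);

	return ret;
}

Thanks,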

Alex
