Message-ID: <CALzav=dn-Oe1v9qTp=ag92Kn96JOb3AX9JJA4P5VcLksV8-vLw@mail.gmail.com>
Date: Mon, 6 Dec 2021 09:19:52 -0800
From: David Matlack <dmatlack@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: kernel test robot <oliver.sang@...el.com>,
0day robot <lkp@...el.com>,
LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
kvm@...r.kernel.org, Ben Gardon <bgardon@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Jim Mattson <jmattson@...gle.com>,
Wanpeng Li <wanpengli@...cent.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Janis Schoetterl-Glausch <scgl@...ux.vnet.ibm.com>,
Junaid Shahid <junaids@...gle.com>,
Oliver Upton <oupton@...gle.com>,
Harish Barathvajasankar <hbarath@...gle.com>,
Peter Xu <peterx@...hat.com>, Peter Shier <pshier@...gle.com>
Subject: Re: [KVM] d3750a0923: WARNING:possible_circular_locking_dependency_detected
On Sun, Dec 5, 2021 at 10:55 PM Paolo Bonzini <pbonzini@...hat.com> wrote:
>
> On 12/5/21 14:30, kernel test robot wrote:
> >
> > Chain exists of:
> > fs_reclaim --> mmu_notifier_invalidate_range_start --> &(kvm)->mmu_lock
> >
> > Possible unsafe locking scenario:
> >
> >         CPU0                    CPU1
> >         ----                    ----
> >    lock(&(kvm)->mmu_lock);
> >                                 lock(mmu_notifier_invalidate_range_start);
> >                                 lock(&(kvm)->mmu_lock);
> >    lock(fs_reclaim);
> >
>
> David, this is yours; basically, kvm_mmu_topup_memory_cache must be
> called outside the mmu_lock.

Ah, I see. kvm_arch_mmu_enable_log_dirty_pt_masked is called with
mmu_lock already held. I'll make sure to address this in v1. In theory
this should just go away when I switch away from using split_caches to
Sean's suggestion of allocating under the mmu_lock with reclaim
disabled.