Message-ID: <e0c91b89-d1e8-9831-00fe-23fe92d79fa2@redhat.com>
Date: Wed, 24 Jul 2019 10:17:14 +0800
From: Jason Wang <jasowang@...hat.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: syzbot <syzbot+e58112d71f77113ddb7b@...kaller.appspotmail.com>,
aarcange@...hat.com, akpm@...ux-foundation.org,
christian@...uner.io, davem@...emloft.net, ebiederm@...ssion.com,
elena.reshetova@...el.com, guro@...com, hch@...radead.org,
james.bottomley@...senpartnership.com, jglisse@...hat.com,
keescook@...omium.org, ldv@...linux.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-parisc@...r.kernel.org,
luto@...capital.net, mhocko@...e.com, mingo@...nel.org,
namit@...are.com, peterz@...radead.org,
syzkaller-bugs@...glegroups.com, viro@...iv.linux.org.uk,
wad@...omium.org
Subject: Re: WARNING in __mmdrop
On 2019/7/23 11:02 PM, Michael S. Tsirkin wrote:
> On Tue, Jul 23, 2019 at 09:34:29PM +0800, Jason Wang wrote:
>> On 2019/7/23 6:27 PM, Michael S. Tsirkin wrote:
>>>> Yes, since there could be multiple concurrent invalidation requests, we
>>>> need to count them to make sure we don't pin the wrong pages.
>>>>
>>>>
>>>>> I also wonder about ordering. kvm has this:
>>>>>     /*
>>>>>      * Used to check for invalidations in progress, of the pfn that is
>>>>>      * returned by pfn_to_pfn_prot below.
>>>>>      */
>>>>>     mmu_seq = kvm->mmu_notifier_seq;
>>>>>     /*
>>>>>      * Ensure the read of mmu_notifier_seq isn't reordered with PTE reads in
>>>>>      * gfn_to_pfn_prot() (which calls get_user_pages()), so that we don't
>>>>>      * risk the page we get a reference to getting unmapped before we have a
>>>>>      * chance to grab the mmu_lock without mmu_notifier_retry() noticing.
>>>>>      *
>>>>>      * This smp_rmb() pairs with the effective smp_wmb() of the combination
>>>>>      * of the pte_unmap_unlock() after the PTE is zapped, and the
>>>>>      * spin_lock() in kvm_mmu_notifier_invalidate_<page|range_end>() before
>>>>>      * mmu_notifier_seq is incremented.
>>>>>      */
>>>>>     smp_rmb();
>>>>>
>>>>> does this apply to us? Can't we use a seqlock instead so we do
>>>>> not need to worry?
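For reference, the KVM-style retry that a seqlock/seqcount would buy us
might look roughly like this in vhost terms. This is a minimal sketch only,
with hypothetical names (map_seq, the retry label); it is not code from
either tree:

    /* Invalidate side: bump the sequence under the lock so readers retry. */
    spin_lock(&vq->mmu_lock);
    vq->map_seq++;
    spin_unlock(&vq->mmu_lock);

    /* Prefetch side: sample the sequence, pin, then recheck under the lock. */
    retry:
    seq = READ_ONCE(vq->map_seq);
    /* Like KVM's smp_rmb(): keep gup's PTE reads after the sample above. */
    smp_rmb();
    npinned = __get_user_pages_fast(uaddr->uaddr, npages,
                                    uaddr->write, pages);
    spin_lock(&vq->mmu_lock);
    if (seq != vq->map_seq) {
            /* An invalidation ran while we were pinning: unpin and retry. */
            spin_unlock(&vq->mmu_lock);
            if (npinned > 0)
                    release_pages(pages, npinned);
            goto retry;
    }
    /* ... publish the mapping while still holding the lock ... */
    spin_unlock(&vq->mmu_lock);
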
>>>> I'm not familiar with kvm MMU internals, but we do everything under the
>>>> mmu_lock.
>>>>
>>>> Thanks
>>> I don't think this helps at all.
>>>
>>> There's no lock between checking the invalidate counter and
>>> get user pages fast within vhost_map_prefetch. So it's possible
>>> that get user pages fast reads PTEs speculatively before
>>> invalidate_count is read.
>>>
>>> --
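To spell out the feared interleaving (a sketch of the problematic ordering,
not code from the driver):

    /*
     *  CPU0 (vhost_map_prefetch)          CPU1 (mmu notifier)
     *  -------------------------          -------------------
     *  PTE reads issued speculatively
     *                                     invalidate_count++
     *                                     zap the PTEs
     *                                     invalidate_count--
     *  reads invalidate_count == 0
     *  pins pages from the stale PTEs
     */
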
>>
>> In vhost_map_prefetch() we do:
>>
>>         spin_lock(&vq->mmu_lock);
>>
>>         ...
>>
>>         err = -EFAULT;
>>         if (vq->invalidate_count)
>>                 goto err;
>>
>>         ...
>>
>>         npinned = __get_user_pages_fast(uaddr->uaddr, npages,
>>                                         uaddr->write, pages);
>>
>>         ...
>>
>>         spin_unlock(&vq->mmu_lock);
>>
>> Is this not sufficient?
>>
>> Thanks
> So what orders __get_user_pages_fast wrt invalidate_count read?
So in the invalidate_end() callback we have:
        spin_lock(&vq->mmu_lock);
        --vq->invalidate_count;
        spin_unlock(&vq->mmu_lock);
So even if the PTE is read speculatively before invalidate_count is read
(which only matters when invalidate_count is zero), the spinlock
guarantees that we won't read any stale PTEs.
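
Putting the two halves together, the claim rests on the acquire/release
semantics of taking the same lock on both sides. A minimal sketch of the
pairing; the invalidate_start side is assumed here as the counterpart of
the _end callback above, since it is not quoted in this thread:

    /* invalidate_range_start (assumed): mark an invalidation in flight. */
    spin_lock(&vq->mmu_lock);
    ++vq->invalidate_count;
    spin_unlock(&vq->mmu_lock);

    /* ... the PTEs are zapped between _start and _end ... */

    /* invalidate_range_end, as shown above: */
    spin_lock(&vq->mmu_lock);
    --vq->invalidate_count;
    spin_unlock(&vq->mmu_lock);         /* release */

    /* vhost_map_prefetch(), as quoted earlier: */
    spin_lock(&vq->mmu_lock);           /* acquire: the PTE reads below
                                         * cannot be hoisted above this */
    if (vq->invalidate_count)           /* non-zero: invalidation in flight */
            goto err;
    npinned = __get_user_pages_fast(uaddr->uaddr, npages,
                                    uaddr->write, pages);
    spin_unlock(&vq->mmu_lock);

If prefetch observes invalidate_count == 0 under the lock, the invalidation
has either not started yet or has fully finished; in the latter case the
unlock in invalidate_end() and the lock in prefetch form a release/acquire
pair, so gup cannot observe the pre-zap PTEs.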
Thanks
>