Message-ID: <d786141f-9145-788d-6a10-6fa673dd584c@redhat.com>
Date:   Thu, 25 Jul 2019 11:44:27 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     "Michael S. Tsirkin" <mst@...hat.com>
Cc:     syzbot <syzbot+e58112d71f77113ddb7b@...kaller.appspotmail.com>,
        aarcange@...hat.com, akpm@...ux-foundation.org,
        christian@...uner.io, davem@...emloft.net, ebiederm@...ssion.com,
        elena.reshetova@...el.com, guro@...com, hch@...radead.org,
        james.bottomley@...senpartnership.com, jglisse@...hat.com,
        keescook@...omium.org, ldv@...linux.org,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, linux-parisc@...r.kernel.org,
        luto@...capital.net, mhocko@...e.com, mingo@...nel.org,
        namit@...are.com, peterz@...radead.org,
        syzkaller-bugs@...glegroups.com, viro@...iv.linux.org.uk,
        wad@...omium.org
Subject: Re: WARNING in __mmdrop


On 2019/7/25 2:25 AM, Michael S. Tsirkin wrote:
> On Wed, Jul 24, 2019 at 06:08:05PM +0800, Jason Wang wrote:
>> On 2019/7/24 4:05 PM, Michael S. Tsirkin wrote:
>>> On Wed, Jul 24, 2019 at 10:17:14AM +0800, Jason Wang wrote:
>>>> On 2019/7/23 11:02 PM, Michael S. Tsirkin wrote:
>>>>> On Tue, Jul 23, 2019 at 09:34:29PM +0800, Jason Wang wrote:
>>>>>> On 2019/7/23 6:27 PM, Michael S. Tsirkin wrote:
>>>>>>>> Yes, since there could be multiple concurrent invalidation requests. We need
>>>>>>>> to count them to make sure we don't pin the wrong pages.
>>>>>>>>
>>>>>>>>
>>>>>>>>> I also wonder about ordering. kvm has this:
>>>>>>>>>         /*
>>>>>>>>>          * Used to check for invalidations in progress, of the pfn that is
>>>>>>>>>          * returned by pfn_to_pfn_prot below.
>>>>>>>>>          */
>>>>>>>>>         mmu_seq = kvm->mmu_notifier_seq;
>>>>>>>>>         /*
>>>>>>>>>          * Ensure the read of mmu_notifier_seq isn't reordered with PTE reads in
>>>>>>>>>          * gfn_to_pfn_prot() (which calls get_user_pages()), so that we don't
>>>>>>>>>          * risk the page we get a reference to getting unmapped before we have a
>>>>>>>>>          * chance to grab the mmu_lock without mmu_notifier_retry() noticing.
>>>>>>>>>          *
>>>>>>>>>          * This smp_rmb() pairs with the effective smp_wmb() of the combination
>>>>>>>>>          * of the pte_unmap_unlock() after the PTE is zapped, and the
>>>>>>>>>          * spin_lock() in kvm_mmu_notifier_invalidate_<page|range_end>() before
>>>>>>>>>          * mmu_notifier_seq is incremented.
>>>>>>>>>          */
>>>>>>>>>         smp_rmb();
>>>>>>>>>
>>>>>>>>> does this apply to us? Can't we use a seqlock instead so we do
>>>>>>>>> not need to worry?
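>>>>>>>>>
>>>>>>>>> Something like this maybe (just a sketch, with a hypothetical
>>>>>>>>> seqcount_t invalidate_seq in the vq, bumped by the invalidate
>>>>>>>>> callbacks with write_seqcount_begin()/write_seqcount_end()):
>>>>>>>>>
>>>>>>>>>         unsigned int seq;
>>>>>>>>>
>>>>>>>>> again:
>>>>>>>>>         /* snapshot the (hypothetical) invalidation sequence counter */
>>>>>>>>>         seq = read_seqcount_begin(&vq->invalidate_seq);
>>>>>>>>>         npinned = __get_user_pages_fast(uaddr->uaddr, npages,
>>>>>>>>>                                         uaddr->write, pages);
>>>>>>>>>         /* retry if an invalidation raced with the page walk */
>>>>>>>>>         if (read_seqcount_retry(&vq->invalidate_seq, seq)) {
>>>>>>>>>                 release_pages(pages, npinned);
>>>>>>>>>                 goto again;
>>>>>>>>>         }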
>>>>>>>> I'm not familiar with kvm MMU internals, but we do everything under
>>>>>>>> mmu_lock.
>>>>>>>>
>>>>>>>> Thanks
>>>>>>> I don't think this helps at all.
>>>>>>>
>>>>>>> There's no lock between checking the invalidate counter and
>>>>>>> __get_user_pages_fast() within vhost_map_prefetch(). So it's possible
>>>>>>> that __get_user_pages_fast() reads PTEs speculatively before
>>>>>>> the invalidate counter is read.
>>>>>>>
>>>>>> In vhost_map_prefetch() we do:
>>>>>>
>>>>>>            spin_lock(&vq->mmu_lock);
>>>>>>
>>>>>>            ...
>>>>>>
>>>>>>            err = -EFAULT;
>>>>>>            if (vq->invalidate_count)
>>>>>>                    goto err;
>>>>>>
>>>>>>            ...
>>>>>>
>>>>>>            npinned = __get_user_pages_fast(uaddr->uaddr, npages,
>>>>>>                                            uaddr->write, pages);
>>>>>>
>>>>>>            ...
>>>>>>
>>>>>>            spin_unlock(&vq->mmu_lock);
>>>>>>
>>>>>> Is this not sufficient?
>>>>>>
>>>>>> Thanks
>>>>> So what orders __get_user_pages_fast() with respect to the invalidate_count read?
>>>> So in the invalidate_end() callback we have:
>>>>
>>>>         spin_lock(&vq->mmu_lock);
>>>>         --vq->invalidate_count;
>>>>         spin_unlock(&vq->mmu_lock);
>>>>
>>>>
>>>> So even if the PTE is read speculatively before invalidate_count (which only
>>>> matters when invalidate_count reads as zero), the spinlock guarantees that we
>>>> won't read any stale PTEs.
>>>>
>>>> Thanks
>>> I'm sorry, I just do not get the argument.
>>> If you want to order two reads, you need an smp_rmb()
>>> or stronger between them, executed on the same CPU.
>>>
>>> Executing any kind of barrier on another CPU
>>> will have no ordering effect on the first one.
>>>
>>>
>>> So if CPU1 runs the prefetch and CPU2 runs the invalidate
>>> callback, the read of the invalidate counter on CPU1 can bypass the
>>> read of the PTE on CPU1 unless there's a barrier
>>> in between, and nothing CPU2 does can affect that outcome.
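>>>
>>> (As a generic illustration, not the vhost code, the classic
>>> message-passing pattern shows that the reader-side barrier is the
>>> one that matters:
>>>
>>>     /* CPU2 (writer) */             /* CPU1 (reader) */
>>>     WRITE_ONCE(data, 1);            r1 = READ_ONCE(flag);
>>>     smp_wmb();                      /* smp_rmb() needed here */
>>>     WRITE_ONCE(flag, 1);            r2 = READ_ONCE(data);
>>>
>>> Without the smp_rmb() on CPU1, the read of data may be satisfied
>>> before the read of flag, so the outcome r1 == 1 && r2 == 0 is
>>> allowed no matter which barriers CPU2 executes.)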
>>>
>>>
>>> What did I miss?
>>
>> It does no harm if the PTE is read before invalidate_count, because:
>>
>> 1) This speculation is serialized with invalidate_range_end() by the
>> spinlock.
>>
>> 2) This speculation can only take effect when we read invalidate_count as
>> zero.
>>
>> 3) This means the speculation happens after the last invalidate_range_end(),
>> and because of the spinlock, when we enter its critical section in prefetch
>> we cannot see any stale PTE that was unmapped before (see the sketch below).
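>>
>> A rough picture of that ordering (illustrative only, not the actual
>> code paths):
>>
>>      CPU2: invalidate_range_end()            CPU1: vhost_map_prefetch()
>>        /* PTEs already zapped */
>>        spin_lock(&vq->mmu_lock);
>>        --vq->invalidate_count;
>>        spin_unlock(&vq->mmu_lock);   ---->   spin_lock(&vq->mmu_lock);
>>                                              if (vq->invalidate_count)
>>                                                      goto err;
>>                                              __get_user_pages_fast(...);
>>                                              spin_unlock(&vq->mmu_lock);
>>
>> The unlock on CPU2 is a RELEASE and the lock on CPU1 is an ACQUIRE, so
>> in this interleaving the PTE reads done inside CPU1's critical section
>> cannot observe the mappings as they were before the invalidation.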
>>
>> Am I wrong?
>>
>> Thanks
> OK, I think you are right. Sorry it took me a while to figure it out.


No problem. So do you want me to send a v2 of the fixes (e.g. with the
conversion from synchronize_rcu() to kfree_rcu()), or do you want something
else (e.g. a revert or a config option)?
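
For reference, the kfree_rcu() side of that conversion would look roughly
like this (the struct and field names here are illustrative, not the exact
vhost code):

        struct vhost_map {
                struct rcu_head rcu;            /* illustrative rcu_head member */
                /* ... pages, addr, npages ... */
        };

        /* before: block the caller until all RCU readers are done */
        synchronize_rcu();
        kfree(map);

        /* after: free asynchronously once a grace period has elapsed */
        kfree_rcu(map, rcu);

The point being that kfree_rcu() never blocks the caller waiting for a
grace period.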

Thanks
