Message-ID: <20211020095208.5e34679a.pasic@linux.ibm.com>
Date:   Wed, 20 Oct 2021 09:52:08 +0200
From:   Halil Pasic <pasic@...ux.ibm.com>
To:     Christian Borntraeger <borntraeger@...ibm.com>
Cc:     Janosch Frank <frankja@...ux.ibm.com>,
        Michael Mueller <mimu@...ux.ibm.com>,
        linux-s390@...r.kernel.org, linux-kernel@...r.kernel.org,
        David Hildenbrand <david@...hat.com>,
        Claudio Imbrenda <imbrenda@...ux.ibm.com>,
        Heiko Carstens <hca@...ux.ibm.com>,
        Vasily Gorbik <gor@...ux.ibm.com>,
        Alexander Gordeev <agordeev@...ux.ibm.com>,
        Pierre Morel <pmorel@...ux.ibm.com>,
        Tony Krowiak <akrowiak@...ux.ibm.com>,
        Matthew Rosato <mjrosato@...ux.ibm.com>,
        Niklas Schnelle <schnelle@...ux.ibm.com>, farman@...ux.ibm.com,
        kvm@...r.kernel.org, Halil Pasic <pasic@...ux.ibm.com>
Subject: Re: [PATCH 3/3] KVM: s390: clear kicked_mask if not idle after set

On Tue, 19 Oct 2021 23:35:25 +0200
Christian Borntraeger <borntraeger@...ibm.com> wrote:

> > @@ -426,6 +426,7 @@ static void __unset_cpu_idle(struct kvm_vcpu *vcpu)
> >   {
> >   	kvm_s390_clear_cpuflags(vcpu, CPUSTAT_WAIT);
> >   	clear_bit(vcpu->vcpu_idx, vcpu->kvm->arch.idle_mask);
> > +	clear_bit(vcpu->vcpu_idx, vcpu->kvm->arch.gisa_int.kicked_mask);

BTW, do you know whether bit-ops are guaranteed to be serialized as seen
by another CPU even when acting on different bytes? I mean, could
kick_single_vcpu() see the clear of the kicked_mask bit but not yet see
the clear of the idle_mask bit?

If that is not guaranteed, we may need some barriers, or possibly to
merge the two bitmasks into one, with the idle bit and the kick bit
alternating, to ensure there is absolutely no race.


> >   }
> >   
> >   static void __reset_intercept_indicators(struct kvm_vcpu *vcpu)
> > @@ -3064,7 +3065,11 @@ static void __airqs_kick_single_vcpu(struct kvm *kvm, u8 deliverable_mask)
> >   			/* lately kicked but not yet running */
> >   			if (test_and_set_bit(vcpu_idx, gi->kicked_mask))
> >   				return;
> > -			kvm_s390_vcpu_wakeup(vcpu);
> > +			/* if meanwhile not idle: clear  and don't kick */
> > +			if (test_bit(vcpu_idx, kvm->arch.idle_mask))
> > +				kvm_s390_vcpu_wakeup(vcpu);
> > +			else
> > +				clear_bit(vcpu_idx, gi->kicked_mask);  
> 
> I think this is now a bug. We should not return but continue in that case, no?
> 

I don't think so. The purpose of this function is to kick a *single* vcpu
that can handle *some* of the I/O interrupts indicated by the
deliverable_mask. The deliverable_mask predates the check of the idle_mask.
If we selected a suitable vcpu that was idle, but before we actually do a
wakeup on it we see that it isn't idle any more, that is as good as, if
not better than, performing the wakeup: a new wakeup() call would be
pointless, because this vcpu either already got the irqs it can get, or
is about to enter SIE soon to do so. We just saved a pointless call to
wakeup().

> 
> I think it might be safer to also clear kicked_mask in __set_cpu_idle

It would not hurt, but my guess is that the clear in
kvm_arch_vcpu_runnable(), which is called before we really decide to go
to sleep, already covers this:

void kvm_vcpu_block(struct kvm_vcpu *vcpu)
{
[..]
        for (;;) {
                set_current_state(TASK_INTERRUPTIBLE);

                if (kvm_vcpu_check_block(vcpu) < 0)     <=== calls runnable()
                        break;

                waited = true;
                schedule();
        }
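For reference, the clear being discussed would sit in the arch callback
reached from kvm_vcpu_check_block() above. A sketch (kernel context, not
standalone code; based on the shape of the s390 callback in this series,
so treat the exact body as illustrative):

```c
int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
{
	/* drop any stale kick on every attempted wakeup, i.e. also
	 * right after set_current_state(TASK_INTERRUPTIBLE) */
	clear_bit(vcpu->vcpu_idx, vcpu->kvm->arch.gisa_int.kicked_mask);
	return kvm_s390_vcpu_has_irq(vcpu, 0);
}
```

Since this runs after set_current_state(TASK_INTERRUPTIBLE) on each loop
iteration, a kick bit set in the problematic window gets cleared before
schedule() can be reached with it still set.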

>  From a CPUs perspective: We have been running and are on our way to become idle.
> There is no way that someone kicked us for a wakeup. In other words as long as we
> are running, there is no point in kicking us but when going idle we should get rid
> of old kick_mask bit.
> Doesn't this cover your scenario?

In practice, probably yes; in theory, I don't think so. I hope this is
more of a theoretical problem than a practical one anyway. But let me
discuss the theory nevertheless.

Under the assumption that an arbitrary amount of time can pass between
1) for_each_set_bit() finding the vcpu's bit set in the idle_mask
and
2) test_and_set_bit(kicked_mask) returning false (the bit was not set,
and we did set it),
then, if we choose an absurdly large amount of time, it is possible that
we are past a whole cycle: an __unset_cpu_idle() and an __set_cpu_idle(),
but the vcpu didn't reach set_current_state(TASK_INTERRUPTIBLE) yet. If
we set the bit at a suitable place in that window, we may theoretically
end up in a situation where the wakeup is ineffective because the task
state didn't change yet, but the bit stays set. So we end up in a stable
"sleeps and does not want to get woken up" state, unless the clear in
kvm_arch_vcpu_runnable() saves us... It could be that that clear alone
is sufficient. Because, before we really go to sleep, we kind of attempt
to wake up, and that clears the bit on every attempted wakeup. So the
clear in kvm_arch_vcpu_runnable() may be the only clear we need. Or?
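To spell out that interleaving (a condensed sketch of the scenario, not
an observed trace):

```
kicker CPU                              target vCPU
----------                              -----------
for_each_set_bit: sees idle bit set
  ... arbitrary delay ...               __unset_cpu_idle()   /* runs */
                                        __set_cpu_idle()     /* idles again */
test_and_set_bit(kicked_mask) == 0
kvm_s390_vcpu_wakeup()                  /* ineffective: task state is not
                                           yet TASK_INTERRUPTIBLE */
                                        set_current_state(TASK_INTERRUPTIBLE)
                                        kvm_vcpu_check_block() -> runnable()
                                        schedule()           /* sleeps; kick
                                                                bit still set */
```

Every later kick then sees the kicked_mask bit already set and returns
early, which is the stable bad state; a clear of the bit in runnable()
would remove it right before schedule().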

Anyway, the scenario I described is admittedly very far-fetched, but I
prefer solutions that are theoretically race-free over solutions that
are merely practically race-free, provided performance does not suffer.

Regards,
Halil
