Date:	Fri, 01 May 2015 12:03:07 -0400
From:	Boris Ostrovsky <boris.ostrovsky@...cle.com>
To:	David Vrabel <david.vrabel@...rix.com>, konrad.wilk@...cle.com
CC:	xen-devel@...ts.xenproject.org, annie.li@...cle.com,
	linux-kernel@...r.kernel.org
Subject: Re: [Xen-devel] [PATCH v2 1/4] xen/events: Clear cpu_evtchn_mask
 before resuming

On 05/01/2015 11:25 AM, David Vrabel wrote:
> On 01/05/15 14:39, Boris Ostrovsky wrote:
>> On 05/01/2015 06:46 AM, David Vrabel wrote:
>>> On 29/04/15 22:10, Boris Ostrovsky wrote:
>>>> When a guest is resumed, the hypervisor may change event channel
>>>> assignments. If this happens and the guest uses 2-level events, it
>>>> is possible for the interrupt to be claimed by the wrong VCPU, since
>>>> cpu_evtchn_mask bits may be stale. This can happen even though
>>>> evtchn_2l_bind_to_cpu() attempts to clear old bits: the irq_info that
>>>> is passed in is not necessarily the original one (from pre-migration
>>>> times) but is instead freshly allocated during resume, so any
>>>> information about which CPU the channel was bound to is lost.
>>>>
>>>> Thus we should clear the mask during resume.
>>>>
>>>> We also need to make sure that bits for xenstore and console channels
>>>> are set when these two subsystems are resumed. While rebind_evtchn_irq()
>>>> (which is invoked for both of them on a resume) calls irq_set_affinity(),
>>>> the latter will in fact postpone setting affinity until the interrupt is
>>>> handled. But because cpu_evtchn_mask will have the bits for these two
>>>> cleared, we won't be able to take the interrupt.
>>>>
>>>> With that in mind, we need to bind those two channels explicitly in
>>>> rebind_evtchn_irq(). We will keep irq_set_affinity() so that we still
>>>> pass through the generic irq affinity code later, in case something
>>>> needs to be updated there as well.
>>>>
>>>> (Also, replace cpumask_of(0) with cpumask_of(info->cpu) in
>>>> rebind_evtchn_irq(): info->cpu should already have been set to zero
>>>> by the preceding xen_irq_info_evtchn_setup().)
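
For reference, the mask clearing described above amounts to zeroing each
CPU's 2-level event bitmap before event channels are restored. A rough
sketch, assuming the per-CPU cpu_evtchn_mask layout from
drivers/xen/events/events_2l.c (not quoted from the patch itself):

	static void evtchn_2l_resume(void)
	{
		int i;

		/* The hypervisor may have reassigned event channels across
		 * save/restore, so any VCPU bindings recorded in these
		 * bitmaps would be stale; wipe them before channels are
		 * rebound. */
		for_each_online_cpu(i)
			memset(per_cpu(cpu_evtchn_mask, i), 0,
			       sizeof(xen_ulong_t) * EVTCHN_2L_NR_CHANNELS /
					BITS_PER_EVTCHN_WORD);
	}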
>>> [...]
>>>> @@ -1279,8 +1280,16 @@ void rebind_evtchn_irq(int evtchn, int irq)
>>>>          mutex_unlock(&irq_mapping_update_lock);
>>>>    -    /* new event channels are always bound to cpu 0 */
>>>> -    irq_set_affinity(irq, cpumask_of(0));
>>>> +    bind_vcpu.port = evtchn;
>>>> +    bind_vcpu.vcpu = info->cpu;
>>>> +    if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_vcpu, &bind_vcpu) == 0)
>>>> +        bind_evtchn_to_cpu(evtchn, info->cpu);
>>> Isn't the hypercall unnecessary? Since this is a new event channel,
>>> it's already bound to VCPU 0 and info->cpu == 0.
>>>
>>> I think only the bind_evtchn_to_cpu() call is needed here.
>>
>> True. However, I added the hypercall here to make the routine
>> independent of what happens in other parts (hypervisor binding new
>> channels to cpu0, xen_irq_info_evtchn_setup() initializing to zero,
>> etc.). This way, if either of these two changes in the future (unlikely,
>> but possible), this routine will still work as expected.
> New event channels being bound to VCPU 0 is part of the ABI and cannot
> change, so I've taken the hypercall and its dodgy-looking error path out.
>
> I've applied this series to for-linus-4.1b but before I push, do any of
> these need to be tagged for stable?

Yes, all of them do. The first one can probably be backported easily,
starting from whenever FIFO events went in (3.14?), but the rest could go
back all the way to 3.2.

-boris
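
For reference, with the EVTCHNOP_bind_vcpu hypercall dropped as discussed
above, the tail of rebind_evtchn_irq() reduces to the explicit local
rebinding plus the existing affinity call. Roughly, as a sketch under that
assumption rather than the committed diff:

	mutex_unlock(&irq_mapping_update_lock);

	/* New event channels are bound to VCPU 0 by the hypervisor, so only
	 * the local bookkeeping (cpu_evtchn_mask and info->cpu) needs
	 * updating. */
	bind_evtchn_to_cpu(evtchn, info->cpu);

	/* The affinity change is deferred until the interrupt is next
	 * handled, but this keeps the generic irq affinity code in the
	 * loop. */
	irq_set_affinity(irq, cpumask_of(info->cpu));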

