Message-Id: <1458218750-5202-1-git-send-email-boris.ostrovsky@oracle.com>
Date: Thu, 17 Mar 2016 08:45:50 -0400
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: david.vrabel@citrix.com, konrad.wilk@oracle.com
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	stable@vger.kernel.org
Subject: [PATCH] xen/events: Mask a moving irq

Moving an unmasked irq may result in the irq handler being invoked on
both the source and the target CPU.

With 2-level event channels this can happen as follows:

On the source CPU:

    evtchn_2l_handle_events() ->
        generic_handle_irq() ->
            handle_edge_irq() ->
               eoi_pirq():
                   irq_move_irq(data);
                   /***** WE ARE HERE *****/
                   if (VALID_EVTCHN(evtchn))
                       clear_evtchn(evtchn);

If at this moment the target processor is handling an unrelated event
in evtchn_2l_handle_events()'s loop, it may pick up our event, since
the target's cpu_evtchn_mask claims that this event belongs to it *and*
the event is unmasked and still pending. At the same time, the source
CPU will continue executing its own handle_edge_irq().
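
For reference, the 2-level scan (paraphrased from events_2l.c, not part
of this patch) considers an event deliverable on a CPU when it is
pending, unmasked and routed there -- nothing in this check orders
against the source CPU:

    static inline xen_ulong_t active_evtchns(unsigned int cpu,
                                             struct shared_info *sh,
                                             unsigned int idx)
    {
            /* pending, routed to this CPU, and not masked */
            return sh->evtchn_pending[idx] &
                   per_cpu(cpu_evtchn_mask, cpu)[idx] &
                   ~sh->evtchn_mask[idx];
    }

Once cpu_evtchn_mask has been flipped to the target, a still-pending
unmasked event is therefore fair game for the target's loop.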

With FIFO event channels the scenario is similar: irq_move_irq() may
result in an EVTCHNOP_unmask hypercall which, in turn, may make the
event pending on the target CPU.
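
For illustration, the FIFO unmask path (roughly as in events_fifo.c of
this tree, again not part of this patch) punts a still-pending event to
Xen, which may link it onto the queue of the VCPU the channel is bound
to -- by now the target:

    static void evtchn_fifo_unmask(unsigned port)
    {
            event_word_t *word = event_word_from_port(port);

            BUG_ON(!irqs_disabled());

            clear_masked(word);
            if (evtchn_fifo_is_pending(port)) {
                    struct evtchn_unmask unmask = { .port = port };

                    /* Xen re-delivers the pending event on the VCPU
                     * the channel is currently bound to. */
                    (void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask,
                                                      &unmask);
            }
    }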

We can avoid this situation by moving and clearing the event while
keeping the event masked.
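
In outline (VALID_EVTCHN() checks omitted for brevity), both affected
handlers then do:

    if (unlikely(irqd_is_setaffinity_pending(data)))
            need_unmask = !test_and_set_mask(evtchn); /* mask first */

    clear_evtchn(evtchn);   /* clear while still masked */

    irq_move_irq(data);     /* now safe to retarget */

    if (unlikely(need_unmask))
            unmask_evtchn(evtchn); /* only if we masked it above */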

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: stable@vger.kernel.org
---
 drivers/xen/events/events_base.c |   26 ++++++++++++++++++++++++--
 1 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 524c221..c5725ee 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -483,12 +483,23 @@ static void eoi_pirq(struct irq_data *data)
 	int evtchn = evtchn_from_irq(data->irq);
 	struct physdev_eoi eoi = { .irq = pirq_from_irq(data->irq) };
 	int rc = 0;
+	int need_unmask = 0;
 
-	irq_move_irq(data);
+	if (unlikely(irqd_is_setaffinity_pending(data))) {
+		if (VALID_EVTCHN(evtchn))
+			need_unmask = !test_and_set_mask(evtchn);
+	}
 
 	if (VALID_EVTCHN(evtchn))
 		clear_evtchn(evtchn);
 
+	irq_move_irq(data);
+
+	if (VALID_EVTCHN(evtchn)) {
+		if (unlikely(need_unmask))
+			unmask_evtchn(evtchn);
+	}
+
 	if (pirq_needs_eoi(data->irq)) {
 		rc = HYPERVISOR_physdev_op(PHYSDEVOP_eoi, &eoi);
 		WARN_ON(rc);
@@ -1356,11 +1367,22 @@ static void disable_dynirq(struct irq_data *data)
 static void ack_dynirq(struct irq_data *data)
 {
 	int evtchn = evtchn_from_irq(data->irq);
+	int need_unmask = 0;
 
-	irq_move_irq(data);
+	if (unlikely(irqd_is_setaffinity_pending(data))) {
+		if (VALID_EVTCHN(evtchn))
+			need_unmask = !test_and_set_mask(evtchn);
+	}
 
 	if (VALID_EVTCHN(evtchn))
 		clear_evtchn(evtchn);
+
+	irq_move_irq(data);
+
+	if (VALID_EVTCHN(evtchn)) {
+		if (unlikely(need_unmask))
+			unmask_evtchn(evtchn);
+	}
 }
 
 static void mask_ack_dynirq(struct irq_data *data)
--
1.7.7.6