Message-ID: <lsq.1461711744.603811287@decadent.org.uk>
Date: Wed, 27 Apr 2016 01:02:24 +0200
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC: akpm@...ux-foundation.org,
"Boris Ostrovsky" <boris.ostrovsky@...cle.com>,
"David Vrabel" <david.vrabel@...rix.com>
Subject: [PATCH 3.2 070/115] xen/events: Mask a moving irq
3.2.80-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Boris Ostrovsky <boris.ostrovsky@...cle.com>
commit ff1e22e7a638a0782f54f81a6c9cb139aca2da35 upstream.
Moving an unmasked irq may result in the irq handler being invoked on both
the source and target CPUs.
With 2-level this can happen as follows:
On source CPU:
        evtchn_2l_handle_events() ->
            generic_handle_irq() ->
                handle_edge_irq() ->
                   eoi_pirq():
                       irq_move_irq(data);
                       /***** WE ARE HERE *****/
                       if (VALID_EVTCHN(evtchn))
                           clear_evtchn(evtchn);
If at this moment the target processor is handling an unrelated event in
evtchn_2l_handle_events()'s loop, it may pick up our event since the
target's cpu_evtchn_mask claims that this event belongs to it *and* the
event is unmasked and still pending. At the same time, the source CPU
will continue executing its own handle_edge_irq().
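To make the window concrete, the target's 2-level scan loop effectively
dispatches any port for which three bits line up. A minimal, illustrative
sketch (all names below are hypothetical stand-ins; the real code uses
sync_test_bit() on shared_info->evtchn_pending[], ->evtchn_mask[] and the
per-CPU cpu_evtchn_mask bitmaps):

	/* Illustrative model only -- not the 3.2 source. */
	#define PORT_BIT(p) (1UL << (p))
	static unsigned long pending_bits, mask_bits, cpu_bits[2];

	static int target_may_steal(int port, int cpu)
	{
		return  (pending_bits  & PORT_BIT(port)) && /* source has not cleared it yet */
		       !(mask_bits     & PORT_BIT(port)) && /* event is unmasked             */
		        (cpu_bits[cpu] & PORT_BIT(port));   /* affinity already moved here   */
	}

irq_move_irq() updates the binding before clear_evtchn() runs, so all
three conditions can hold on the target while the source is still
mid-handler.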
With FIFO interrupts the scenario is similar: irq_move_irq() may result
in an EVTCHNOP_unmask hypercall which, in turn, may make the event
pending on the target CPU.
We can avoid this situation by moving and clearing the event while
keeping the event masked.
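The hunks below apply the same fix in both eoi_pirq() and ack_dynirq();
condensed into a single hypothetical helper (the helper name is
illustrative, the body mirrors the patched code):

	static void clear_evtchn_maybe_move(struct irq_data *data, int evtchn)
	{
		if (unlikely(irqd_is_setaffinity_pending(data))) {
			int masked = test_and_set_mask(evtchn); /* mask, note prior state */

			clear_evtchn(evtchn);      /* clear while masked        */
			irq_move_masked_irq(data); /* safe: delivery is blocked */

			if (!masked)               /* unmask only if we masked  */
				unmask_evtchn(evtchn);
		} else {
			clear_evtchn(evtchn);
		}
	}

Because the event stays masked across both the clear and the move, the
target CPU can never observe it pending-and-unmasked inside the window.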
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@...cle.com>
Signed-off-by: David Vrabel <david.vrabel@...rix.com>
[bwh: Backported to 3.2:
- Adjust filename
- Add a suitable definition of test_and_set_mask()]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -337,6 +337,12 @@ static inline int test_evtchn(int port)
return sync_test_bit(port, &s->evtchn_pending[0]);
}
+static inline int test_and_set_mask(int port)
+{
+ struct shared_info *s = HYPERVISOR_shared_info;
+ return sync_test_and_set_bit(port, &s->evtchn_mask[0]);
+}
+
/**
* notify_remote_via_irq - send event to remote end of event channel via irq
@@ -507,9 +513,19 @@ static void eoi_pirq(struct irq_data *da
struct physdev_eoi eoi = { .irq = pirq_from_irq(data->irq) };
int rc = 0;
- irq_move_irq(data);
+ if (!VALID_EVTCHN(evtchn))
+ return;
- if (VALID_EVTCHN(evtchn))
+ if (unlikely(irqd_is_setaffinity_pending(data))) {
+ int masked = test_and_set_mask(evtchn);
+
+ clear_evtchn(evtchn);
+
+ irq_move_masked_irq(data);
+
+ if (!masked)
+ unmask_evtchn(evtchn);
+ } else
clear_evtchn(evtchn);
if (pirq_needs_eoi(data->irq)) {
@@ -1427,9 +1443,19 @@ static void ack_dynirq(struct irq_data *
{
int evtchn = evtchn_from_irq(data->irq);
- irq_move_irq(data);
+ if (!VALID_EVTCHN(evtchn))
+ return;
- if (VALID_EVTCHN(evtchn))
+ if (unlikely(irqd_is_setaffinity_pending(data))) {
+ int masked = test_and_set_mask(evtchn);
+
+ clear_evtchn(evtchn);
+
+ irq_move_masked_irq(data);
+
+ if (!masked)
+ unmask_evtchn(evtchn);
+ } else
clear_evtchn(evtchn);
}