Message-ID: <20151102091639.GK17308@twins.programming.kicks-ass.net>
Date: Mon, 2 Nov 2015 10:16:39 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Noam Camus <noamc@...hip.com>
Cc: linux-snps-arc@...ts.infradead.org, linux-kernel@...r.kernel.org,
talz@...hip.com, gilf@...hip.com, cmetcalf@...hip.com
Subject: Re: [PATCH v1 05/20] ARC: rwlock: disable interrupts in !LLSC variant
On Sat, Oct 31, 2015 at 03:15:12PM +0200, Noam Camus wrote:
> From: Noam Camus <noamc@...hip.com>
>
> If we hold rw->lock_mutex and an interrupt occurs, we may
> end up spinning on it forever in softirq context.
>
> Below is an example of an interrupt taken while
> nl_table_lock was holding its rw->lock_mutex, which left us
> spinning on it forever.
>
> The concept for the fix was taken from SPARC.
>
> [2015-05-12 19:16:12] Stack Trace:
> [2015-05-12 19:16:12] arc_unwind_core+0xb8/0x11c
> [2015-05-12 19:16:12] dump_stack+0x68/0xac
> [2015-05-12 19:16:12] _raw_read_lock+0xa8/0xac
> [2015-05-12 19:16:12] netlink_broadcast_filtered+0x56/0x35c
> [2015-05-12 19:16:12] nlmsg_notify+0x42/0xa4
> [2015-05-12 19:16:13] neigh_update+0x1fe/0x44c
> [2015-05-12 19:16:13] neigh_event_ns+0x40/0xa4
> [2015-05-12 19:16:13] arp_process+0x46e/0x5a8
> [2015-05-12 19:16:13] __netif_receive_skb_core+0x358/0x500
> [2015-05-12 19:16:13] process_backlog+0x92/0x154
> [2015-05-12 19:16:13] net_rx_action+0xb8/0x188
> [2015-05-12 19:16:13] __do_softirq+0xda/0x1d8
> [2015-05-12 19:16:14] irq_exit+0x8a/0x8c
> [2015-05-12 19:16:14] arch_do_IRQ+0x6c/0xa8
> [2015-05-12 19:16:14] handle_interrupt_level1+0xe4/0xf0
>
> Signed-off-by: Noam Camus <noamc@...hip.com>
> Cc: Peter Zijlstra <peterz@...radead.org>
It might make sense to note that lock_mutex is a lock-internal lock, and
since the rwlock itself is free to be used from any context, the internal
lock needs to be IRQ-safe.
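For readers unfamiliar with the pattern, here is a minimal user-space sketch
of the idea (hypothetical names and layout, not the actual ARC patch): the
reader count sits behind a tiny internal spinlock, and interrupts are
disabled across every critical section on that internal lock, so an
IRQ/softirq arriving mid-update can never spin on lock_mutex forever.
arch_local_irq_save()/restore() are stubbed so the sketch compiles and runs
anywhere:

```c
#include <assert.h>

static int irqs_enabled = 1;            /* stand-in for the CPU IRQ flag */

static unsigned long arch_local_irq_save(void)
{
        unsigned long flags = irqs_enabled;
        irqs_enabled = 0;               /* "disable interrupts" */
        return flags;
}

static void arch_local_irq_restore(unsigned long flags)
{
        irqs_enabled = (int)flags;
}

typedef struct {
        volatile int lock_mutex;        /* internal spinlock */
        int counter;                    /* >0: that many readers hold it */
} arch_rwlock_t;

static void arch_read_lock(arch_rwlock_t *rw)
{
        unsigned long flags;

        /* IRQs off *before* spinning on the internal lock ... */
        flags = arch_local_irq_save();
        while (__sync_lock_test_and_set(&rw->lock_mutex, 1))
                ;                       /* spin on the internal lock */
        rw->counter++;                  /* writer handling omitted here */
        __sync_lock_release(&rw->lock_mutex);
        /* ... and back on once lock_mutex is dropped. */
        arch_local_irq_restore(flags);
}

static void arch_read_unlock(arch_rwlock_t *rw)
{
        unsigned long flags = arch_local_irq_save();

        while (__sync_lock_test_and_set(&rw->lock_mutex, 1))
                ;
        rw->counter--;
        __sync_lock_release(&rw->lock_mutex);
        arch_local_irq_restore(flags);
}
```

The point is only the save/restore bracketing: the rwlock itself may be held
with interrupts enabled, but the window in which lock_mutex is owned must be
atomic with respect to interrupts on that CPU.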
Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>