Message-ID: <alpine.LFD.2.00.1005192244430.3368@localhost.localdomain>
Date: Wed, 19 May 2010 23:08:33 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Darren Hart <dvhltc@...ibm.com>
cc: michael@...erman.id.au, Brian King <brking@...ux.vnet.ibm.com>,
Jan-Bernd Themann <themann@...ibm.com>,
dvhltc@...ux.vnet.ibm.com, linux-kernel@...r.kernel.org,
Will Schmidt <will_schmidt@...t.ibm.com>,
niv@...ux.vnet.ibm.com, Doug Maxey <doug.maxey@...ibm.com>,
linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH RT] ehea: make receive irq handler non-threaded (IRQF_NODELAY)

On Wed, 19 May 2010, Thomas Gleixner wrote:
> > I'm still not clear on why the ultimate solution wasn't to have XICS report
> > edge triggered as edge triggered. Probably some complexity of the entire power
> > stack that I am ignorant of.
> >
> > > Apart from the issue of losing interrupts there is also the fact that
> > > masking on the XICS requires an RTAS call which takes a global lock.
>
> Right, I'd love to avoid that, but with real level-triggered interrupts
> we'd run into an interrupt storm. Another solution would be to issue
> the EOI after the threaded handler has finished; that would work as
> well, but needs testing.
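
[ For context, "threaded handler" above refers to the thread_fn of
  request_threaded_irq() (or the forced threading that -rt applies to
  ordinary handlers). A minimal, purely illustrative registration with
  made-up foo_* names follows; where the EOI is issued is decided by the
  genirq flow handling - which is exactly what is being discussed here -
  not by the driver:

	#include <linux/interrupt.h>

	struct foo_adapter;	/* driver private data, details irrelevant */

	/* Runs in hard interrupt context; just hands off to the thread. */
	static irqreturn_t foo_quick_handler(int irq, void *dev_id)
	{
		return IRQ_WAKE_THREAD;
	}

	/* Runs in process context and does the actual rx processing. */
	static irqreturn_t foo_rx_thread(int irq, void *dev_id)
	{
		/* poll the device, push packets up the stack, ... */
		return IRQ_HANDLED;
	}

	static int foo_setup_rx_irq(int irq, struct foo_adapter *adapter)
	{
		return request_threaded_irq(irq, foo_quick_handler,
					    foo_rx_thread, 0, "foo-rx",
					    adapter);
	}
]
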
Thought more about that. The case at hand (ehea) is nasty:

The driver does _NOT_ disable the rx interrupt on the card in the rx
interrupt handler - for whatever reason.

So even in mainline you get repeated rx interrupts when packets
arrive while NAPI is processing the poll, which is suboptimal to say
the least. In fact it is counterproductive, as the whole purpose of
NAPI is to _NOT_ get interrupts for consecutive incoming packets
while the poll is active.
Most of the other network drivers do:

	rx_irq()
		disable rx interrupts on card
		napi_schedule()

When the NAPI poll is done (no more packets available), the driver
re-enables the rx interrupt on the card.
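
Roughly, in C - this is a generic sketch of that pattern, not the
actual ehea code; the foo_* names and the disable/enable calls are
placeholders for whatever the hardware needs:

	#include <linux/interrupt.h>
	#include <linux/netdevice.h>

	struct foo_adapter {
		struct napi_struct napi;
		/* device state, register mappings, ... */
	};

	/* Placeholders for the device specific register accesses. */
	static void foo_disable_rx_irq(struct foo_adapter *adapter);
	static void foo_enable_rx_irq(struct foo_adapter *adapter);
	static int foo_process_rx(struct foo_adapter *adapter, int budget);

	static irqreturn_t foo_rx_irq(int irq, void *dev_id)
	{
		struct foo_adapter *adapter = dev_id;

		/* Stop further rx interrupts from the card before handing
		 * the work over to the poll routine. This is the step ehea
		 * skips, which is what causes the repeated rx interrupts
		 * described above. */
		foo_disable_rx_irq(adapter);
		napi_schedule(&adapter->napi);
		return IRQ_HANDLED;
	}

	static int foo_poll(struct napi_struct *napi, int budget)
	{
		struct foo_adapter *adapter =
			container_of(napi, struct foo_adapter, napi);
		int done = foo_process_rx(adapter, budget);

		if (done < budget) {
			/* No packets left: leave polling mode and let the
			 * card interrupt us again for new packets. */
			napi_complete(napi);
			foo_enable_rx_irq(adapter);
		}
		return done;
	}
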
Thanks,
tglx