Message-Id: <1165618025.1103.85.camel@localhost.localdomain>
Date: Sat, 09 Dec 2006 09:47:05 +1100
From: Benjamin Herrenschmidt <benh@...nel.crashing.org>
To: Linas Vepstas <linas@...tin.ibm.com>
Cc: Jeff Garzik <jgarzik@...ox.com>, Andrew Morton <akpm@...l.org>,
Arnd Bergmann <arnd@...db.de>, netdev@...r.kernel.org,
James K Lewis <jklewis@...ibm.com>, linuxppc-dev@...abs.org
Subject: Re: [PATCH 3/16] Spidernet RX Locking
A spinlock is expensive in the fast path, which is why Jeff says it's
invasive.
> spider_net_decode_one_descr() is called from
> spider_net_poll() (which is the netdev->poll callback)
> and also from spider_net_handle_rxram_full().
>
> The rxramfull routine is called from a tasklet that
> is fired off after an "RX ram full" interrupt is received.
> This interrupt is generated when the hardware runs out
> of space to store incoming packets. We are seeing this
> interrupt fire when the CPU is heavily loaded, and a
> lot of traffic is being fired at the device.
How often does that interrupt happen in that case?
A better approach is to keep the fast path (i.e. poll()) lockless, and in
handle_rxram_full(), the slow path, protect against poll using
netif_poll_disable(). Though that means using a work queue, not a
tasklet, since netif_poll_disable() can sleep.
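Something along these lines, maybe (completely untested sketch; the
rxram_work field, the card->netdev pointer and the assumption that
spider_net_decode_one_descr() returns nonzero while there are
descriptors left are all invented for illustration, and it uses the
old one-argument work handler signature):

#include <linux/netdevice.h>
#include <linux/workqueue.h>

/* Slow path: runs in process context from a work queue, so it is
 * allowed to sleep in netif_poll_disable(). */
static void spider_net_rxram_full_work(void *data)
{
        struct spider_net_card *card = data;

        /* Keep poll() off the RX ring while we drain it; this is
         * what lets the fast path stay lockless. */
        netif_poll_disable(card->netdev);

        while (spider_net_decode_one_descr(card))
                ;

        netif_poll_enable(card->netdev);
}

At init time you'd do INIT_WORK(&card->rxram_work,
spider_net_rxram_full_work, card) instead of tasklet_init(), and the
interrupt handler just calls schedule_work(&card->rxram_work).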
> > and what other
> > non-sledgehammer approaches were discarded before arriving at this one?
>
> Well, I'm not that good at kernel programming, so I guess
> I did not perceive this as a "sledgehammer." An alternative
> approach is to simply ignore the rxramfull interrupt entirely,
> and depend on poll() to do all the work. I'll try this shortly.
Or you can schedule RX work from the rxramfull interrupt after setting a
"something bad happened" flag. Then, poll() can check this flag and do the
right thing.
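Roughly (again just a sketch against the old dev->poll() API; the
flags word (an unsigned long) and the SPIDER_NET_RXRAM_FULL bit are
invented names, not actual spidernet code):

#include <linux/interrupt.h>
#include <linux/netdevice.h>

static irqreturn_t spider_net_interrupt(int irq, void *ptr)
{
        struct net_device *netdev = ptr;
        struct spider_net_card *card = netdev_priv(netdev);

        /* ... when the RX ram full status bit is set ... */
        set_bit(SPIDER_NET_RXRAM_FULL, &card->flags);
        netif_rx_schedule(netdev);      /* make sure poll() runs soon */

        return IRQ_HANDLED;
}

static int spider_net_poll(struct net_device *netdev, int *budget)
{
        struct spider_net_card *card = netdev_priv(netdev);

        if (test_and_clear_bit(SPIDER_NET_RXRAM_FULL, &card->flags)) {
                /* something bad happened: resync/refill the RX ring
                 * here, still lockless since only poll() touches it */
        }

        /* ... normal RX processing, netif_rx_complete() when done ... */
        return 0;
}

That way the interrupt handler stays trivial and all the ring handling
remains single-threaded in poll().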
Ben.