Message-Id: <20080917.193438.112772191.davem@davemloft.net>
Date: Wed, 17 Sep 2008 19:34:38 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: netdev@...r.kernel.org
CC: jens.axboe@...cle.com, nickpiggin@...oo.com.au
Subject: [PATCH 0/2]: Software RX flow separation
The other day I got wind that Jens Axboe had this cool
facility he was testing in his block development tree
that allows scheduling softirq work on remote cpus cheaply.
So I ran home from kernel summit as fast as I could and
started trying to make it generic and then use it for
networking packet receive processing.
These patches suck as-is, on the networking side.
For example, I do flow separation for netif_rx(), which for
loopback is a loss in every benchmark I've tried. For
things like tbench on localhost the cpus are already loaded,
and the RX flow separation just adds more overhead,
increases latency, and decreases bandwidth.
That's easy to change, just make netif_rx() use
target_cpu = smp_processor_id();
But for non-NAPI hardware devices it might still make sense.
Next, adding the "struct call_single_data" object to the sk_buff
is also negatively affecting performance. I have some ideas
to cure that, for example I've always wanted sk_buff to use
a struct list_head instead of its by-hand list implementation
hacks.
But this patch might improve some routing, firewalling, and
IPSEC gateway configurations when using cards that don't
support RX flow separation in hardware.
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html