Message-ID: <20070607161335.GA4987@2ka.mipt.ru>
Date: Thu, 7 Jun 2007 20:13:35 +0400
From: Evgeniy Polyakov <johnpol@....mipt.ru>
To: jamal <hadi@...erus.ca>
Cc: Krishna Kumar2 <krkumar2@...ibm.com>,
Gagan Arneja <gaagaan@...il.com>, netdev@...r.kernel.org,
Rick Jones <rick.jones2@...com>,
Sridhar Samudrala <sri@...ibm.com>,
David Miller <davem@...emloft.net>,
Robert Olsson <Robert.Olsson@...a.slu.se>
Subject: Re: [WIP][PATCHES] Network xmit batching
On Thu, Jun 07, 2007 at 07:43:49AM -0400, jamal (hadi@...erus.ca) wrote:
> Folks, we need help. Please run this on different hardware. Evgeniy, i
> thought this kind of stuff excites you, no? ;-> (wink, wink).
> Only the sender needs the patch but the receiver must be a more powerful
> machine (so that it is not the bottleneck).
> A very interesting test will be say 10K flows serving different packet
> sizes to simulate a busy server.
Actually I wonder where the devil lives, but I do not see how this
patchset can improve the sending situation.
Let me clarify: there are two possibilities to send data:
1. via batched sending, which runs over a queue of packets and performs
a prepare call (which only sets up some private flags, with no hardware
work) and then a send call, as sketched below.
2. the old xmit function (which seems to be unused by the kernel now?)
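To make path 1 concrete, here is a toy, self-contained C model of the
prepare/send split (the types and function names are illustrative
stand-ins, not the patchset's actual symbols):

    #include <stdio.h>

    /* toy stand-in, not the kernel's struct sk_buff */
    struct pkt {
        int flags;
        int len;
        struct pkt *next;
    };

    /* 1a. prepare: only sets some private flags, no hardware work */
    static int xmit_prep(struct pkt *p)
    {
        p->flags |= 1;
        return 0;
    }

    /* 1b. send: walks the queue of prepared packets */
    static int xmit_send(struct pkt *queue)
    {
        struct pkt *p;

        for (p = queue; p; p = p->next)
            printf("batched send: %d bytes\n", p->len);
        return 0;
    }

    /* 2. the old single-packet xmit path */
    static int hard_start_xmit(struct pkt *p)
    {
        printf("old xmit: %d bytes\n", p->len);
        return 0;
    }

    int main(void)
    {
        struct pkt b = { 0, 1500, NULL };
        struct pkt a = { 0, 64, &b };

        xmit_prep(&a);
        xmit_prep(&b);
        xmit_send(&a);
        hard_start_xmit(&a);
        return 0;
    }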
Btw, prep_queue_frame seems to always be called under tx_lock, but the
old e1000 xmit function calls it without the lock. The locked case is
correct, since it accesses private registers via
e1000_transfer_dhcp_info() for some adapters.
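In kernel-ish pseudo-C the difference is just this (a sketch only; the
lock and field names are hypothetical, not the real e1000 code):

    spin_lock_irqsave(&adapter->tx_lock, flags);
    /* safe: adapter-private state is touched with the lock held */
    e1000_prep_queue_frame(adapter, skb);
    spin_unlock_irqrestore(&adapter->tx_lock, flags);

whereas the old xmit path makes the same call with no lock held.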
So, essentially, batched sending is

    lock
    while ((skb = dequeue()))
        send(skb)
    unlock

where the queue of skbs is prepared by the stack using the same
transmit lock.
Where is the gain?
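To put the question another way, here is a minimal pthread sketch of
the two locking patterns (pure illustration, not driver code). The
batched loop pays one lock/unlock per burst instead of one per packet,
which is presumably the intended win, yet the enqueue side contends on
the very same lock:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t tx_lock = PTHREAD_MUTEX_INITIALIZER;

    static void send_one(int pkt)
    {
        printf("sent packet %d\n", pkt);
    }

    /* old path: one lock round-trip per packet */
    static void xmit_per_packet(const int *pkts, int n)
    {
        int i;

        for (i = 0; i < n; i++) {
            pthread_mutex_lock(&tx_lock);
            send_one(pkts[i]);
            pthread_mutex_unlock(&tx_lock);
        }
    }

    /* batched path: one lock round-trip for the whole burst */
    static void xmit_batched(const int *pkts, int n)
    {
        int i;

        pthread_mutex_lock(&tx_lock);
        for (i = 0; i < n; i++)
            send_one(pkts[i]);
        pthread_mutex_unlock(&tx_lock);
    }

    int main(void)
    {
        int pkts[3] = { 1, 2, 3 };

        xmit_per_packet(pkts, 3);
        xmit_batched(pkts, 3);
        return 0;
    }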
Btw, this one forces a smile:

    if (unlikely(ret != NETDEV_TX_OK))
        return NETDEV_TX_OK;
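Presumably the intent was to propagate the failure (or requeue the
skb) rather than report success, i.e. something like:

    if (unlikely(ret != NETDEV_TX_OK))
        return ret;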
P.S. I do not have e1000 hardware to test; the only testing machine I
have uses the r8169 driver.
--
Evgeniy Polyakov