Message-ID: <4DB77D03.9070507@hotmail.com>
Date: Tue, 26 Apr 2011 22:18:43 -0400
From: John Lumby <johnlumby@...mail.com>
To: Francois Romieu <romieu@...zoreil.com>
CC: netdev@...r.kernel.org, Ben Hutchings <bhutchings@...arflare.com>,
nic_swsd@...ltek.com
Subject: Re: r8169 : always copying the rx buffer to new skb

Anyone have any further thoughts on the proposal to avoid memcpy'ing?
(see earlier post)
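
To recap the idea in rough code, here is a sketch (hypothetical names only:
RX_BUF_SIZE, rx_copy_path, rx_flip_path and the ring_slot/dma_slot layout are
all made up, and this is not the driver's actual rx path).  The current code
always copies the frame out of the DMA buffer into a new skb; the proposal is
to hand the ring's own skb up the stack and put a freshly allocated, freshly
mapped one back in its place:

#include <linux/netdevice.h>
#include <linux/pci.h>
#include <linux/skbuff.h>

#define RX_BUF_SIZE 1536        /* assumed ring buffer size, not the driver's value */

/* Current behaviour: copy every received frame into a freshly allocated
 * skb; the DMA buffer stays on the ring, so no refill is needed. */
static struct sk_buff *rx_copy_path(struct net_device *dev,
                                    void *rx_buf, int pkt_size)
{
        struct sk_buff *skb = netdev_alloc_skb_ip_align(dev, pkt_size);

        if (skb) {
                skb_copy_to_linear_data(skb, rx_buf, pkt_size); /* the memcpy */
                skb_put(skb, pkt_size);
        }
        return skb;
}

/* Proposed behaviour: hand the ring's own skb up the stack and slot a
 * newly allocated, newly mapped skb into its place instead of copying.
 * ring_slot/dma_slot stand in for however the driver tracks its ring. */
static struct sk_buff *rx_flip_path(struct pci_dev *pdev,
                                    struct net_device *dev,
                                    struct sk_buff **ring_slot,
                                    dma_addr_t *dma_slot, int pkt_size)
{
        struct sk_buff *skb = *ring_slot;
        struct sk_buff *new_skb = netdev_alloc_skb_ip_align(dev, RX_BUF_SIZE);

        if (!new_skb)
                return NULL;            /* caller would fall back to copying */

        pci_unmap_single(pdev, *dma_slot, RX_BUF_SIZE, PCI_DMA_FROMDEVICE);
        skb_put(skb, pkt_size);

        *ring_slot = new_skb;
        *dma_slot = pci_map_single(pdev, new_skb->data, RX_BUF_SIZE,
                                   PCI_DMA_FROMDEVICE);
        /* mapping-error handling omitted for brevity */
        return skb;
}
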
I also have a question concerning NAPI.  I've found that much of the
CPU saved by not memcpy'ing is burned in extra rx interrupt handling, and
much of that seems to be wasted (polls that find no new packets).  So the
actual benefit is rather less than I think it should be.

I've tried some tinkering with the napi weight but can't find any
setting that significantly improves the ratio of rx packets to hard
interrupts.  The problem seems to be that each successive
rtl8169_poll() is driven too soon after the last one (in this
particular workload), and the napi weight doesn't directly influence that.
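
For reference, this is roughly the generic shape of a NAPI poll routine (a
simplified sketch with made-up example_* names, not the r8169 code).  It
shows why the weight only caps the work done per poll: whenever the poll
drains the ring before using up its budget, napi_complete() runs and the
interrupt is re-armed, so the next packet costs another hard interrupt no
matter what the weight is:

#include <linux/kernel.h>
#include <linux/netdevice.h>

/* Minimal made-up driver state; all example_* names are hypothetical. */
struct example_priv {
        struct napi_struct napi;
        /* ... rx ring, registers, etc ... */
};

static int example_rx(struct example_priv *priv, int budget)
{
        /* placeholder: real code would process up to 'budget' frames here */
        return 0;
}

static void example_enable_irq(struct example_priv *priv)
{
        /* placeholder: real code would re-arm the chip's interrupt mask here */
}

static int example_poll(struct napi_struct *napi, int budget)
{
        struct example_priv *priv = container_of(napi, struct example_priv, napi);
        int work_done = example_rx(priv, budget);

        if (work_done < budget) {
                /* ring drained before the weight was used up: leave polling
                 * mode and re-arm the rx interrupt, so the very next packet
                 * costs another hard interrupt, whatever the weight is */
                napi_complete(napi);
                example_enable_irq(priv);
        }
        /* only when work_done == budget does the core keep polling */
        return work_done;
}
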
So, the question: is there any way, when returning from rtl8169_poll(),
to tell napi something like:
"finish this interrupt context and let something else run on this
CPU (always CPU0 on my machine), BUT reschedule another napi poll on
this same device at some time after that"?
The point being that rtl8169_poll would, in this case, NOT re-enable
the NIC's interrupts, in the hope that some user work can be
dispatched, so something else would have to schedule the next napi
poll for it.  Conceptually, if rtl8169_poll finds no rx work done
on this call, it wants to call yield() and then try again, except
that it can't do that from within the interrupt.

I appreciate this could lead to delays in handling new work and so might
be dangerous, but it seems to me to be in line with NAPI's objectives, so I
wanted to try it.  But I don't know how.  Any hints or thoughts
appreciated.
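
The closest thing I have come up with so far is the sketch below (reusing
the made-up example_* pieces from the earlier sketch, plus an arbitrary
defer_polls_left counter).  Instead of completing NAPI when no rx work was
found, the poll claims the whole budget.  As far as I can tell from
net_rx_action, that keeps the device on the softirq poll list and counts
against the global netdev_budget, so the softirq eventually breaks out and
leaves further polling to a later pass (ultimately ksoftirqd), where the
scheduler can run user work in between, all without re-enabling the NIC's
rx interrupt.  I don't know whether this counts as legitimate use of the
API:

/* Sketch only, reusing the hypothetical example_priv / example_rx /
 * example_enable_irq pieces above.  defer_polls_left is a made-up
 * counter that would really live in the driver's private struct. */
static int defer_polls_left = 4;

static int example_poll_defer(struct napi_struct *napi, int budget)
{
        struct example_priv *priv = container_of(napi, struct example_priv, napi);
        int work_done = example_rx(priv, budget);

        if (work_done == 0 && defer_polls_left > 0) {
                /* Nothing arrived yet: stay in polling mode WITHOUT
                 * re-arming the rx interrupt.  Claiming the whole budget
                 * keeps this device on the softirq poll list and eats into
                 * the global netdev_budget, so net_rx_action eventually
                 * bails out and the next attempt comes from a later softirq
                 * pass (ultimately ksoftirqd), after other work has had a
                 * chance to run on this CPU. */
                defer_polls_left--;
                return budget;
        }

        if (work_done < budget) {
                defer_polls_left = 4;   /* arbitrary retry limit */
                /* ring drained (or retries used up): leave polling mode
                 * and re-arm the rx interrupt as usual */
                napi_complete(napi);
                example_enable_irq(priv);
        }
        return work_done;
}

Is something along these lines an acceptable use of the poll return value,
or is there a better-supported way to get the same effect?
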
John