Message-ID: <4911E661.6030701@cosmosbay.com>
Date: Wed, 05 Nov 2008 19:30:57 +0100
From: Eric Dumazet <dada1@...mosbay.com>
To: Andre Schwarz <andre.schwarz@...rix-vision.de>
CC: netdev@...r.kernel.org
Subject: Re: Questions on kernel skb send / netdev queue monitoring
Andre Schwarz wrote:
> Hi,
>
> we're running 2.6.27 on an MPC8343-based board.
> The board works as a camera and is supposed to stream image data
> over gigabit Ethernet.
>
> Ethernet is connected via two Vitesse VSC8601 RGMII PHYs, i.e. "eth0"
> and "eth1" are present.
>
> Basically the system has been running fine for quite some time, starting
> with kernel 2.6.19.
> Lately I have been having some trouble with performance and errors.
>
> Obviously I'm doing something wrong ... hopefully someone can enlighten me.
>
>
> How the system works:
>
> - The kernel driver allocates a static list of skbs to hold a complete
> image; this can be up to 4k skbs, depending on the MTU.
> - The imaging device (an FPGA on PCI) DMAs the image data into the skbs.
> - The driver sends the skbs out.
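
For reference, a minimal sketch of what such a preallocation step could look
like, using the standard skb queue helpers; the function name gevs_alloc_pool
and its parameters are hypothetical, not from the actual driver:

static int gevs_alloc_pool(struct sk_buff_head *pool, unsigned int mtu,
                           unsigned int img_bytes)
{
        unsigned int i, n = DIV_ROUND_UP(img_bytes, mtu); /* skbs per image */

        skb_queue_head_init(pool);
        for (i = 0; i < n; i++) {
                struct sk_buff *skb = dev_alloc_skb(mtu);

                if (!skb) {
                        skb_queue_purge(pool);  /* free what was allocated */
                        return -ENOMEM;
                }
                skb_queue_tail(pool, skb);
        }
        return 0;
}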
>
>
> 1. Sending
>
> This is my "inner loop" send function; it is called for every skb in the
> list.
>
> static inline int gevss_send_get_ehdr(TGevStream *gevs, struct sk_buff *skb)
> {
>     int result;
>     struct sk_buff *slow_skb = skb_clone(skb, GFP_ATOMIC);
>
>     atomic_inc(&slow_skb->users);              /* extra reference on the clone */
>     result = gevs->rt->u.dst.output(slow_skb); /* output path consumes one reference */
>     kfree_skb(slow_skb);                       /* drop the extra reference */
>
>     return result;
> }
>
> Is there really any need for cloning each skb before sending?
> I'd really like to send the static skb without consuming it. How can
> this be done?
>
You have answered your own question ... you clone the skb precisely because you want to keep the original.
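
To spell that out: the output path consumes (frees) whatever skb it is handed,
so a caller who wants to keep a static skb for reuse must hand over a clone.
A minimal sketch of the same send step with the error handling made explicit;
the helper name send_and_keep is illustrative, and the atomic_inc()/kfree_skb()
pair from the original, whose effects cancel out, is dropped:

static int send_and_keep(TGevStream *gevs, struct sk_buff *skb)
{
        struct sk_buff *clone = skb_clone(skb, GFP_ATOMIC);

        if (unlikely(!clone))
                return -ENOMEM; /* allocation failed, original untouched */

        /* output() consumes the clone; the original skb stays with the caller */
        return gevs->rt->u.dst.output(clone);
}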
> Is "gevs->rt->u.dst.output(slow_skb)" reasonable?
> What about "hard_start_xmit" and/or "dev_queue_xmit" inside netdev?
> Are these functions supposed to be used by other drivers?
>
> What result can I expect if there's a failure, i.e. the HW queue is full?
> How should this be handled? Retry, i.e. send again after a while?
> Can I query the xmit queue size/usage?
>
> Currently I'm checking for NETDEV_TX_OK and NETDEV_TX_BUSY.
> Is this reasonable?
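
A note on the last point: NETDEV_TX_OK and NETDEV_TX_BUSY belong to the
contract between the qdisc layer and a driver's hard_start_xmit(); a caller of
dev_queue_xmit() (where dst.output() eventually lands) sees NET_XMIT_* codes
instead. A minimal sketch of checking the result and backing off while the
device queue is stopped, assuming process context; the helper name and the
busy-wait are illustrative only:

static int gevs_xmit_checked(struct net_device *dev, struct sk_buff *skb)
{
        /* crude throttle; a real driver would sleep or wait for a
         * TX-completion event rather than spin */
        while (netif_queue_stopped(dev))
                cpu_relax();

        switch (dev_queue_xmit(skb)) {  /* consumes skb in every case */
        case NET_XMIT_SUCCESS:
        case NET_XMIT_CN:               /* congestion hint from the qdisc */
                return 0;
        default:                        /* NET_XMIT_DROP or a negative errno */
                return -ENOBUFS;
        }
}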
>
>
> 2. "overruns"
>
> I've never seen this before. The overrun counter increments quite fast
> even during normal operation.
> It looks like this is also an issue of not throttling the sender when the
> xmit queue is full ... :-(
> How can I avoid this?
>
> eth0      Link encap:Ethernet  HWaddr 00:0C:8D:30:40:25
>           inet addr:192.168.65.55  Bcast:192.168.65.255  Mask:255.255.255.0
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:929 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:180937 errors:0 dropped:0 overruns:54002 carrier:0
>           collisions:0 txqueuelen:1000
>           RX bytes:65212 (63.6 KiB)  TX bytes:262068658 (249.9 MiB)
>           Base address:0xa000
If your driver has to push 4096 skbs at once and you don't want to handle
overruns, you might need to change the eth0 settings:

ifconfig eth0 txqueuelen 5000
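
The same limit can also be raised from inside the driver at init time. A
minimal sketch, assuming the default pfifo_fast qdisc (which reads
dev->tx_queue_len at enqueue time); the helper name is made up:

static void gevs_setup_txqueue(struct net_device *dev)
{
        /* give the qdisc enough headroom for a ~4k-skb image burst */
        if (dev->tx_queue_len < 5000)
                dev->tx_queue_len = 5000;
}

From userspace, "ip link set eth0 txqueuelen 5000" is equivalent to the
ifconfig command above.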
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html