Message-ID: <alpine.DEB.1.10.0908261506590.9933@gentwo.org>
Date: Wed, 26 Aug 2009 15:09:49 -0400 (EDT)
From: Christoph Lameter <cl@...ux-foundation.org>
To: Sridhar Samudrala <sri@...ibm.com>
cc: David Stevens <dlstevens@...ibm.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <eric.dumazet@...il.com>, netdev@...r.kernel.org,
niv@...ux.vnet.ibm.com
Subject: Re: UDP multicast packet loss not reported if TX ring overrun?
On Wed, 26 Aug 2009, Sridhar Samudrala wrote:
> > They are reported for IP and UDP.
> Not clear what you meant by this.
The SNMP IP and UDP statistics show the loss; the qdisc level does not
show it.
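For anyone reproducing this: while `tc -s qdisc` reports zero drops, the
loss does show up in the protocol-level SNMP counters, which Linux exposes
in /proc/net/snmp (`netstat -su` pretty-prints the same numbers). A quick
way to look at the raw UDP row:

```shell
# First Udp: line is the field names, second holds the values.
# On kernels with UDP_MIB_SNDBUFERRORS, SndbufErrors counts sends
# that failed with ENOBUFS, which is the symptom discussed here.
grep '^Udp:' /proc/net/snmp
```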
> > root@...strategy3-deb64:/home/clameter#tc -s qdisc show
> > qdisc pfifo_fast 0: dev eth0 root bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1
> > 1 1 1 1
> > Sent 6208 bytes 64 pkt (dropped 0, overlimits 0 requeues 0)
> > rate 0bit 0pps backlog 0b 0p requeues 0
>
> Even the Sent count seems to be too low. Are you looking at the right
> device?
I would think that tc displays all queues? It says eth0, and eth0 is the
device that we sent the data out on.
> So based on the current analysis, the packets are getting dropped after
> the call to ip_local_out() in ip_push_pending_frames(). ip_local_out()
> is failing with NET_XMIT_DROP. But we are not sure where they are
> getting dropped. Is that right?
ip_local_out is returning ENOBUFS. Something at the qdisc layer is
dropping the packet and not incrementing counters.
> I think we need to figure out where they are getting dropped and then
> decide on the appropriate counter to be incremented.
Right. Where in the qdisc layer do drops occur?
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html