Message-ID: <CAJzFV35yDOpCx=yPrrN3o4QLFdJLq9E2Jo+v2HsyyVP1q30aUg@mail.gmail.com>
Date:	Thu, 27 Feb 2014 13:40:53 -0700
From:	Sharat Masetty <sharat04@...il.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	netdev@...r.kernel.org, harout@...eshian.net
Subject: Re: Packet drops observed @ LINUX_MIB_TCPBACKLOGDROP

Hi Eric,

We are using kernel version 3.10.0, and the network driver is our own,
designed to work over an HSIC-USB interconnect. One thing is for sure:
we do not pre-allocate lots of buffers to hold incoming traffic; we
only allocate buffers of the required size when there is data to read
off the bus.

I was under the impression that this backlog queue is charged against
the socket buffer space (sndbuf and rcvbuf), so I am trying to
understand in more detail how the driver implementation can be linked
to these drops in the backlog queue.

Can you explain the significance of this backlog queue, mainly from
the perspective of the userspace application's (in this case iperf's)
socket API calls and of the kernel/network stack enqueuing packets up
the stack?

Thanks
Sharat


On Thu, Feb 27, 2014 at 10:54 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Wed, 2014-02-26 at 19:00 -0700, Sharat Masetty wrote:
>> Hi,
>>
>> We are trying to achieve Category 4 data rates on an ARM device. We
>> see that with an incoming TCP stream (IP packets coming in and ACKs
>> going out), lots of packets are dropped when the backlog queue is
>> full. This impacts overall TCP throughput. I am trying to understand
>> the full context of why this queue fills up so often.
>>
>> From my brief look at the code, it looks like the userspace process
>> is slow and busy pulling data from the socket buffer, so the TCP
>> stack uses this backlog queue in the meantime. This queue is also
>> charged against the main socket buffer allocation.
>>
>> Can you please explain this backlog queue, and confirm whether my
>> understanding of this matter is accurate?
>> Also, can you suggest any ideas on how to mitigate these drops?
>
> You forgot to tell us things like:
>
> 1) Kernel version
> 2) Network driver in use.
>
> Some drivers allocate huge buffers to store incoming frames.
>
> Unfortunately we have to avoid OOM issues in the kernel, so such
> drivers are likely to have some skbs dropped by the backlog.
>
>
>
