Message-ID: <CAL8zT=joBA5pgXB7QfDM5qhOizmdneghXsSnwN5G74-yoGzg_Q@mail.gmail.com>
Date:	Thu, 14 Jun 2012 17:43:58 +0200
From:	Jean-Michel Hautbois <jhautbois@...il.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	netdev <netdev@...r.kernel.org>
Subject: Re: Regression on TX throughput when using bonding

2012/6/14 Eric Dumazet <eric.dumazet@...il.com>:
> On Thu, 2012-06-14 at 16:14 +0200, Jean-Michel Hautbois wrote:
>
>> ~# tc -s -d qdisc show dev eth1 > before_tc && sleep 10 && tc -s -d
>> qdisc show dev eth1 > after_tc && ./beforeafter before_tc after_tc
>> qdisc mq 0: root
>>  Sent 3185900568 bytes 788681 pkt (dropped 0, overlimits 0 requeues 620)
>>  backlog 0b 0p requeues 620
>>
>> As you can see, 2.5Gbps without any difficulties :).
>>
>> Thanks,
>> JM
>
> I have no idea why the throughput on the ethernet link changed.
>
> There is another bug elsewhere.  Use a thousand sockets instead of a
> few, and you'll hit the bug.
>
> Orphaning skbs should not lower the speed of the device; it only drops
> excess packets instead of blocking the application while it waits for
> the socket wmem allocation to be freed by the destructors.
>
> Are you playing with process priorities?
>
> If the ksoftirqd cannot run, this could explain the problem.
>

As suggested by Eric, here is a description that I hope is as precise as possible.
I send three raw video frames, 1920x1088@...ps, on three UDP sockets to
the same NIC.
Each stream is sent from its own thread, so I will focus on the numbers
for one thread.

This generates bursts of send() calls: every 1/30 s, each thread sends
3,133,440 bytes to the ethernet interface.
The loop is essentially:
while (n != 0)
{
  sendto(socket, packet, 4000);
  n -= 4000;
  packet += 4000;
}
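Fleshed out, a compilable sketch of that loop could look like the
following (the destination address and frame buffer are illustrative,
not from my real code, which builds the frame elsewhere; the function
returns the packet count so the chunking can be checked):

```c
#include <netinet/in.h>
#include <sys/socket.h>

#define FRAME_BYTES 3133440L  /* one raw 1920x1088 frame */
#define CHUNK       4000L     /* payload per packet (MTU 4096) */

/* Push one frame out in CHUNK-sized UDP datagrams.
 * Returns the number of packets sent. */
static long send_frame(int sock, const char *frame,
                       const struct sockaddr_in *dst)
{
        long n = FRAME_BYTES, sent = 0;
        const char *p = frame;

        while (n > 0) {
                long len = n < CHUNK ? n : CHUNK;
                /* Blocking socket: sendto() sleeps when the socket's
                 * wmem allocation is exhausted. */
                sendto(sock, p, len, 0,
                       (const struct sockaddr *)dst, sizeof(*dst));
                n -= len;
                p += len;
                sent++;
        }
        return sent;
}
```

Each thread would call send_frame() once per 1/30 s and then sleep
until the next frame is due.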

My interface is a bond on a 10Gbps interface, with the MTU set to 4096.
This means one thread sends 784 packets on my interface every 1/30 s,
then waits for the next burst, and so on.
The videos are not necessarily the same, so the threads may or may not
send simultaneously...

My socket is in blocking mode.

JM
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
