Message-ID: <500040AC.3070800@intel.com>
Date:	Fri, 13 Jul 2012 08:37:16 -0700
From:	Alexander Duyck <alexander.h.duyck@...el.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
CC:	netdev@...r.kernel.org, davem@...emloft.net,
	jeffrey.t.kirsher@...el.com, edumazet@...gle.com,
	bhutchings@...arflare.com, therbert@...gle.com,
	alexander.duyck@...il.com
Subject: Re: [RFC PATCH 1/2] net: Add new network device function to allow
 for MMIO batching

On 07/13/2012 12:38 AM, Eric Dumazet wrote:
> On Thu, 2012-07-12 at 08:39 -0700, Alexander Duyck wrote:
>
>> The problem is that in both of the cases where I have seen the issue,
>> the qdisc is actually empty.
>>
> You mean a router workload, with links of the same bandwidth.
> (BQL doesn't trigger)
>
> Frankly, what percentage of Linux-powered machines act as high-perf
> routers?
Actually, I was seeing this issue with the sending application on the
same CPU as the Tx cleanup.  The problem was that the CPU would stall on
the MMIO write and consume cycles instead of putting work into placing
more packets on the queue.
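
To make it concrete, here is a rough sketch of the per-packet transmit
pattern I am describing; example_ring, example_fill_desc and
example_xmit are made-up names for illustration, not the actual driver
code:

#include <linux/netdevice.h>
#include <linux/io.h>

/* Hypothetical ring structure, just enough for the sketch. */
struct example_ring {
	void __iomem	*tail;		/* MMIO tail/doorbell register */
	u16		next_to_use;	/* next free descriptor index */
};

static netdev_tx_t example_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct example_ring *ring = netdev_priv(dev);

	/* Filling the descriptor in cacheable memory is cheap. */
	example_fill_desc(ring, skb);

	/* Make the descriptor visible to the device before the doorbell. */
	wmb();

	/*
	 * The expensive part: one uncached MMIO write per packet.  While
	 * this write drains, the CPU stalls instead of starting on the
	 * next packet, which is exactly the behavior described above.
	 */
	writel(ring->next_to_use, ring->tail);

	return NETDEV_TX_OK;
}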

>> In the case of pktgen, it does not use the qdisc layer at all.  It
>> just directly calls ndo_start_xmit.
> pktgen is in-kernel; adding a complete() call in it is certainly OK,
> if we can avoid kernel bloat.
>
> I mean, pktgen represents less than 0.000001% of real workloads.
I realize that, but it does provide a valid means of stress-testing an
interface and demonstrating that the MMIO writes cause significant CPU
stalls and extra bus utilization.
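
For reference, pktgen's transmit path boils down to roughly the
following (heavily simplified, with accounting and error handling
trimmed), which is why the qdisc never enters the picture:

static netdev_tx_t pktgen_style_xmit(struct sk_buff *skb,
				     struct net_device *odev)
{
	struct netdev_queue *txq = netdev_get_tx_queue(odev, 0);
	netdev_tx_t ret = NETDEV_TX_BUSY;

	__netif_tx_lock_bh(txq);
	if (!netif_xmit_frozen_or_stopped(txq))
		/* Straight into the driver: no qdisc, no softirq. */
		ret = odev->netdev_ops->ndo_start_xmit(skb, odev);
	__netif_tx_unlock_bh(txq);

	return ret;
}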

>> In the standard networking case we never fill the qdisc because the
>> MMIO write stalls the entire CPU, so the application never gets a
>> chance to get ahead of the hardware.  From what I can tell, the only
>> case in which the qdisc_run solution would work is if ndo_start_xmit
>> were called on a different CPU from the application that is doing the
>> transmitting.
> Hey, I can tell that the qdisc is not empty on many workloads.
> But BQL and TSO mean we only send one or two packets per qdisc run.
>
> I understand this MMIO batching helps router workloads, or workloads
> using many small packets.
>
> But on other workloads, this adds a significant latency source
> (NET_TX_SOFTIRQ)
>
> It would be good to instrument the extra delay on a single UDP send.
>
> (entering the do_softirq() path is not just a few instructions...)
These kinds of issues are one of the reasons this feature is disabled
by default.  You have to enable it explicitly by setting the
dispatch_limit to something other than 0.
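
Conceptually, the gating amounts to something like the following,
assuming example_ring from the earlier sketch grows dispatch_limit and
pending fields; the names and the example_schedule_flush helper are
illustrative rather than the exact code in the patch:

	/*
	 * dispatch_limit == 0 (the default) keeps today's behavior:
	 * write the doorbell immediately for every packet.
	 */
	if (!ring->dispatch_limit || ++ring->pending >= ring->dispatch_limit) {
		writel(ring->next_to_use, ring->tail);
		ring->pending = 0;
	} else {
		/* Defer the MMIO write so it can be flushed in a batch. */
		example_schedule_flush(ring);
	}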

I suppose I could just make it part of the Tx cleanup itself, since I
am only doing a trylock instead of waiting to take the full lock.  I am
open to suggestions for alternatives to NET_TX_SOFTIRQ.
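
Something along these lines is what I have in mind for the cleanup path
(again only a sketch with made-up names, building on the example_ring
above):

static void example_clean_tx_irq(struct example_ring *ring,
				 struct netdev_queue *txq)
{
	/* ... reclaim completed Tx descriptors here ... */

	/*
	 * Flush any deferred doorbell, but never spin on the queue lock
	 * from the cleanup path; if someone else holds it, they will
	 * flush for us.
	 */
	if (__netif_tx_trylock(txq)) {
		if (ring->pending) {
			writel(ring->next_to_use, ring->tail);
			ring->pending = 0;
		}
		__netif_tx_unlock(txq);
	}
}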

Thanks,

Alex