Date:	Wed, 10 Oct 2007 02:37:16 +0200
From:	Andi Kleen <andi@...stfloor.org>
To:	David Miller <davem@...emloft.net>
Cc:	hadi@...erus.ca, shemminger@...ux-foundation.org,
	andi@...stfloor.org, jeff@...zik.org, johnpol@....mipt.ru,
	herbert@...dor.apana.org.au, gaagaan@...il.com,
	Robert.Olsson@...a.slu.se, netdev@...r.kernel.org,
	rdreier@...co.com, peter.p.waskiewicz.jr@...el.com,
	mcarlson@...adcom.com, jagana@...ibm.com,
	general@...ts.openfabrics.org, mchan@...adcom.com, tgraf@...g.ch,
	randy.dunlap@...cle.com, sri@...ibm.com, kaber@...sh.net
Subject: Re: [ofa-general] Re: [PATCH 2/3][NET_BATCH] net core use batching

On Tue, Oct 09, 2007 at 05:04:35PM -0700, David Miller wrote:
> We have to keep in mind, however, that the sw queue right now is 1000
> packets.  I heavily discourage any driver author to try and use any
> single TX queue of that size.  

Why would you discourage them? 

If 1000 is ok for a software queue, why would it not be ok
for a hardware queue?

> Which means that just dropping on back
> pressure might not work so well.
> 
> Or it might be perfect and signal TCP to backoff, who knows! :-)

1000 packets is a lot. I don't have hard data, but my gut feeling
is that less would also do.

And if the hw queues are not enough, a better scheme might be to
just manage this in the sockets in sendmsg, e.g. provide a wait queue that
drivers can wake up, and let the sockets block on it until there is more
queue space.
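
Very roughly, the sort of thing I mean is sketched below. This is not
against any real driver; all structure and function names are made up,
and the details (where the counter is decremented, how it maps to hw
descriptors) are left out:

	#include <linux/wait.h>
	#include <linux/atomic.h>

	struct txq_throttle {			/* hypothetical per-device state */
		wait_queue_head_t	wait;	/* sockets sleep here in sendmsg */
		atomic_t		space;	/* rough count of free hw tx slots */
	};

	static void txq_throttle_init(struct txq_throttle *t)
	{
		init_waitqueue_head(&t->wait);
		atomic_set(&t->space, 0);
	}

	/* socket layer, e.g. from sendmsg, before handing the packet down;
	 * the caller would decrement t->space when it actually queues one */
	static int txq_wait_for_space(struct txq_throttle *t)
	{
		return wait_event_interruptible(t->wait,
						atomic_read(&t->space) > 0);
	}

	/* driver TX completion path, after reclaiming 'freed' descriptors */
	static void txq_report_space(struct txq_throttle *t, int freed)
	{
		atomic_add(freed, &t->space);
		wake_up_interruptible(&t->wait);
	}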

> The idea is that the network stack, as in the pure hw queue scheme,
> unconditionally always submits new packets to the driver.  Therefore
> even if the hw TX queue is full, the driver can still queue to an
> internal sw queue with some limit (say 1000 for ethernet, as is used
> now).
>
> When the hw TX queue gains space, the driver self-batches packets
> from the sw queue to the hw queue.

I don't really see the advantage over the qdisc in that scheme.
It's certainly not simpler, probably more code, and would likely
not need fewer locks either (e.g. a currently lockless driver
would need a new lock for its sw queue). Also it is unclear to me
that it would really be any faster.
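
To make the locking point concrete, here is a rough sketch of what such a
driver-internal sw queue could look like. Hypothetical driver, all names
made up, init (skb_queue_head_init, spin_lock_init) and the hw-specific
helpers omitted; it also assumes both xmit and completion run in BH
context, otherwise the irqsave lock variants would be needed:

	#include <linux/netdevice.h>
	#include <linux/skbuff.h>
	#include <linux/spinlock.h>

	#define SW_TXQ_LIMIT	1000		/* the limit discussed above */

	struct foo_priv {			/* hypothetical driver private data */
		struct sk_buff_head	sw_queue;	/* driver-internal sw queue */
		spinlock_t		sw_lock;	/* the new lock this scheme costs */
		/* ... hw ring state ... */
	};

	static bool foo_hw_ring_full(struct foo_priv *fp);		/* hw specific */
	static void foo_hw_post(struct foo_priv *fp, struct sk_buff *skb);

	static int foo_start_xmit(struct sk_buff *skb, struct net_device *dev)
	{
		struct foo_priv *fp = netdev_priv(dev);

		if (!foo_hw_ring_full(fp)) {
			foo_hw_post(fp, skb);		/* fast path: straight to hw */
			return NETDEV_TX_OK;
		}

		spin_lock(&fp->sw_lock);
		if (skb_queue_len(&fp->sw_queue) >= SW_TXQ_LIMIT) {
			spin_unlock(&fp->sw_lock);
			dev_kfree_skb_any(skb);		/* drop on back pressure */
			return NETDEV_TX_OK;
		}
		__skb_queue_tail(&fp->sw_queue, skb);
		spin_unlock(&fp->sw_lock);
		return NETDEV_TX_OK;
	}

	/* TX completion path: batch packets from the sw queue into freed hw slots */
	static void foo_tx_complete(struct foo_priv *fp)
	{
		struct sk_buff *skb;

		spin_lock(&fp->sw_lock);
		while (!foo_hw_ring_full(fp) &&
		       (skb = __skb_dequeue(&fp->sw_queue)) != NULL)
			foo_hw_post(fp, skb);
		spin_unlock(&fp->sw_lock);
	}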

-Andi
