Date:	Tue, 09 Oct 2007 04:02:55 -0700 (PDT)
From:	David Miller <davem@...emloft.net>
To:	krkumar2@...ibm.com
Cc:	peter.p.waskiewicz.jr@...el.com, gaagaan@...il.com,
	general@...ts.openfabrics.org, hadi@...erus.ca,
	herbert@...dor.apana.org.au, jagana@...ibm.com, jeff@...zik.org,
	johnpol@....mipt.ru, kaber@...sh.net, mcarlson@...adcom.com,
	mchan@...adcom.com, netdev@...r.kernel.org,
	randy.dunlap@...cle.com, rdreier@...co.com, rick.jones2@...com,
	Robert.Olsson@...a.slu.se, shemminger@...ux-foundation.org,
	sri@...ibm.com, tgraf@...g.ch, xma@...ibm.com
Subject: Re: [PATCH 2/3][NET_BATCH] net core use batching

From: Krishna Kumar2 <krkumar2@...ibm.com>
Date: Tue, 9 Oct 2007 16:28:27 +0530

> Isn't it enough that the multiqueue+batching drivers handle skbs
> belonging to different queues themselves, instead of the qdisc having
> to figure that out? This will reduce costs for most skbs that are
> neither batched nor sent to multiqueue devices.
> 
> E.g., the driver can keep processing skbs and putting them on the
> correct tx_queue as long as the mapping remains the same. If the
> mapping changes, it posts the earlier skbs (with the correct lock) and
> then iterates over the other skbs that have the next different mapping,
> and so on.
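
For reference, the per-mapping batching described above would look
roughly like the sketch below.  The dev_queue_lock() / dev_xmit_run()
helpers are made-up placeholders rather than anything from the posted
patches; only the skb list primitives and skb->queue_mapping are real
interfaces here.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical stand-ins for whatever batching API the patches end up
 * with -- not real kernel functions. */
void dev_queue_lock(struct net_device *dev, int queue);
void dev_queue_unlock(struct net_device *dev, int queue);
void dev_xmit_run(struct net_device *dev, struct sk_buff_head *run, int queue);

/* Flush a list of prepared skbs to the device in runs that share the
 * same TX queue mapping, taking the per-queue lock once per run. */
static void xmit_by_mapping(struct net_device *dev, struct sk_buff_head *batch)
{
	struct sk_buff_head run;
	struct sk_buff *skb;
	int cur = -1;

	skb_queue_head_init(&run);

	while ((skb = __skb_dequeue(batch)) != NULL) {
		int map = skb->queue_mapping;

		if (cur != -1 && map != cur) {
			/* Mapping changed: post the earlier skbs under
			 * their queue's lock, then start a new run. */
			dev_queue_lock(dev, cur);
			dev_xmit_run(dev, &run, cur); /* assumed to drain 'run' */
			dev_queue_unlock(dev, cur);
		}
		cur = map;
		__skb_queue_tail(&run, skb);
	}

	if (!skb_queue_empty(&run)) {
		dev_queue_lock(dev, cur);
		dev_xmit_run(dev, &run, cur);
		dev_queue_unlock(dev, cur);
	}
}

The point of the scheme is that the per-queue lock is taken once per run
of same-mapping skbs, not once per skb.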

The complexity in most of these suggestions is beginning to drive me a
bit crazy :-)

This should be the simplest thing in the world: when the TX queue has
space, give it packets.  Period.

When I hear suggestions like "have the driver pick the queue in
->hard_start_xmit() and return some special status if the queue
becomes different".....  you know, I really begin to wonder :-)

If we have to go back, get into the queueing layer locks, have these
special cases, and whatnot, what's the point?

This code should eventually be able to run lockless all the way to the
TX queue handling code of the driver.  The queueing code should know
what TX queue the packet will be bound for, and always precisely
invoke the driver in a state where the driver can accept the packet.
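
As a rough sketch of that invariant (not real code -- the subqueue test
and the signatures are only approximate for a circa-2007 tree):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/*
 * The queueing layer already knows which TX queue the skb is bound for
 * and only calls the driver when that queue can take the packet, so
 * ->hard_start_xmit() never has to reject it.
 */
static int core_xmit(struct net_device *dev, struct sk_buff *skb)
{
	/* Queue chosen by the queueing layer, not by the driver. */
	u16 q = skb->queue_mapping;

	if (__netif_subqueue_stopped(dev, q))
		return NETDEV_TX_BUSY;	/* stays queued upstream until there is room */

	/* Queue has space: give it the packet. */
	return dev->hard_start_xmit(skb, dev);
}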

Ignore LLTX; it sucks, it was a big mistake, and we will get rid of it.