Date:	Tue, 09 Oct 2007 17:04:35 -0700 (PDT)
From:	David Miller <davem@...emloft.net>
To:	hadi@...erus.ca
Cc:	shemminger@...ux-foundation.org, andi@...stfloor.org,
	jeff@...zik.org, johnpol@....mipt.ru, herbert@...dor.apana.org.au,
	gaagaan@...il.com, Robert.Olsson@...a.slu.se,
	netdev@...r.kernel.org, rdreier@...co.com,
	peter.p.waskiewicz.jr@...el.com, mcarlson@...adcom.com,
	jagana@...ibm.com, general@...ts.openfabrics.org,
	mchan@...adcom.com, tgraf@...g.ch, randy.dunlap@...cle.com,
	sri@...ibm.com, kaber@...sh.net
Subject: Re: [ofa-general] Re: [PATCH 2/3][NET_BATCH] net core use batching

From: jamal <hadi@...erus.ca>
Date: Tue, 09 Oct 2007 17:56:46 -0400

> if the h/ware queues are full because of link pressure etc, you drop. We
> drop today when the s/ware queues are full. The driver txmit lock takes
> place of the qdisc queue lock etc. I am assuming there is still need for
> that locking. The filter/classification scheme still works as is and
> select classes which map to rings. tc still works as is etc.

I understand your suggestion.

We have to keep in mind, however, that the sw queue right now is 1000
packets.  I heavily discourage any driver author from trying to use
any single TX queue of that size, which means that just dropping on
back pressure might not work so well.

Or it might be perfect and signal TCP to back off, who knows! :-)

While working out this issue in my mind, it occurred to me that we
can put the sw queue into the driver as well.

The idea is that the network stack, as in the pure hw queue scheme,
unconditionally always submits new packets to the driver.  Therefore
even if the hw TX queue is full, the driver can still queue to an
internal sw queue with some limit (say 1000 for ethernet, as is used
now).

When the hw TX queue gains space, the driver self-batches packets
from the sw queue to the hw queue.
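To make the shape of that concrete, here is a rough user-space C
sketch of the idea -- not actual kernel or NIU driver code.  All of
the names (fake_driver, drv_xmit, drv_tx_complete) and the ring size
are made up for illustration; the 1000-packet sw limit is just the
number mentioned above.  The driver never refuses a packet: it fills
the hw ring, spills to its bounded internal sw queue, drops only when
both are full, and on TX completion self-batches queued packets into
the freed ring slots.

/*
 * Hypothetical user-space sketch of the scheme above; not actual
 * kernel or NIU driver code.
 */
#include <stdio.h>
#include <stdbool.h>

#define HW_RING_SIZE 64      /* stand-in for a hardware TX ring */
#define SW_QUEUE_LIMIT 1000  /* "say 1000 for ethernet, as is used now" */

struct fake_driver {
	int hw_in_flight;    /* descriptors currently owned by "hardware" */
	int sw_queued;       /* packets parked in the driver's sw queue */
	long dropped;
};

/* The stack calls this unconditionally for every packet. */
static bool drv_xmit(struct fake_driver *d, int pkt)
{
	(void)pkt;

	if (d->hw_in_flight < HW_RING_SIZE) {
		d->hw_in_flight++;       /* straight onto the hw ring */
		return true;
	}
	if (d->sw_queued < SW_QUEUE_LIMIT) {
		d->sw_queued++;          /* hw full: park in the sw queue */
		return true;
	}
	d->dropped++;                    /* both full: only now do we drop */
	return false;
}

/* Models a TX-completion interrupt: hw frees 'done' slots, and the
 * driver self-batches packets from its sw queue into the freed room. */
static void drv_tx_complete(struct fake_driver *d, int done)
{
	int room, batch;

	if (done > d->hw_in_flight)
		done = d->hw_in_flight;
	d->hw_in_flight -= done;

	room = HW_RING_SIZE - d->hw_in_flight;
	batch = d->sw_queued < room ? d->sw_queued : room;
	d->sw_queued -= batch;
	d->hw_in_flight += batch;        /* one batched refill per completion */
}

int main(void)
{
	struct fake_driver d = {0};
	int i;

	for (i = 0; i < 2000; i++)       /* burst larger than ring + sw limit */
		drv_xmit(&d, i);
	printf("after burst:      hw=%d sw=%d dropped=%ld\n",
	       d.hw_in_flight, d.sw_queued, d.dropped);

	drv_tx_complete(&d, 32);         /* hw finishes 32 packets */
	printf("after completion: hw=%d sw=%d dropped=%ld\n",
	       d.hw_in_flight, d.sw_queued, d.dropped);
	return 0;
}

Running it with a 2000-packet burst shows the hw ring and then the sw
queue filling, drops starting only once both are full, and a single
completion pass refilling the ring in one batch rather than per packet.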

It sort of obviates the need for mid-level queue batching in the
generic networking.  Compared to letting the driver self-batch,
the mid-level batching approach is pure overhead.

We all seem to be mentioning similar ideas.  For example, you
can get the above kind of scheme today by using a mid-level queue
length of zero, and I believe this idea was mentioned by Stephen
Hemminger earlier.
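
As a hedged illustration of that equivalence (hypothetical names, not
the real dev_queue_xmit() path): with a mid-level queue length of
zero there is nothing for the stack to buffer into, so every packet
is handed straight to the driver, which then does its own
spill-and-batch as sketched above.

#include <stdio.h>

#define MIDLEVEL_QLEN 0   /* the "queue length of zero" configuration */

/* Stands in for the driver entry point; per the scheme above it never
 * refuses a packet, spilling to its internal sw queue instead. */
static void driver_xmit(int pkt)
{
	printf("driver accepted packet %d\n", pkt);
}

/* Simplified, hypothetical model of the stack-level transmit step. */
static void stack_xmit(int pkt)
{
	if (MIDLEVEL_QLEN == 0) {
		driver_xmit(pkt);        /* no mid-level buffering at all */
		return;
	}
	/* otherwise the packet would sit in the mid-level queue here */
}

int main(void)
{
	int i;

	for (i = 0; i < 4; i++)
		stack_xmit(i);
	return 0;
}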

I may experiment with this in the NIU driver.
