Message-Id: <20071008.142626.26988698.davem@davemloft.net>
Date:	Mon, 08 Oct 2007 14:26:26 -0700 (PDT)
From:	David Miller <davem@...emloft.net>
To:	hadi@...erus.ca
Cc:	peter.p.waskiewicz.jr@...el.com, krkumar2@...ibm.com,
	johnpol@....mipt.ru, herbert@...dor.apana.org.au, kaber@...sh.net,
	shemminger@...ux-foundation.org, jagana@...ibm.com,
	Robert.Olsson@...a.slu.se, rick.jones2@...com, xma@...ibm.com,
	gaagaan@...il.com, netdev@...r.kernel.org, rdreier@...co.com,
	mcarlson@...adcom.com, jeff@...zik.org, mchan@...adcom.com,
	general@...ts.openfabrics.org, tgraf@...g.ch,
	randy.dunlap@...cle.com, sri@...ibm.com
Subject: Re: [PATCH 2/3][NET_BATCH] net core use batching

From: jamal <hadi@...erus.ca>
Date: Mon, 08 Oct 2007 16:48:50 -0400

> On Mon, 2007-08-10 at 12:46 -0700, Waskiewicz Jr, Peter P wrote:
> 
> > 	I still have concerns about how this will work with Tx multiqueue.
> > The way the batching code looks right now, you will probably send a
> > batch of skb's from multiple bands from PRIO or RR to the driver.  For
> > non-Tx multiqueue drivers, this is fine.  For Tx multiqueue drivers,
> > this isn't fine, since the Tx ring is selected by the value of
> > skb->queue_mapping (set by the qdisc on {prio|rr}_classify()).  If the
> > whole batch comes in with different queue_mappings, this could prove to
> > be an interesting issue.
> 
> True, that needs some resolution. Here's a hand-waving thought:
> Assuming all packets of a specific map end up in the same qdisc queue,
> it seems feasible to ask the qdisc scheduler to give us enough packages
> (I've seen people use that term to refer to packets) for each hardware
> ring's available space. With the patches I posted, I do that via
> dev->xmit_win, which assumes only one view of the driver; essentially a
> single ring.
> If that is doable, then it is up to the driver to say
> "I have space for 5 in ring[0], 10 in ring[1], 0 in ring[2]" based on
> what scheduling scheme the driver implements - the dev->blist can stay
> the same. It's a handwave, so there may be issues there and there could
> be better ways to handle this.
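
A rough sketch of the per-ring accounting described above; ring_space() and
fill_batch_per_ring() are hypothetical names rather than anything from the
posted patches, it assumes the 2.6.23-era egress_subqueue_count field, and it
glosses over the unresolved issue of pulling from the qdisc only packets whose
skb->queue_mapping matches the ring being filled:

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/sch_generic.h>

/* Hypothetical driver-provided hook that answers "I have space for 5 in
 * ring[0], 10 in ring[1], 0 in ring[2]".
 */
int ring_space(struct net_device *dev, int ring);

/* Fill a batch list with at most as many packets as each Tx ring can take;
 * a real version would also have to restrict each ring's share to packets
 * mapped to that ring.
 */
static void fill_batch_per_ring(struct net_device *dev, struct Qdisc *q,
				struct sk_buff_head *blist)
{
	int ring;

	for (ring = 0; ring < dev->egress_subqueue_count; ring++) {
		int space = ring_space(dev, ring);

		while (space-- > 0) {
			struct sk_buff *skb = q->dequeue(q);

			if (!skb)
				break;
			__skb_queue_tail(blist, skb);
		}
	}
}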

Add xmit_win to struct net_device_subqueue, problem solved.
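
For illustration, a minimal sketch of that suggestion, assuming the 2.6.23-era
struct carried only a state word; xmit_win is the proposed addition, not merged
code:

struct net_device_subqueue {
	/* Per-queue state, as in mainline at the time. */
	unsigned long	state;

	/* Proposed: packets this Tx ring can currently accept, so the
	 * batching layer can size each batch per ring instead of relying
	 * on a single per-device dev->xmit_win.
	 */
	unsigned long	xmit_win;
};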
