Date:	Mon, 24 Sep 2007 16:47:06 -0700
From:	"Waskiewicz Jr, Peter P" <peter.p.waskiewicz.jr@...el.com>
To:	<hadi@...erus.ca>
Cc:	"David Miller" <davem@...emloft.net>, <krkumar2@...ibm.com>,
	<johnpol@....mipt.ru>, <herbert@...dor.apana.org.au>,
	<kaber@...sh.net>, <shemminger@...ux-foundation.org>,
	<jagana@...ibm.com>, <Robert.Olsson@...a.slu.se>,
	<rick.jones2@...com>, <xma@...ibm.com>, <gaagaan@...il.com>,
	<netdev@...r.kernel.org>, <rdreier@...co.com>,
	<mcarlson@...adcom.com>, <jeff@...zik.org>, <mchan@...adcom.com>,
	<general@...ts.openfabrics.org>, <kumarkr@...ux.ibm.com>,
	<tgraf@...g.ch>, <randy.dunlap@...cle.com>, <sri@...ibm.com>
Subject: RE: [PATCH 1/4] [NET_SCHED] explicit hold dev tx lock

> On Mon, 2007-24-09 at 15:57 -0700, Waskiewicz Jr, Peter P wrote:
>
> > I've looked at that as a candidate to use.  The lock for enqueue
> > would be needed when actually placing the skb into the appropriate
> > software queue for the qdisc, so it'd be quick.
>
> The enqueue is easy to comprehend. The single device queue lock
> should suffice. The dequeue is interesting:

We should make sure we're symmetric with the locking between enqueue
and dequeue.  If we use the single device queue lock on enqueue, then
dequeue will also need to take that lock in addition to the individual
queue lock.  Those details are fairly trivial, though, compared to
making the actual dequeue efficient.
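
To sketch the symmetry (the function names and the subq_lock array are
made up for illustration, not what the final patches will use):

static spinlock_t subq_lock[8];		/* hypothetical per-ring locks */

/* Enqueue: the single device queue_lock covers placing the skb into
 * the software queue, so the hold time is short. */
static int sketch_enqueue(struct net_device *dev, struct Qdisc *q,
			  struct sk_buff *skb)
{
	int rc;

	spin_lock(&dev->queue_lock);
	rc = q->enqueue(skb, q);
	spin_unlock(&dev->queue_lock);
	return rc;
}

/* Dequeue: symmetric with the above -- take the same device lock,
 * plus the individual lock of the ring being serviced. */
static struct sk_buff *sketch_dequeue(struct net_device *dev,
				      struct Qdisc *q, int ring)
{
	struct sk_buff *skb;

	spin_lock(&dev->queue_lock);
	spin_lock(&subq_lock[ring]);
	skb = q->dequeue(q);
	spin_unlock(&subq_lock[ring]);
	spin_unlock(&dev->queue_lock);
	return skb;
}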

> Maybe you can point me to some doc, or describe the dequeue aspect
> to me; are you planning to have an array of tx locks, one per ring?
> What is the policy for how the qdisc queues are locked/mapped to
> tx rings?

The dequeue locking would be pushed into the qdisc itself.  This is how
I had it originally; it did make the code more complex, but it was
successful at breaking the heavily-contended queue_lock apart.  I have a
subqueue structure in netdev right now, which only has queue_state (for
netif_{start|stop}_subqueue).  That state is checked in sch_prio's
dequeue right now, for both prio and rr.  My approach is to add a
queue_lock to that struct, so each queue allocated by the driver would
have its own lock.  Then in dequeue, that lock would be taken when the
skb is about to be dequeued.
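
Concretely, the sketch looks something like this -- the queue_lock
field is the addition, the dequeue loop approximates what sch_prio
already does, and none of the names are final:

struct net_device_subqueue {
	unsigned long	state;		/* netif_{start|stop}_subqueue */
	spinlock_t	queue_lock;	/* new: one lock per driver queue */
};

static struct sk_buff *prio_dequeue(struct Qdisc *sch)
{
	struct prio_sched_data *q = qdisc_priv(sch);
	struct Qdisc *qdisc;
	struct sk_buff *skb;
	int prio;

	for (prio = 0; prio < q->bands; prio++) {
		/* Band-to-ring mapping is assumed one-to-one. */
		if (netif_subqueue_stopped(sch->dev, prio))
			continue;

		/* Take this ring's lock as the skb is pulled off; on
		 * success it stays held into the xmit path and is
		 * released there (see below). */
		spin_lock(&sch->dev->egress_subqueue[prio].queue_lock);
		qdisc = q->queues[prio];
		skb = qdisc->dequeue(qdisc);
		if (skb) {
			sch->q.qlen--;
			return skb;
		}
		spin_unlock(&sch->dev->egress_subqueue[prio].queue_lock);
	}
	return NULL;
}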

The skb->queue_mapping field also maps directly to the queue index
itself, so the queue's lock can easily be released outside of the
dequeue function.  The policy would be to use a spin_trylock() in
dequeue, so that dequeue can still do work if enqueue or another dequeue
is busy.  And the allocation of qdisc queues to device queues is assumed
to be one-to-one (that's how the qdisc behaves now).
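
In code, that policy is the loop above with spin_trylock(), plus a
small helper for the release -- again, the names are assumed:

	/* In dequeue: skip a busy ring rather than spin on it, so
	 * dequeue makes progress while an enqueue or another dequeue
	 * holds the lock. */
	if (!spin_trylock(&sch->dev->egress_subqueue[prio].queue_lock))
		continue;

/* Outside dequeue: skb->queue_mapping recovers the ring index, so the
 * lock taken at dequeue time can be dropped from the transmit path
 * without any qdisc context. */
static inline void subqueue_tx_unlock(struct net_device *dev,
				      struct sk_buff *skb)
{
	spin_unlock(&dev->egress_subqueue[skb->queue_mapping].queue_lock);
}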

I really just need to put my nose to the grindstone and get the patches
together and to the list...stay tuned.

Thanks,
-PJ Waskiewicz
