Date:	Tue, 15 Jul 2008 21:14:11 -0700 (PDT)
From:	David Miller <davem@...emloft.net>
To:	kaber@...sh.net
Cc:	netdev@...r.kernel.org
Subject: Re: [PATCH 0/14]: Make packet scheduler multiqueue aware.

From: David Miller <davem@...emloft.net>
Date: Mon, 14 Jul 2008 18:48:49 -0700 (PDT)

> One idea is to allow all of the queues to point at a
> single qdisc.  We'd just need to work out how to do the
> locking.
> 
> For example, the Qdisc has a lock member, and a pointer.
> For simple qdiscs the pointer points at the netdev_queue
> lock.  But when sharing, we use the in-Qdisc static lock.
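
A rough sketch of that idea, with userspace stand-ins (pthread
mutexes instead of kernel spinlocks, and invented qdisc_attach_*()
helper names), just to make the lock indirection concrete:

#include <pthread.h>

struct Qdisc;

struct netdev_queue {
	pthread_mutex_t	_xmit_lock;	/* per-TX-queue lock */
	struct Qdisc	*qdisc;
};

struct Qdisc {
	pthread_mutex_t	q_lock;		/* used only when shared */
	pthread_mutex_t	*lock;		/* what callers actually take */
};

/* Simple case: one qdisc per TX queue, just reuse the queue's lock. */
static void qdisc_attach_simple(struct Qdisc *q, struct netdev_queue *txq)
{
	q->lock = &txq->_xmit_lock;
	txq->qdisc = q;
}

/* Shared case: several TX queues point at one qdisc, so it must be
 * serialized by its own embedded lock instead. */
static void qdisc_attach_shared(struct Qdisc *q,
				struct netdev_queue **txqs, int n)
{
	int i;

	pthread_mutex_init(&q->q_lock, NULL);
	q->lock = &q->q_lock;
	for (i = 0; i < n; i++)
		txqs[i]->qdisc = q;
}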

I've investigated several avenues for implementing this, and I've hit
a brick wall with each approach.  It's not easy.

Much of the locking, and some of the semantics, can be trivially
sorted out:

1) I have a change which moves gso_skb into the Qdisc.  That would
   definitely be needed for qdisc sharing amongst netdev_queue
   objects (see the first sketch after this list).

2) Qdiscs want to lock the qdisc tree and do several things, for
   example, unlink and walk up the parents to adjust the qlen.

   For that, locking the root qdisc ought to be sufficient, I suppose.

3) The netdev_queue lock also synchronizes the ->qdisc linkage of the
   root.  To get around some of this, I wrote up a change that moves
   the bulk of qdisc_destroy()'s work into the qdisc destroy RCU
   handler.  On top of that I passed the first qdisc seen in a
   qdisc_run() call all the way down into qdisc_restart() and removed
   all of the "recheck txq->qdisc" tests scattered across these paths.

   Basically this works because we can keep enqueueing safely: BHs are
   disabled the whole time, which holds off the RCU grace period.  The
   RCU callback then resets and destroys the qdisc, freeing any packets
   still queued in it (see the second sketch after this list).
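
First sketch, for point 1: once gso_skb lives in the Qdisc rather
than the netdev_queue, a partially-sent packet follows the qdisc
around no matter which TX queue ends up transmitting it.  The types
and helper names here (dequeue_skb(), requeue_skb()) are simplified
stand-ins, not the real kernel code:

struct sk_buff {
	struct sk_buff *next;
};

struct Qdisc {
	struct sk_buff	*gso_skb;	/* requeued packet, if any */
	struct sk_buff	*(*dequeue)(struct Qdisc *q);
};

/* Always retry the stashed packet before asking the qdisc for more. */
static struct sk_buff *dequeue_skb(struct Qdisc *q)
{
	struct sk_buff *skb = q->gso_skb;

	if (skb)
		q->gso_skb = NULL;
	else
		skb = q->dequeue(q);
	return skb;
}

/* Driver couldn't take it; stash it in the qdisc for the next run. */
static void requeue_skb(struct Qdisc *q, struct sk_buff *skb)
{
	q->gso_skb = skb;
}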
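
Second sketch, for point 3: pushing the teardown into an RCU
callback.  call_rcu_stub() is a toy stand-in for the kernel's
call_rcu(), which would only run the callback once every CPU has left
its BH-disabled section, so nobody can still be enqueueing to the
dying qdisc:

#include <stddef.h>

struct rcu_head {
	struct rcu_head	*next;
	void		(*func)(struct rcu_head *);
};

struct Qdisc {
	struct rcu_head	rcu;
	/* queue state, stats, ... */
};

static void call_rcu_stub(struct rcu_head *head,
			  void (*func)(struct rcu_head *))
{
	func(head);	/* immediate here; deferred past the grace
			 * period in the kernel */
}

static void qdisc_destroy_rcu(struct rcu_head *head)
{
	struct Qdisc *q = (struct Qdisc *)
		((char *)head - offsetof(struct Qdisc, rcu));

	/* reset the qdisc and free any packets still queued in it */
	(void)q;
}

static void qdisc_destroy(struct Qdisc *q)
{
	/* unlink q from its netdev_queue(s) under the root lock,
	 * then defer the actual destruction: */
	call_rcu_stub(&q->rcu, qdisc_destroy_rcu);
}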

But then we get into the issue of which netdev_queue backpointer
should be used.  This is especially important for a shared qdisc.

At the top level we enqueue an SKB for a particular TXQ.  But as we
call down into qdisc_run(), the SKB we get out of q->dequeue() could
be for another TXQ.

This gets even more interesting when a qdisc_watchdog() fires.  Which
TXQ should we call netif_schedule for? :-)

I guess this means qdisc_run() and netif_schedule() should operate
on root qdiscs, rather than netdev_queue objects.

Then, as qdisc_restart() pulls packets out of the qdisc, it determines
the TXQ for that SKB and uses that to lock and call down into the
->hard_start_xmit() handler.
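
A sketch of that flow, again with userspace stand-ins and an
invented skb_get_txq() helper: qdisc_run() is keyed on the root
qdisc, and the TXQ to lock is only known after the dequeue:

#include <pthread.h>

struct sk_buff {
	int	queue_mapping;	/* which TXQ this packet is bound for */
};

struct netdev_queue {
	pthread_mutex_t	xmit_lock;
};

struct Qdisc {
	pthread_mutex_t	*lock;		/* assumed initialized */
	struct sk_buff	*(*dequeue)(struct Qdisc *q);
	struct netdev_queue *txqs;	/* the device's TXQ array */
};

/* Hypothetical: map a dequeued packet back to its TX queue. */
static struct netdev_queue *skb_get_txq(struct Qdisc *root,
					struct sk_buff *skb)
{
	return &root->txqs[skb->queue_mapping];
}

static int hard_start_xmit(struct sk_buff *skb, struct netdev_queue *txq)
{
	(void)skb; (void)txq;
	return 0;	/* the driver's transmit routine would run here */
}

/* One iteration: dequeue under the qdisc's lock, then transmit under
 * the lock of whatever TXQ the packet maps to. */
static int qdisc_restart(struct Qdisc *root)
{
	struct sk_buff *skb;
	struct netdev_queue *txq;

	pthread_mutex_lock(root->lock);
	skb = root->dequeue(root);
	pthread_mutex_unlock(root->lock);
	if (!skb)
		return 0;

	txq = skb_get_txq(root, skb);
	pthread_mutex_lock(&txq->xmit_lock);
	hard_start_xmit(skb, txq);
	pthread_mutex_unlock(&txq->xmit_lock);
	return 1;
}

/* qdisc_run() operates on a root qdisc, not on a netdev_queue. */
static void qdisc_run(struct Qdisc *root)
{
	while (qdisc_restart(root))
		;
}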

Just FYI; I'll try to explore this further.
