Date:	Sun, 25 Mar 2012 11:43:50 +0100
From:	David Woodhouse <dwmw2@...radead.org>
To:	David Miller <davem@...emloft.net>
Cc:	netdev@...r.kernel.org
Subject: Re: [STRAW MAN PATCH] sch_teql doesn't load-balance ppp(oatm) slaves

On Thu, 2012-03-22 at 23:03 -0400, David Miller wrote:
> From: David Woodhouse <dwmw2@...radead.org>
> Date: Thu, 22 Mar 2012 21:03:38 +0000
> 
> > teql_dequeue() will *always* give up a skb when it's called, if there is
> > one. If there's *not*, and the tx queue becomes empty, then the device
> > for which teql_dequeue() was called is 'promoted' to the front of the
> > line (master->slaves). That device will receive the next packet that
> > comes in, even if there are other devices which are *also* idle and
> > waiting for packets. Whenever a new packet comes in, the *last* device
> > to call teql_dequeue() gets it.
> 
> The teql master ->ndo_start_xmit() method is where the slave iteration
> occurs, and it occurs on every successful transmit of a single packet.

Thanks for the response.

I'd seen this in teql_master_xmit(), and it works *perfectly*, *if* we
let it do its job.

The only problem here is that the PPP code is greedily sucking up all
the packets it can, calling skb_dequeue() in a loop and not letting the
*other* device(s) get any of the packets. Even when it *doesn't* get a
packet because it's emptied the queue, it gets bumped to the front of
the slaves list again, so it'll get the *next* one!
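
To make the starvation concrete, here's a tiny user-space model of that
behaviour (nothing below is kernel code; the two "slaves", the shared
queue and the greedy drain loop are all invented for illustration). One
consumer drains the queue in a loop and gets promoted back to the front
whenever it empties it, so its peer barely transmits anything:

/* Toy model: a greedy slave that drains the shared queue in a loop
 * starves its round-robin peer.  Purely illustrative, not kernel code. */
#include <stdio.h>

static int queue_len;        /* packets waiting on the shared queue      */
static int sent[2];          /* packets each slave actually transmitted  */
static int front = 1;        /* which slave is currently first in line   */

static void give_turn(int slave)
{
    if (slave == 0) {                 /* greedy, like ppp_generic does */
        while (queue_len > 0) {
            queue_len--;
            sent[0]++;
        }
        front = 0;                    /* emptied the queue: promoted again */
    } else if (queue_len > 0) {       /* polite: one packet per turn */
        queue_len--;
        sent[1]++;
        front = 0;                    /* pass the turn to the other slave */
    }
}

int main(void)
{
    for (int burst = 0; burst < 100; burst++) {
        queue_len += 10;              /* a burst of packets arrives   */
        give_turn(front);             /* whoever is in front gets it  */
    }
    printf("greedy slave sent %d, other slave sent %d\n",
           sent[0], sent[1]);
    return 0;
}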

Is it that behaviour which makes you say PPP is effectively a virtual
device for this purpose? I wonder if I should just *fix* that instead,
so that it behaves like a real device.

It's a bad idea to have huge hidden queues anyway (a whole
wmem_default's worth of packets sits in a hidden queue between
ppp_generic and the ATM device, ffs!), so perhaps if we just fix *that*
within PPP, it should work a bit better with TEQL?
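
(For a rough sense of scale, here's a stand-alone back-of-the-envelope
calculation; the 1500-byte packet size and the ~1 Mbit/s uplink rate are
assumptions picked for illustration, not measurements from my setup. It
just reads net.core.wmem_default and works out how many full-size frames
that is and how long they'd take to drain.)

/* Back-of-the-envelope: how much latency does a wmem_default-sized
 * hidden queue represent?  Packet size and link rate are assumptions. */
#include <stdio.h>

int main(void)
{
    long wmem = 0;
    FILE *f = fopen("/proc/sys/net/core/wmem_default", "r");

    if (!f) {
        perror("wmem_default");
        return 1;
    }
    if (fscanf(f, "%ld", &wmem) != 1) {
        fclose(f);
        return 1;
    }
    fclose(f);

    const long pkt_bytes = 1500;     /* assumed full-size frame        */
    const double uplink_bps = 1e6;   /* assumed ~1 Mbit/s ADSL uplink  */

    printf("wmem_default = %ld bytes, ~%ld full-size packets\n",
           wmem, wmem / pkt_bytes);
    printf("at ~1 Mbit/s that's ~%.0f ms of hidden queueing\n",
           wmem * 8.0 / uplink_bps * 1000.0);
    return 0;
}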

The other odd thing PPP does is call skb_dequeue(), attempt to feed the
packet into the low-level driver, and then *requeue* the skb if that
fails, which it *will* do a lot of the time. So perhaps the PPP
low-level driver could have a method call to *ask* if it's able to
accept a new packet, to avoid that dequeue-and-requeue behaviour in
ppp_generic? I'll experiment with that.
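
Something like this is what I have in mind; it's only a user-space
sketch, and chan_can_accept() / chan_xmit() are hypothetical stand-ins,
not the existing ppp_channel_ops API. The upper layer only dequeues once
the channel says it has room, instead of dequeuing, failing and
requeueing:

/* Sketch of "ask before dequeue".  chan_can_accept() and chan_xmit()
 * are hypothetical; nothing here is the real ppp_generic code. */
#include <stdio.h>
#include <stdbool.h>

static int chan_room = 3;       /* pretend the channel can take 3 packets */

static bool chan_can_accept(void)
{
    return chan_room > 0;
}

static bool chan_xmit(int pkt)
{
    if (chan_room == 0)
        return false;           /* this is what forces a requeue today */
    chan_room--;
    printf("sent packet %d\n", pkt);
    return true;
}

int main(void)
{
    int queue[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    int head = 0;

    /* Only dequeue while the channel says it has room, so the rest of
     * the packets stay on the upper queue where the qdisc can see them. */
    while (head < 8 && chan_can_accept()) {
        if (!chan_xmit(queue[head]))
            break;
        head++;
    }
    printf("%d packets left on the upper queue\n", 8 - head);
    return 0;
}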

> But this cannot, and is documented not to, work when device stacking
> is involved.
> 
> If you're dealing with (what amounts to) virtual devices, you cannot
> use TEQL and must use something like drivers/net/eql.c

I'd looked briefly at eql.c. I eventually found eql-1.2.tar.gz... with a
timestamp from a few months before I first encountered Linux in 1995, a
ZMAGIC binary in the tarball, and source code which probably hasn't
compiled for a decade... so then I figured I'd try TEQL a bit more
first :)

After fixing up the userspace tool, I found eql.c *does* work OK, but it
seems fairly unloved and mostly duplicates the functionality of TEQL.
The fact that it forgets its slaves when you take it down and up again
is a bit of a PITA too. I think I'd be happier getting TEQL working.

-- 
dwmw2
