Date:	Tue, 10 Apr 2012 21:28:34 +0100
From:	David Woodhouse <dwmw2@...radead.org>
To:	chas williams - CONTRACTOR <chas@....nrl.navy.mil>
Cc:	netdev@...r.kernel.org, David Miller <davem@...emloft.net>,
	paulus@...ba.org, Eric Dumazet <eric.dumazet@...il.com>
Subject: Re: [PATCH] pppoatm: Fix excessive queue bloat

On Tue, 2012-04-10 at 10:26 -0400, chas williams - CONTRACTOR wrote:
> On Sun, 08 Apr 2012 21:53:57 +0200
> David Woodhouse <dwmw2@...radead.org> wrote:
> 
> > Seriously, this gets *much* easier if we just ditch the checks against
> > sk_sndbuf. We just wake up whenever decrementing ->inflight from zero.
> > Can I?
> 
> I don't know.  On a 'low' speed connection, queuing 2 packets might
> be enough to keep something busy, but imagine an interface like OC-3 or
> OC-12.  I don't know of anything running pppoatm on such an interface, but
> it seems like just dropping sk_sndbuf isn't right either.

That looks like a response to my patch, not to the question you cited.
My patch reduces the buffering to MAX(vcc->sk_sndbuf, 2 packets), so if
there are issues with keeping faster devices busy, they'll happen with
it anyway. (My question was just whether we can ditch the sk_sndbuf bit
altogether and just make it a hard-coded two packets.)
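
For concreteness, the kind of check I mean looks roughly like this. It's
only a sketch, not the patch itself; the ->inflight counter and the
hard-coded 2 are exactly the bits under discussion, and atm_may_send()
is the existing per-VCC sk_sndbuf accounting:

static inline int pppoatm_may_send_sketch(struct pppoatm_vcc *pvcc, int size)
{
        /* Allow the send if fewer than two packets are already in flight... */
        if (atomic_read(&pvcc->inflight) < 2)
                return 1;

        /* ...or if the per-VCC sk_sndbuf accounting still has room. */
        return atm_may_send(pvcc->atmvcc, size);
}

Ditching the sk_sndbuf bit would just mean deleting that second test and
living with the hard-coded two.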

The limit of two packets was chosen on the basis that the PPP core is
designed to feed us new packets when we need them, with *low* latency.
So when the hardware finishes sending packet #1 and starts on packet #2,
we *should* be able to get a packet #3 into its queue by the time it
needs it.
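
The completion side is where that refill happens. Sketching it, again
purely for illustration rather than as the literal code (and glossing
over whether the wakeup needs deferring to a tasklet):

static void pppoatm_pop_sketch(struct atm_vcc *atmvcc, struct sk_buff *skb)
{
        struct pppoatm_vcc *pvcc = atmvcc_to_pvcc(atmvcc);

        pvcc->old_pop(atmvcc, skb);     /* chain to the driver's own pop */
        atomic_dec(&pvcc->inflight);

        /*
         * Tell the PPP core we have room again so it hands us the next
         * packet.  In practice pop can run in IRQ context, so this would
         * likely want deferring, but that's the idea.
         */
        ppp_output_wakeup(&pvcc->chan);
}

As long as that wakeup-to-next-packet path is fast, two packets of
buffering should be enough to stop the hardware going idle.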

But if that approach could really cause an issue with keeping faster
devices busy, perhaps the limit should be "x ms worth of packets" based
on the upload speed of the device, rather than a hard-coded 2?

Not that we *know* the upload speed of the device, for asymmetric
links... do we?
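
If we did go that way, the budget calculation itself is trivial; it's
the tx rate input that's the problem. Purely for illustration, with
made-up names and assuming we had an upstream rate in bits per second
from somewhere:

static unsigned int queue_budget_bytes(unsigned int tx_rate_bps,
                                       unsigned int budget_ms)
{
        /* bytes per millisecond, times the number of ms we allow queued */
        return (tx_rate_bps / 8000) * budget_ms;
}

At 2Mbit/s upstream with a 5ms budget that comes to 1250 bytes, i.e.
less than one full-sized PPP frame, which is why a packet-count floor
like the current two would still be needed underneath it.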

> sk_sndbuf is per VCC, and that isn't right either, since the transmit
> limit is actually the ATM device's transmit queue depth.

Hm, I don't think that's a limit that I care about. You're talking about
the maximum number of packets that can be queued to the hardware at a
time. What I care about is the *minimum* number of packets that *need*
to be queued to the hardware, to ensure that it doesn't stall waiting
for us to replenish its queue.

Which is largely a function of how well the PPP core does the job it was
*designed* to do, as I see it.

> What is the "queue depth" of your ATM device's transmit queue?

We're still using MMIO for the Solos ADSL2+ devices at the moment, so
there's no descriptor ring. It used to have internal buffering which was
only limited by the device's internal memory — it would continue to
accept packets from the host until it had nowhere else to put them. I
got them to fix that in its firmware, so now it only has two or three
packets queued internally. But as far as the host is concerned, those
packets are *gone* already.

-- 
dwmw2
