Date:   Wed, 10 Apr 2019 18:32:02 +0300
From:   David Woodhouse <dwmw2@...radead.org>
To:     Toke Høiland-Jørgensen <toke@...hat.com>,
        Jason Wang <jasowang@...hat.com>, netdev@...r.kernel.org
Subject: Re: Tun congestion/BQL

On Wed, 2019-04-10 at 17:01 +0200, Toke Høiland-Jørgensen wrote:
> David Woodhouse <dwmw2@...radead.org> writes:
> 
> > On Wed, 2019-04-10 at 15:42 +0200, Toke Høiland-Jørgensen wrote:
> > > > > That doesn't seem to make much difference at all; it's still dropping a
> > > > > lot of packets because ptr_ring_produce() is returning non-zero.
> > > > 
> > > > 
> > > > I think you need to try to stop the queue just in this case? Ideally we may
> > > > want to stop the queue when the queue is about to be full, but we don't
> > > > have such a helper currently.
> > 
> > I don't quite understand. If the ring isn't full after I've put a
> > packet into it... how can it be full subsequently? We can't end up in
> > tun_net_xmit() concurrently, right? I'm not (knowingly) using XDP.
> > 
> > > Ideally we want to react when the queue starts building rather than when
> > > it starts getting full; by pushing back on upper layers (or, if
> > > forwarding, dropping packets to signal congestion).
> > 
> > This is precisely what my first accidental if (!ptr_ring_empty())
> > variant was doing, right? :)
> 
> Yeah, I guess. But maybe a bit too aggressively? How are you processing
> packets on the dequeue side (for crypto)? One at a time, or is there
> some kind of batching in play?
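
For reference, the "stop the queue when it fills" change being discussed
above amounts to roughly this (an untested sketch against
drivers/net/tun.c, not the actual diff I'm running):

	/* tun_net_xmit(): after a successful enqueue, stop the txq once
	 * the ring has filled, so the stack stops calling us until the
	 * reader drains it. */
	if (ptr_ring_produce(&tfile->tx_ring, skb))
		goto drop;
	if (ptr_ring_full(&tfile->tx_ring))
		netif_tx_stop_queue(netdev_get_tx_queue(dev,
					skb_get_queue_mapping(skb)));

	/* tun_ring_recv()/tun_do_read(): once an entry has been consumed,
	 * wake the queue again (a no-op if it wasn't stopped). */
	netif_tx_wake_queue(netdev_get_tx_queue(tun->dev, tfile->queue_index));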

Slight batching. The main loop in OpenConnect will suck packets out of
the tun device until its queue is "full", which by default is 10
packets, but tweaking that makes hardly any difference in my testing
until I take it below 3.

(Until fairly recently, I was *ignoring* the result of sendto() on the
UDP side, which meant that I was wasting time encrypting packets that
got dropped. Now I react appropriately to -EAGAIN (-ENOBUFS?) on the
sending side, and I don't pull any more packets from the tun device
until my packet queue is no longer "full". The latest 8.02 release of
OpenConnect still has the old behaviour.)
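
The dequeue side looks roughly like this in pseudo-code (a simplified,
self-contained sketch; the names are illustrative, not OpenConnect's
real internals):

	#include <errno.h>
	#include <unistd.h>

	#define QUEUE_MAX 10		/* default "queue full" threshold */
	#define MTU       1500

	struct pkt { unsigned char data[MTU]; ssize_t len; };

	/* Pull packets from the (non-blocking) tun fd until our local
	 * queue is "full" or the device has nothing left to give. */
	static int fill_queue(int tun_fd, struct pkt *queue, int qlen)
	{
		while (qlen < QUEUE_MAX) {
			ssize_t n = read(tun_fd, queue[qlen].data, MTU);
			if (n < 0) {
				if (errno == EAGAIN || errno == EWOULDBLOCK)
					break;	/* tun ring drained */
				return -1;
			}
			queue[qlen++].len = n;
		}
		return qlen;
	}

Each queued packet is then encrypted and sent; if the send fails with
-EAGAIN/-ENOBUFS it stays queued, and fill_queue() isn't called again
until the queue has drained below QUEUE_MAX.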


> > > In practice, this means tuning the TX ring to the *minimum* size it can
> > > be without starving (this is basically what BQL does for Ethernet), and
> > > keeping packets queued in the qdisc layer instead, where it can be
> > > managed...
> > 
> > I was going to add BQL (as $SUBJECT may have caused you to infer) but
> > trivially adding the netdev_sent_queue() in tun_net_xmit() and
> > netdev_completed_queue() for xdp vs. skb in tun_do_read() was tripping
> > the BUG in dql_completed(). I just ripped that part out and focused on
> > the queue stop/start and haven't gone back to it yet.
> 
> Right, makes sense. What qdisc are you running on the tun device? Also,
> I assume that netperf is running on the same host that has the tun
> device on it, right?

I haven't touched the qdisc, so it's still the fq_codel it got by
default. And yes, netperf is running on the same host.
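
The trivial BQL accounting I mentioned amounted to roughly this (a
sketch, not the exact hunks that tripped the BUG):

	/* tun_net_xmit(): account the bytes handed to the ring. */
	netdev_sent_queue(dev, skb->len);

	/* tun_do_read(): complete the same bytes once the skb (or xdp
	 * frame) has been copied out to userspace. */
	netdev_completed_queue(tun->dev, 1, len);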

My test setup is here (inline in the first, refined and attached to the
second):

https://lists.infradead.org/pipermail/openconnect-devel/2019-April/005261.html
https://lists.infradead.org/pipermail/openconnect-devel/2019-April/005265.html

Basically, I started by setting up in-kernel ESP over UDP between the
two hosts, then tricked OpenConnect into thinking there was a real VPN
server on one of them.


