Message-ID: <875zrlvr9a.fsf@toke.dk>
Date:   Wed, 10 Apr 2019 17:01:21 +0200
From:   Toke Høiland-Jørgensen <toke@...hat.com>
To:     David Woodhouse <dwmw2@...radead.org>,
        Jason Wang <jasowang@...hat.com>, netdev@...r.kernel.org
Subject: Re: Tun congestion/BQL

David Woodhouse <dwmw2@...radead.org> writes:

> On Wed, 2019-04-10 at 15:42 +0200, Toke Høiland-Jørgensen wrote:
>> > > That doesn't seem to make much difference at all; it's still dropping a
>> > > lot of packets because ptr_ring_produce() is returning non-zero.
>> > 
>> > 
>> > I think you need to try to stop the queue just in this case? Ideally we
>> > may want to stop the queue when the queue is about to be full, but we
>> > don't have such a helper currently.
>
> I don't quite understand. If the ring isn't full after I've put a
> packet into it... how can it be full subsequently? We can't end up in
> tun_net_xmit() concurrently, right? I'm not (knowingly) using XDP.
>
>> Ideally we want to react when the queue starts building rather than when
>> it starts getting full; by pushing back on upper layers (or, if
>> forwarding, dropping packets to signal congestion).
>
> This is precisely what my first accidental if (!ptr_ring_empty())
> variant was doing, right? :)

Yeah, I guess. But maybe a bit too aggressively? How are you processing
packets on the dequeue side (for crypto)? One at a time, or is there
some kind of batching in play?
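
For reference, the stop/start flow Jason is describing would look
roughly like the sketch below. This is untested and from memory; the
field and helper names (tfile->tx_ring etc.) are assumptions about the
tun.c layout rather than the actual code, so treat it as an
illustration of the idea only:

static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct tun_file *tfile = ...;	/* per-queue state, as in tun.c */
	struct netdev_queue *txq =
		netdev_get_tx_queue(dev, skb->queue_mapping);

	if (ptr_ring_produce(&tfile->tx_ring, skb)) {
		/* Ring already full; ideally we never get here because we
		 * stopped the queue below while there was still room. */
		netif_tx_stop_queue(txq);
		return NETDEV_TX_BUSY;
	}

	/* Stop the queue as soon as the ring fills up, so further packets
	 * wait in the qdisc instead of being dropped here. */
	if (ptr_ring_full(&tfile->tx_ring))
		netif_tx_stop_queue(txq);

	return NETDEV_TX_OK;
}

/* ...and on the consume side (tun_do_read()), after popping one or
 * more entries off the ring: */
	if (netif_tx_queue_stopped(txq) && !ptr_ring_full(&tfile->tx_ring))
		netif_tx_wake_queue(txq);

(There's the usual race between the "is it full?" check on one side and
the wake on the other, which the real code would have to close with a
stop/recheck/wake dance; I'm glossing over that here.)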

>> In practice, this means tuning the TX ring to the *minimum* size it can
>> be without starving (this is basically what BQL does for Ethernet), and
>> keeping packets queued in the qdisc layer instead, where it can be
>> managed...
>
> I was going to add BQL (as $SUBJECT may have caused you to infer) but
> trivially adding the netdev_sent_queue() in tun_net_xmit() and
> netdev_completed_queue() for xdp vs. skb in tun_do_read() was tripping
> the BUG in dql_completed(). I just ripped that part out and focused on
> the queue stop/start and haven't gone back to it yet.

Right, makes sense. What qdisc are you running on the tun device? Also,
I assume that netperf is running on the same host that has the tun
device on it, right?
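
For when you get back to the BQL part: the pairing I'd expect is
roughly as below (again just a sketch, not the actual driver code).
IIRC the BUG_ON in dql_completed() fires when more is reported
completed than was ever reported sent, so if e.g. xdp frames or dropped
skbs get counted on the completion side but were never accounted in
tun_net_xmit(), that would explain the splat:

	/* xmit side, only after the packet was actually queued to the ring: */
	netdev_sent_queue(dev, skb->len);

	/* consume side, once the packet has really left the ring
	 * (for a batch, sum the bytes and count the packets): */
	netdev_completed_queue(dev, 1, len);

The two sides have to account exactly the same set of packets (and the
same byte counts), otherwise the DQL bookkeeping underflows and you hit
that BUG.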

-Toke
