Date:   Thu, 11 Apr 2019 15:22:55 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     David Woodhouse <dwmw2@...radead.org>,
        Toke Høiland-Jørgensen <toke@...hat.com>,
        netdev@...r.kernel.org
Subject: Re: Tun congestion/BQL


On 2019/4/10 at 11:32 PM, David Woodhouse wrote:
> On Wed, 2019-04-10 at 17:01 +0200, Toke Høiland-Jørgensen wrote:
>> David Woodhouse <dwmw2@...radead.org> writes:
>>
>>> On Wed, 2019-04-10 at 15:42 +0200, Toke Høiland-Jørgensen wrote:
>>>>>> That doesn't seem to make much difference at all; it's still dropping a
>>>>>> lot of packets because ptr_ring_produce() is returning non-zero.
>>>>> I think you need to try to stop the queue just in this case? Ideally we
>>>>> may want to stop the queue when the queue is about to be full, but we
>>>>> don't have such a helper currently.
>>> I don't quite understand. If the ring isn't full after I've put a
>>> packet into it... how can it be full subsequently? We can't end up in
>>> tun_net_xmit() concurrently, right? I'm not (knowingly) using XDP.
>>>
>>>> Ideally we want to react when the queue starts building rather than when
>>>> it starts getting full; by pushing back on upper layers (or, if
>>>> forwarding, dropping packets to signal congestion).
>>> This is precisely what my first accidental if (!ptr_ring_empty())
>>> variant was doing, right? :)
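Something like the sketch below illustrates the idea being discussed, stopping
the TX queue once the ring fills so the stack backs off instead of tun dropping
packets. It is untested, only loosely follows the current drivers/net/tun.c
structure, and leaves out the matching netif_tx_wake_queue() that the read side
(tun_ring_recv()) would need after draining entries:

static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct tun_struct *tun = netdev_priv(dev);
	int txq = skb_get_queue_mapping(skb);
	struct tun_file *tfile;

	rcu_read_lock();
	tfile = rcu_dereference(tun->tfiles[txq]);

	/* Enqueue first; a failure here means the ring was already full. */
	if (ptr_ring_produce(&tfile->tx_ring, skb))
		goto drop;

	/* "About to be full" would be nicer, but ptr_ring has no such
	 * helper today, so stop the queue as soon as the last slot is
	 * used.  The stack then backs off instead of feeding us packets
	 * we would have to drop. */
	if (ptr_ring_full(&tfile->tx_ring))
		netif_tx_stop_queue(netdev_get_tx_queue(dev, txq));

	rcu_read_unlock();
	return NETDEV_TX_OK;

drop:
	rcu_read_unlock();
	kfree_skb(skb);
	return NET_XMIT_DROP;
}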
>> Yeah, I guess. But maybe a bit too aggressively? How are you processing
>> packets on the dequeue side (for crypto)? One at a time, or is there
>> some kind of batching in play?
> Slight batching. The main loop in OpenConnect will suck packets out of
> the tun device until its queue is "full", which by default is 10
> packets but tweaking that makes little difference at all to my testing
> until I take it below 3.
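So roughly this shape, if I read that right (a simplified sketch with made-up
names -- MAX_QUEUE, queue_len, enqueue_for_crypto() -- not OpenConnect's actual
code):

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_QUEUE 10	/* the "10 packets" default mentioned above */

static int queue_len;	/* packets waiting for crypto + send */

static void enqueue_for_crypto(const unsigned char *pkt, ssize_t len);

/* Pull packets out of the tun fd until the local pre-crypto queue is
 * "full" or the tun ring has nothing more for us right now. */
static void pull_from_tun(int tun_fd)
{
	unsigned char buf[2048];

	while (queue_len < MAX_QUEUE) {
		ssize_t len = read(tun_fd, buf, sizeof(buf));

		if (len < 0) {
			/* EAGAIN: the tun queue is drained; come back
			 * when the fd polls readable again. */
			if (errno != EAGAIN)
				perror("tun read");
			break;
		}
		enqueue_for_crypto(buf, len);
		queue_len++;
	}
}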
>
> (Until fairly recently, I was *ignoring* the result of sendto() on the
> UDP side, which meant that I was wasting time encrypting packets that
> got dropped. Now I react appropriately to -EAGAIN (-ENOBUFS?) on the
> sending side, and I don't pull any more packets from the tun device
> until my packet queue is no longer "full". The latest 8.02 release of
> OpenConnect still has that behaviour.)
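And the send side then becomes something like this (again a sketch with
invented helpers -- struct pkt, queue_head(), queue_pop() -- not the real
code):

#include <errno.h>
#include <sys/socket.h>

/* Drain the local queue of already-encrypted packets.  On EAGAIN or
 * ENOBUFS the packet stays queued; the main loop waits for the UDP fd
 * to become writable and, because the queue is still "full", stops
 * pulling new packets from the tun device in the meantime. */
static void drain_queue(int udp_fd)
{
	while (queue_len > 0) {
		struct pkt *p = queue_head();

		if (send(udp_fd, p->data, p->len, 0) < 0) {
			if (errno == EAGAIN || errno == ENOBUFS)
				return;	/* back off, retry later */
			return;		/* real code handles other errors */
		}
		queue_pop();
		queue_len--;
	}
}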
>
>

If you care about userspace performance, you may want to try vhost + TAP
instead. I admit the API is not user-friendly and needs to be improved, but
then there will be no syscall overhead on packet transmission and reception,
and eventfd will be used for notification.
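To give an idea of the shape of that API, a very rough sketch (error handling,
feature negotiation, VHOST_SET_MEM_TABLE, VHOST_SET_VRING_ADDR and the
userspace virtio ring handling are all omitted, and those are most of the real
work):

#include <fcntl.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int setup_vhost(int tap_fd)
{
	int vhost_fd = open("/dev/vhost-net", O_RDWR);
	int kick_fd  = eventfd(0, EFD_CLOEXEC);	/* we write: "new buffers" */
	int call_fd  = eventfd(0, EFD_CLOEXEC);	/* kernel writes: "work done" */

	ioctl(vhost_fd, VHOST_SET_OWNER, 0);

	/* One vring shown; a real setup does this for both RX (0) and
	 * TX (1), and must also describe its memory and ring layout
	 * before the backend will process anything. */
	struct vhost_vring_state num = { .index = 0, .num = 256 };
	struct vhost_vring_file kick = { .index = 0, .fd = kick_fd };
	struct vhost_vring_file call = { .index = 0, .fd = call_fd };
	struct vhost_vring_file back = { .index = 0, .fd = tap_fd };

	ioctl(vhost_fd, VHOST_SET_VRING_NUM, &num);
	ioctl(vhost_fd, VHOST_SET_VRING_KICK, &kick);
	ioctl(vhost_fd, VHOST_SET_VRING_CALL, &call);
	ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &back);

	/* After this, packets move by filling virtio descriptors in
	 * shared memory, writing to kick_fd instead of making a send
	 * syscall per packet, and polling call_fd for completions. */
	return vhost_fd;
}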

Thanks

