Message-ID: <123cfccb766a6f55312d6a477764d3e7b88ad221.camel@infradead.org>
Date: Wed, 10 Apr 2019 17:33:03 +0300
From: David Woodhouse <dwmw2@...radead.org>
To: Toke Høiland-Jørgensen <toke@...hat.com>,
Jason Wang <jasowang@...hat.com>, netdev@...r.kernel.org
Subject: Re: Tun congestion/BQL
On Wed, 2019-04-10 at 15:42 +0200, Toke Høiland-Jørgensen wrote:
> > > That doesn't seem to make much difference at all; it's still dropping a
> > > lot of packets because ptr_ring_produce() is returning non-zero.
> >
> >
> > I think you need to try to stop the queue just in this case? Ideally we
> > may want to stop the queue when the queue is about to be full, but we
> > don't have such a helper currently.
I don't quite understand. If the ring isn't full after I've put a
packet into it... how can it be full subsequently? We can't end up in
tun_net_xmit() concurrently, right? I'm not (knowingly) using XDP.
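Concretely, the pattern in question is something like this (rough sketch
using the tfile/tx_ring/txq names from drivers/net/tun.c, not a literal
diff):

        /* In tun_net_xmit(), after looking up tfile for this txq: */
        if (ptr_ring_produce(&tfile->tx_ring, skb))
                goto drop;      /* ring was already full */

        /* The packet went in; if the ring is full *now*, stop the
         * queue so later packets wait in the qdisc instead of
         * hitting the drop path above. */
        if (ptr_ring_full(&tfile->tx_ring))
                netif_tx_stop_queue(netdev_get_tx_queue(dev, txq));

with the reader side waking the queue again once it has consumed an
entry and the ring is no longer full.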
> Ideally we want to react when the queue starts building rather than when
> it starts getting full; by pushing back on upper layers (or, if
> forwarding, dropping packets to signal congestion).
This is precisely what my first accidental if (!ptr_ring_empty())
variant was doing, right? :)
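That is, roughly (sketch again):

        /* Stop the queue as soon as the ring is non-empty at all,
         * so everything backs up into the qdisc layer instead. */
        if (!ptr_ring_empty(&tfile->tx_ring))
                netif_tx_stop_queue(netdev_get_tx_queue(dev, txq));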
> In practice, this means tuning the TX ring to the *minimum* size it can
> be without starving (this is basically what BQL does for Ethernet), and
> keeping packets queued in the qdisc layer instead, where it can be
> managed...
I was going to add BQL (as $SUBJECT may have caused you to infer), but
trivially adding netdev_sent_queue() in tun_net_xmit() and the matching
netdev_completed_queue() for the xdp and skb cases in tun_do_read() was
tripping the BUG in dql_completed(). I just ripped that part out,
focused on the queue stop/start, and haven't gone back to it yet.
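For reference, the naive accounting was roughly the following (sketch;
since tun is multiqueue this presumably wants the per-queue
netdev_tx_sent_queue()/netdev_tx_completed_queue() variants instead,
which may be related to the dql_completed() BUG):

        /* tun_net_xmit(), after a successful ptr_ring_produce(): */
        netdev_sent_queue(dev, skb->len);

        /* tun_do_read(), after copying one entry to userspace;
         * 'len' is the xdp frame or skb length as appropriate: */
        netdev_completed_queue(dev, 1, len);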