Message-ID: <79f9e78d6f653a4a4ccd2fad76d8c39622491172.camel@infradead.org>
Date: Thu, 11 Apr 2019 11:56:06 +0300
From: David Woodhouse <dwmw2@...radead.org>
To: Jason Wang <jasowang@...hat.com>,
Toke Høiland-Jørgensen <toke@...hat.com>,
netdev@...r.kernel.org
Cc: "Michael S. Tsirkin" <mst@...hat.com>
Subject: Re: Tun congestion/BQL
On Thu, 2019-04-11 at 15:17 +0800, Jason Wang wrote:
> > > Ideally we want to react when the queue starts building rather
> > > than when it starts getting full, by pushing back on upper layers
> > > (or, if forwarding, dropping packets to signal congestion).
> >
> > This is precisely what my first accidental if (!ptr_ring_empty())
> > variant was doing, right? :)
>
>
> But I gave your ptr_ring_full() patch a try on a VM, and it looks
> like it works (single flow); no packets were dropped by TAP anymore.
> How many flows did you use?
Hm, I thought I was only using one. This is just a simple case of
userspace opening /dev/net/tun, TUNSETIFF, and reading/writing.
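For reference, the harness is nothing clever, basically the textbook
tun setup (modulo error handling; the IFF_TUN|IFF_NO_PI flags and the
"tun0" name below are purely illustrative):

	#include <fcntl.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/if.h>
	#include <linux/if_tun.h>

	int main(void)
	{
		int fd = open("/dev/net/tun", O_RDWR);
		struct ifreq ifr = { 0 };

		ifr.ifr_flags = IFF_TUN | IFF_NO_PI;
		strncpy(ifr.ifr_name, "tun0", IFNAMSIZ);
		ioctl(fd, TUNSETIFF, &ifr);
		/* ... then read()/write() packets on fd in a loop */
		return 0;
	}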
But if I was stopping the *wrong* queue, that might explain things.
This is a persistent tun device.
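For the record, the kernel side of the hack is roughly this. It's a
sketch only, single-queue, which is exactly where a wrong-queue bug
would hide; a real version wants netif_tx_stop_queue() on the
specific txq:

	/* tun_net_xmit(): after producing the skb into the ring,
	 * stop the queue once the ring fills, so packets back up
	 * in the qdisc instead of being dropped here. */
	if (ptr_ring_full(&tfile->tx_ring))
		netif_stop_queue(dev);

	/* tun_do_read(): once an entry has been consumed, let the
	 * stack resume transmitting. */
	if (netif_queue_stopped(tun->dev))
		netif_wake_queue(tun->dev);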
>
> >
> > > In practice, this means tuning the TX ring to the *minimum* size it can
> > > be without starving (this is basically what BQL does for Ethernet), and
> > > keeping packets queued in the qdisc layer instead, where it can be
> > > managed...
> >
> > I was going to add BQL (as $SUBJECT may have caused you to infer) but
> > trivially adding the netdev_sent_queue() in tun_net_xmit() and
> > netdev_completed_queue() for xdp vs. skb in tun_do_read() was tripping
> > the BUG in dql_completed().
>
>
> Something like https://lists.openwall.net/netdev/2012/11/12/6767 ?
Pretty much. Except that, again, I was being lazy for the
proof-of-concept, ignoring 'txq' and just using the single-queue
netdev_sent_queue() etc.
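That is, roughly the following (still hand-waving; a real version
needs the per-queue netdev_tx_sent_queue()/netdev_tx_completed_queue()
against the right txq, and has to account for *every* consume and drop
path, or dql_completed() quite rightly BUGs):

	/* tun_net_xmit(): tell BQL how many bytes were queued. */
	netdev_sent_queue(dev, skb->len);

	/* tun_do_read(): report completion once the packet (xdp
	 * frame or skb) has gone to userspace; the byte count must
	 * match what was reported as sent. */
	netdev_completed_queue(tun->dev, 1, len);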