Message-ID: <20130217161836.GA24375@redhat.com>
Date: Sun, 17 Feb 2013 18:18:36 +0200
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Sebastian Pöhn <sebastian.poehn@...glemail.com>
Cc: Eric Dumazet <eric.dumazet@...il.com>, netdev@...r.kernel.org
Subject: Re: tuntap: Overload handling
On Sun, Feb 17, 2013 at 05:08:13PM +0100, Sebastian Pöhn wrote:
> On Sun, 2013-02-17 at 15:24 +0200, Michael S. Tsirkin wrote:
> > On Thu, Feb 14, 2013 at 09:01:30AM -0800, Eric Dumazet wrote:
> > > On Thu, 2013-02-14 at 18:42 +0200, Michael S. Tsirkin wrote:
> > >
> > > > Hmm so ~1000 packets in the tun queue is not enough?
> > > > You always have the option to increase it some more ...
> > > >
> > > > > You should ask Michael S. Tsirkin, as he removed the flow control
> > > > > in commit 5d097109257c03a71845729f8db6b5770c4bbedc
> > > > > (tun: only queue packets on device)
> > > > >
> > > >
> > > > Eric, in the past you said the following
> > > > (http://lkml.indiana.edu/hypermail/linux/kernel/1204.1/00784.html)
> > > > > > In your case I would just not use qdisc at all, like other virtual
> > > > > > devices.
> > > > ...
> > > > > > Anyway, with a 500 packet limit in the TUN queue itself, the qdisc
> > > > > > layer should always be empty. What's the point of storing more than
> > > > > > 500 packets for a device? That's a latency killer.
> > > > You don't think this applies anymore?
> > > >
> > >
> > > Users have the choice to set up a qdisc or not.
> > >
> > > Having no qdisc can help raw performance, at the expense of bufferbloat.
> > > That's all I was saying.
> > >
> > > It seems tun.c no longer has the ability to effectively use a qdisc
> > > (allowing the queue to build up at the qdisc layer).
> > >
> >
> > But userspace is in no position to decide whether using
> > the qdisc is a good or a bad thing.
> > The issue I tried to solve is that with tun, it's trivially easy for
> > userspace to lock up resources forever.
> > Simply never stopping the device tx queue is probably the simplest solution.
> >
> > An alternative is to orphan the skbs before we queue them.
> > At some point I posted a proposal doing exactly this, with the
> > subject "net: orphan queued skbs if device tx can stall".
> > Do you think it's worth revisiting this?
> >
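To recap the idea: the skb gets orphaned before it sits on the tun queue, so a
stalled reader pins only the packets themselves, not the sending socket's
memory accounting.  A rough sketch (not the patch as posted; the names here are
illustrative, the point is just skb_orphan() before the queueing call):

    /*
     * Sketch only: orphan the skb before queueing it on the device.
     * skb_orphan() runs skb->destructor and clears skb->sk, releasing
     * the sender's sndbuf accounting even if userspace never reads.
     */
    static netdev_tx_t example_xmit(struct sk_buff *skb, struct net_device *dev)
    {
            struct example_priv *priv = netdev_priv(dev);  /* hypothetical priv struct */

            skb_orphan(skb);                    /* drop the reference to the sending socket */
            skb_queue_tail(&priv->readq, skb);  /* then queue for the userspace reader */
            /* ... wake the reader, e.g. wake_up_interruptible(&priv->wait) ... */
            return NETDEV_TX_OK;
    }
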
> > Also - does anyone know of a testcase showing there's a problem
> > with the simplest solution we now have in place?
> >
>
> I think the solution is good as it is. Of course, if you want to do odd
> things with it like I do, it's not - but that's not its usual use case.
Tap+UIO actually seems pretty close to the VM case.
Do you know it's not good for your use case, or are you speculating?
What's the tx queue length in your setup?
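For reference, the current value can be read with "ip link show dev tun0" (the
qlen field) and raised with "ip link set dev tun0 txqueuelen <n>".
Programmatically it's the SIOCGIFTXQLEN/SIOCSIFTXQLEN ioctls; a minimal
userspace sketch, assuming the device is called "tun0" and a target length of
1000:

    /*
     * Sketch: query (and optionally raise) a tun device's tx queue length
     * via the standard SIOCGIFTXQLEN/SIOCSIFTXQLEN ioctls.  The device name
     * "tun0" and the new length of 1000 are assumptions for illustration.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/sockios.h>
    #include <net/if.h>

    int main(void)
    {
            struct ifreq ifr;
            int fd = socket(AF_INET, SOCK_DGRAM, 0);  /* any socket fd works for these ioctls */

            if (fd < 0) {
                    perror("socket");
                    return 1;
            }

            memset(&ifr, 0, sizeof(ifr));
            strncpy(ifr.ifr_name, "tun0", IFNAMSIZ - 1);

            if (ioctl(fd, SIOCGIFTXQLEN, &ifr) < 0) {
                    perror("SIOCGIFTXQLEN");
                    return 1;
            }
            printf("tun0 txqueuelen = %d\n", ifr.ifr_qlen);

            ifr.ifr_qlen = 1000;                      /* changing it needs CAP_NET_ADMIN */
            if (ioctl(fd, SIOCSIFTXQLEN, &ifr) < 0)
                    perror("SIOCSIFTXQLEN");

            close(fd);
            return 0;
    }
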
--
MST