Date:	Sun, 17 Feb 2013 21:54:00 +0100
From:	Sebastian Pöhn <sebastian.poehn@...glemail.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>
Cc:	Eric Dumazet <eric.dumazet@...il.com>, netdev@...r.kernel.org
Subject: Re: tuntap: Overload handling

On Sun, 2013-02-17 at 18:18 +0200, Michael S. Tsirkin wrote:
> On Sun, Feb 17, 2013 at 05:08:13PM +0100, Sebastian Pöhn wrote:
> > On Sun, 2013-02-17 at 15:24 +0200, Michael S. Tsirkin wrote:
> > > On Thu, Feb 14, 2013 at 09:01:30AM -0800, Eric Dumazet wrote:
> > > > On Thu, 2013-02-14 at 18:42 +0200, Michael S. Tsirkin wrote:
> > > > 
> > > > > Hmm, so ~1000 packets in the tun queue is not enough?
> > > > > You always have the option to increase it some more ...
> > > > > 
> > > > > > You should ask Michael S. Tsirkin, as he removed the flow control
> > > > > > in commit 5d097109257c03a71845729f8db6b5770c4bbedc
> > > > > > (tun: only queue packets on device)
> > > > > > 
> > > > > 
> > > > > Eric, in the past you said the following
> > > > > (http://lkml.indiana.edu/hypermail/linux/kernel/1204.1/00784.html)
> > > > > > > In your case I would just not use a qdisc at all, like other
> > > > > > > virtual devices.
> > > > > ...
> > > > > > > Anyway, with a 500 packet limit in the TUN queue itself, the qdisc
> > > > > > > layer should always be empty. What's the point of storing more than
> > > > > > > 500 packets for a device? That's a latency killer.
> > > > > You don't think this applies anymore?
> > > > > 
> > > > 
> > > > Users have the choice to set up a qdisc or not.
> > > > 
> > > > Having no qdisc can help raw performance, at the expense of bufferbloat.
> > > > That's all I was saying.
> > > > 
> > > > It seems tun.c no longer has the ability to effectively use a qdisc
> > > > (allowing the queue to build up at the qdisc layer).
> > > > 
> > > 
> > > But userspace is in no position to decide whether using
> > > the qdisc is a good or a bad thing.
> > > The issue I tried to solve is that with tun, it's trivially easy for
> > > userspace to lock up resources forever.
> > > Simply not stopping the qdisc is probably the simplest solution.
> > > 
> > > An alternative is to orphan the skbs before we queue them.
> > > At some point I posted a proposal doing exactly that, under the
> > > subject "net: orphan queued skbs if device tx can stall".
> > > Do you think it's worth revisiting this?
> > > 
> > > Also - does anyone know of a test case showing there's a problem
> > > with the simplest solution we now have in place?
> > > 
> > 
> > I think the solution is good as it is. Of course, if you want to do odd
> > things with it like me, it's not - but that's not its usual use case.
> 
> Tap+UIO actually seems pretty close to a VM case.
In this case, no. What I have is an over-provisioned WAN line. If you
have double the load you can handle, having a QoS system that slows
down your clients is essential; dropping in a device driver is the
last thing you want.
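
Roughly, the flow-control pattern I would want back - a minimal sketch
with made-up names (my_priv, my_xmit, tx_limit), not the actual tun.c
code:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

struct my_priv {
	struct sk_buff_head txq;	/* frames waiting for userspace */
	unsigned int tx_limit;		/* high-water mark */
};

static netdev_tx_t my_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct my_priv *p = netdev_priv(dev);

	skb_queue_tail(&p->txq, skb);

	/* Stop the stack from feeding us more; packets now back up in
	 * the qdisc, where the QoS policy decides whom to slow down. */
	if (skb_queue_len(&p->txq) >= p->tx_limit)
		netif_stop_queue(dev);

	return NETDEV_TX_OK;
}

/* Called once userspace has drained part of the queue. */
static void my_drained(struct net_device *dev)
{
	struct my_priv *p = netdev_priv(dev);

	if (netif_queue_stopped(dev) &&
	    skb_queue_len(&p->txq) < p->tx_limit / 2)
		netif_wake_queue(dev);
}

That way a full queue slows the senders down instead of the driver
silently dropping their packets.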
> Do you know it's not good for your use case, or are you speculating?
Well, the scheme I work with is like this: the WAN line says 'Hey, I
have some bandwidth, give me some traffic.' So it's not a good idea
for the network subsystem to blindly push in a lot of traffic.
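
Schematically, the userspace side then looks like this (tap_fd is the
tap file descriptor; wait_for_wan_credit() and wan_send() are
placeholders for my transport, not a real API):

#include <unistd.h>

void wait_for_wan_credit(void);				/* placeholder */
void wan_send(const unsigned char *buf, ssize_t len);	/* placeholder */

static void pump(int tap_fd)
{
	unsigned char frame[2048];
	ssize_t n;

	for (;;) {
		/* Only pull a frame off the tap when the WAN line has
		 * room; everything else stays queued in the kernel,
		 * where the qdisc can reorder and prioritize it. */
		wait_for_wan_credit();
		n = read(tap_fd, frame, sizeof(frame));
		if (n > 0)
			wan_send(frame, n);
	}
}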
> What's the tx queue length in your setup?
Not decided yet.
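(For reference, that would be the interface txqueuelen, i.e. something
like "ip link set dev tun0 txqueuelen 500" - device name and value
being examples only.)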
> 
But again, I think it's not the aim of tuntap to satisfy my exotic
usage. I'm going to make some changes to it anyway.
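
For completeness, the orphaning idea quoted above would look roughly
like this - my sketch of the concept, reusing the made-up my_priv from
the sketch further up, not the patch that was actually posted:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static netdev_tx_t orphaning_xmit(struct sk_buff *skb,
				  struct net_device *dev)
{
	struct my_priv *p = netdev_priv(dev);

	/* skb_orphan() runs the skb's destructor and detaches it from
	 * the sending socket, so the socket's send-buffer accounting
	 * is released immediately.  The packet can then sit in the
	 * device queue indefinitely without pinning userspace
	 * resources - the lock-up described above. */
	skb_orphan(skb);
	skb_queue_tail(&p->txq, skb);

	return NETDEV_TX_OK;
}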


