Date: Thu, 14 Feb 2013 18:42:42 +0200
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Sebastian Pöhn <sebastian.poehn@...glemail.com>, netdev@...r.kernel.org
Subject: Re: tuntap: Overload handling

On Thu, Feb 14, 2013 at 08:32:27AM -0800, Eric Dumazet wrote:
> On Thu, 2013-02-14 at 12:50 +0100, Sebastian Pöhn wrote:
> > I am having a look at the tun driver to implement a userspace network
> > driver (TAP + UIO). Maybe that's not the use case tun is intended
> > for.
> >
> > What I've noticed is that tun.c, around line 741, has:
> >
> > static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
> >
> > 	/* Limit the number of packets queued by dividing txq length with the
> > 	 * number of queues.
> > 	 */
> > 	if (skb_queue_len(&tfile->socket.sk->sk_receive_queue)
> > 	    >= dev->tx_queue_len / tun->numqueues)
> > 		goto drop;
> >
> > If a frame cannot be transmitted, it is dropped by the driver.
> > Wouldn't it be more correct to call netif_tx_stop_queue() so that packet
> > drops are performed by the overlying traffic control code?
> >
> > Of course this is not very likely in virtual environments, but as soon
> > as any real network hop is involved it could be important.
> >
> > (I also had a look at a roughly two-year-old version of tun.c. There,
> > queue/tx stopping was done correctly.)

Hmm, so ~1000 packets in the tun queue is not enough? You always have
the option to increase it some more ...

> You should ask Michael S. Tsirkin, as he removed the flow control
> in commit 5d097109257c03a71845729f8db6b5770c4bbedc
> (tun: only queue packets on device)

Eric, in the past you said the following
(http://lkml.indiana.edu/hypermail/linux/kernel/1204.1/00784.html):

> > In your case I would just not use qdisc at all, like other virtual
> > devices. ...
> > Anyway, with a 500 packet limit in TUN queue itself, qdisc layer should
> > be always empty. What's the point storing more than 500 packets for a
> > device? That's a latency killer.
You don't think this applies anymore?

-- 
MST