Date:	Tue, 07 Oct 2008 14:03:08 +0200
From:	Patrick McHardy <kaber@...sh.net>
To:	Jarek Poplawski <jarkao2@...il.com>
CC:	Simon Horman <horms@...ge.net.au>, netdev@...r.kernel.org,
	David Miller <davem@...emloft.net>
Subject: Re: Possible regression in HTB

Jarek Poplawski wrote:
> On Tue, Oct 07, 2008 at 03:51:47PM +1100, Simon Horman wrote:
>> With the following patch (basically a reversal of "pkt_sched: Always use
>> q->requeue in dev_requeue_skb()", forward-ported to the current
>> net-next-2.6 tree at "tcp: Respect SO_RCVLOWAT in tcp_poll()"), I get
>> some rather nice numbers (IMHO).
>>
>> 10194: 666780666 bits/s  666 Mbit/s
>> 10197: 141154197 bits/s  141 Mbit/s
>> 10196: 141023090 bits/s  141 Mbit/s
>> -----------------------------------
>> total: 948957954 bits/s  948 Mbit/s
>>
>> I'm not sure what evil things this patch does to other aspects
>> of the qdisc code.
> 
> I'd like to establish this too. This patch was meant to remove some
> other problems in possibly the simplest way. Maybe it's too simple.
> Anyway, it's kind of an RFC, so the rest of the requeuing code is left
> unchanged, just to make it easy to revert, as below. But first we
> should try to understand this more.

Shooting in the dark: I don't see how this change could affect
the bandwidth except by introducing higher dequeue latency due
to using netif_schedule instead of qdisc_watchdog. Does anyone
know how device scheduling latencies compare to hrtimers?

