Date:	Wed, 09 May 2007 08:56:43 -0400
From:	jamal <hadi@...erus.ca>
To:	David Miller <davem@...emloft.net>
Cc:	krkumar2@...ibm.com, netdev@...r.kernel.org
Subject: Re: [PATCH] sched: Optimize return value of qdisc_restart

On Wed, 2007-09-05 at 01:12 -0700, David Miller wrote:

> Something this evening is obviously making it impossible
> for my brain to understand this function and your patch,
> so I'm going to sleep on it and try again tomorrow :-)

It is one of those areas that is hard to size up in a blink ;->
Gut feeling: it doesn't sit right with me either.
 
With the QDISC_RUNNING changes (2.6.18-rcX and later), only one of N
CPUs will be dequeueing while the other N-1 may be enqueueing
concurrently. All N CPUs contend for the queue lock, and there's a
possible race window between the dequeuer-CPU releasing the queue lock
and an enqueuer-CPU adding a packet. The dequeuer-CPU entering one
last time helps close that window.
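
If it helps to stare at it in code form, here is a minimal user-space
model of the pattern (my own sketch, not the actual sch_generic.c
code; the names queue, running, qdisc_run_model etc. are made up).
The point is the one last look at the queue after dropping RUNNING:

/*
 * Toy model of the single-dequeuer pattern: the RUNNING flag elects
 * one dequeuer, the queue lock protects the list itself.
 * q->running must start clear (ATOMIC_FLAG_INIT).
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct pkt { struct pkt *next; };

struct queue {
	pthread_mutex_t lock;	/* the queue lock all N CPUs contend on */
	struct pkt *head, *tail;
	atomic_flag running;	/* models QDISC_RUNNING */
};

static struct pkt *dequeue_locked(struct queue *q)
{
	struct pkt *p = q->head;
	if (p && !(q->head = p->next))
		q->tail = NULL;
	return p;
}

static void hw_xmit(struct pkt *p) { /* hand to the driver */ }

/* Runs on whichever CPU won the RUNNING flag. */
static void qdisc_run_model(struct queue *q)
{
restart:
	for (;;) {
		struct pkt *p;

		pthread_mutex_lock(&q->lock);
		p = dequeue_locked(q);
		pthread_mutex_unlock(&q->lock);
		if (!p)
			break;
		hw_xmit(p);	/* transmit outside the queue lock */
	}

	atomic_flag_clear(&q->running);

	/*
	 * The window: an enqueuer may have added a packet after our
	 * empty check but before the clear above, seen RUNNING still
	 * set, and walked away. Entering one last time closes it.
	 */
	pthread_mutex_lock(&q->lock);
	bool again = (q->head != NULL);
	pthread_mutex_unlock(&q->lock);
	if (again && !atomic_flag_test_and_set(&q->running))
		goto restart;
}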

Krishna, you probably saw this "wasted entry into the qdisc" under
low-traffic conditions, more than likely with only one CPU sending, am
I correct? Under heavier traffic, when we have multiple CPUs funneling
into the same device, that entry is not really a waste, because we end
up going in only once per X packets enqueued on the qdisc, and that
check is absolutely necessary because a different CPU may have
enqueued while you were not looking. Under low traffic X=1, so the
entry is a waste there, albeit a necessary one.
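
For completeness, the enqueue side of the same sketch (again just a
model of the dev_queue_xmit() path, not the real thing): each sending
CPU enqueues under the queue lock and only becomes the dequeuer if
nobody else already is, which is why under heavy traffic roughly one
in X enqueues pays for a dequeue pass:

static void enqueue_locked(struct queue *q, struct pkt *p)
{
	p->next = NULL;
	if (q->tail)
		q->tail->next = p;
	else
		q->head = p;
	q->tail = p;
}

static void xmit_model(struct queue *q, struct pkt *p)
{
	pthread_mutex_lock(&q->lock);
	enqueue_locked(q, p);
	pthread_mutex_unlock(&q->lock);

	/* Only the CPU that wins RUNNING dequeues; the losers rely
	 * on the winner's final re-check to pick up their packets. */
	if (!atomic_flag_test_and_set(&q->running))
		qdisc_run_model(q);
}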

cheers,
jamal


