Date:	Wed, 9 May 2007 20:17:08 +0530
From:	Krishna Kumar2 <krkumar2@...ibm.com>
To:	hadi@...erus.ca
Cc:	David Miller <davem@...emloft.net>,
	J Hadi Salim <j.hadi123@...il.com>, netdev@...r.kernel.org
Subject: Re: [PATCH] sched: Optimize return value of qdisc_restart

Hi Jamal,

J Hadi Salim <j.hadi123@...il.com> wrote on 05/09/2007 06:26:43 PM:

> On Wed, 2007-09-05 at 01:12 -0700, David Miller wrote:
>
> > Something this evening is obviously making it impossible
> > for my brain to understand this function and your patch,
> > so I'm going to sleep on it and try again tomorrow :-)
>
> It is one of those areas that are hard to size up in a blink;->
> Gut feeling: it doesn't sit right with me either.
>
> With (2.6.18-rxX++) QDISC_RUNNING changes that mean only one of N CPUs
> will be dequeueing while the N-1 may be enqueueing concurrently. All N

Concurrent enqueueing is not possible, since every CPU needs the
queue_lock to add/delete. Did I misunderstand your comment?

> CPUs contend for the queue lock; and there's a possible window between
> the dequeuer-CPU releasing the queue lock and an enqueuer-CPU taking it
> for a race. The dequeuer-CPU entering one last time helps.

The dev->queue_lock is held by both the enqueuer and the dequeuer
(though the dequeuer drops it before calling xmit). But once the
dequeuer re-acquires the lock, it is guaranteed that no one else holds
it. Other CPUs trying to add will block on the lock; or, if they have
already added by grabbing the lock briefly while my CPU was doing the
xmit, their qdisc_run returns without doing anything, since RUNNING
is true.

Since I am holding the lock in these two changed areas until I return
to __qdisc_run (which clears the RUNNING bit) and then drop the lock,
there is no way packets can be on the queue while I falsely return 0,
or the queue can be empty while I falsely return -1.

I hope my explanation was not confusing.

> Krishna, you probably saw this "wasted entry into qdisc" under low
> traffic conditions with more than likely only one CPU sending, am I
> correct? Under heavier traffic when we have multiple CPUs funneling to

Actually, I have never seen this problem; I found it just by reading
the code while trying to implement something else in this area, for
which I plan to bug Dave in a few days.

Thanks,

- KK

> the same device, that entry is not really a "waste" because we end up
> going in only once per X packets enqueued on the qdisc, and that
> check is absolutely necessary because a different CPU may have enqueued
> while you were not looking. In the case of low traffic, X=1 - so it is
> a waste there, albeit a necessary one.


>
> cheers,
> jamal
>
>

-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
