Date:	Mon, 25 Aug 2008 06:06:40 +0000
From:	Jarek Poplawski <jarkao2@...il.com>
To:	David Miller <davem@...emloft.net>
Cc:	hadi@...erus.ca, alexander.duyck@...il.com,
	jeffrey.t.kirsher@...el.com, jeff@...zik.org,
	netdev@...r.kernel.org, alexander.h.duyck@...el.com
Subject: Re: [PATCH 3/3] pkt_sched: restore multiqueue prio scheduler

On Sun, Aug 24, 2008 at 05:49:49PM -0700, David Miller wrote:
> From: Jarek Poplawski <jarkao2@...il.com>
> Date: Sun, 24 Aug 2008 21:19:05 +0200
> 
> > On Sun, Aug 24, 2008 at 09:39:23AM -0400, jamal wrote:
> > ...
> > > With current controls being per qdisc instead of per netdevice,
> > > the hol fear is unfounded. 
> > > You send and when hw can't keep up, you block just the one hwqueue.
> > > While hwqueue is blocked, you can accumulate packets in the prio qdisc
> > > (hence my statement it may not be necessary to accumulate packets in
> > > driver).
> > 
> > Jamal, maybe I miss something, but this could be like this only with
> > default pfifo_fast qdiscs, which really are per dev hwqueue. Other
> > qdiscs, including prio, are per device, so with prio, if a band with
> > the highest priority is blocked it would be requeued blocking other
> > bands (hwqueues in Alexander's case).
> 
> It only blocks if the highest priority band's HW queue is blocked, and
> that's what you want to happen.
> 
> Think about it, if the highest priority HW queue is full, queueing
> packets to the lower priority queues won't make anything happen.
> 
> As the highest priority queue opens up and begins to have space,
> we'll feed it high priority packets from the prio qdisc, and so
> on and so forth.

It seems "priority" can really be misleading here. Do you mean these
hwqueues are internally prioritized too? That would be strange to me:
why would we need this independent locking per hwqueue if everything
has to wait for the most prioritized hwqueue anyway? And, if so,
wouldn't the current dev_pick_tx() with simple_tx_hash() always harm
some flows by directing them to lower-priority hwqueues?!
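To illustrate the concern (a toy sketch in plain Python, not kernel
code; pick_tx_queue is a made-up stand-in for the dev_pick_tx() /
simple_tx_hash() path): the flow hash alone picks the tx queue, so if
the hwqueues really were prioritized, a flow's priority would be fixed
by accident of hashing.

```python
# Hypothetical stand-in for dev_pick_tx()/simple_tx_hash() selection
# (toy model, not the kernel code): the flow hash alone picks the tx
# queue, so if hwqueues were internally prioritized, a flow would be
# pinned to whatever priority its hash happens to land on.
def pick_tx_queue(flow_hash: int, num_queues: int) -> int:
    return flow_hash % num_queues

# Two flows: one lands on queue 0 (the highest-priority queue in this
# reading), the other on a lower-priority queue, purely by hashing.
queues = [pick_tx_queue(h, num_queues=4) for h in (8, 5)]
```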

But, even if that's true, let's take a look at fifo: a packet at the
head of the qdisc's queue could be hashed to the last hwqueue. If
that hwqueue is stopped for some reason, this packet would be
constantly requeued, blocking all other packets while their hwqueues
are ready and empty!
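The fifo scenario above can be modeled in a few lines (again a plain
Python sketch, not kernel code; drain_fifo and its queue-picking rule
are invented for illustration): the head packet hashes to a stopped
hwqueue and keeps getting requeued, so nothing behind it is sent even
though every other hwqueue is idle.

```python
from collections import deque

def drain_fifo(packets, stopped_queues, num_queues=4):
    """Toy model of a single fifo qdisc feeding multiple hw queues:
    the head packet is requeued whenever its hw queue is stopped, so
    packets behind it cannot be sent, even to ready, empty queues."""
    fifo = deque(packets)           # each entry: (pkt_id, flow_id)
    sent = []
    while fifo:
        pkt_id, flow = fifo[0]
        hwq = flow % num_queues     # stand-in for simple_tx_hash()
        if hwq in stopped_queues:
            break                   # head-of-line blocked: stall here
        fifo.popleft()
        sent.append((pkt_id, hwq))
    return sent, list(fifo)

# Head packet hashes to stopped queue 3; everything behind it stalls,
# although queues 0 and 1 are ready and empty.
sent, blocked = drain_fifo([(1, 3), (2, 0), (3, 1)], stopped_queues={3})
```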

Jarek P.
