Message-ID: <1366762196.8964.46.camel@edumazet-glaptop>
Date:	Tue, 23 Apr 2013 17:09:56 -0700
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Stephen Hemminger <stephen@...workplumber.org>
Cc:	Willem de Bruijn <willemb@...gle.com>, netdev@...r.kernel.org,
	davem@...emloft.net
Subject: Re: [PATCH net-next v4] rps: selective flow shedding during softnet
 overflow

On Tue, 2013-04-23 at 14:52 -0700, Stephen Hemminger wrote:

> I just don't want to get tied down to one hard-coded policy.
> Users seem to have different ideas about what constitutes a flow and what the drop policy should be.
> The existing ingress qdisc is inflexible, and ifb is a pain to set up and adds
> another queue transition.

qdisc code has a hardcoded dev_hard_start_xmit() call, that's why the ifb
hack is used. Not to mention device flow control.

It might be possible to use a q->xmit() method instead, so that it can
be used on ingress without ifb.
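Purely for illustration, a rough sketch of what such a hook could look
like (the xmit member and ingress_xmit() are hypothetical, not existing
kernel API; the struct below only shows the added field):

/* Hypothetical extension of Qdisc_ops: instead of the qdisc core
 * calling dev_hard_start_xmit() directly on dequeue, it would call
 * q->ops->xmit(), which an ingress qdisc could point at the receive
 * path, removing the need for ifb.
 */
struct Qdisc_ops {
	/* ... existing fields (enqueue, dequeue, ...) ... */
	int (*xmit)(struct sk_buff *skb, struct Qdisc *sch);
};

/* Ingress flavour: hand the packet back to the stack. */
static int ingress_xmit(struct sk_buff *skb, struct Qdisc *sch)
{
	return netif_receive_skb(skb);
}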

Then we would have to allow one qdisc per RX queue, and not use qdisc
lock (assuming NAPI protects us from reentrancy).

So the NAPI device handler would queue skbs in the qdisc (q->enqueue()),
allowing a standing queue to build so that some clever qdisc can drop
selected packets.

It's not really clear how we would allow packets to be delivered to another
queue (RPS/RFS), nor how/when to do the qdisc_run() to dequeue packets and
deliver them to the stack.
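Just to sketch the idea (my_napi_poll, my_rx_queue and my_fetch_rx_skb are
made-up names, and the locking and RPS/RFS questions above are ignored):

/* Hypothetical NAPI poll path: build a standing queue in a per-RX-queue
 * qdisc, then drain it to the stack.  No qdisc lock is taken, relying on
 * NAPI serialization for this RX queue.
 */
static int my_napi_poll(struct napi_struct *napi, int budget)
{
	struct my_rx_queue *rxq = container_of(napi, struct my_rx_queue, napi);
	struct Qdisc *q = rxq->ingress_qdisc;	/* hypothetical per-queue qdisc */
	struct sk_buff *skb;
	int work = 0;

	/* 1) enqueue received packets; a clever qdisc may drop some here */
	while (work < budget && (skb = my_fetch_rx_skb(rxq)) != NULL) {
		q->enqueue(skb, q);
		work++;
	}

	/* 2) dequeue and deliver to the stack (open question: when exactly
	 *    this should run, and how it interacts with RPS/RFS)
	 */
	while ((skb = q->dequeue(q)) != NULL)
		netif_receive_skb(skb);

	if (work < budget)
		napi_complete(napi);

	return work;
}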

I don't know, this looks like a lot of changes.

