Message-ID: <20150521070712.GY24769@pengutronix.de>
Date:	Thu, 21 May 2015 09:07:12 +0200
From:	Uwe Kleine-König 
	<u.kleine-koenig@...gutronix.de>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	Cong Wang <cwang@...pensource.com>,
	netdev <netdev@...r.kernel.org>,
	Jamal Hadi Salim <jhs@...atatu.com>
Subject: Re: ingress policying for realtime protocol

On Wed, May 20, 2015 at 05:30:40PM -0700, Eric Dumazet wrote:
> On Wed, 2015-05-20 at 16:46 -0700, Cong Wang wrote:
> 
> > There is very little to do on ingress side since there is no queue at all,
> > not to mention priority, you could try ifb to see if it fits your need.
> 
> Note that if the need is to police traffic, ifb is not really needed :
> 
> TC="tc"
> DEV="dev eth0"
> IP=10.246.11.51/32
> $TC qdisc del $DEV ingress 2>/dev/null
> $TC qdisc add $DEV ingress
> $TC filter add $DEV parent ffff: protocol ip u32 match ip src $IP \
> 	police rate 1Mbit burst 10Mbit mtu 66000 action drop/continue
> 
> $TC -s filter ls $DEV parent ffff: protocol ip
I have something like that (matching on dst MAC addresses instead of src IP):

	tc qdisc add dev eth0 handle ffff: ingress
	tc filter add dev eth0 parent ffff: protocol all prio 10 u32 \
		match ether dst 01:15:4E:00:00:01 police pass
	tc filter add dev eth0 parent ffff: protocol all prio 50 u32 \
		match u32 0 0 at 0 police rate 100kbit burst 10k drop

. So Cong interpreted my question correctly, and I probably just used the
wrong keywords to convey it. Let me try again to put my idea into words
explicitly:

I imagine it would help in my case if I could ensure that MRP packets
are prioritized over other traffic, without throwing away so many
unrelated packets. For egress that works by e.g. using a prio qdisc.
For ingress, however, only policing is available.
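For reference, the egress case I mean looks roughly like this (a sketch,
assuming eth0 and the MRP multicast dst MAC from my filters above; band
selection via a u32 filter is one of several possible classifiers):

```shell
# Egress prioritization sketch: a 3-band prio qdisc on eth0; frames to the
# MRP multicast MAC go to band 0 (dequeued strictly before the other bands).
# Requires root.
tc qdisc add dev eth0 root handle 1: prio bands 3
tc filter add dev eth0 parent 1: protocol all prio 1 u32 \
	match ether dst 01:15:4E:00:00:01 flowid 1:1
```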

So the question essentially is: why doesn't this work for ingress? Cong
wrote "there is no queue at all [for ingress]". Is this by design, or is
it just not implemented because no one has spent the effort on it?
Do you think it would help me?
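To spell out the ifb variant Cong suggested (a sketch, assuming device
names eth0/ifb0 and the same MRP dst MAC; requires root and the ifb
module): redirecting ingress traffic to an ifb device makes egress-style
qdiscs such as prio applicable to received packets.

```shell
# Redirect all eth0 ingress traffic to ifb0, then prioritize on ifb0.
modprobe ifb numifbs=1
ip link set dev ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 \
	action mirred egress redirect dev ifb0
# Egress-style prio qdisc on ifb0; MRP frames land in the highest band.
tc qdisc add dev ifb0 root handle 1: prio bands 3
tc filter add dev ifb0 parent 1: protocol all prio 1 u32 \
	match ether dst 01:15:4E:00:00:01 flowid 1:1
```

Whether this actually helps presumably depends on where the drops happen;
if packets are already lost before the ingress hook, no qdisc can recover
them.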

Maybe there is another bottleneck in the application that currently
forces us to use this tight limit when policing ingress. I will try to
work on that; maybe policing is good enough then?! I will report back.

Best regards
Uwe

-- 
Pengutronix e.K.                           | Uwe Kleine-König            |
Industrial Linux Solutions                 | http://www.pengutronix.de/  |
