Date:	Sat, 02 Oct 2010 08:10:36 -0400
From:	jamal <hadi@...erus.ca>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	Jarek Poplawski <jarkao2@...il.com>,
	David Miller <davem@...emloft.net>,
	netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next V2] net: dynamic ingress_queue allocation

On Fri, 2010-10-01 at 15:56 +0200, Eric Dumazet wrote: 
> On Friday, 01 October 2010 at 07:45 -0400, jamal wrote:

> 
> I first thought of this, and found it would add a new dependency on
> vmlinux:
> 
> If someone wants to add the NET_SCH_INGRESS module, they would need
> to recompile the whole kernel and reboot.
> 
> This is probably why ing_filter() and handle_ing() are guarded by
> CONFIG_NET_CLS_ACT, not CONFIG_NET_SCH_INGRESS.
> 
> Since struct net_device only holds one pointer (after this patch), I
> believe it's better to use the same dependency.
> 
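
(For the archive, the field in question as a minimal sketch; only the
ingress_queue member is from the patch, the rest of struct net_device
is elided here:)

struct netdev_queue;

struct net_device {
	/* ... many other members elided ... */
#ifdef CONFIG_NET_CLS_ACT
	/* allocated on demand; NULL until an ingress qdisc is attached */
	struct netdev_queue *ingress_queue;
#endif
};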

Ok - I just noticed that CONFIG_NET_SCH_INGRESS depends on
CONFIG_NET_CLS_ACT - probably my fault too. I have a use case
for having just CONFIG_NET_SCH_INGRESS without needing
CONFIG_NET_CLS_ACT; but you are right, in the current spirit it is
fair to keep it as is. I wonder if that dependency needs to be fixed
eventually.
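
(That invariant stated in code, as a sketch - this check is mine, not
in the tree; net/sched/Kconfig already enforces the dependency, so
with NET_SCH_INGRESS built in it could never fire:)

#if defined(CONFIG_NET_SCH_INGRESS) && !defined(CONFIG_NET_CLS_ACT)
#error "NET_SCH_INGRESS requires NET_CLS_ACT"
#endif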

> > 
> > > +static inline struct netdev_queue *dev_ingress_queue(struct net_device *dev)
> > > +{
> > > +#ifdef CONFIG_NET_CLS_ACT
> > > +	return dev->ingress_queue;
> > > +#else
> > > +	return NULL;
> > > +#endif
> > 
> > Above, if you just returned dev->ingress_queue, wouldn't it always
> > be NULL if it was not allocated?
> > 
> 
> ingress_queue is not defined in "struct net_device *dev" if 
> !CONFIG_NET_CLS_ACT
> 
> Returning NULL here permits dead code elimination by the compiler.
>
> Then again, probably nobody unsets CONFIG_NET_CLS_ACT, so we can
> probably avoid this preprocessor stuff.
> 

Ok
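
(To spell out the dead-code point: a hypothetical caller, not from the
patch, showing why the NULL stub lets the compiler drop the ingress
path entirely when !CONFIG_NET_CLS_ACT.)

static void rx_sketch(struct net_device *dev)
{
	struct netdev_queue *rxq = dev_ingress_queue(dev);

	if (rxq) {
		/* ingress classification would run here; when
		 * dev_ingress_queue() is the compile-time NULL stub,
		 * this whole branch is provably dead and eliminated */
	}
}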


> I see, this adds an indirection and a conditional branch, but these
> should be in the cpu cache and well predicted.

Ok - good enough for me.

> I thought of adding a fake "struct netdev_queue" object, with its
> qdisc pointing to noop_qdisc. But this would slow down non-ingress
> users a bit ;)

nod.

> For people not using ingress, my solution removes an access to an
> extra cache line, so network latency improves a bit when cpu caches
> are full of userland data.

Who are these lunatics running without ingress anyways?

> > 1) compile support for ingress and add/delete ingress qdisc
> 
> This worked for me, but I don't know about complex setups.
> 

I think if you covered the basic case, it should be equivalent to any
other case. Maybe try a replace after you do an add, etc.

> > 2) Dont compile support and add/delete ingress qdisc
> 
> tc gives an error (a bit like !CONFIG_NET_SCH_INGRESS)
> 
> # tc qdisc add dev eth0 ingress
> RTNETLINK answers: No such file or directory
> # tc -s -d qdisc show dev eth0
> qdisc mq 0: root 
>  Sent 636 bytes 10 pkt (dropped 0, overlimits 0 requeues 0) 
>  backlog 0b 0p requeues 0 
> 

nice

> 
> > 3) Compile ingress as a module and add/delete ingress qdisc
> > 
> Seems to work like 1)
> 

ok then, nothing else to bitch about.

Acked-by: Jamal Hadi Salim <hadi@...erus.ca>

> I took a look at ifb as suggested by Stephen but could not see any
> trivial changes (LLTX or per-cpu stats); a central lock is needed,
> I am afraid.

I wasn't thinking of ifb itself - but ifb (and probably other netdevs)
could benefit a tiny bit from rearranging the net_stats struct so the
rx and tx parts sit on different cache lines, since these are
typically updated in different paths. In the kernel, the rx path for
ifb is guaranteed to run on one cpu at a time, but tx could run on
many cpus (so at least for the rx path there's a benefit).
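
(Something like this layout - field names are mine, purely
illustrative; ____cacheline_aligned_in_smp comes from <linux/cache.h>:)

#include <linux/cache.h>

/* Illustrative only: keep rx counters and tx counters on separate
 * cache lines, since ifb updates rx on one cpu but tx on potentially
 * many. */
struct ifb_stats_sketch {
	/* rx side: written by the single rx cpu */
	unsigned long rx_packets;
	unsigned long rx_bytes;

	/* tx side: starts a new cache line so many-cpu tx updates
	 * do not bounce the rx line */
	unsigned long tx_packets ____cacheline_aligned_in_smp;
	unsigned long tx_bytes;
};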

> And qdiscs are the same, stats updates are mostly free since we
> already dirtied the cache line for the lock.

ok - qdiscs are easier because a qdisc instance is accessed by one cpu
at a time. Actions, though, can be configured to be shared by many
filters; in that mode they can be accessed across many cpus.
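
(Concretely - a sketch with names of my own invention, not the actual
act_api code: a shared action has to serialize its own counters, while
a qdisc gets that serialization for free from its root lock.)

#include <linux/spinlock.h>
#include <linux/types.h>

struct shared_action_stats {
	spinlock_t lock;	/* contended when the action is shared;
				 * init with spin_lock_init() before use */
	u64 packets;
	u64 bytes;
};

/* hit path: may run concurrently on many cpus when shared */
static inline void shared_action_hit(struct shared_action_stats *s,
				     unsigned int len)
{
	spin_lock(&s->lock);
	s->packets++;
	s->bytes += len;
	spin_unlock(&s->lock);
}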

cheers,
jamal
