Date:	Sat, 05 Mar 2011 00:02:39 +0100
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	David Miller <davem@...emloft.net>
Cc:	xiaosuo@...il.com, netdev@...r.kernel.org
Subject: Re: [PATCH net-next-2.6] inetpeer: seqlock optimization

On Friday, 04 March 2011 at 14:44 -0800, David Miller wrote:

> Applied, thanks Eric!
> 
> With this and the following patch applied to my no-routing-cache tree,
> output route lookup on my Niagara2 is down to 2966 cycles!  For reference
> with just the plain routing cache removal, it was as much as 3832 cycles.
> 
> udpflood is a lot faster too, with plain routing cache removal it ran as:
> 
> bash$ time ./bin/udpflood -l 10000000 10.2.2.11
> real		     3m9.921s
> user		     0m9.520s
> sys		     3m0.440s
> 
> But now it's:
> 
> bash$ time ./bin/udpflood -l 10000000 10.2.2.11
> real	2m45.903s
> user	0m8.640s
> sys	2m37.280s
> 
> :-)
> 

Nice indeed :)

> --------------------
> ipv4: Optimize flow initialization in output route lookup.
> 
> We burn a lot of useless cycles, cpu store buffer traffic, and
> memory operations memset()'ing the on-stack flow used to perform
> output route lookups in __ip_route_output_key().
> 
> Only the first half of the flow object members even matter for
> output route lookups in this context, specifically:
> 
> FIB rules matching cares about:
> 
> 	dst, src, tos, iif, oif, mark
> 
> FIB trie lookup cares about:
> 
> 	dst
> 
> FIB semantic match cares about:
> 
> 	tos, scope, oif
> 
> Therefore only initialize these specific members and elide the
> memset entirely.
> 
> On Niagara2 this kills about ~300 cycles from the output route
> lookup path.
> 
> Likely, we can take things further, since all callers of output
> route lookups essentially throw away the on-stack flow they use.
> So they don't care if we use it as a scratch-pad to compute the
> final flow key.
> 
> Signed-off-by: David S. Miller <davem@...emloft.net>
> ---
>  net/ipv4/route.c |   18 ++++++++++--------
>  1 files changed, 10 insertions(+), 8 deletions(-)
> 
> diff --git a/net/ipv4/route.c b/net/ipv4/route.c
> index 04b8954..e3a5a89 100644
> --- a/net/ipv4/route.c
> +++ b/net/ipv4/route.c
> @@ -1670,14 +1670,7 @@ static struct rtable *__mkroute_output(const struct fib_result *res,
>  struct rtable *__ip_route_output_key(struct net *net, const struct flowi *oldflp)
>  {
>  	u32 tos	= RT_FL_TOS(oldflp);
> -	struct flowi fl = { .fl4_dst = oldflp->fl4_dst,
> -			    .fl4_src = oldflp->fl4_src,
> -			    .fl4_tos = tos & IPTOS_RT_MASK,
> -			    .fl4_scope = ((tos & RTO_ONLINK) ?
> -					  RT_SCOPE_LINK : RT_SCOPE_UNIVERSE),
> -			    .mark = oldflp->mark,
> -			    .iif = net->loopback_dev->ifindex,
> -			    .oif = oldflp->oif };
> +	struct flowi fl;
>  	struct fib_result res;
>  	unsigned int flags = 0;
>  	struct net_device *dev_out = NULL;
> @@ -1688,6 +1681,15 @@ struct rtable *__ip_route_output_key(struct net *net, const struct flowi *oldflp
>  	res.r		= NULL;
>  #endif
>  
> +	fl.oif = oldflp->oif;
> +	fl.iif = net->loopback_dev->ifindex;
> +	fl.mark = oldflp->mark;
> +	fl.fl4_dst = oldflp->fl4_dst;
> +	fl.fl4_src = oldflp->fl4_src;
> +	fl.fl4_tos = tos & IPTOS_RT_MASK;
> +	fl.fl4_scope = ((tos & RTO_ONLINK) ?
> +			RT_SCOPE_LINK : RT_SCOPE_UNIVERSE);
> +
>  	rcu_read_lock();
>  	if (oldflp->fl4_src) {
>  		rth = ERR_PTR(-EINVAL);

Acked-by: Eric Dumazet <eric.dumazet@...il.com>

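[Editorial note: the C-language detail behind the patch above is that a designated initializer, like the removed `struct flowi fl = { ... }`, implicitly zero-fills every member that is not named, which is effectively a memset of the whole on-stack struct; assigning fields one by one writes only those fields. A minimal standalone sketch of the two patterns, using a hypothetical `struct flow` rather than the kernel's real `struct flowi`:]

```c
#include <stddef.h>

/* Hypothetical stand-in for a large on-stack flow key; NOT the
 * kernel's struct flowi layout. */
struct flow {
	int oif;
	int iif;
	unsigned int mark;
	unsigned int dst;
	unsigned int src;
	unsigned char tos;
	unsigned char scope;
	char rest[64];	/* members the lookup path never reads */
};

/* Designated initializer: every member NOT named (iif, mark, src,
 * tos, scope, rest[]...) is zero-filled, so the compiler must clear
 * the whole struct -- the store traffic the patch eliminates. */
struct flow make_flow_designated(int oif, unsigned int dst)
{
	struct flow f = { .oif = oif, .dst = dst };
	return f;
}

/* Field-by-field assignment: only the named members are written;
 * the rest stay indeterminate. This is safe only when, as the
 * commit message argues, every consumer reads just these fields. */
void init_flow_partial(struct flow *f, int oif, unsigned int dst)
{
	f->oif = oif;
	f->dst = dst;
}
```

Whether the second form actually saves cycles depends on the struct size and the target's store-buffer behavior; the ~300-cycle figure in the patch is specific to Niagara2.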
