Message-ID: <48F967BA.4090102@cosmosbay.com>
Date:	Sat, 18 Oct 2008 06:36:10 +0200
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Neil Horman <nhorman@...driver.com>
Cc:	David Miller <davem@...emloft.net>, billfink@...dspring.com,
	netdev@...r.kernel.org, kuznet@....inr.ac.ru, pekkas@...core.fi,
	jmorris@...ei.org, yoshfuji@...ux-ipv6.org, kaber@...sh.net,
	johnpol@....mipt.ru, Stephen Hemminger <shemminger@...tta.com>
Subject: Re: [PATCH] net: implement emergency route cache rebuilds when gc_elasticity is exceeded

Neil Horman wrote:
> Sorry for the additional noise, but Eric just pointed out that I'd missed an
> email from him, and consequently a few comments.  I've made the appropriate
> updates in this patch, which is otherwise unchanged.  The only comment I'd
> skipped was the request for an additional stat in /proc/net/stat/rt_cache for a
> per-net rebuild count.  I figure that's a good break point at which to submit an
> additional follow-on patch.

OK, yet an admin cannot tell whether the route cache is disabled or not...
Right, it can be addressed in a follow-up patch.

> 
> This is a patch to provide on-demand route cache rebuilding.  Currently, our
> route cache is rebuilt periodically regardless of need.  This introduces
> unneeded periodic latency.  This patch offers a better approach.  Using code
> provided by Eric Dumazet, we compute the average and standard deviation of the
> hash bucket chain lengths while running rt_check_expire.  Should any given
> chain grow longer than the average plus 4 standard deviations, we trigger an
> emergency hash table rebuild for that net namespace.  This allows the common
> case, in which chains are well behaved and do not grow unevenly, to incur no
> latency at all, while systems that may be under malicious attack rebuild only
> when the attack is detected.  This patch takes 2 other factors into
> account:
> 1) chains with multiple entries that differ only by attributes that do not
> affect the hash value are counted once, so as not to unduly bias the system
> toward rebuilding if features like QoS are heavily used
> 2) if the rebuild count crosses a certain threshold (adjustable via the sysctl
> added in this patch), route caching is disabled entirely for that net
> namespace, since constant rebuilding is less efficient than no caching at all
> 
> Tested successfully by me.
> 
> Regards
> Neil
> 
> Signed-off-by: Neil Horman <nhorman@...driver.com>
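
For concreteness, the trigger described above boils down to something like the
following sketch (illustrative only: length[], buckets and
rt_emergency_hash_rebuild() are placeholder names, not necessarily what the
patch itself uses):

	static void rt_check_chain_lengths(struct net *net,
					   const unsigned int *length,
					   unsigned int buckets)
	{
		u32 sum = 0, sum2 = 0, avg, sd;
		unsigned int i;

		/* one pass to accumulate the sum and the sum of squares */
		for (i = 0; i < buckets; i++) {
			sum  += length[i];
			sum2 += length[i] * length[i];
		}
		avg = sum / buckets;
		/* variance = E[X^2] - E[X]^2, stddev via int_sqrt() */
		sd = int_sqrt(sum2 / buckets - avg * avg);

		for (i = 0; i < buckets; i++) {
			if (length[i] > avg + 4 * sd) {
				/* placeholder for the per-netns rebuild trigger */
				rt_emergency_hash_rebuild(net);
				return;
			}
		}
	}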

OK, it's almost done, Neil :)

Please rebase your patch against the latest net-2.6 tree, which includes my previous patch:

http://git2.kernel.org/?p=linux/kernel/git/davem/net-2.6.git;a=commitdiff;h=00269b54edbf25f3bb0dccb558ae23a6fc77ed86

Please read *all* of this mail to catch the final comments, in order to avoid upsetting netdev
readers with small details.  You can also submit it privately to me, if you wish.

Please add my Signed-off-by after yours:

Signed-off-by: Eric Dumazet <dada1@...mosbay.com>

> +static inline bool rt_caching(struct net *net)

As Stephen pointed out, you *should* add a const qualifier to "struct net *net" here:

-> static inline bool rt_caching(const struct net *net)

> +{
> +	return net->ipv4.current_rt_cache_rebuild_count <=
> +		net->ipv4.sysctl_rt_cache_rebuild_count;
> +}
> +
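So factor 2 above reads: once this namespace has rebuilt its hash table more
than sysctl_rt_cache_rebuild_count times, rt_caching() goes false and stays
false.  A hypothetical caller would then bypass the cache entirely, for
example:

	/* Illustrative only: skip cache insertion once rebuilds hit the limit. */
	if (!rt_caching(net))
		return rt_intern_hash_uncached(rt);	/* hypothetical helper */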
> +static inline int compare_hash_inputs(struct flowi *fl1, struct flowi *fl2)

const qualifiers too, please.

> +{
> +	return (__force u32)(((fl1->nl_u.ip4_u.daddr ^ fl2->nl_u.ip4_u.daddr) |
> +		(fl1->nl_u.ip4_u.saddr ^ fl2->nl_u.ip4_u.saddr) |
> +		(fl1->iif ^ fl2->iif)) == 0);
> +}
> +
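As an aside, this helper is what implements factor 1: entries that differ only
in attributes outside the hash inputs (daddr, saddr, iif) land in the same
bucket, so the chain-length measurement can count a run of them once.
Roughly, and purely as an illustration (the loop and names are not the patch
body, and it assumes same-key entries sit adjacent in the chain):

	unsigned int len = 0;
	struct rtable *rth;
	struct flowi *prev = NULL;

	/* count consecutive entries with identical hash inputs only once */
	for (rth = rt_hash_table[i].chain; rth; rth = rth->u.dst.rt_next) {
		if (!prev || !compare_hash_inputs(prev, &rth->fl))
			len++;
		prev = &rth->fl;
	}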
>  static inline int compare_keys(struct flowi *fl1, struct flowi *fl2)

The previous code can be cleaned up in a followup patch.

New code is not forced to follow old code that is not "clean by modern standards" :)

>  {
>  	return ((__force u32)((fl1->nl_u.ip4_u.daddr ^ fl2->nl_u.ip4_u.daddr) |
> @@ -748,11 +764,24 @@ static void rt_do_flush(int process_context)
>  	}
>  }
>  

... snip to ease your job ...

>  #endif
> -	rt_hash_table[hash].chain = rt;
> +	if (rthi)
> +		rthi->u.dst.rt_next = rt;
> +	else
> +		rt_hash_table[hash].chain = rt;
>  	spin_unlock_bh(rt_hash_lock_addr(hash));

Based on the latest tree, this should be something like:

> -	rcu_assign_pointer(rt_hash_table[hash].chain, rt);
> +	if (rthi)
> +		rcu_assign_pointer(rthi->u.dst.rt_next, rt);
> +	else
> +		rcu_assign_pointer(rt_hash_table[hash].chain, rt);
>  	spin_unlock_bh(rt_hash_lock_addr(hash));
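
For the archives, the reason rcu_assign_pointer() matters here: these chains
are traversed locklessly by readers under rcu_read_lock(), and
rcu_assign_pointer() issues the write barrier that orders initialization of
the new entry before its publication, so a reader can never observe a
half-initialized rtable.  A plain store gives no such guarantee.  The generic
pattern, with placeholder names:

	/* writer: initialize first, then publish with barrier semantics */
	new->field = value;
	rcu_assign_pointer(head, new);

	/* reader: dependency-ordered load inside the read-side section */
	rcu_read_lock();
	p = rcu_dereference(head);
	if (p)
		use(p);	/* use() and field are placeholders */
	rcu_read_unlock();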



Thanks


