Date:	Mon, 14 May 2007 03:04:12 -0700 (PDT)
From:	David Miller <davem@...emloft.net>
To:	djohnson+linux-kernel@...starentnetworks.com
Cc:	linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH] improved locking performance in rt_run_flush()

From: Dave Johnson <djohnson+linux-kernel@...starentnetworks.com>
Date: Sat, 12 May 2007 12:36:47 -0400

> 
> While testing adding/deleting large numbers of interfaces, I found
> rt_run_flush() was the #1 cpu user in a kernel profile by far.
> 
> The below patch changes rt_run_flush() to only take each spinlock
> protecting the rt_hash_table once instead of taking a spinlock for
> every hash table bucket (and ending up taking the same small set 
> of locks over and over).
> 
> Deleting 256 interfaces on a 4-way SMP system with 16K buckets reduced
> overall cpu-time by more than 50% and wall-time by about 33%.  I
> suspect systems with large amounts of memory (and more buckets) will
> see an even greater benefit.
> 
> Note there is a small change: rt_free() is now called while the lock
> is held, whereas before it was called without the lock held.  I don't
> think this should be an issue.
> 
> Signed-off-by: Dave Johnson <djohnson+linux-kernel@...starentnetworks.com>

Thanks for this patch.

I'm not ignoring it; I'm just trying to brainstorm whether there is
a better way to resolve this inefficiency. :-)
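
For reference, here is a minimal standalone sketch (not the actual
patch, and not kernel code) of the two locking patterns being compared.
All names and sizes (NR_BUCKETS, NR_LOCKS, flush_per_bucket,
flush_per_lock) are illustrative; the only assumption carried over from
the quoted mail is that many hash buckets share a small set of
spinlocks, so a per-bucket flush keeps re-taking the same few locks:

/*
 * Illustrative userspace model of the rt_run_flush() change: NR_BUCKETS
 * hash buckets share NR_LOCKS spinlocks via "bucket index mod NR_LOCKS".
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdlib.h>

#define NR_BUCKETS 16384
#define NR_LOCKS   256		/* NR_LOCKS divides NR_BUCKETS */

struct entry {
	struct entry *next;
};

static struct entry *hash_table[NR_BUCKETS];
static pthread_spinlock_t locks[NR_LOCKS];

static void free_chain(struct entry *e)
{
	while (e) {
		struct entry *next = e->next;
		free(e);
		e = next;
	}
}

/* Old pattern: one lock/unlock per bucket.  With NR_BUCKETS much larger
 * than NR_LOCKS, the same few locks are taken over and over again. */
static void flush_per_bucket(void)
{
	for (int i = 0; i < NR_BUCKETS; i++) {
		pthread_spin_lock(&locks[i % NR_LOCKS]);
		struct entry *chain = hash_table[i];
		hash_table[i] = NULL;
		pthread_spin_unlock(&locks[i % NR_LOCKS]);
		free_chain(chain);	/* entries freed outside the lock */
	}
}

/* New pattern: take each lock exactly once and sweep every bucket it
 * protects.  Entries are now freed while the lock is held, matching the
 * behavioural change called out in the quoted mail. */
static void flush_per_lock(void)
{
	for (int l = 0; l < NR_LOCKS; l++) {
		pthread_spin_lock(&locks[l]);
		for (int i = l; i < NR_BUCKETS; i += NR_LOCKS) {
			free_chain(hash_table[i]);
			hash_table[i] = NULL;
		}
		pthread_spin_unlock(&locks[l]);
	}
}

int main(void)
{
	for (int l = 0; l < NR_LOCKS; l++)
		pthread_spin_init(&locks[l], PTHREAD_PROCESS_PRIVATE);
	flush_per_bucket();	/* old scheme */
	flush_per_lock();	/* new scheme */
	return 0;
}

The lock-acquisition count per flush drops from NR_BUCKETS to NR_LOCKS,
which is where the reported cpu-time saving comes from.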
