Date:	Sat, 11 Jun 2011 08:54:09 +0800
From:	Changli Gao <xiaosuo@...il.com>
To:	Tim Chen <tim.c.chen@...ux.intel.com>
Cc:	David Miller <davem@...emloft.net>, eric.dumazet@...il.com,
	netdev@...r.kernel.org, andi@...stfloor.org
Subject: Re: [PATCH net-next-2.6] inetpeer: lower false sharing effect

On Sat, Jun 11, 2011 at 6:33 AM, Tim Chen <tim.c.chen@...ux.intel.com> wrote:
>
> You're right.  After adding the TCP connection, inet peer now shows up
> in my profile of the kernel patched with Eric's two patches.
>
> Eric's patches produced much better cpu utilization.
> The addr_compare (which used to consume 10% cpu) and atomic_dec_and_lock
> (which used to consume 20.5% cpu) in inet_putpeer are eliminated, and
> inet_putpeer now uses only 10% cpu, though inet_getpeer and inet_putpeer
> still consume significant cpu compared to the other test case, where the
> peer is not present.
>
> Tim
>
> The profile with Eric's two patches, and with the peer forced to be
> present by the added TCP connection, looks like this:
>
> -     19.38%     memcached  [kernel.kallsyms]             [k] inet_getpeer
>   - inet_getpeer
>      + 99.97% inet_getpeer_v4
> -     11.49%     memcached  [kernel.kallsyms]             [k] inet_putpeer
>   - inet_putpeer
>      - 99.96% ipv4_dst_destroy
>           dst_destroy
>         + dst_release
> -      5.71%     memcached  [kernel.kallsyms]             [k] rt_set_nexthop.clone.30
>   - rt_set_nexthop.clone.30
>      + 99.89% __ip_route_output_key
> -      5.60%     memcached  [kernel.kallsyms]             [k] atomic_add_unless.clone.34
>   - atomic_add_unless.clone.34
>      + 99.94% neigh_lookup
> +      3.02%     memcached  [kernel.kallsyms]             [k] do_raw_spin_lock
> +      2.87%     memcached  [kernel.kallsyms]             [k] atomic_dec_and_test
> +      1.45%     memcached  [kernel.kallsyms]             [k] atomic_add
> +      1.04%     memcached  [kernel.kallsyms]             [k] _raw_spin_lock_irqsave
> +      1.03%     memcached  [kernel.kallsyms]             [k] bit_spin_lock.clone.41
>
>
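
The false sharing named in the subject line is the classic pattern where
a hot refcount shares a cache line with fields that other CPUs mostly
read, so every atomic update bounces the line between cores.  Below is a
minimal, self-contained userspace sketch of the usual mitigation (giving
the contended counter its own cache line); it only illustrates the
general technique, not Eric's inetpeer patch, and the 64-byte line size
is an assumption:

/*
 * Illustration only: two threads each hammer their own counter.
 * If both counters share a cache line, every atomic increment
 * invalidates the line on the other CPU (false sharing); aligning
 * each counter to its own line removes the contention.
 *
 * Build: gcc -O2 -pthread false_sharing.c -o false_sharing
 */
#include <pthread.h>
#include <stdio.h>

#define CACHE_LINE 64	/* assumed cache-line size */

static struct {
	unsigned long a __attribute__((aligned(CACHE_LINE)));
	unsigned long b __attribute__((aligned(CACHE_LINE)));	/* drop alignment to see the slowdown */
} counters;

static void *bump(void *arg)
{
	unsigned long *p = arg;
	long i;

	for (i = 0; i < 100000000L; i++)
		__sync_fetch_and_add(p, 1);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, bump, &counters.a);
	pthread_create(&t2, NULL, bump, &counters.b);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("a=%lu b=%lu\n", counters.a, counters.b);
	return 0;
}

Timing both variants (e.g. with "time ./false_sharing") makes the effect
visible; the same separation is what the kernel's
____cacheline_aligned_in_smp annotation provides.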

Did you disable the routing cache when profiling? If so, enable it and try again.

-- 
Regards,
Changli Gao(xiaosuo@...il.com)
