Date:	Thu, 07 Dec 2006 23:27:00 +0100
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Stephen Hemminger <shemminger@...l.org>
Cc:	David Miller <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: [PATCH] convert hh_lock to seqlock

Stephen Hemminger wrote:
> On Thu, 07 Dec 2006 21:23:07 +0100
> Eric Dumazet <dada1@...mosbay.com> wrote:
> 
>> Stephen Hemminger wrote:
>>> The hard header cache is in the main output path, so using
>>> seqlock instead of reader/writer lock should reduce overhead.
>>>
>> Nice work Stephen, I am very interested.
>>
>> Did you benchmark it?
>>
>> I ask because I think frequent changes to hh_refcnt may defeat the gain you
>> want (i.e. avoiding cache line ping-pong between CPUs). A seqlock is definitely
>> better than a rwlock, but only if we really keep the cache lines shared.
>>
>> So I would suggest reordering the fields of hh_cache and adding a
>> ____cacheline_aligned_in_smp to keep hh_refcnt on another cache line.
>>
>> (hh_len, hh_lock and hh_data should be placed on a 'mostly read' cache line)
>>
>> Thank you
>> Eric
> 
> It doesn't make any visible performance difference for real networks; 
> copies and device issues are much larger.

Hmm, so 'my' machines must be unreal :)
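
(To make what is being converted concrete: with the quoted patch applied, the
per-packet read side on hh_lock would look roughly like the sketch below. This
is a minimal illustration of the seqlock pattern, not the actual patch; the
helper name and exact placement are assumptions.)

/* Minimal sketch: seqlock read side on hh_lock in the output path.
 * Assumes hh_lock has been converted from rwlock_t to seqlock_t.
 */
static inline int neigh_hh_output(struct hh_cache *hh, struct sk_buff *skb)
{
        unsigned int seq;
        int hh_len;

        do {
                int hh_alen;

                /* snapshot the cached hardware header; redo the copy
                 * if a writer updated it meanwhile */
                seq = read_seqbegin(&hh->hh_lock);
                hh_len = hh->hh_len;
                hh_alen = HH_DATA_ALIGN(hh_len);
                memcpy(skb->data - hh_alen, hh->hh_data, hh_alen);
        } while (read_seqretry(&hh->hh_lock, seq));

        skb_push(skb, hh_len);
        return hh->hh_output(skb);
}

The reader takes no lock and writes nothing shared; it only rereads the
sequence counter. So the win over the rwlock only materialises if the cache
line it reads stays clean.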

> 
> The hh_refcnt is used only when creating/destroying neighbor entries,
> so except under DoS attack it doesn't make a lot of difference.
> The hh_lock is used on each packet sent.

Some machines create/delete 10,000 entries per second in rt_cache.
I believe they are real. DoS? You call it that; some people won't agree.


# grep eth0 /proc/net/rt_cache|head -n 10000|cut -f13|sort -u|wc -l
     359
(13th field of /proc/net/rt_cache is HHRef)

# rtstat -c1000 -i1
rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|
  entries|  in_hit|in_slow_|in_slow_|in_no_ro|  in_brd|in_marti|in_marti| out_hit|out_slow|out_slow|gc_total|gc_ignor|gc_goal_|gc_dst_o|in_hlist|out_hlis|
         |        |     tot|      mc|     ute|        |  an_dst|  an_src|        |    _tot|     _mc|        |      ed|    miss| verflow| _search|t_search|
  2467048|2479640328|1334812199|       0|       0|      34|       0|  117112|6139056109|7510556324|       0|       0|       0|       0|       0|8864696485|9819074477|
  2465642|   16594|    4791|       0|       0|       0|       0|       0|    2387|    2738|       0|       0|       0|       0|       0|   21878|    7478|
  2464482|   16505|    4765|       0|       0|       0|       0|       0|    2460|    2669|       0|       0|       0|       0|       0|   22224|    7499|
  2463512|   17281|    4640|       0|       0|       0|       0|       0|    2449|    2632|       0|       0|       0|       0|       0|   22069|    7240|
  2462651|   16504|    4314|       0|       0|       0|       0|       0|    2446|    2497|       0|       0|       0|       0|       0|   20796|    6979|
  2462175|   18152|    5792|       0|       0|       0|       0|       0|    2448|    2791|       0|       0|       0|       0|       0|   26164|    7731|
  2461889|   16970|    5059|       0|       0|       0|       0|       0|    2535|    2829|       0|       0|       0|       0|       0|   22614|    7595|
  2461719|   16446|    4643|       0|       0|       0|       0|       0|    2496|    2717|       0|       0|       0|       0|       0|   21347|    7354|
  2461775|   17098|    4782|       0|       0|       0|       0|       0|    2386|    2570|       0|       0|       0|       0|       0|   22448|    7049|
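
To spell out the layout I have in mind, something along these lines (a sketch
only; the field set follows the current struct hh_cache, the placement is the
suggestion, and the split would need checking against real cache line sizes):

/* Sketch of the suggested reordering; illustration, not a tested patch. */
struct hh_cache {
        /* write side: chaining and refcount, touched only when
         * neighbour/route entries are created or destroyed */
        struct hh_cache *hh_next;
        atomic_t         hh_refcnt;

        /* read-mostly: everything the per-packet output path needs,
         * kept on its own cache line so that hh_refcnt updates do
         * not dirty it */
        unsigned short   hh_type ____cacheline_aligned_in_smp;
        int              hh_len;
        seqlock_t        hh_lock;
        int              (*hh_output)(struct sk_buff *skb);
        unsigned long    hh_data[HH_DATA_ALIGN(LL_MAX_HEADER) / sizeof(long)];
};

That way the refcount churn shown above stays off the cache line the output
path reads on every packet.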



Thank you
Eric
