Open Source and information security mailing list archives
 
Date:   Fri, 5 Oct 2018 06:13:39 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Stephen Suryaputra <ssuryaextr@...il.com>, eric.dumazet@...il.com
Cc:     netdev@...r.kernel.org
Subject: Re: [PATCH net-next,v2] IPv6 ifstats separation



On 10/05/2018 06:00 AM, Stephen Suryaputra wrote:
> On Thu, Oct 4, 2018 at 4:42 PM Eric Dumazet <eric.dumazet@...il.com> wrote:
>>
>> How have you decided some counters can be 'slow' and other 'fast' ?
>>
>> I can tell you I see many ultra-fast candidates in your 'slow' list :/
> 
> Based on how others have already categorized them in the code, and
> IMHO the groupings make sense:

Well, you'd better test, because you missed a few counters that are hit hard in the fast
path for normal (non-DDoS) packets.

> 
> enum
> {
>      IPSTATS_MIB_NUM = 0,
>      /* frequently written fields in fast path, kept in same cache line */
>      IPSTATS_MIB_INPKTS, /* InReceives */
>      IPSTATS_MIB_INOCTETS, /* InOctets */
>      IPSTATS_MIB_INDELIVERS, /* InDelivers */
>      IPSTATS_MIB_OUTFORWDATAGRAMS, /* OutForwDatagrams */
>      IPSTATS_MIB_OUTPKTS, /* OutRequests */
>      IPSTATS_MIB_OUTOCTETS, /* OutOctets */
>      /* other fields */
>      IPSTATS_MIB_INHDRERRORS, /* InHdrErrors */
>      ...
>      __IPSTATS_MIB_MAX
> };
> 
>>
>> Also think about DDOS.
>>
>> After your patch, all these 'wrong packets' will incur an expensive
>> operation on a shared and highly contended cache line,
>> effectively making the attack easier to conduct.
>>
> 
> I agree that it becomes more expensive to hit the slow counters
> due to the check of whether they are enabled or not.

What do you mean ?

The real cost is having dozens of CPUs updating the same cache lines when an SNMP counter
is a shared atomic instead of a per-cpu counter.

Make sure to test this on a configuration with 16 (or more) RX queues,
and the CPUs handling NIC IRQs spread over multiple NUMA nodes.
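One way to set up such a test is to inspect and pin the NIC's per-queue IRQs by hand; a rough sketch (the interface name `eth0` and IRQ number 124 are placeholders, the real values are system-specific, and writing the affinity mask needs root):

```shell
# Find the IRQ numbers assigned to the NIC's RX queues
grep eth0 /proc/interrupts

# Pin hypothetical IRQ 124 to CPU 3 by writing a hex CPU mask,
# repeating with different masks to spread queues across NUMA nodes
echo 8 > /proc/irq/124/smp_affinity
```

Disabling irqbalance first helps, since it may otherwise rewrite these masks.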
