Message-ID: <5bc2b508-11aa-ddd9-5519-0116cdb16d09@gmail.com>
Date:   Thu, 25 Oct 2018 10:32:13 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Keyur Amrutbhai Patel <keyurp@...inx.com>,
        Eric Dumazet <eric.dumazet@...il.com>
Cc:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: netif_receive_skb is taking long time


Please do not top post, and use normal quoting.

On 10/25/2018 10:22 AM, Keyur Amrutbhai Patel wrote:
> Hi Eric,
> 
> First of all, thank you for replying and shedding some light.
> 
> First step would be to read Documentation/networking/scaling.txt and see if anything there helps.
>  - This is a good article; I had gone through it. Any suggestion on RSS? How do I configure it? Do I need to take care of anything special in my NIC driver?

Just read the page and apply the various configurations.

> 
> Have you tried to profile the kernel and see if some contention or hot function appears?
> - I have added timestamps in different functions. That is how I found that ~3375 nanoseconds are spent just in "netif_receive_skb", and I don't know why. My DMA operation finishes and the descriptors are managed in less time than that.
> The current time-consuming functions are "netif_receive_skb" and "napi_alloc_skb"; these two calls are taking the maximum amount of time.
> 

So... networking spends more time in the upper stacks than in the driver.

A driver does almost nothing, just passing around the bits that the NIC put in memory.

In most workloads, a driver would not use more than 5% of total CPU cycles.

Now, if all you need is to impress your friends/boss with some
crazy number of RX packets per second,
just do not allocate skbs and do not call netif_receive_skb():
use something like XDP to drop incoming frames :)
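For illustration, a drop-everything XDP program really is this small. The sketch below is hypothetical (the function and file names are made up), and the stand-in definitions replace <linux/bpf.h> so the sketch is self-contained; a real program would include the kernel headers instead:

```c
/* Minimal sketch of an XDP program that drops every frame before any skb
 * is allocated. Build a real version with:
 *   clang -O2 -target bpf -c xdp_drop.c -o xdp_drop.o
 * and attach it with:
 *   ip link set dev eth0 xdp obj xdp_drop.o sec xdp
 */

/* Stand-ins for the pieces of <linux/bpf.h> this sketch uses */
enum xdp_action { XDP_ABORTED, XDP_DROP, XDP_PASS, XDP_TX, XDP_REDIRECT };
struct xdp_md { unsigned int data; unsigned int data_end; unsigned int data_meta; };
#define SEC(name) __attribute__((section(name), used))

SEC("xdp")
int xdp_drop_all(struct xdp_md *ctx)
{
    (void)ctx;
    /* The driver frees the frame itself: no skb allocation,
     * no netif_receive_skb() call. */
    return XDP_DROP;
}

char _license[] SEC("license") = "GPL";
```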

> Maybe use a faster CPU, or remove unneeded features like overly heavy netfilter rules.
> - I am using Intel Xeon Platinum series processors. These are among the fastest CPUs available on the market, with 64 cores: 2 CPU nodes (32 cores each).
> 
> We cannot really answer your question; you have not provided enough information.
> - Please let me know what additional details you need. We have 6 queues in HW, each mapped to an MSI-X vector, and each vector raises its interrupt on a different CPU. From the interrupt I schedule NAPI, and in the NAPI poll function I get the DMA page, construct an skb, and pass it to the network layer with "netif_receive_skb".
> 
> Let me know additional details which are required.
> 
> Regards,
> Keyur
> 
> -----Original Message-----
> From: Eric Dumazet <eric.dumazet@...il.com> 
> Sent: Thursday, October 25, 2018 10:38 PM
> To: Keyur Amrutbhai Patel <keyurp@...inx.com>; netdev@...r.kernel.org
> Subject: Re: netif_receive_skb is taking long time
> 
> On 10/25/2018 08:39 AM, Keyur Amrutbhai Patel wrote:
>> Hi,
>>
>> In my NIC driver, "netif_receive_skb" is taking too long: almost 3375 nanoseconds, which is more than the whole packet processing from the interrupt.
>>
>> Could anyone please help me understand what could be the reason behind this? How can I make it take minimal time?
>>
>> Are there any standard calls we need to follow in order to get faster performance?
>>
> 
> First step would be to read Documentation/networking/scaling.txt and see if anything there helps.
> 
> Have you tried to profile the kernel and see if some contention or hot function appears?
> 
> Maybe use a faster CPU, or remove unneeded features like overly heavy netfilter rules.
> 
> We cannot really answer your question; you have not provided enough information.
> 
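As an aside for other readers, the budget contract in a NAPI poll function like the one described above can be sketched as follows. This is a hypothetical userspace sketch: `fake_ring`, `deliver_frame()` and the stubbed `napi_complete_done_stub()` stand in for the real RX ring, napi_alloc_skb()/netif_receive_skb(), and the kernel's napi_complete_done():

```c
/* Userspace sketch of a NAPI poll function's budget handling. */

struct fake_ring {
    int ready;                      /* descriptors the NIC has filled */
};

/* Stand-in for napi_alloc_skb() + netif_receive_skb() */
static void deliver_frame(struct fake_ring *r)
{
    r->ready--;
}

/* Stand-in for the kernel's napi_complete_done(); in a real driver this
 * is where the queue's interrupt gets re-enabled. */
static void napi_complete_done_stub(struct fake_ring *r, int done)
{
    (void)r;
    (void)done;
}

/* Process at most `budget` frames. Returning a value < budget tells NAPI
 * the ring is drained, so the driver re-enables interrupts; returning
 * exactly `budget` keeps the poll loop scheduled. */
static int my_poll(struct fake_ring *ring, int budget)
{
    int done = 0;

    while (done < budget && ring->ready > 0) {
        deliver_frame(ring);
        done++;
    }
    if (done < budget)
        napi_complete_done_stub(ring, done);
    return done;
}
```

With 100 ready descriptors and a budget of 64, two calls return 64 and then 36; only the second call, which drains the ring, would re-enable interrupts.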
