Message-ID: <8c6eb7b9-c9fc-60f6-c890-5d69c0cc1cd9@annapurnalabs.com>
Date:   Mon, 5 Dec 2016 20:29:18 +0200
From:   Netanel Belgazal <netanel@...apurnalabs.com>
To:     Eric Dumazet <eric.dumazet@...il.com>
Cc:     linux-kernel@...r.kernel.org, davem@...emloft.net,
        netdev@...r.kernel.org, dwmw@...zon.com, zorik@...apurnalabs.com,
        alex@...apurnalabs.com, saeed@...apurnalabs.com, msw@...zon.com,
        aliguori@...zon.com, nafea@...apurnalabs.com
Subject: Re: [PATCH V2 net 10/20] net/ena: remove redundant logic in napi
 callback for busy poll mode



On 12/05/2016 07:45 AM, Eric Dumazet wrote:
> On Sun, 2016-12-04 at 15:19 +0200, Netanel Belgazal wrote:
>> sk_busy_loop can call the napi callback a few million times per second,
>> and each call unmasks the interrupt.
>> We want to reduce the number of unmasks.
>>
>> Add an atomic variable that tells the napi handler whether
>> it was invoked from irq context or not.
>> Unmask the interrupt only from irq context.
>>
>> A scenario where the driver is left with a missed unmask isn't feasible.
>> When ena_intr_msix_io is called, the driver is in one of two states:
>> 1) Before napi completes and calls napi_complete_done
>> 2) After napi_complete_done has been called
>>
>> In the former case, napi will unmask the interrupt as needed.
>> In the latter case, napi_complete_done has removed napi from the schedule
>> list, so napi will be rescheduled (by ena_intr_msix_io) and the interrupt
>> will be unmasked as desired on the second napi call.
>>
>> Signed-off-by: Netanel Belgazal <netanel@...apurnalabs.com>
>> ---
>
> This looks very complicated to me.
>
> I guess you missed the recent patches that happened on net-next ?

You are correct; I missed those patches.
Using the napi_complete_done() return value is much better.
I'll rework my patch.

>
> 2e713283751f494596655d9125c168aeb913f71d net/mlx4_en: use napi_complete_done() return value
> 364b6055738b4c752c30ccaaf25c624e69d76195 net: busy-poll: return busypolling status to drivers
> 21cb84c48ca0619181106f0f44f3802a989de024 net: busy-poll: remove need_resched() from sk_can_busy_loop()
> 217f6974368188fd8bd7804bf5a036aa5762c5e4 net: busy-poll: allow preemption in sk_busy_loop()
>
> napi_complete_done() return code can be used by a driver,
> no need to add yet another atomic operation in fast path.
>
> Anyway, this looks wrong :
>
> @@ -1186,6 +1201,7 @@ static irqreturn_t ena_intr_msix_io(int irq, void *data)
>   {
>          struct ena_napi *ena_napi = data;
>   
> +       atomic_set(&ena_napi->unmask_interrupt, 1);
>          napi_schedule(&ena_napi->napi);
>   
> You probably wanted :
>
> if (napi_schedule_prep(n)) {
> 	atomic_set(&ena_napi->unmask_interrupt, 1);
> 	__napi_schedule(n);
> }
>
>
>
> Please rework this napi poll using core infrastructure.
>
> busypoll logic should be centralized, not reimplemented in different ways in a driver.
>
> Thanks.
>
>
>
