Date:   Thu, 30 Mar 2023 15:36:48 +0800
From:   Yunsheng Lin <linyunsheng@...wei.com>
To:     Eric Dumazet <edumazet@...gle.com>
CC:     "David S . Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Jason Xing <kernelxing@...cent.com>, <netdev@...r.kernel.org>,
        <eric.dumazet@...il.com>
Subject: Re: [PATCH net-next 4/4] net: optimize ____napi_schedule() to avoid
 extra NET_RX_SOFTIRQ

On 2023/3/30 14:47, Yunsheng Lin wrote:
> On 2023/3/30 10:57, Eric Dumazet wrote:
>> On Thu, Mar 30, 2023 at 4:33 AM Yunsheng Lin <linyunsheng@...wei.com> wrote:
>>>
>>> On 2023/3/29 23:47, Eric Dumazet wrote:
>>>> On Wed, Mar 29, 2023 at 2:47 PM Yunsheng Lin <linyunsheng@...wei.com> wrote:
>>>>>
>>>>> On 2023/3/29 7:50, Eric Dumazet wrote:
>>>>>> ____napi_schedule() adds a napi into current cpu softnet_data poll_list,
>>>>>> then raises NET_RX_SOFTIRQ to make sure net_rx_action() will process it.
>>>>>>
>>>>>> The idea of this patch is to not raise NET_RX_SOFTIRQ when called
>>>>>> indirectly from net_rx_action(), because we can process poll_list
>>>>>> from this point, without going through the full softirq loop.
>>>>>>
>>>>>> This needs a change in net_rx_action() to make sure we restart
>>>>>> its main loop if sd->poll_list was updated without NET_RX_SOFTIRQ
>>>>>> being raised.
>>>>>>
>>>>>> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
>>>>>> Cc: Jason Xing <kernelxing@...cent.com>
>>>>>> ---
>>>>>>  net/core/dev.c | 22 ++++++++++++++++++----
>>>>>>  1 file changed, 18 insertions(+), 4 deletions(-)
>>>>>>
>>>>>> diff --git a/net/core/dev.c b/net/core/dev.c
>>>>>> index f34ce93f2f02e7ec71f5e84d449fa99b7a882f0c..0c4b21291348d4558f036fb05842dab023f65dc3 100644
>>>>>> --- a/net/core/dev.c
>>>>>> +++ b/net/core/dev.c
>>>>>> @@ -4360,7 +4360,11 @@ static inline void ____napi_schedule(struct softnet_data *sd,
>>>>>>       }
>>>>>>
>>>>>>       list_add_tail(&napi->poll_list, &sd->poll_list);
>>>>>> -     __raise_softirq_irqoff(NET_RX_SOFTIRQ);
>>>>>> +     /* If not called from net_rx_action()
>>>>>> +      * we have to raise NET_RX_SOFTIRQ.
>>>>>> +      */
>>>>>> +     if (!sd->in_net_rx_action)
>>>>>
>>>>> It seems sd->in_net_rx_action may be read/written by different CPUs at
>>>>> the same time; does it need something like READ_ONCE()/WRITE_ONCE() to
>>>>> access sd->in_net_rx_action?
>>>>
>>>> You probably missed the 2nd patch, which explains that in_net_rx_action
>>>> is only read and written by the current cpu (the one owning the percpu
>>>> var).
>>>>
>>>> Have you found a caller that does not pass
>>>> sd = this_cpu_ptr(&softnet_data) to ____napi_schedule()?
>>>
>>> You are right.
>>>
>>> The one small problem I see is that a ____napi_schedule() call from an
>>> irq handler may preempt the running net_rx_action() on the current cpu.
>>> I am not sure it is worth handling, given that irqs are expected to be
>>> disabled while net_rx_action() is running?
>>
>> And what will happen? If the interrupt comes before
>>
>> in_net_rx_action = val;
>>
>> the interrupt handler will see the old value, which is fine in all cases.
>>
>> If it comes after the assignment, the interrupt handler will see the new
>> value, because a cpu cannot reorder its own reads/writes.
>>
>> Otherwise simple things like this would fail:
>>
>> a = 2;
>> b = a;
>> assert(b == 2);
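>>
>> And whichever side of the flag update the interrupt lands on, the outcome
>> is sound. Spelling it out (illustration only, not part of the patch):
>>
>>   /* net_rx_action() about to clear the flag */
>>   sd->in_net_rx_action = false;
>>
>>   /* irq arriving before the store: the handler sees true and skips
>>    * raising NET_RX_SOFTIRQ, but net_rx_action() is still running and
>>    * re-checks poll_list after the store, so the napi is not lost.
>>    *
>>    * irq arriving after the store: the handler sees false and raises
>>    * NET_RX_SOFTIRQ as usual.
>>    */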
>>
>>
>> 1) Note that the local_irq_disable(); after the first
>>
>> sd->in_net_rx_action = true;
>>
>> in net_rx_action() already provides a strong barrier.
>>
>> 2) sd->in_net_rx_action = false before the barrier() is enough to
>> provide the needed safety for _this_ cpu.
>>
>> 3) The final sd->in_net_rx_action = false; at the end of net_rx_action()
>> is performed while hard irqs are masked.
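>>
>> Condensing the patched net_rx_action() to show where 1) - 3) apply
>> (a sketch, most details elided):
>>
>> start:
>>     sd->in_net_rx_action = true;
>>     local_irq_disable();          /* 1) strong barrier after the store */
>>     list_splice_init(&sd->poll_list, &list);
>>     local_irq_enable();
>>     ...
>>     if (list_empty(&list) && list_empty(&repoll)) {
>>         sd->in_net_rx_action = false;  /* 2) store kept before ... */
>>         barrier();                     /* ... the re-check below */
>>         if (!list_empty(&sd->poll_list))
>>             goto start;
>>     }
>>     ...
>>     local_irq_disable();
>>     ...
>>     if (list_empty(&sd->poll_list))
>>         sd->in_net_rx_action = false;  /* 3) hard irqs are masked */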
>>
>>> Do we need to protect against buggy hw or a misbehaving driver?
>>
>> If you think there is an issue, please elaborate with the exact call
>> site / interruption point, because I do not see any.
> 
> I was wondering whether load/store tearing and out-of-order execution
> could make something go wrong here.
> 
> For load/store tearing: in_net_rx_action in 'struct softnet_data' is a
> bool, so I think it should be ok here, but would it be better to make
> that explicit by using READ_ONCE()/WRITE_ONCE()?
> LWN article about load/store tearing:
> https://lwn.net/Articles/793253/
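> 
> Purely to illustrate what I meant (if it were needed at all), something
> like:
> 
>     if (!READ_ONCE(sd->in_net_rx_action))
>             __raise_softirq_irqoff(NET_RX_SOFTIRQ);
> 
> in ____napi_schedule(), paired with
> WRITE_ONCE(sd->in_net_rx_action, false) in net_rx_action().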
> 
> For out-of-order execution, I am not sure if it is really a problem for
> an irq preempting a softirq on the same CPU. For example, in the code
> below, suppose the list_empty(&sd->poll_list) check is executed
> out-of-order with "sd->in_net_rx_action = false", and the irq that calls
> ____napi_schedule() preempts us between the list_empty(&sd->poll_list)
> check and "sd->in_net_rx_action = false". Then ____napi_schedule() will
> not raise the softirq, since sd->in_net_rx_action is still true; and
> after the irq finishes, since list_empty(&sd->poll_list) has already
> been checked, we may not goto 'start' in net_rx_action().
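> 
> Roughly, the (hypothetical) interleaving I was worried about:
> 
>     net_rx_action() on cpu X           irq handler on cpu X
>     ------------------------           --------------------
>     list_empty(&sd->poll_list)
>       [check hoisted before the store]
>                                        list_add_tail(&napi->poll_list, ...)
>                                        in_net_rx_action still true, so
>                                        NET_RX_SOFTIRQ is not raised
>     sd->in_net_rx_action = false;
>     => napi left on poll_list with no softirq pending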

Going by the LWN article above, it seems barrier() is enough?

"As it turns out, this isn't a problem. Modern machines have
"exact exceptions" and "exact interrupts", meaning that any
interrupt or exception will appear to have happened at a
specific place in the instruction stream. Consequently,
the handler will see the effect of all prior instructions,
but won't see the effect of any subsequent instructions."

Thanks for clarifying.

> 
> 
> +				sd->in_net_rx_action = false;
> +				barrier();
> +				/* We need to check if ____napi_schedule()
> +				 * had refilled poll_list while
> +				 * sd->in_net_rx_action was true.
> +				 */
> +				if (!list_empty(&sd->poll_list))
> 
> 
> 
>>
>>
>>>
>>>>
>>>>
>>>>
>>>>>
>>>>>> +             __raise_softirq_irqoff(NET_RX_SOFTIRQ);
>>>>>>  }
>>>>>>
>>>>>>  #ifdef CONFIG_RPS
>>>>>> @@ -6648,6 +6652,7 @@ static __latent_entropy void net_rx_action(struct softirq_action *h)
>>>>>>       LIST_HEAD(list);
>>>>>>       LIST_HEAD(repoll);
>>>>>>
>>>>>> +start:
>>>>>>       sd->in_net_rx_action = true;
>>>>>>       local_irq_disable();
>>>>>>       list_splice_init(&sd->poll_list, &list);
>>>>>> @@ -6659,9 +6664,18 @@ static __latent_entropy void net_rx_action(struct softirq_action *h)
>>>>>>               skb_defer_free_flush(sd);
>>>>>>
>>>>>>               if (list_empty(&list)) {
>>>>>> -                     sd->in_net_rx_action = false;
>>>>>> -                     if (!sd_has_rps_ipi_waiting(sd) && list_empty(&repoll))
>>>>>> -                             goto end;
>>>>>> +                     if (list_empty(&repoll)) {
>>>>>> +                             sd->in_net_rx_action = false;
>>>>>> +                             barrier();
>>>>>
>>>>> Do we need a stronger barrier to prevent out-of-order execution
>>>>> by the cpu?
>>>>
>>>> We do not need more than barrier() to make sure the local cpu won't
>>>> move this write after the following code.
>>>
>>> Is there any reason why we need the barrier() if we are not depending
>>> on how list_empty() is coded?
>>> It is not obvious to me, at least :)
>>>
>>>>
>>>> It should not, even without the barrier(), because of the way
>>>> list_empty() is coded,
>>>> but a barrier() makes things a bit more explicit.
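>>>>
>>>> For reference, list_empty() in include/linux/list.h reads:
>>>>
>>>>   static inline int list_empty(const struct list_head *head)
>>>>   {
>>>>           return READ_ONCE(head->next) == head;
>>>>   }
>>>>
>>>> The READ_ONCE() makes the load a volatile access, which in practice the
>>>> compiler does not hoist above the preceding store.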
>>>
>>> In that case, a comment explaining that may help a lot.
>>>
>>> Thanks.
>>>
>>>>
>>>>> Do we need a barrier between list_add_tail() and the
>>>>> sd->in_net_rx_action check in ____napi_schedule() to pair with the
>>>>> above barrier?
>>>>
>>>> I do not think so.
>>>>
>>>> While in ____napi_schedule(), sd->in_net_rx_action is stable,
>>>> because we run with hardware IRQs masked.
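>>>>
>>>> For example, __napi_schedule() in net/core/dev.c does:
>>>>
>>>>   void __napi_schedule(struct napi_struct *n)
>>>>   {
>>>>           unsigned long flags;
>>>>
>>>>           local_irq_save(flags);
>>>>           ____napi_schedule(this_cpu_ptr(&softnet_data), n);
>>>>           local_irq_restore(flags);
>>>>   }
>>>>
>>>> and the *_irqoff variants assume hard irqs are already masked by the
>>>> caller.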
>>>>
>>>> Thanks.
