Open Source and information security mailing list archives
 
Date:   Thu, 27 May 2021 17:03:23 +0800
From:   Yunsheng Lin <linyunsheng@...wei.com>
To:     Jason Wang <jasowang@...hat.com>, <davem@...emloft.net>,
        <kuba@...nel.org>
CC:     <will@...nel.org>, <peterz@...radead.org>, <paulmck@...nel.org>,
        <linux-kernel@...r.kernel.org>, <netdev@...r.kernel.org>,
        <mst@...hat.com>, <brouer@...hat.com>
Subject: Re: [PATCH net-next] ptr_ring: make __ptr_ring_empty() checking more
 reliable

On 2021/5/27 16:05, Jason Wang wrote:
> 
> On 2021/5/27 3:21 PM, Yunsheng Lin wrote:
>> On 2021/5/27 14:53, Jason Wang wrote:
>>> On 2021/5/27 2:07 PM, Yunsheng Lin wrote:
>>>> On 2021/5/27 12:57, Jason Wang wrote:
>>>>> On 2021/5/26 8:29 PM, Yunsheng Lin wrote:
>>>>>> Currently r->queue[] is cleared after r->consumer_head is moved
>>>>>> forward, which makes the __ptr_ring_empty() checking called in
>>>>>> page_pool_refill_alloc_cache() unreliable if the checking is done
>>>>>> after the r->queue clearing and before the consumer_head moving
>>>>>> forward.
>>>>>>
>>>>>> Move the r->queue[] clearing after consumer_head moving forward
>>>>>> to make __ptr_ring_empty() checking more reliable.
>>>>> If I understand this correctly, this can only happen if you run __ptr_ring_empty() in parallel with ptr_ring_discard_one().
>>>> Yes.
>>>>
>>>>> I think those two need to be serialized. Or did I miss anything?
>>>> As per the comment below in __ptr_ring_discard_one(): if the above is
>>>> true, then we do not need to keep consumer_head valid at all times, right?
>>>>
>>>>
>>>>      /* Note: we must keep consumer_head valid at all times for __ptr_ring_empty
>>>>       * to work correctly.
>>>>       */
>>>
>>> I'm not sure I understand. But my point is that you need to synchronize __ptr_ring_discard_one() and __ptr_ring_empty(), as explained in the comment above __ptr_ring_empty():
>> I am saying that if __ptr_ring_empty() and __ptr_ring_discard_one() are
>> always serialized, then it seems the below commit is unnecessary?
> 
> 
> Just to make sure we are on the same page. What I really meant is "synchronized", not "serialized". So they can be called at the same time but need synchronization.
> 
> 
>>
>> 406de7555424 ("ptr_ring: keep consumer_head valid at all times")
> 
> 
> This is still needed in this case.
> 
> 
>>
>>> /*
>>>   * Test ring empty status without taking any locks.
>>>   *
>>>   * NB: This is only safe to call if ring is never resized.
>>>   *
>>>   * However, if some other CPU consumes ring entries at the same time, the value
>>>   * returned is not guaranteed to be correct.
>>>   *
>>>   * In this case - to avoid incorrectly detecting the ring
>>>   * as empty - the CPU consuming the ring entries is responsible
>>>   * for either consuming all ring entries until the ring is empty,
>>>   * or synchronizing with some other CPU and causing it to
>>>   * re-test __ptr_ring_empty and/or consume the ring entries
>>>   * after the synchronization point.
>> I am not sure I understand what "incorrectly detecting the ring as empty"
>> means. Is it because of the data race described in the commit log?
> 
> 
> It means "the ring might be empty but __ptr_ring_empty() returns false".

But the ring might be non-empty while __ptr_ring_empty() returns true,
due to the data race described in the commit log:)

> 
> 
>> Or some other data race? I can not think of any other data race if
>> consuming and __ptr_ring_empty() are serialized:)
>>
>> I agree that __ptr_ring_empty() checking is not totally reliable
>> without taking r->consumer_lock, which is why I use "more reliable"
>> in the title:)
> 
> 
> Is __ptr_ring_empty() synchronized with the consumer in your case? If yes, have you done some benchmark to see the difference?
> 
> Have a look at page pool: this only helps when multiple refill requests happen in parallel, which can make some of the refills return early if the ring has been consumed.
> 
> This is the slow path and I'm not sure we would see any difference. If one of the requests runs faster, then the following requests will go through the fast path.

Yes, I agree there may not be any difference.
But it is better to make it more reliable, right?

> 
> If it really helps, can we do it more simply by:
> 
> 
> diff --git a/include/linux/ptr_ring.h b/include/linux/ptr_ring.h
> index 808f9d3ee546..c3a72dc77337 100644
> --- a/include/linux/ptr_ring.h
> +++ b/include/linux/ptr_ring.h
> @@ -264,6 +264,10 @@ static inline void __ptr_ring_discard_one(struct ptr_ring *r)
>         int consumer_head = r->consumer_head;
>         int head = consumer_head++;
> 
> +        /* matching READ_ONCE in __ptr_ring_empty for lockless tests */
> +       WRITE_ONCE(r->consumer_head,
> +                   consumer_head < r->size ? consumer_head : 0);
> +
>         /* Once we have processed enough entries invalidate them in
>          * the ring all at once so producer can reuse their space in the ring.
>          * We also do this when we reach end of the ring - not mandatory
> @@ -281,11 +285,8 @@ static inline void __ptr_ring_discard_one(struct ptr_ring *r)
>                 r->consumer_tail = consumer_head;
>         }
>         if (unlikely(consumer_head >= r->size)) {

What I am thinking is that we can remove the above test in the likely
case by moving the wrap handling into the body of
"if (unlikely(consumer_head - r->consumer_tail >= r->batch ||
consumer_head >= r->size))".

Or is there any specific reason why we keep the test in the likely
path?
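Roughly, the structure I have in mind on top of your diff (untested
pseudocode, only to show where the wrap handling would go):

```
static inline void __ptr_ring_discard_one(struct ptr_ring *r)
{
	int consumer_head = r->consumer_head;
	int head = consumer_head++;

	/* matching READ_ONCE in __ptr_ring_empty for lockless tests */
	WRITE_ONCE(r->consumer_head,
		   consumer_head < r->size ? consumer_head : 0);

	if (unlikely(consumer_head - r->consumer_tail >= r->batch ||
		     consumer_head >= r->size)) {
		while (likely(head >= r->consumer_tail))
			r->queue[head--] = NULL;

		/* wrap handling folded into the unlikely path: the
		 * wrap condition is already part of the branch above */
		r->consumer_tail = consumer_head < r->size ?
				   consumer_head : 0;
	}
}
```

so the likely path only tests one condition.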


> -               consumer_head = 0;
>                 r->consumer_tail = 0;
>         }
> -       /* matching READ_ONCE in __ptr_ring_empty for lockless tests */
> -       WRITE_ONCE(r->consumer_head, consumer_head);
>  }
> 
>  static inline void *__ptr_ring_consume(struct ptr_ring *r)
> 
> 
> Thanks
> 
> 
>>
>>
>>
>>>   *
>>>   * Note: callers invoking this in a loop must use a compiler barrier,
>>>   * for example cpu_relax().
>>>   */
>>>
>>> Thanks
>>>
>>>
>>>
> 
> 
> 
