Message-ID: <339a7156-9ef1-1f3c-30b8-3cc3558d124e@mellanox.com>
Date:   Sun, 21 Jan 2018 18:24:55 +0200
From:   Tariq Toukan <tariqt@...lanox.com>
To:     Tariq Toukan <tariqt@...lanox.com>,
        Eric Dumazet <eric.dumazet@...il.com>,
        "jianchao.wang" <jianchao.w.wang@...cle.com>,
        Jason Gunthorpe <jgg@...pe.ca>
Cc:     junxiao.bi@...cle.com, netdev@...r.kernel.org,
        linux-rdma@...r.kernel.org, linux-kernel@...r.kernel.org,
        Saeed Mahameed <saeedm@...lanox.com>
Subject: Re: [PATCH] net/mlx4_en: ensure rx_desc updating reaches HW before
 prod db updating



On 21/01/2018 11:31 AM, Tariq Toukan wrote:
> 
> 
> On 19/01/2018 5:49 PM, Eric Dumazet wrote:
>> On Fri, 2018-01-19 at 23:16 +0800, jianchao.wang wrote:
>>> Hi Tariq
>>>
>>> Very sad that the crash was reproduced again after applying the patch.

Memory barriers vary across architectures. Can you please share more 
details about the arch and the repro steps?
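
For reference, here is roughly how the relevant barriers expand on the two
arches most likely involved. This is a summary paraphrased from memory of
arch/x86/include/asm/barrier.h and arch/arm64/include/asm/barrier.h, not
verbatim kernel source, so please double-check against your tree:

	/*
	 * x86-64: cacheable memory is strongly ordered, so dma_wmb() only
	 * needs to stop the compiler from reordering stores, while wmb()
	 * emits a real fence instruction.
	 */
	#define dma_wmb()	barrier()	/* compiler barrier only */
	#define wmb()		asm volatile("sfence" ::: "memory")

	/*
	 * arm64: weakly ordered, so both expand to real fence instructions,
	 * differing in scope and strength.
	 */
	#define dma_wmb()	dmb(oshst)	/* outer-shareable store barrier */
	#define wmb()		dsb(st)		/* full-system store barrier */

Note that on x86-64 dma_wmb() compiles down to a pure compiler barrier, so
if the tested patch used dma_wmb(), the result mostly tells us about
compiler reordering, not about CPU or device ordering.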

>>>
>>> --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
>>> +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
>>> @@ -252,6 +252,7 @@ static inline bool mlx4_en_is_ring_empty(struct mlx4_en_rx_ring *ring)
>>>   static inline void mlx4_en_update_rx_prod_db(struct mlx4_en_rx_ring *ring)
>>>   {
>>> +    dma_wmb();
>>
>> So... is wmb() here fixing the issue?
>>
>>>       *ring->wqres.db.db = cpu_to_be32(ring->prod & 0xffff);
>>>   }
>>>
>>> I analyzed the kdump; it appears to be memory corruption.
>>>
>>> Thanks
>>> Jianchao
> 
> Hmm, this is actually consistent with the example below [1].
> 
> As I understand from the example, the dma_wmb/dma_rmb barriers are good 
> for synchronizing CPU/device accesses to the streaming-DMA-mapped buffers 
> (the descriptors, which went through the dma_map_page() API), but not for 
> the doorbell (coherent memory, typically allocated via 
> dma_alloc_coherent()), which requires the stronger wmb() barrier.
> 
> 
> [1] Documentation/memory-barriers.txt
> 
>   (*) dma_wmb();
>   (*) dma_rmb();
> 
>       These are for use with consistent memory to guarantee the ordering
>       of writes or reads of shared memory accessible to both the CPU and a
>       DMA capable device.
> 
>       For example, consider a device driver that shares memory with a device
>       and uses a descriptor status value to indicate if the descriptor
>       belongs to the device or the CPU, and a doorbell to notify it when new
>       descriptors are available:
> 
>      if (desc->status != DEVICE_OWN) {
>          /* do not read data until we own descriptor */
>          dma_rmb();
> 
>          /* read/modify data */
>          read_data = desc->data;
>          desc->data = write_data;
> 
>          /* flush modifications before status update */
>          dma_wmb();
> 
>          /* assign ownership */
>          desc->status = DEVICE_OWN;
> 
>          /* force memory to sync before notifying device via MMIO */
>          wmb();
> 
>          /* notify device of new descriptors */
>          writel(DESC_NOTIFY, doorbell);
>      }
> 
>       The dma_rmb() allows us to guarantee the device has released ownership
>       before we read the data from the descriptor, and the dma_wmb() allows
>       us to guarantee the data is written to the descriptor before the device
>       can see it now has ownership.  The wmb() is needed to guarantee that
>       the cache coherent memory writes have completed before attempting a
>       write to the cache incoherent MMIO region.
> 
>       See Documentation/DMA-API.txt for more information on consistent
>       memory.
> 
> 
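
If that reading is right, the fix belongs in the doorbell update itself,
with the stronger barrier. A minimal sketch of what that could look like in
en_rx.c (untested; whether dma_wmb() suffices or the full wmb() is required
is exactly what we are waiting on the repro results to confirm):

	static inline void mlx4_en_update_rx_prod_db(struct mlx4_en_rx_ring *ring)
	{
		/*
		 * Order the preceding rx_desc writes against the doorbell
		 * update, so the device cannot observe the new producer
		 * index before the descriptor contents are visible to it.
		 */
		wmb();
		*ring->wqres.db.db = cpu_to_be32(ring->prod & 0xffff);
	}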
>>> On 01/15/2018 01:50 PM, jianchao.wang wrote:
>>>> Hi Tariq
>>>>
>>>> Thanks for your kind response.
>>>>
>>>> On 01/14/2018 05:47 PM, Tariq Toukan wrote:
>>>>> Thanks Jianchao for your patch.
>>>>>
>>>>> And thank you guys for your reviews, much appreciated.
>>>>> I was off-work on Friday and Saturday.
>>>>>
>>>>> On 14/01/2018 4:40 AM, jianchao.wang wrote:
>>>>>> Dear all
>>>>>>
>>>>>> Thanks for the kind responses and reviews. That's really appreciated.
>>>>>>
>>>>>> On 01/13/2018 12:46 AM, Eric Dumazet wrote:
>>>>>>>> Does this need to be dma_wmb(), and should it be in
>>>>>>>> mlx4_en_update_rx_prod_db?
>>>>>>>>
>>>>>>>
>>>>>>> +1 on dma_wmb()
>>>>>>>
>>>>>>> On what architecture was the bug observed?
>>>>>>
>>>>>> This issue was observed on x86-64.
>>>>>> I will send the customer a new patch that replaces wmb() with
>>>>>> dma_wmb(), so they can confirm.
>>>>>
>>>>> +1 on dma_wmb, let us know once the customer confirms.
>>>>> Please place it within mlx4_en_update_rx_prod_db as suggested.
>>>>
>>>> Yes, I have recommended it to the customer.
>>>> Once I get the result, I will share it here.
>>>>> All other calls to mlx4_en_update_rx_prod_db are in the control/slow
>>>>> path, so I prefer to be on the safe side and care less about bulking
>>>>> the barrier.
>>>>>
>>>>> Thanks,
>>>>> Tariq
>>>>>
