Message-ID: <89066a75-43db-0f62-f171-70b0abaa8ea0@oracle.com>
Date: Thu, 25 Jan 2018 14:25:04 +0800
From: "jianchao.wang" <jianchao.w.wang@...cle.com>
To: Eric Dumazet <eric.dumazet@...il.com>,
Tariq Toukan <tariqt@...lanox.com>,
Jason Gunthorpe <jgg@...pe.ca>
Cc: junxiao.bi@...cle.com, netdev@...r.kernel.org,
linux-rdma@...r.kernel.org, linux-kernel@...r.kernel.org,
Saeed Mahameed <saeedm@...lanox.com>
Subject: Re: [PATCH] net/mlx4_en: ensure rx_desc updating reaches HW before
prod db updating

Hi Eric

Thanks for your kind response and suggestions.
That's really appreciated.

Jianchao
On 01/25/2018 11:55 AM, Eric Dumazet wrote:
> On Thu, 2018-01-25 at 11:27 +0800, jianchao.wang wrote:
>> Hi Tariq
>>
>> On 01/22/2018 10:12 AM, jianchao.wang wrote:
>>>>> On 19/01/2018 5:49 PM, Eric Dumazet wrote:
>>>>>> On Fri, 2018-01-19 at 23:16 +0800, jianchao.wang wrote:
>>>>>>> Hi Tariq
>>>>>>>
>>>>>>> Very sad that the crash was reproduced again after applying the patch.
>>>>
>>>> Memory barriers vary across architectures; can you please share more details regarding the arch and the repro steps?
>>> The hardware is an HP ProLiant DL380 Gen9, BIOS P89 12/27/2015.
>>> Xen is installed; the crash occurred in Dom0.
>>> Regarding the repro steps: it is a customer's test that does heavy disk I/O over NFS storage, without any guest.
>>>
>>
>> What is the final suggestion on this?
>> If we use wmb() there, will it pull performance down?
>
> Since https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/commit/?id=dad42c3038a59d27fced28ee4ec1d4a891b28155
>
> we batch allocations, so mlx4_en_refill_rx_buffers() is not called that often.
>
> I doubt the additional wmb() will have serious impact there.
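
For reference, the ordering pattern being discussed looks roughly like the
sketch below. This is not the actual mlx4_en code; the structure and field
names (rx_ring, prod, db) are illustrative stand-ins. The point is that the
descriptor write must be globally visible before the producer/doorbell write
that tells the HW about it:

#include <linux/types.h>
#include <asm/byteorder.h>
#include <asm/barrier.h>

/* Illustrative stand-ins for the driver's RX ring state. */
struct rx_desc {
	__be64 addr;		/* buffer address consumed by HW */
};

struct rx_ring {
	struct rx_desc *desc;	/* DMA-coherent descriptor array */
	__be32 *db;		/* producer index record, also in
				 * DMA-coherent memory read by HW */
	u32 prod;		/* SW copy of the producer index */
	u32 size_mask;
};

static void refill_one(struct rx_ring *ring, dma_addr_t buf)
{
	struct rx_desc *rxd = &ring->desc[ring->prod & ring->size_mask];

	/* 1. Publish the new buffer in the descriptor. */
	rxd->addr = cpu_to_be64(buf);
	ring->prod++;

	/*
	 * 2. Order the descriptor write before the doorbell write below,
	 *    so the HW never fetches a stale descriptor.  dma_wmb() is
	 *    normally sufficient between two writes to coherent memory;
	 *    the patch in this thread proposed the stronger wmb().
	 */
	wmb();

	/* 3. Let the HW see the new producer index. */
	*ring->db = cpu_to_be32(ring->prod & 0xffff);
}

Because allocations are batched (see the commit above), this barrier runs
once per refill batch rather than once per packet, which is why the extra
wmb() is not expected to hurt performance much.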