Message-ID: <4f9f5f04-039c-ce56-3456-4f04022f80e8@gmail.com>
Date: Wed, 29 Mar 2017 10:13:07 +0300
From: Tariq Toukan <ttoukan.linux@...il.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: Mel Gorman <mgorman@...hsingularity.net>,
Pankaj Gupta <pagupta@...hat.com>,
Tariq Toukan <tariqt@...lanox.com>, netdev@...r.kernel.org,
akpm@...ux-foundation.org, linux-mm <linux-mm@...ck.org>,
Saeed Mahameed <saeedm@...lanox.com>
Subject: Re: Page allocator order-0 optimizations merged
On 28/03/2017 9:24 PM, Jesper Dangaard Brouer wrote:
> On Tue, 28 Mar 2017 19:05:12 +0300
> Tariq Toukan <ttoukan.linux@...il.com> wrote:
>
>> On 28/03/2017 10:32 AM, Tariq Toukan wrote:
>>>
>>>
>>> On 27/03/2017 4:32 PM, Mel Gorman wrote:
>>>> On Mon, Mar 27, 2017 at 02:39:47PM +0200, Jesper Dangaard Brouer wrote:
>>>>> On Mon, 27 Mar 2017 10:55:14 +0200
>>>>> Jesper Dangaard Brouer <brouer@...hat.com> wrote:
>>>>>
>>>>>> A possible solution would be to use the local_bh_{disable,enable}
>>>>>> calls instead of the {preempt_disable,enable} calls. But it is slower;
>>>>>> using the numbers from [1] (19 vs 11 cycles), the expected cycles
>>>>>> saving is 38-19=19.
>>>>>>
>>>>>> The problematic part of using local_bh_enable is that this adds a
>>>>>> softirq/bottom-halves rescheduling point (as it checks for pending
>>>>>> BHs). Thus, this might affect real workloads.
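
For reference, the shape of the change under discussion looks roughly like
the sketch below. This is a simplified illustration, not the actual patch:
rmqueue_pcplist() and the pcp fields follow the page allocator of this era,
but the body is abbreviated and the __rmqueue_pcplist() argument list is
not exact.

    static struct page *rmqueue_pcplist(struct zone *zone, int migratetype)
    {
            struct per_cpu_pages *pcp;
            struct list_head *list;
            struct page *page;

            /*
             * Previously preempt_disable(): cheaper (~11 vs ~19 cycles),
             * but unsafe when this fast path is also entered from softirq
             * context, e.g. driver RX page refill. local_bh_disable()
             * blocks BHs and implies no preemption.
             */
            local_bh_disable();
            pcp = &this_cpu_ptr(zone->pageset)->pcp;
            list = &pcp->lists[migratetype];
            page = __rmqueue_pcplist(zone, migratetype, pcp, list);
            /*
             * The rescheduling point mentioned above: local_bh_enable()
             * checks for pending softirqs and may run them here.
             */
            local_bh_enable();
            return page;
    }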
>>>>>
>>>>> I implemented this solution in the patch below... and tested it on mlx5
>>>>> at 50G with the driver page-recycling manually disabled. It works for me.
>>>>>
>>>>> To Mel, which do you prefer... a partial revert or something like this?
>>>>>
>>>>
>>>> If Tariq confirms it works for him as well, this looks like a far safer patch
>>>
>>> Great.
>>> I will test Jesper's patch today in the afternoon.
>>>
>>
>> It looks very good!
>> I get line rate (94 Gbits/sec) with 8 streams, in comparison to less than
>> 55 Gbits/sec before.
>
> Just confirming, this is when you have disabled mlx5 driver
> page-recycling, right?
>
>
Right.
This is a great result!
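
For context: with the driver's page-recycling disabled, every RX refill
goes to the page allocator and every completed page is freed back to it,
so each packet exercises the order-0 fast path from softirq context. A
rough sketch in generic driver terms; rx_refill_get_page(), struct rx_ring
and ring_cache_get() are hypothetical names, not the actual mlx5 code:

    /* Hypothetical refill helper, not actual mlx5 code. */
    static struct page *rx_refill_get_page(struct rx_ring *ring)
    {
    #ifdef USE_PAGE_RECYCLING
            struct page *page = ring_cache_get(ring); /* hypothetical */

            if (page)
                    return page; /* recycled; allocator not touched */
    #endif
            /* With recycling disabled, every refill takes this path. */
            return dev_alloc_page(); /* order-0, GFP_ATOMIC-safe in softirq */
    }

This is why the benchmark above is a good stress test of the per-cpu
allocator fast path.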
>>>> than having a dedicated IRQ-safe queue. Your concern about the BH
>>>> scheduling point is valid but if it's proven to be a problem, there is
>>>> still the option of a partial revert.
>