Date:   Sun, 25 Feb 2018 11:56:59 -0800
From:   Santosh Shilimkar <santosh.shilimkar@...cle.com>
To:     Majd Dibbiny <majd@...lanox.com>,
        Saeed Mahameed <saeedm@...lanox.com>
Cc:     "davem@...emloft.net" <davem@...emloft.net>,
        "dledford@...hat.com" <dledford@...hat.com>,
        Jason Gunthorpe <jgg@...lanox.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        Leon Romanovsky <leonro@...lanox.com>,
        "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
        "leon@...nel.org" <leon@...nel.org>,
        Yonatan Cohen <yonatanc@...lanox.com>
Subject: Re: [for-next 7/7] IB/mlx5: Implement fragmented completion queue
 (CQ)

On 2/24/2018 1:40 AM, Majd Dibbiny wrote:
> 
>> On Feb 23, 2018, at 9:13 PM, Saeed Mahameed <saeedm@...lanox.com> wrote:
>>
>>> On Thu, 2018-02-22 at 16:04 -0800, Santosh Shilimkar wrote:
>>> Hi Saeed
>>>
>>>> On 2/21/2018 12:13 PM, Saeed Mahameed wrote:

[...]

>>>
>>> Jason mentioned this patch to me off-list. We were seeing
>>> a similar issue with SRQs & QPs, so I am wondering whether
>>> you have any plans to make a similar change for the other
>>> resources too, so that they don't rely on higher-order page
>>> allocation for icm tables.
>>>
>>
>> Hi Santosh,
>>
>> Adding Majd,
>>
>> Which ULP is in question? How big are the QPs/SRQs you create that
>> lead to this problem?
>>
>> For icm tables we already allocate only order 0 pages:
>> see alloc_system_page() in
>> drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
>>
>> But for kernel RDMA SRQ and QP buffers there is a place for
>> improvement.
>>
>> Majd, do you know if we have any near-future plans for this?
> 
> It’s in our plans to move all the buffers to use 0-order pages.
> 
> Santosh,
> 
> Is this RDS? Do you have persistent failure with some configuration? Can you please share more information?
> 
No, the issue was seen with user verbs, and actually with the MLX4
driver. My last question was more about the icm allocation for all
the resources in both the MLX4 and MLX5 drivers.

With the MLX4 driver, we have seen corruption issues with MLX4_NO_RR
while recycling the resources. So we ended up switching back to
round-robin bitmap allocation, as it was before being changed by one
of Jack's commits, 7c6d74d23 ("mlx4_core: Roll back round robin
bitmap allocation commit for CQs, SRQs, and MPTs").
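
To make sure we are talking about the same behavior, here is a rough
toy sketch (plain userspace C, written from memory, not the actual
mlx4_core bitmap allocator) of the two policies as I understand them:
round robin keeps advancing past the last allocated id, while no-RR
always reuses the lowest free id.

/* Toy illustration only -- not the mlx4_core bitmap code. */
#include <stdbool.h>
#include <stdio.h>

#define NBITS 16

static bool used[NBITS];
static int last = -1;			/* round-robin search hint */

/* Round robin: start searching just past the last allocated id. */
static int alloc_rr(void)
{
	int i;

	for (i = 0; i < NBITS; i++) {
		int bit = (last + 1 + i) % NBITS;

		if (!used[bit]) {
			used[bit] = true;
			last = bit;
			return bit;
		}
	}
	return -1;
}

/* No round robin: always reuse the lowest free id. */
static int alloc_lowest(void)
{
	int bit;

	for (bit = 0; bit < NBITS; bit++) {
		if (!used[bit]) {
			used[bit] = true;
			return bit;
		}
	}
	return -1;
}

int main(void)
{
	int i;

	/* Allocate ids 0..3, free them, then allocate once more. */
	for (i = 0; i < 4; i++)
		alloc_rr();
	for (i = 0; i < 4; i++)
		used[i] = false;
	printf("round robin reuses id %d\n", alloc_rr());	/* prints 4 */

	for (i = 0; i < NBITS; i++)
		used[i] = false;
	for (i = 0; i < 4; i++)
		alloc_lowest();
	for (i = 0; i < 4; i++)
		used[i] = false;
	printf("no-RR reuses id %d\n", alloc_lowest());		/* prints 0 */
	return 0;
}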

With the default round robin, the corruption issue went away, but it
has the undesired effect of bloating the icm tables until you hit the
resource limit, which means more memory fragmentation. Since these
resources make use of higher-order allocations, in fragmented-memory
scenarios we see contention on the mm lock for seconds, because the
compaction layer is trying to stitch pages together, which takes time.

If these allocations don't make use of higher-order pages, the issue
can certainly be avoided, hence the reason behind the question.
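
Roughly the direction I have in mind, sketched below with generic
kernel APIs (alloc_page()/__free_page()); this is only an illustration
of backing a buffer with an array of order-0 pages, not the actual
mlx4/mlx5 code:

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/slab.h>

/* Illustrative only: a buffer backed by discrete order-0 pages. */
struct frag_buf {
	struct page **pages;
	int npages;
};

static int frag_buf_alloc(struct frag_buf *buf, size_t size)
{
	int i;

	buf->npages = DIV_ROUND_UP(size, PAGE_SIZE);
	buf->pages = kcalloc(buf->npages, sizeof(*buf->pages), GFP_KERNEL);
	if (!buf->pages)
		return -ENOMEM;

	for (i = 0; i < buf->npages; i++) {
		/* Order-0 only: fragmented memory never forces compaction. */
		buf->pages[i] = alloc_page(GFP_KERNEL);
		if (!buf->pages[i])
			goto err_free;
	}
	return 0;

err_free:
	while (--i >= 0)
		__free_page(buf->pages[i]);
	kfree(buf->pages);
	return -ENOMEM;
}

static void frag_buf_free(struct frag_buf *buf)
{
	int i;

	for (i = 0; i < buf->npages; i++)
		__free_page(buf->pages[i]);
	kfree(buf->pages);
}

The cost is that the consumer has to treat the buffer as page-sized
fragments rather than one flat contiguous region, which, as far as I
understand, is what this series does for the CQE buffers.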

Of course, we wouldn't have ended up with this issue if 'MLX4_NO_RR'
worked without corruption :-)

Regards,
Santosh