Message-ID: <f0bfa913-1837-6c5d-6e77-2a188fd094e1@oracle.com>
Date:   Fri, 1 Jun 2018 15:05:26 -0700
From:   Qing Huang <qing.huang@...cle.com>
To:     Michal Hocko <mhocko@...nel.org>
Cc:     Eric Dumazet <eric.dumazet@...il.com>,
        David Miller <davem@...emloft.net>, tariqt@...lanox.com,
        haakon.bugge@...cle.com, yanjun.zhu@...cle.com,
        netdev@...r.kernel.org, linux-rdma@...r.kernel.org,
        linux-kernel@...r.kernel.org, gi-oh.kim@...fitbricks.com,
        "santosh.shilimkar@...cle.com" <santosh.shilimkar@...cle.com>
Subject: Re: [PATCH V4] mlx4_core: allocate ICM memory in page size chunks



On 6/1/2018 12:31 AM, Michal Hocko wrote:
> On Thu 31-05-18 19:04:46, Qing Huang wrote:
>>
>> On 5/31/2018 2:10 AM, Michal Hocko wrote:
>>> On Thu 31-05-18 10:55:32, Michal Hocko wrote:
>>>> On Thu 31-05-18 04:35:31, Eric Dumazet wrote:
>>> [...]
>>>>> I merely copied/pasted from alloc_skb_with_frags() :/
>>>> I will have a look at it. Thanks!
>>> OK, so this is an example of incremental development ;).
>>>
>>> __GFP_NORETRY was added by ed98df3361f0 ("net: use __GFP_NORETRY for
>>> high order allocations") to prevent the OOM killer. Yet this was not
>>> enough, because fb05e7a89f50 ("net: don't wait for order-3 page
>>> allocation") didn't want excessive reclaim for non-costly orders, so it
>>> made the allocation completely NOWAIT while leaving __GFP_NORETRY in
>>> place, which is now redundant. Should I send a patch?
>>>
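
Side note for my own understanding: the mask I read out of those two commits
ends up roughly as below (a paraphrase with a made-up helper name, not the
actual net code). With direct reclaim masked out, the high-order attempt
already fails fast, so __GFP_NORETRY no longer changes anything.

#include <linux/gfp.h>

/* Paraphrase only: high-order attempt with direct reclaim masked out. */
static struct page *try_high_order_noreclaim(gfp_t gfp, unsigned int order)
{
	return alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
			   __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY,
			   order);
}
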
>> Just curious, how about the GFP_ATOMIC flag? Would it work in a similar
>> fashion? We experimented with it a bit in the past but it seemed to cause
>> other issues in our tests. :-)
> GFP_ATOMIC is a non-sleeping (aka no reclaim) context with access to
> memory reserves. So the risk is that you deplete those reserves and
> cause issues for other subsystems which need them as well.
>
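
Understood. For clarity, this is the difference as I picture it (illustration
only, with a hypothetical helper, not our driver code): GFP_ATOMIC never
sleeps and may dip into the reserves, while GFP_KERNEL | __GFP_NORETRY may
reclaim/compact a little but gives up early and leaves the reserves alone.

#include <linux/gfp.h>
#include <linux/mm.h>

/* Illustration only: pick between the two behaviours discussed above. */
static void *alloc_chunk(size_t size, bool atomic_ctx)
{
	gfp_t gfp = atomic_ctx ? GFP_ATOMIC
			       : GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN;

	return (void *)__get_free_pages(gfp, get_order(size));
}
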
>> By the way, we didn't encounter any OOM killer events. It seemed that
>> mlx4_alloc_icm() triggered the slowpath. We still had about 2GB of free
>> memory, but it was highly fragmented.
> The compaction was able to make a reasonable forward progress for you.
> But considering that mlx4_alloc_icm is called with GFP_KERNEL resp.
> GFP_HIGHUSER, the OOM killer is clearly possible as long as the order is
> lower than 4.

The allocation was 256KB, so the order was much higher than 4. Compaction
seemed to be the root cause of our problem: it took too long to finish its
work, which put mlx4_alloc_icm to sleep in a heavily fragmented memory
situation. Will the __GFP_NORETRY flag avoid the compaction ops and fail the
256KB allocation immediately, so that mlx4_alloc_icm can enter the adjustable
lower-order allocation code path quickly?
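
To make sure I'm asking about the right thing, the fallback pattern I have in
mind is roughly the sketch below (a hypothetical helper, not the actual
mlx4_alloc_icm code): try the large order with __GFP_NORETRY so a fragmented
zone fails fast instead of compacting for a long time, then drop the order
and retry, down to single pages.

#include <linux/gfp.h>

/* Sketch only: fail fast at high orders, fall back to smaller chunks. */
static struct page *alloc_icm_fallback(gfp_t gfp, int order)
{
	struct page *page;

	for (; order > 0; order--) {
		page = alloc_pages(gfp | __GFP_NORETRY | __GFP_NOWARN, order);
		if (page)
			return page;
	}
	/* Last resort: a single page without __GFP_NORETRY, let it reclaim. */
	return alloc_pages(gfp, 0);
}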

Thanks.

>
>>   #0 [ffff8801f308b380] remove_migration_pte at ffffffff811f0e0b
>>   #1 [ffff8801f308b3e0] rmap_walk_file at ffffffff811cb890
>>   #2 [ffff8801f308b440] rmap_walk at ffffffff811cbaf2
>>   #3 [ffff8801f308b450] remove_migration_ptes at ffffffff811f0db0
>>   #4 [ffff8801f308b490] __unmap_and_move at ffffffff811f2ea6
>>   #5 [ffff8801f308b4e0] unmap_and_move at ffffffff811f2fc5
>>   #6 [ffff8801f308b540] migrate_pages at ffffffff811f3219
>>   #7 [ffff8801f308b5c0] compact_zone at ffffffff811b707e
>>   #8 [ffff8801f308b650] compact_zone_order at ffffffff811b735d
>>   #9 [ffff8801f308b6e0] try_to_compact_pages at ffffffff811b7485
>> #10 [ffff8801f308b770] __alloc_pages_direct_compact at ffffffff81195f96
>> #11 [ffff8801f308b7b0] __alloc_pages_slowpath at ffffffff811978a1
>> #12 [ffff8801f308b890] __alloc_pages_nodemask at ffffffff81197ec1
>> #13 [ffff8801f308b970] alloc_pages_current at ffffffff811e261f
>> #14 [ffff8801f308b9e0] mlx4_alloc_icm at ffffffffa01f39b2 [mlx4_core]
>>
>> Thanks!
