Message-ID: <7d9b7e5f-a6c0-2079-90e7-c02aaeb1f4c0@redhat.com>
Date: Thu, 16 Dec 2021 11:56:59 +0100
From: David Hildenbrand <david@...hat.com>
To: Aisheng Dong <aisheng.dong@....com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"dongas86@...il.com" <dongas86@...il.com>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
Jason Liu <jason.hui.liu@....com>, Leo Li <leoyang.li@....com>,
Abel Vesa <abel.vesa@....com>,
"shawnguo@...nel.org" <shawnguo@...nel.org>,
dl-linux-imx <linux-imx@....com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"m.szyprowski@...sung.com" <m.szyprowski@...sung.com>,
"lecopzer.chen@...iatek.com" <lecopzer.chen@...iatek.com>,
"vbabka@...e.cz" <vbabka@...e.cz>,
"stable@...r.kernel.org" <stable@...r.kernel.org>,
Shijie Qin <shijie.qin@....com>
Subject: Re: [PATCH 1/2] mm: cma: fix allocation may fail sometimes
On 16.12.21 03:54, Aisheng Dong wrote:
>> From: David Hildenbrand <david@...hat.com>
>> Sent: Wednesday, December 15, 2021 8:31 PM
>>
>> On 15.12.21 09:02, Dong Aisheng wrote:
>>> We sometimes saw dma_alloc_coherent() fail when running 8 VPU decoder
>>> tests in parallel on an MX6Q SDB board.
>>>
>>> Error log:
>>> cma: cma_alloc: linux,cma: alloc failed, req-size: 148 pages, ret: -16
>>> cma: number of available pages:
>>> 3@...+20@...+12@...+4@...+32@...+17@...7+23@...3+20@...76+99@...77+
>>> 108@40852+44@...08+20@...96+108@...64+108@...20+
>>> 108@...00+108@...56+483@...61+1763@...41+1440@...12+20@49324+20@...88+
>>> 5076@...52+2304@...40+35@...41+20@...20+20@...84+
>>> 7188@...48+84@...20+7276@...52+227@...25+6371@...49=> 33161 free of
>>> 81920 total pages
>>>
>>> When the issue happened, we saw that there were still 33161 pages (129M)
>>> of free CMA memory and plenty of free slots in the CMA bitmap large
>>> enough for the 148 pages we wanted to allocate.
>>>
>>> Dumping the memory info, we found that there was still ~342M of normal
>>> memory, but only 1352K of CMA memory left in the buddy system, while a
>>> lot of pageblocks were isolated.
>>>
>>> Memory info log:
>>> Normal free:351096kB min:30000kB low:37500kB high:45000kB
>>> reserved_highatomic:0KB active_anon:98060kB inactive_anon:98948kB
>>> active_file:60864kB inactive_file:31776kB unevictable:0kB
>>> writepending:0kB present:1048576kB managed:1018328kB mlocked:0kB
>>> bounce:0kB free_pcp:220kB local_pcp:192kB free_cma:1352kB
>>> lowmem_reserve[]: 0 0 0
>>> Normal: 78*4kB (UECI) 1772*8kB (UMECI) 1335*16kB (UMECI) 360*32kB (UMECI)
>>> 65*64kB (UMCI) 36*128kB (UMECI) 16*256kB (UMCI) 6*512kB (EI) 8*1024kB (UEI)
>>> 4*2048kB (MI) 8*4096kB (EI) 8*8192kB (UI) 3*16384kB (EI) 8*32768kB (M)
>>> = 489288kB
>>>
>>> The root cause of this issue is that since commit a4efc174b382
>>> ("mm/cma.c: remove redundant cma_mutex lock"), CMA supports concurrent
>>> memory allocation. It's possible that the pageblock process A tries to
>>> allocate from has already been isolated by process B's allocation
>>> during memory migration.
>>>
>>> When there are multiple processes allocating CMA memory in parallel,
>>> it's likely that the remaining pageblocks have also been isolated, so
>>> the CMA allocation finally fails during the first round of scanning
>>> the whole available CMA bitmap.
>>
>> I already raised in a different context that we should most probably
>> convert that -EBUSY to -EAGAIN -- to differentiate an actual migration
>> problem from a simple case of concurrent allocations targeting the same
>> MAX_ORDER - 1 range.
>>
>
> Thanks for the info. Is there a patch under review?
No, and I've been too busy so far to send it out.
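
For illustration only -- a minimal, self-contained userspace model of the
idea above. No such kernel patch exists yet, and try_alloc_range() below is
just a hypothetical stand-in for alloc_contig_range() with a made-up failure
pattern; the only point is what a caller could do with two distinct return
values:

#include <errno.h>
#include <stdio.h>

/*
 * Hypothetical allocator: odd ranges are transiently isolated by a
 * concurrent allocation (-EAGAIN), range 4 contains a page that really
 * cannot be migrated (-EBUSY), everything else succeeds.
 */
static int try_alloc_range(unsigned long range)
{
	if (range == 4)
		return -EBUSY;
	if (range % 2)
		return -EAGAIN;
	return 0;
}

int main(void)
{
	for (unsigned long range = 0; range < 6; range++) {
		switch (try_alloc_range(range)) {
		case 0:
			printf("range %lu: allocated\n", range);
			break;
		case -EAGAIN:
			/* transient, concurrent isolation: retrying can succeed */
			printf("range %lu: held by a peer, retry later\n", range);
			break;
		case -EBUSY:
			/* genuine migration failure: better to move on */
			printf("range %lu: migration failed, skip\n", range);
			break;
		}
	}
	return 0;
}

With a single -EBUSY for both situations the caller cannot tell which of
the two strategies is appropriate, which is exactly what splitting the
return values would address.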
> BTW, I suspect that probably doesn't make much difference for my patch, since we may
> prefer to retry the next pageblock rather than busy-wait on the same isolated pageblock.
Makes sense. BUT as of now we isolate not only a pageblock but a
MAX_ORDER - 1 page (e.g., 2 pageblocks on x86-64 (!)). So you'll have
the same issue in that case.
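
To put rough numbers on that (assuming the usual x86-64 defaults of
MAX_ORDER = 11 and pageblock_order = 9, which are not taken from this
thread): one MAX_ORDER - 1 aligned window is 1024 pages (4M) and spans two
512-page (2M) pageblocks, so skipping ahead by a single pageblock can land
in the very same isolated window. A tiny standalone sketch of the alignment
math, with an arbitrary example PFN:

#include <stdio.h>

#define MAX_ORDER		11			 /* assumed default   */
#define MAX_ORDER_NR_PAGES	(1UL << (MAX_ORDER - 1)) /* 1024 pages == 4M  */
#define PAGEBLOCK_NR_PAGES	(1UL << 9)		 /* 512 pages == 2M   */

/* Round a PFN down to the start of its MAX_ORDER - 1 aligned window. */
static unsigned long window_start(unsigned long pfn)
{
	return pfn & ~(MAX_ORDER_NR_PAGES - 1);
}

int main(void)
{
	unsigned long pfn  = 40000;			/* arbitrary example */
	unsigned long next = pfn + PAGEBLOCK_NR_PAGES;	/* "next" pageblock  */

	printf("window of pfn %lu:            starts at %lu\n",
	       pfn, window_start(pfn));
	printf("window of next pageblock %lu: starts at %lu\n",
	       next, window_start(next));
	/* Both print 39936 here: still inside the same isolated window. */
	return 0;
}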
--
Thanks,
David / dhildenb