Message-ID: <aab854b3-d9fd-3454-c06b-01ff441dec08@suse.cz>
Date: Tue, 8 Mar 2022 17:04:59 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Andrew Morton <akpm@...ux-foundation.org>,
wangjianxing <wangjianxing@...ngson.cn>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Michal Hocko <mhocko@...e.com>,
Mel Gorman <mgorman@...hsingularity.net>
Subject: Re: [PATCH 1/1] mm/page_alloc: add scheduling point to
free_unref_page_list
On 3/3/22 00:34, Andrew Morton wrote:
> On Tue, 1 Mar 2022 20:38:25 -0500 wangjianxing <wangjianxing@...ngson.cn> wrote:
>
>> Freeing a large list of pages may cause rcu_sched to be starved on
>> non-preemptible kernels:
>>
>> rcu: rcu_sched kthread starved for 5359 jiffies! g454793 f0x0
>> RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=19
>> [...]
>> Call Trace:
>> free_unref_page_list+0x19c/0x270
>> release_pages+0x3cc/0x498
>> tlb_flush_mmu_free+0x44/0x70
>> zap_pte_range+0x450/0x738
>> unmap_page_range+0x108/0x240
>> unmap_vmas+0x74/0xf0
>> unmap_region+0xb0/0x120
>> do_munmap+0x264/0x438
>> vm_munmap+0x58/0xa0
>> sys_munmap+0x10/0x20
>> syscall_common+0x24/0x38
>
> Thanks.
>
> How did this large list of pages come about?
Looks like it came from TLB batching. But I got lost in the maze of it
trying to figure out how large the batch can grow.
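For reference, the fix being proposed boils down to a periodic scheduling
point in the loop that walks the page list, so that even a very large
batch cannot starve rcu_sched on !PREEMPT kernels. A rough sketch of the
idea only (simplified; the helper name, lock and batch constant below are
placeholders, not the exact upstream code):

	/*
	 * Sketch, not the actual patch: free the list in bounded batches
	 * and give the scheduler a chance in between.
	 * free_one_page_to_pcp() stands in for the per-page freeing that
	 * free_unref_page_list() really does.
	 */
	static void free_page_list_sketch(struct list_head *list)
	{
		struct page *page, *next;
		unsigned long flags;
		int batch_count = 0;

		local_lock_irqsave(&pagesets.lock, flags);
		list_for_each_entry_safe(page, next, list, lru) {
			list_del(&page->lru);
			free_one_page_to_pcp(page);

			if (++batch_count == SWAP_CLUSTER_MAX) {
				batch_count = 0;
				local_unlock_irqrestore(&pagesets.lock, flags);
				cond_resched();	/* the added scheduling point */
				local_lock_irqsave(&pagesets.lock, flags);
			}
		}
		local_unlock_irqrestore(&pagesets.lock, flags);
	}

The periodic unlock/relock already bounds the IRQs-off time; the
cond_resched() in between is what lets rcu_sched make progress on
non-preemptible kernels.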
> Will people be seeing this message in upstream kernels, or is it
> specific to some caller code which you have added?
>
> Please always include details such as this so that others can determine
> whether the fix should be backported into -stable kernels.
>
>