Message-Id: <20220302153433.719caef31bd9e99319c5e6a2@linux-foundation.org>
Date: Wed, 2 Mar 2022 15:34:33 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: wangjianxing <wangjianxing@...ngson.cn>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/1] mm/page_alloc: add scheduling point to free_unref_page_list
On Tue, 1 Mar 2022 20:38:25 -0500 wangjianxing <wangjianxing@...ngson.cn> wrote:
> Freeing a large list of pages may cause rcu_sched to be starved on
> non-preemptible kernels:
>
> rcu: rcu_sched kthread starved for 5359 jiffies! g454793 f0x0
> RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=19
> [...]
> Call Trace:
> free_unref_page_list+0x19c/0x270
> release_pages+0x3cc/0x498
> tlb_flush_mmu_free+0x44/0x70
> zap_pte_range+0x450/0x738
> unmap_page_range+0x108/0x240
> unmap_vmas+0x74/0xf0
> unmap_region+0xb0/0x120
> do_munmap+0x264/0x438
> vm_munmap+0x58/0xa0
> sys_munmap+0x10/0x20
> syscall_common+0x24/0x38

Thanks.

How did this large list of pages come about?

Will people be seeing this message in upstream kernels, or is it
specific to some caller code which you have added?

Please always include details such as this so that others can determine
whether the fix should be backported into -stable kernels.
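
The patch body itself is not quoted in this reply. For illustration only, a
minimal sketch of the kind of change the subject line describes, assuming the
conventional fix of a periodic cond_resched() in the freeing loop.
list_for_each_entry_safe(), free_unref_page() and SWAP_CLUSTER_MAX are
existing kernel symbols (free_unref_page() is internal to mm); the wrapper
name and the batch threshold here are illustrative, not taken from the patch:

    #include <linux/list.h>
    #include <linux/mm.h>
    #include <linux/sched.h>
    #include <linux/swap.h>      /* SWAP_CLUSTER_MAX */

    /* free_unref_page() lives in mm/internal.h, not a public header. */

    static void free_page_list_with_resched(struct list_head *list)
    {
            struct page *page, *next;
            unsigned int batch = 0;

            list_for_each_entry_safe(page, next, list, lru) {
                    list_del(&page->lru);
                    free_unref_page(page, 0);       /* per-page free path */

                    /*
                     * Yield periodically so that on non-preemptible
                     * kernels a very long list cannot monopolise the CPU
                     * and starve rcu_sched, as in the stall report above.
                     */
                    if (++batch >= SWAP_CLUSTER_MAX) {
                            batch = 0;
                            cond_resched();
                    }
            }
    }

Note that cond_resched() may only be called where sleeping is legal, so a
real change must place the call outside any spinlocked or IRQ-disabled
region; getting that placement right inside free_unref_page_list() is the
substance of the proposed patch.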