Message-ID: <20190805121314.GN7597@dhcp22.suse.cz>
Date: Mon, 5 Aug 2019 14:13:14 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Fuqian Huang <huangfq.daxian@...il.com>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: Is it safe to kmalloc a large size of memory in interrupt
handler?
On Mon 05-08-19 19:57:54, Fuqian Huang wrote:
> In the implementation of kmalloc,
> when the requested size is larger than KMALLOC_MAX_CACHE_SIZE,
> it calls kmalloc_large to allocate the memory:
> kmalloc_large -> kmalloc_order_trace -> kmalloc_order -> alloc_pages ->
> alloc_pages_current -> alloc_pages_nodemask -> get_page_from_freelist ->
> node_reclaim -> __node_reclaim -> shrink_node -> shrink_node_memcg ->
> get_scan_count
You shouldn't really get there when using GFP_NOWAIT/GFP_ATOMIC.
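The distinction matters because GFP_ATOMIC (and GFP_NOWAIT) tells the page allocator it may not sleep, so it never enters the direct-reclaim path quoted above; the trade-off is that the allocation simply fails when no suitable pages are free. A minimal kernel-style sketch of what an interrupt handler is limited to (the handler name and size are hypothetical; this fragment only builds inside a kernel tree):

```c
#include <linux/slab.h>
#include <linux/interrupt.h>

/* Hypothetical IRQ handler: GFP_ATOMIC forbids sleeping and reclaim,
 * so a large allocation like this can and will fail under memory
 * pressure -- the handler must tolerate that and drop the work. */
static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
	void *buf = kmalloc(64 * 1024, GFP_ATOMIC);

	if (!buf)
		return IRQ_HANDLED;	/* no retry, no reclaim: just drop */

	/* ... fill buf and hand it off ... */
	kfree(buf);
	return IRQ_HANDLED;
}
```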
> get_scan_count calls spin_unlock_irq, which enables local interrupts,
> but local interrupts should remain disabled inside an interrupt handler.
> So is it safe to use kmalloc to allocate a large amount of memory in an
> interrupt handler?
It will work very unreliably, because after a longer runtime larger
physically contiguous memory is generally not available without doing
compaction first. In general I would recommend using pre-allocated
buffers, or deferring the actual handling to a less restricted context
if possible.
--
Michal Hocko
SUSE Labs