Message-ID: <20110714150700.GC23587@infradead.org>
Date: Thu, 14 Jul 2011 11:07:00 -0400
From: Christoph Hellwig <hch@...radead.org>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: Christoph Hellwig <hch@...radead.org>,
Mel Gorman <mgorman@...e.de>, Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>, XFS <xfs@....sgi.com>,
Dave Chinner <david@...morbit.com>,
Johannes Weiner <jweiner@...hat.com>,
Wu Fengguang <fengguang.wu@...el.com>, Jan Kara <jack@...e.cz>,
Rik van Riel <riel@...hat.com>,
Minchan Kim <minchan.kim@...il.com>
Subject: Re: [PATCH 1/5] mm: vmscan: Do not writeback filesystem pages in
direct reclaim
On Thu, Jul 14, 2011 at 01:46:34PM +0900, KAMEZAWA Hiroyuki wrote:
> > XFS and btrfs already disable writeback from memcg context, as does ext4
> > for the typical non-overwrite workloads, and none has fallen apart.
> >
> > In fact there's no way we can enable them as the memcg calling contexts
> > tend to have massive stack usage.
> >
>
> Hmm, XFS/btrfs adds pages to radix-tree in deep stack ?
We're using a fairly deep stack in normal buffered read/write,
which is almost 100% common code.  It's not just the long callchain
(see below), but also that we put the unneeded kiocb and an array
of I/O vectors (iovecs) on the stack:
 vfs_writev
  do_readv_writev
   do_sync_write
    generic_file_aio_write
     __generic_file_aio_write
      generic_file_buffered_write
       generic_perform_write
        block_write_begin
         grab_cache_page_write_begin
          add_to_page_cache_lru
           add_to_page_cache
            add_to_page_cache_locked
             mem_cgroup_cache_charge
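
Roughly, as a simplified sketch (not the actual fs/read_write.c code,
and the function name is made up), that on-stack state looks like this:

/*
 * Illustrative sketch only: it shows the kind of on-stack state the
 * sync write path carries -- a struct kiocb plus iovec(s); the
 * readv/writev entry points additionally keep a small UIO_FASTIOV-sized
 * iovec array on the stack -- before descending through the chain above.
 */
#include <linux/fs.h>
#include <linux/aio.h>
#include <linux/uio.h>

static ssize_t sketch_sync_write(struct file *filp, const char __user *buf,
				 size_t len, loff_t *ppos)
{
	struct iovec iov = {		/* on-stack I/O vector */
		.iov_base = (void __user *)buf,
		.iov_len  = len,
	};
	struct kiocb kiocb;		/* on-stack kiocb, even though no AIO is involved */
	ssize_t ret;

	init_sync_kiocb(&kiocb, filp);
	kiocb.ki_pos = *ppos;

	/*
	 * ->aio_write() then descends through generic_file_aio_write(),
	 * generic_perform_write(), ->write_begin() and so on -- every one
	 * of those frames is consumed before memcg charging or reclaim
	 * is ever entered.
	 */
	ret = filp->f_op->aio_write(&kiocb, &iov, 1, kiocb.ki_pos);
	if (ret > 0)
		*ppos = kiocb.ki_pos;
	return ret;
}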
This path might additionally be entered from in-kernel callers like
nfsd, which have even more stack space in use by that point.  And at
this point we only enter the memcg/reclaim code, which, the last time
I looked at a stack trace, ate up about another 3k of stack space.
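
For reference, the "disable writeback from reclaim context" behaviour
mentioned above comes down to an early bail-out in ->writepage.  A
simplified sketch of that pattern (not the actual xfs_vm_writepage()
source, and the function name is made up):

/*
 * Sketch: refuse to write back a page when called from direct reclaim
 * (PF_MEMALLOC set, but not kswapd), where very little stack is left.
 */
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/writeback.h>

static int sketch_writepage(struct page *page, struct writeback_control *wbc)
{
	if ((current->flags & (PF_MEMALLOC | PF_KSWAPD)) == PF_MEMALLOC) {
		/* redirty and leave the I/O to the flusher threads / kswapd */
		redirty_page_for_writepage(wbc, page);
		unlock_page(page);
		return 0;
	}

	/* ... the real implementation builds and submits the I/O here ... */
	return 0;
}

Direct reclaim then simply redirties the page, and the actual writeback
is done later by the flusher threads or kswapd, which run with a full
stack of their own.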