Message-Id: <20131203143841.11b71e387dc1db3a8ab0974c@linux-foundation.org>
Date: Tue, 3 Dec 2013 14:38:41 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
Cc: Fengguang Wu <fengguang.wu@...el.com>,
David Cohen <david.a.cohen@...ux.intel.com>,
Al Viro <viro@...iv.linux.org.uk>,
Damien Ramonda <damien.ramonda@...el.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC] mm readahead: Fix the readahead fail in case of
empty numa node
On Tue, 3 Dec 2013 16:06:17 +0530 Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com> wrote:
> On a cpu with an empty numa node,
This makes no sense - numa nodes don't reside on CPUs.
I think you mean "on a CPU which resides on a memoryless NUMA node"?
> readahead fails because max_sane_readahead
> returns zero. The reason is that we only look at the number of inactive + free
> pages available on the current node.
>
> The following patch tries to fix the behaviour by checking for potential
> empty numa node cases.
> The rationale for the patch is that readahead may be worth doing on a remote
> node instead of incurring costly disk faults later.
>
> I still feel we may have to sanitize the nr below (e.g., to nr/8)
> to avoid the serious consequences of a malicious application doing
> a big readahead on an empty numa node, causing unnecessary load on remote nodes.
> ( or it may even be that current behaviour is right in not going ahead with
> readahead to avoid the memory load on remote nodes).
>
I don't recall the rationale for the current code and of course we
didn't document it. It might be in the changelogs somewhere - could
you please do the git digging and see if you can find out?
I don't immediately see why readahead into a different node is
considered a bad thing.
> --- a/mm/readahead.c
> +++ b/mm/readahead.c
> @@ -243,8 +243,11 @@ int force_page_cache_readahead(struct address_space *mapping, struct file *filp,
> */
> unsigned long max_sane_readahead(unsigned long nr)
> {
> - return min(nr, (node_page_state(numa_node_id(), NR_INACTIVE_FILE)
> - + node_page_state(numa_node_id(), NR_FREE_PAGES)) / 2);
> + unsigned long numa_free_page;
> + numa_free_page = (node_page_state(numa_node_id(), NR_INACTIVE_FILE)
> + + node_page_state(numa_node_id(), NR_FREE_PAGES));
> +
> + return numa_free_page ? min(nr, numa_free_page / 2) : nr;
Well even if this CPU's node has very little pagecache at all, what's
wrong with permitting readahead? We don't know that the new pagecache
will be allocated exclusively from this CPU's node anyway. All very
odd.
Whatever we do, we should leave behind some good code comments which
explain the rationale(s), please. Right now it's rather opaque.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/