Message-Id: <20200212102645.7b2e5b228048b6d22331e47d@linux-foundation.org>
Date: Wed, 12 Feb 2020 10:26:45 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Rik van Riel <riel@...riel.com>, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Dave Chinner <david@...morbit.com>,
Yafang Shao <laoar.shao@...il.com>,
Michal Hocko <mhocko@...e.com>, Roman Gushchin <guro@...com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Al Viro <viro@...iv.linux.org.uk>, kernel-team@...com
Subject: Re: [PATCH] vfs: keep inodes with page cache off the inode shrinker LRU
On Wed, 12 Feb 2020 11:35:40 -0500 Johannes Weiner <hannes@...xchg.org> wrote:
> Since the cache purging code was written for highmem scenarios, how
> about making it specific to CONFIG_HIGHMEM at least?
Why do I have memories of suggesting this a couple of weeks ago ;)
> That way we improve the situation for the more common setups, without
> regressing highmem configurations. And if somebody wanted to improve
> the CONFIG_HIGHMEM behavior as well, they could still do so.
>
> Something like the below delta on top of my patch?
Does it need to be that complicated? What's wrong with
--- a/fs/inode.c~a
+++ a/fs/inode.c
@@ -761,6 +761,10 @@ static enum lru_status inode_lru_isolate
 		return LRU_ROTATE;
 	}
 
+#ifdef CONFIG_HIGHMEM
+	/*
+	 * lengthy blah
+	 */
 	if (inode_has_buffers(inode) || inode->i_data.nrpages) {
 		__iget(inode);
 		spin_unlock(&inode->i_lock);
@@ -779,6 +783,7 @@ static enum lru_status inode_lru_isolate
 		spin_lock(lru_lock);
 		return LRU_RETRY;
 	}
+#endif
 
 	WARN_ON(inode->i_state & I_NEW);
 	inode->i_state |= I_FREEING;
_
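
(An equivalent sketch using IS_ENABLED(), if we care about keeping the
block compile-tested on !CONFIG_HIGHMEM builds; this is illustration
only, not part of the patch, and the eviction body between the two
hunks is elided here just as in the diff above:)

	if (IS_ENABLED(CONFIG_HIGHMEM) &&
	    (inode_has_buffers(inode) || inode->i_data.nrpages)) {
		__iget(inode);
		spin_unlock(&inode->i_lock);
		/*
		 * ... drop lru_lock, evict buffers and page cache,
		 * iput(), exactly as in the existing code ...
		 */
		spin_lock(lru_lock);
		return LRU_RETRY;
	}

The compiler discards the whole branch on 64-bit configs either way;
IS_ENABLED() just avoids the bitrot risk of an #ifdef'd region that
most builders never compile.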
Whatever we do will need plenty of testing. It wouldn't surprise me
if there are people who unknowingly benefit from this code on
64-bit machines.