Message-ID: <20140905141416.GA1510@thunk.org>
Date: Fri, 5 Sep 2014 10:14:16 -0400
From: Theodore Ts'o <tytso@....edu>
To: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Gioh Kim <gioh.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>, jack@...e.cz,
linux-fsdevel@...r.kernel.org, linux-ext4@...r.kernel.org,
linux-kernel@...r.kernel.org, viro@...iv.linux.org.uk,
paulmck@...ux.vnet.ibm.com, peterz@...radead.org,
adilger.kernel@...ger.ca, minchan@...nel.org, gunho.lee@....com
Subject: Re: [PATCHv4 0/3] new APIs to allocate buffer-cache with user
specific flag
On Fri, Sep 05, 2014 at 04:32:48PM +0900, Joonsoo Kim wrote:
> I also tested another approach: allocating free pages from the CMA
> reserved region as late as possible, which is similar to your
> suggestion, and it doesn't work well. When reclaim starts, too many
> pages get reclaimed at once, because the LRU list contains long runs
> of pages from the CMA region and reclaiming them doesn't help kswapd.
> kswapd stops reclaiming once the free page count recovers, but CMA
> pages aren't counted as free pages for kswapd because they can't be
> used for unmovable or reclaimable allocations. So kswapd reclaims too
> many pages at once, unnecessarily.
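
To make that accounting concrete, here is a toy, self-contained C model
of the effect being described: free pages sitting inside CMA are
subtracted from the watermark check for allocations that cannot use CMA,
so kswapd keeps reclaiming even though plenty of memory looks "free".
The names (zone_counters, watermark_ok, alloc_can_use_cma) and numbers
are made up for illustration and are not the kernel's actual symbols.

/*
 * Toy model: CMA free pages don't count toward the watermark for
 * unmovable/reclaimable allocations, so kswapd keeps going even
 * though the zone has lots of "free" memory.  Illustrative only.
 */
#include <stdbool.h>
#include <stdio.h>

struct zone_counters {
	unsigned long nr_free;		/* all free pages, CMA included */
	unsigned long nr_free_cma;	/* free pages inside CMA regions */
	unsigned long watermark_low;	/* kswapd reclaims below this */
};

/* Only movable allocations may be satisfied from CMA pageblocks. */
static bool alloc_can_use_cma(bool movable)
{
	return movable;
}

static bool watermark_ok(const struct zone_counters *z, bool movable)
{
	unsigned long usable = z->nr_free;

	/* The key subtraction: CMA free pages don't help non-movable users. */
	if (!alloc_can_use_cma(movable))
		usable -= z->nr_free_cma;

	return usable > z->watermark_low;
}

int main(void)
{
	/* Plenty of free memory overall, but most of it is in CMA. */
	struct zone_counters z = {
		.nr_free = 50000,
		.nr_free_cma = 45000,
		.watermark_low = 10000,
	};

	printf("movable alloc ok?   %d\n", watermark_ok(&z, true));  /* 1 */
	printf("unmovable alloc ok? %d\n", watermark_ok(&z, false)); /* 0 */
	return 0;
}
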
Have you considered putting the pages in a CMA region in a separate
zone? After all, that's what we originally did with brain-damaged
hardware that could only DMA into the low 16M of memory: we just
reserved a separate zone for it. That way, we could do zone-directed
reclaim and free pages in that zone, if that was what was actually
needed.
But we would also preferentially avoid using pages from that zone
unless there was no choice, in order to avoid needing to do that
zone-directed reclaim. Perhaps a similar solution could be done here?
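
A rough, self-contained sketch of that idea, not a real patch: the
zone name ZONE_CMA_EXAMPLE and the pick_zone()/fallback[] helpers are
entirely hypothetical. The CMA zone sits last in the fallback order so
normal zones are preferred, only movable allocations may land in it,
and an allocation failure would be the point at which zone-directed
reclaim of that zone kicks in.

/*
 * Hypothetical model of a dedicated CMA zone: preferred last,
 * movable-only, reclaimed on demand.  Illustrative only.
 */
#include <stdbool.h>
#include <stdio.h>

enum zone_type { ZONE_NORMAL, ZONE_CMA_EXAMPLE, NR_ZONES };

struct zone {
	const char *name;
	unsigned long nr_free;
	bool movable_only;	/* only movable allocations may land here */
};

static struct zone zones[NR_ZONES] = {
	[ZONE_NORMAL]      = { "Normal", 2000, false },
	[ZONE_CMA_EXAMPLE] = { "CMA",    8000, true  },
};

/* Fallback order: prefer Normal, fall back to the CMA zone last. */
static const enum zone_type fallback[NR_ZONES] = { ZONE_NORMAL, ZONE_CMA_EXAMPLE };

static struct zone *pick_zone(unsigned long nr_pages, bool movable)
{
	for (int i = 0; i < NR_ZONES; i++) {
		struct zone *z = &zones[fallback[i]];

		if (z->movable_only && !movable)
			continue;	/* unmovable data must never pin CMA pages */
		if (z->nr_free >= nr_pages)
			return z;	/* only dip into CMA when earlier zones are short */
	}
	return NULL;	/* would trigger zone-directed reclaim of the CMA zone */
}

int main(void)
{
	struct zone *z;

	z = pick_zone(1000, true);
	printf("small movable alloc   -> %s\n", z ? z->name : "reclaim");
	z = pick_zone(5000, true);
	printf("large movable alloc   -> %s\n", z ? z->name : "reclaim");
	z = pick_zone(5000, false);
	printf("large unmovable alloc -> %s\n", z ? z->name : "reclaim");
	return 0;
}
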
- Ted