Message-Id: <20100125094228.f7ca1430.kamezawa.hiroyu@jp.fujitsu.com>
Date: Mon, 25 Jan 2010 09:42:28 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Wu Fengguang <fengguang.wu@...el.com>
Cc: Chris Frost <frost@...UCLA.EDU>,
Andrew Morton <akpm@...ux-foundation.org>,
Steve Dickson <steved@...hat.com>,
David Howells <dhowells@...hat.com>,
Xu Chenfeng <xcf@...c.edu.cn>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Steve VanDeBogart <vandebo-lkml@...dbox.net>
Subject: Re: [PATCH] mm/readahead.c: update the LRU positions of in-core pages, too
On Sat, 23 Jan 2010 18:22:22 +0800
Wu Fengguang <fengguang.wu@...el.com> wrote:
> Hi Chris,
>
> > > +/*
> > > + * Move pages in danger (of thrashing) to the head of inactive_list.
> > > + * Not expected to happen frequently.
> > > + */
> > > +static unsigned long rescue_pages(struct address_space *mapping,
> > > + struct file_ra_state *ra,
> > > + pgoff_t index, unsigned long nr_pages)
> > > +{
> > > + struct page *grabbed_page;
> > > + struct page *page;
> > > + struct zone *zone;
> > > + int pgrescue = 0;
> > > +
> > > + dprintk("rescue_pages(ino=%lu, index=%lu, nr=%lu)\n",
> > > + mapping->host->i_ino, index, nr_pages);
> > > +
> > > + for(; nr_pages;) {
> > > + grabbed_page = page = find_get_page(mapping, index);
> > > + if (!page) {
> > > + index++;
> > > + nr_pages--;
> > > + continue;
> > > + }
> > > +
> > > + zone = page_zone(page);
> > > + spin_lock_irq(&zone->lru_lock);
> > > +
> > > + if (!PageLRU(page)) {
> > > + index++;
> > > + nr_pages--;
> > > + goto next_unlock;
> > > + }
> > > +
> > > + do {
> > > + struct page *the_page = page;
> > > + page = list_entry((page)->lru.prev, struct page, lru);
> > > + index++;
> > > + nr_pages--;
> > > + ClearPageReadahead(the_page);
> > > + if (!PageActive(the_page) &&
> > > + !PageLocked(the_page) &&
> > > + page_count(the_page) == 1) {
> >
> > Why require the page count to be 1?
>
> Hmm, I think the PageLocked() and page_count() tests were meant to
> skip pages being manipulated by someone else.
>
> You can just remove them. In fact, the page_count() == 1 test will
> exclude the grabbed_page itself, so it must be removed. Thanks for
> the reminder!
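(To make the refcount point concrete: find_get_page() takes a
reference, so the grabbed page always has at least two references --
one held by the page cache and one held by the caller. An illustrative
sketch, not code from the patch:)

	/* assume the page cache holds the only other reference */
	page = find_get_page(mapping, index);	/* count: 1 -> 2 */
	if (page) {
		/* can never be true for the page we just grabbed: */
		VM_BUG_ON(page_count(page) == 1);
		page_cache_release(page);	/* count: 2 -> 1 */
	}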
>
> >
> > > + list_move(&the_page->lru, &zone->inactive_list);
> >
> > The LRU list manipulation interface has changed since this patch.
>
> Yeah.
>
> > I believe we should replace the list_move() call with:
> > del_page_from_lru_list(zone, the_page, LRU_INACTIVE_FILE);
> > add_page_to_lru_list(zone, the_page, LRU_INACTIVE_FILE);
> > This moves the page to the top of the list, but also notifies mem_cgroup.
> > It also, I believe needlessly, decrements and then increments the zone
> > state for each move.
>
> Why do you think mem_cgroup should be notified here? As I understand
> it, mem_cgroup should only care about page addition/removal.
>
No. memcg maintains its LRU lists in sync with the global LRU, so I
think it's better to use the usual LRU handler calls, as Chris does.
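For reference, the usual handlers look roughly like this (paraphrasing
the 2.6.32-era include/linux/mm_inline.h from memory, so please check
your tree). Note that both update the memcg LRU alongside the global
one, which is why a bare list_move() is not enough:

static inline void
add_page_to_lru_list(struct zone *zone, struct page *page, enum lru_list l)
{
	list_add(&page->lru, &zone->lru[l].list);
	__inc_zone_state(zone, NR_LRU_BASE + l);
	mem_cgroup_add_lru_list(page, l);	/* keep memcg's LRU in sync */
}

static inline void
del_page_from_lru_list(struct zone *zone, struct page *page, enum lru_list l)
{
	list_del(&page->lru);
	__dec_zone_state(zone, NR_LRU_BASE + l);
	mem_cgroup_del_lru_list(page, l);	/* keep memcg's LRU in sync */
}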
And... for maintenance, I'd prefer code like the following to your
open-coded version, since you mention it is "Not expected to happen
frequently":
void find_isolate_inactive_page(struct address_space *mapping,
				pgoff_t index, int len)
{
	struct page *page;
	struct zone *zone;
	int i;

	for (i = 0; i < len; i++) {
		page = find_get_page(mapping, index + i);
		if (!page)
			continue;
		zone = page_zone(page);
		spin_lock_irq(&zone->lru_lock); /* you can optimize this if you want */
		/* isolate_lru_page() doesn't handle the type of list, so call __isolate_lru_page() */
		if (__isolate_lru_page(page, ISOLATE_INACTIVE, 1)) {
			spin_unlock_irq(&zone->lru_lock);
			page_cache_release(page);
			continue;
		}
		/* __isolate_lru_page() cleared PageLRU and took a reference; unlink it (this tells memcg, too) */
		del_page_from_lru_list(zone, page, page_lru(page));
		spin_unlock_irq(&zone->lru_lock);
		ClearPageReadahead(page);
		/* putback_lru_page() re-adds it at the head of its LRU, updates memcg, and drops the isolation ref */
		putback_lru_page(page);
		page_cache_release(page);
	}
}
Please feel free to do it as you want, but please take care of memcg's LRU management.
Thanks,
-Kame