Message-ID: <4EC5FE6A.3080003@openvz.org>
Date: Fri, 18 Nov 2011 10:42:50 +0400
From: Konstantin Khlebnikov <khlebnikov@...nvz.org>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: "linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Dave Chinner <david@...morbit.com>
Subject: Re: [PATCH] mm: account reaped page cache on inode cache pruning
Andrew Morton wrote:
> On Wed, 16 Nov 2011 17:47:13 +0300
> Konstantin Khlebnikov<khlebnikov@...nvz.org> wrote:
>
>> Inode cache pruning indirectly reclaims page cache by invalidating mapping pages.
>> Let's account for them in reclaim_state so the memory reclaimer notices this progress.
>>
>> Signed-off-by: Konstantin Khlebnikov<khlebnikov@...nvz.org>
>> ---
>> fs/inode.c | 2 ++
>> 1 files changed, 2 insertions(+), 0 deletions(-)
>>
>> diff --git a/fs/inode.c b/fs/inode.c
>> index ee4e66b..1f6c48d 100644
>> --- a/fs/inode.c
>> +++ b/fs/inode.c
>> @@ -692,6 +692,8 @@ void prune_icache_sb(struct super_block *sb, int nr_to_scan)
>> else
>> __count_vm_events(PGINODESTEAL, reap);
>> spin_unlock(&sb->s_inode_lru_lock);
>> + if (current->reclaim_state)
>> + current->reclaim_state->reclaimed_slab += reap;
>>
>> dispose_list(&freeable);
>> }
>
> hm, yes, I suppose we should.
>
> It seems to be cheating to use the "reclaimed_slab" field for this.
> Perhaps it would be cleaner to add an additional field to reclaim_state
> for non-slab pages which were also reclaimed. That's a cosmetic thing
> and I guess we don't need to go that far, not sure...
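For reference, that cleaner variant would look roughly like this (the
reclaimed_pagecache name is only an illustration, nothing that exists in the
tree today):

	struct reclaim_state {
		unsigned long reclaimed_slab;		/* slab pages freed by shrinkers */
		unsigned long reclaimed_pagecache;	/* illustration: page cache freed
							   as a side effect, e.g. here */
	};

The hunk above would then bump reclaimed_pagecache instead of reclaimed_slab,
and vmscan would add both counters into sc->nr_reclaimed.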
But do we really need a separate on-stack reclaim_state structure with a single field?
Maybe we could replace it with a single long (or even unsigned int) reclaimed_pages
field directly on task_struct and account reclaimed pages unconditionally.
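Roughly, and only as an untested sketch (the field name and its exact placement
in task_struct are assumptions):

	/* in struct task_struct, replacing the reclaim_state pointer: */
	unsigned long reclaimed_pages;	/* pages freed as a side effect of reclaim */

prune_icache_sb() would then do simply:

	current->reclaimed_pages += reap;

the sl*b page-freeing paths would bump the same counter, and vmscan would zero
it before a shrink pass and add whatever accumulated into sc->nr_reclaimed
afterwards.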