Message-ID: <87ehm6p15v.fsf@devron.myhome.or.jp>
Date: Thu, 13 Sep 2012 21:17:32 +0900
From: OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>
To: "J. Bruce Fields" <bfields@...ldses.org>
Cc: Namjae Jeon <linkinjeon@...il.com>,
"Steven J. Magnani" <steve@...idescorp.com>,
Al Viro <viro@...iv.linux.org.uk>, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org,
Namjae Jeon <namjae.jeon@...sung.com>,
Ravishankar N <ravi.n1@...sung.com>,
Amit Sahrawat <a.sahrawat@...sung.com>
Subject: Re: [PATCH v2 1/5] fat: allocate persistent inode numbers
"J. Bruce Fields" <bfields@...ldses.org> writes:
>> >> Grepping around... Documentation/sysctl/vm.txt mentions a
>> >> vfs_cache_pressure parameter.
>> >> Yeah. And a dirty hack would be possible by adjusting sb->s_shrink.batch.
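(For reference: vfs_cache_pressure is the sysctl knob described in
Documentation/sysctl/vm.txt; raising it makes the kernel reclaim
dentry/inode caches more aggressively. Below is a minimal userspace
sketch, assuming root; the equivalent one-liner is
"sysctl -w vm.vfs_cache_pressure=200".)

/*
 * Sketch: raise vm.vfs_cache_pressure so the kernel reclaims
 * dentry/inode caches more aggressively (default is 100; higher
 * values reclaim more).
 */
#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/proc/sys/vm/vfs_cache_pressure", "w");

        if (!f) {
                perror("/proc/sys/vm/vfs_cache_pressure");
                return 1;
        }
        fprintf(f, "200\n");
        return fclose(f) ? 1 : 0;
}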
>> > I am worried that it could lead to an OOM condition on an embedded
>> > system (small memory (DRAM) while supporting a big 3TB HDD).
>> >
>> > Please let me know if any issues or queries.
>>
>> So, now I think a stable inode number may be useful if there are
>> users for it. And I guess that functionality does not collide with
>> -mm. And I suppose we can add two modes for the "nfs" option (e.g.
>> nfs=1 and nfs=2).
>>
>> If nfs=1, it works like the current -mm, with no operations limited.
>
> Apologies, I haven't been following the conversation carefully: remind
> me what "works like current -mm" means?
Current -mm means best effort: it works only while the inode cache has
not been evicted. I.e. if there is no inode cache anymore on the
server, the server would return ESTALE. So I guess the behavior would
be relatively unstable; the sketch below illustrates this failure mode.
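(Illustrative only, not the actual fat code: a ->fh_to_dentry that can
resolve a file handle only from the live inode cache behaves roughly
like this; once the inode is evicted, ilookup() finds nothing and the
client sees ESTALE.)

#include <linux/exportfs.h>
#include <linux/fs.h>

/*
 * Resolve an NFS file handle from the live inode cache alone.
 * Once the inode has been evicted, ilookup() returns NULL and
 * the client gets ESTALE.
 */
static struct dentry *example_fh_to_dentry(struct super_block *sb,
                                           struct fid *fid,
                                           int fh_len, int fh_type)
{
        struct inode *inode = ilookup(sb, fid->i32.ino);

        if (!inode)
                return ERR_PTR(-ESTALE);        /* evicted -> stale handle */
        return d_obtain_alias(inode);
}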
Thanks.
>> If nfs=2, it tries to make a stable FH and limits some operations.
>>
>> (option name doesn't matter here.)
>>
>> Does this work fine?
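(For concreteness, the two modes could be plumbed through the mount
options along these lines; the enum and helper names below are
placeholders, not actual fat code.)

#include <linux/errno.h>

/* Hypothetical two-mode "nfs" mount option. */
enum fat_nfs_mode {
        FAT_NFS_OFF,            /* not exported */
        FAT_NFS_LIMITED,        /* nfs=1: best effort, FH may go stale */
        FAT_NFS_STABLE,         /* nfs=2: stable FH, some operations limited */
};

static int fat_parse_nfs_mode(int option, enum fat_nfs_mode *mode)
{
        switch (option) {
        case 1:
                *mode = FAT_NFS_LIMITED;
                return 0;
        case 2:
                *mode = FAT_NFS_STABLE;
                return 0;
        default:
                return -EINVAL;         /* reject anything else */
        }
}

Keeping the mode in an enum makes it cheap to check at each point
where an operation would have to be restricted under nfs=2.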
--
OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>