Message-ID: <20170209012840.GB10747@jaegeuk.local>
Date: Wed, 8 Feb 2017 17:28:40 -0800
From: Jaegeuk Kim <jaegeuk@...nel.org>
To: Chao Yu <chao@...nel.org>
Cc: Chao Yu <yuchao0@...wei.com>,
linux-f2fs-devel@...ts.sourceforge.net,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] f2fs: introduce nid cache
On 02/08, Chao Yu wrote:
> On 2017/2/7 15:24, Chao Yu wrote:
> > Hi Jaegeuk,
> >
> > Happy Chinese New Year! :)
> >
> > On 2017/1/24 12:35, Jaegeuk Kim wrote:
> >> Hi Chao,
> >>
> >> On 01/22, Chao Yu wrote:
> >>> In scenarios of intensive node allocation, free nids will run out soon,
> >>> and then we need to stop and load free nids by traversing NAT blocks; in
> >>> the worst case, if the NAT blocks are not cached in memory, this generates
> >>> IOs which slow down our foreground operations.
> >>>
> >>> In order to speed up node allocation, this patch introduces a new option
> >>> named "nid cache". When the option is turned on, we load all nat entries
> >>> in the NAT blocks at mount time and organize all free nids in a bitmap.
> >>> For any operation related to free nids, we query and update the prebuilt
> >>> bitmap instead of reading and looking up NAT blocks, so the performance
> >>> of node allocation can be improved.
> >>>
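[Editor's note: a rough sketch of what such a free-nid bitmap could look like.
The structure and helper names below are illustrative assumptions for this
discussion, not the actual patch code.]

/* One bit per nid, set when the nid is free; built once from NAT blocks
 * at mount time, then queried/updated instead of NAT blocks on disk. */
struct free_nid_bitmap {
	spinlock_t lock;
	unsigned int max_nid;	/* total number of nids in the image */
	unsigned long *map;	/* the prebuilt free-nid bitmap */
};

/* Mark a nid as free or in-use; called while scanning NAT blocks at
 * mount time and whenever a nat entry changes state afterwards. */
static void set_nid_state(struct free_nid_bitmap *fnb, nid_t nid, bool free)
{
	spin_lock(&fnb->lock);
	if (free)
		set_bit(nid, fnb->map);
	else
		clear_bit(nid, fnb->map);
	spin_unlock(&fnb->lock);
}

/* Allocation path: pick a free nid from the bitmap without touching
 * NAT blocks on disk. Returns false if no free nid is left. */
static bool alloc_nid_from_bitmap(struct free_nid_bitmap *fnb, nid_t *nid)
{
	unsigned int pos;

	spin_lock(&fnb->lock);
	pos = find_first_bit(fnb->map, fnb->max_nid);
	if (pos >= fnb->max_nid) {
		spin_unlock(&fnb->lock);
		return false;
	}
	clear_bit(pos, fnb->map);
	spin_unlock(&fnb->lock);
	*nid = pos;
	return true;
}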
> >>
> >> How does this affect mount time and memory consumption?
> >
> > Sorry for the delay.
> >
> > Let me figure out some numbers later.
>
> a. mount time
>
> I chose a slow device (Kingston 16GB SD card) to see how this option affects
> mount time when there is not enough bandwidth at the low level.
>
> Before the test, I changed the readahead window size of NAT pages from
> FREE_NID_PAGES * 8 to sbi->blocks_per_seg for better readahead performance.
> The result is:
>
> time mount -t f2fs -o nid_cache /dev/sde /mnt/f2fs/
>
> before:
> real 0m0.204s
> user 0m0.004s
> sys 0m0.020s
>
> after:
> real 0m3.792s
Oops, we can't accept this even for only 16GB, right? :(
> user 0m0.000s
> sys 0m0.140s
>
> b. memory consumption
>
> For a 16GB image, there are 34 NAT pages in total, so the memory footprint is:
> 34 / 2 * 512 * 455 / 8 = 495040 bytes = 483.4 KB
>
> The increase in memory footprint is linear with the total valid user blocks in
> the image, and at most it will eat 3900 * 8 * 455 / 8 = 1774500 bytes = 1732.9 KB
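[Editor's note: reading the first formula above, presumably 34 / 2 = 17 NAT
units once the backup copy is excluded, * 512 blocks per segment * 455 nat
entries per NAT block = 3,960,320 nids, and at one bit per nid that gives
3,960,320 / 8 = 495,040 bytes. The per-block entry count matches f2fs's
NAT_ENTRY_PER_BLOCK (455); the rest of this breakdown is an assumption about
what the terms refer to.]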
How about adding two bitmaps covering all NAT pages and storing them in the
checkpoint pack, which needs at most two additional blocks?
1. full-assigned NAT bitmap, where 1 means the block has no free nids.
2. empty NAT bitmap, where 1 means all nids in the block are free.
With these bitmaps, build_free_nids() can scan from 0'th NAT block by:

	if (full-assigned NAT)
		skip;
	else if (empty NAT)
		add_free_nid(all);
	else
		read NAT page and add_free_nid();
flush_nat_entries() then has to update these bitmaps accordingly.
With this approach, I expect we can reuse nids as much as possible while
getting cached NAT pages more effectively.
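[Editor's note: a minimal sketch of how build_free_nids() might consume the
two proposed bitmaps. full_nat_bits/empty_nat_bits are hypothetical
per-checkpoint bitmaps here, and the loop leans on the existing
add_free_nid()/scan_nat_page() paths in fs/f2fs/node.c; this is a sketch under
those assumptions, not a worked-out implementation.]

/* Scan NAT blocks guided by the two bitmaps loaded from the checkpoint
 * pack. Each NAT block covers NAT_ENTRY_PER_BLOCK (455) nids; any partial
 * trailing block is ignored for simplicity. */
static void build_free_nids_by_bitmaps(struct f2fs_sb_info *sbi)
{
	struct f2fs_nm_info *nm_i = NM_I(sbi);
	unsigned int nat_blocks = nm_i->max_nid / NAT_ENTRY_PER_BLOCK;
	unsigned int i;
	nid_t nid;

	for (i = 0; i < nat_blocks; i++) {
		nid_t start_nid = i * NAT_ENTRY_PER_BLOCK;
		struct page *page;

		/* full-assigned: no free nids in this block, skip it */
		if (test_bit_le(i, nm_i->full_nat_bits))
			continue;

		/* empty: every nid in this block is free, no NAT IO needed */
		if (test_bit_le(i, nm_i->empty_nat_bits)) {
			for (nid = start_nid;
			     nid < start_nid + NAT_ENTRY_PER_BLOCK; nid++)
				add_free_nid(sbi, nid, true);
			continue;
		}

		/* mixed block: fall back to reading the NAT page */
		page = get_current_nat_page(sbi, start_nid);
		scan_nat_page(sbi, page, start_nid);
		f2fs_put_page(page, 1);
	}
}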
Thanks,
>
> Thanks,
>
> >
> >> IMO, if those do not
> >> raise huge concerns, we could consider just replacing the current free
> >> nid list with this bitmap.
> >
> > Yup, I agree with you.
> >
> > Thanks,
> >