Message-id: <1378338609.2354.86.camel@kjgkr>
Date: Thu, 05 Sep 2013 08:50:09 +0900
From: Jaegeuk Kim <jaegeuk.kim@...sung.com>
To: Jin Xu <jinuxstyle@...il.com>
Cc: linux-f2fs-devel@...ts.sourceforge.net,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] f2fs: optimize gc for better performance
Hi Jin,
2013-09-04 (Wed), 21:17 +0800, Jin Xu:
> Hi Jaegeuk,
>
> On 04/09/2013 17:40, Jaegeuk Kim wrote:
> > Hi Jin,
> >
> > 2013-09-04 (Wed), 07:59 +0800, Jin Xu:
> >> Hi Jaegeuk,
> >>
> >> On 03/09/2013 08:45, Jaegeuk Kim wrote:
> >>> Hi Jin,
> >>>
> >>>> [...]
> >>>>>
> >>>>> It seems that we can obtain the performance gain just by setting
> >>>>> MAX_VICTIM_SEARCH to 4096, for example.
> >>>>> So, how about just adding an ending criterion like the one below?
> >>>>>
> >>>>
> >>>> I agree that we could get the performance improvement by simply
> >>>> enlarging MAX_VICTIM_SEARCH to 4096, but I am a little concerned
> >>>> about the scalability, because it might always search the whole
> >>>> bitmap in some cases, for example, when the number of dirty segments
> >>>> is 4000 and the total number of segments is 409600.
> >>>>> [snip]
> >>>> [...]
> >>>>>
> >>>>> if (p->max_search > MAX_VICTIM_SEARCH)
> >>>>> 	p->max_search = MAX_VICTIM_SEARCH;
> >>>>>
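(For context, a minimal user-space sketch of the bounded victim search being
discussed. The names, the byte-per-segment "bitmap", and the cost array are
illustrative assumptions, not f2fs's actual code; the real loop walks the
dirty-segment bitmap with find_next_bit and computes a cost per candidate.)

  #include <limits.h>
  #include <stddef.h>

  #define TOTAL_SEGS  409600          /* Jin's example: total segments */

  /* return index of the next dirty segment at or after 'from', or 'size' */
  static size_t find_next_dirty(const unsigned char *dirty, size_t size,
                                size_t from)
  {
      for (size_t i = from; i < size; i++)
          if (dirty[i])
              return i;
      return size;
  }

  /* pick the cheapest dirty segment, visiting at most 'max_search' of them */
  static size_t select_victim(const unsigned char *dirty,
                              const unsigned int *cost, size_t max_search)
  {
      size_t best = TOTAL_SEGS, searched = 0, seg = 0;
      unsigned int best_cost = UINT_MAX;

      while ((seg = find_next_dirty(dirty, TOTAL_SEGS, seg)) < TOTAL_SEGS) {
          if (cost[seg] < best_cost) {
              best_cost = cost[seg];
              best = seg;
          }
          seg++;
          if (++searched >= max_search)   /* the ending criterion above */
              break;
      }
      return best;
  }

Without a cap, a volume with 409600 segments but only 4000 dirty ones still
scans the whole bitmap on every pass; the cap bounds the per-GC search cost.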
> >>>>
> >>>> The optimization does not apply to SSR mode, and there is a reason.
> >>>> As noticed in the test, when SSR selected the segments that had the
> >>>> most garbage blocks, then when gc was needed, all the remaining dirty
> >>>> segments might have very few garbage blocks, so the gc overhead was
> >>>> high. This might lead to performance degradation. So the patch does
> >>>> not change the victim selection policy for SSR.
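(For context, the trade-off described above is between two cost policies for
victim selection. A hedged sketch with hypothetical names and a simplified
cost-benefit formula; f2fs's real get_gc_cost()/get_cb_cost() differ in
detail. SSR greedily takes the segments with the fewest valid blocks, so a
later cleaning pass is left with segments that are mostly valid and thus
expensive to migrate.)

  #include <limits.h>

  /* Greedy: fewer valid blocks means less data to copy out, so the segment
   * with the most garbage is the cheapest victim right now. */
  static unsigned int greedy_cost(unsigned int valid_blocks)
  {
      return valid_blocks;
  }

  /* Cost-benefit: also weigh segment age, preferring old, mostly-empty
   * segments; a lower return value means a better victim. */
  static unsigned int cb_cost(unsigned int util_pct, unsigned int age)
  {
      unsigned long long benefit =
          (unsigned long long)age * (100U - util_pct) / (100U + util_pct);

      if (benefit > UINT_MAX)
          benefit = UINT_MAX;
      return UINT_MAX - (unsigned int)benefit;
  }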
> >>>
> >>> I think it doesn't matter.
> >>> GC is only triggered during direct node block allocation.
> >>> What this means is that we need to consider the number of GC triggers,
> >>> and GC triggers more frequently during normal data allocation than
> >>> during node block allocation.
> >>> So, I think it would not degrade performance significantly.
> >>>
> >>> BTW, could you show some numbers for this?
> >>> Or could you test what I suggested?
> >>>
> >>> Thanks,
> >>>
> >>
> >> I re-ran the test and got the following result:
> >>
> >> ---------------------------------------
> >> 2GB SDHC
> >> create 52023 files of size 32768 bytes
> >> random re-write 100000 records of 4KB
> >> ---------------------------------------
> >>
> >>               | file creation (s) | rewrite time (s) | gc count | gc garbage blocks |
> >> no patch      |               341 |             4227 |     1174 |            174840 |
> >> patched       |               296 |             2995 |      634 |            109314 |
> >> patched (KIM) |               324 |             2958 |      645 |            106682 |
> >>
> >> In this test, applying the patch to SSR mode does not show the minor
> >> performance degradation. Instead, the performance is a little better
> >> with what you suggested.
> >>
> >> I agree that the performance degradation would not be significant even
> >> if it does degrade. I once saw minor degradation in some workloads, but
> >> I didn't save the data.
> >>
> >> So, I agree that we can apply the patch to SSR mode as well.
> >>
> >> And do you still have concerns about the formula for calculating the
> >> # of searches?
> >
> > Thank you for the test. :)
> > What I'm concerned about is that, if it is really important to get a
> > victim more accurately for performance, as you described, there is no
> > need to calculate the number of searches IMO. Let's just select
> > nr_dirty. Why not?
> > The only thing we should consider is how to handle the case where
> > nr_dirty is too large.
> > For this, we can just limit the # of searches to avoid performance
> > degradation.
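(A minimal sketch of what this suggestion amounts to; nr_dirty and the 4096
cap come from the discussion above, while the helper name is made up:)

  #define MAX_VICTIM_SEARCH 4096

  /* search as many positions as there are dirty segments, but never more
   * than a fixed cap, so huge volumes don't pay an unbounded search cost */
  static unsigned long victim_search_limit(unsigned long nr_dirty)
  {
      return nr_dirty > MAX_VICTIM_SEARCH ? MAX_VICTIM_SEARCH : nr_dirty;
  }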
> >
> > Still, I'm actually not convinced of the effectiveness of your formula.
> > If possible, could you show it with numbers?
>
> It's not easy to prove the effectiveness of the formula. It was just for
> eliminating my concern about the scalability of the searching. Since it
> does not matter much for the performance improvement, we can put it aside
> and choose the simpler method you suggested.
>
> So, should I revise the patch based on what you suggested or will
> you take care of it?
Could you make a patch with your performance description and submit it
again?
Thanks a lot,
--
Jaegeuk Kim
Samsung