Message-ID: <YRbARsMfs2O2fz2s@google.com>
Date: Fri, 13 Aug 2021 11:56:06 -0700
From: Jaegeuk Kim <jaegeuk@...nel.org>
To: Chao Yu <chao@...nel.org>
Cc: Daeho Jeong <daeho43@...il.com>, linux-kernel@...r.kernel.org,
linux-f2fs-devel@...ts.sourceforge.net, kernel-team@...roid.com,
Daeho Jeong <daehojeong@...gle.com>
Subject: Re: [f2fs-dev] [PATCH v2] f2fs: introduce periodic iostat io latency
traces

On 08/13, Chao Yu wrote:
> On 2021/8/13 4:52, Jaegeuk Kim wrote:
> > On 08/11, Chao Yu wrote:
> > > Hi Daeho,
> > >
> > > On 2021/8/4 6:55, Daeho Jeong wrote:
> > > > From: Daeho Jeong <daehojeong@...gle.com>
> > > >
> > > > Whenever we notice sluggishness on our machines, we are always
> > > > curious about how well each type of I/O in the f2fs filesystem is
> > > > being handled. But it's hard to get this kind of real data. First of
> > > > all, we need to reproduce the issue while a profiling tool like
> > > > blktrace is turned on, but the issue doesn't reproduce easily.
> > > > Second, with the intervention of any tool, the overall timing of the
> > > > issue changes slightly, which sometimes makes it hard for us to
> > > > figure out.
> > > >
> > > > So, I added the F2FS_IOSTAT_IO_LATENCY config option to support
> > > > printing out IO latency statistics tracepoint events, which are the
> > > > minimal data needed to understand the filesystem's I/O behavior.
> > > > With the "iostat_enable" sysfs node turned on, we can get this
> > > > statistics info periodically, at the least overhead.
> > > >
> > > > [samples]
> > > > f2fs_ckpt-254:1-507 [003] .... 2842.439683: f2fs_iostat_latency:
> > > > dev = (254,11), iotype [peak lat.(ms)/avg lat.(ms)/count],
> > > > rd_data [136/1/801], rd_node [136/1/1704], rd_meta [4/2/4],
> > > > wr_sync_data [164/16/3331], wr_sync_node [152/3/648],
> > > > wr_sync_meta [160/2/4243], wr_async_data [24/13/15],
> > > > wr_async_node [0/0/0], wr_async_meta [0/0/0]
> > > >
> > > > f2fs_ckpt-254:1-507 [002] .... 2845.450514: f2fs_iostat_latency:
> > > > dev = (254,11), iotype [peak lat.(ms)/avg lat.(ms)/count],
> > > > rd_data [60/3/456], rd_node [60/3/1258], rd_meta [0/0/1],
> > > > wr_sync_data [120/12/2285], wr_sync_node [88/5/428],
> > > > wr_sync_meta [52/6/2990], wr_async_data [4/1/3],
> > > > wr_async_node [0/0/0], wr_async_meta [0/0/0]
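> > > >
> > > > For reference, each peak/avg/count triple above comes from a small
> > > > per-iotype accumulator. Roughly, it could look like the below (a
> > > > simplified sketch; the struct and function names are illustrative,
> > > > not the actual patch code):
> > > >
> > > > 	/* one accumulator per io type (rd_data, wr_sync_node, ...) */
> > > > 	struct iostat_lat_sketch {
> > > > 		unsigned long peak_lat;	/* worst latency seen, in ms */
> > > > 		unsigned long sum_lat;	/* running sum, for the average */
> > > > 		unsigned long cnt;	/* completed I/Os in this window */
> > > > 	};
> > > >
> > > > 	/* called on each bio completion with its measured latency */
> > > > 	static void iostat_lat_update(struct iostat_lat_sketch *s,
> > > > 					unsigned long lat_ms)
> > > > 	{
> > > > 		s->sum_lat += lat_ms;
> > > > 		s->cnt++;
> > > > 		if (lat_ms > s->peak_lat)
> > > > 			s->peak_lat = lat_ms;
> > > > 	}
> > > >
> > > > The tracepoint then reports peak_lat, sum_lat / cnt, and cnt for
> > > > each io type once per period, and resets the accumulators for the
> > > > next window.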
> > > >
> > > > Signed-off-by: Daeho Jeong <daehojeong@...gle.com>
> > > >
> > > > ---
> > > > v2: clean up with wrappers and fix a build breakage reported by
> > > > kernel test robot <lkp@...el.com>
> > > > ---
> > > > fs/f2fs/Kconfig | 9 +++
> > >
> > > I tried to apply this patch to my local dev branch, but it failed due
> > > to a conflict with the commit below; this patch needs to be rebased
> > > onto the latest dev branch.
> >
> > I applied this to the dev branch. Could you please check?
>
> Yeah, I see.
>
> > > > +config F2FS_IOSTAT_IO_LATENCY
> > > > +	bool "F2FS IO statistics IO latency information"
> > > > +	depends on F2FS_FS
> > > > +	default n
> > > > +	help
> > > > +	  Support printing out periodic IO latency statistics tracepoint
> > > > +	  events. With this option, you also need to turn on the
> > > > +	  "iostat_enable" sysfs node to print them out.
> > >
> > > This functionality looks independent; how about introducing iostat.h
> > > and iostat.c (not sure, maybe trace.[hc]) to hold the newly added
> > > structures and functions, as a cleanup for the currently dispersed code?
>
> Thoughts? This would also avoid using CONFIG_F2FS_IOSTAT_IO_LATENCY in
> many places.
It seems there's some dependency on the existing iostat, which is built in
by default. How about enabling this by default as well, alongside the
existing iostat, and then covering them all together under F2FS_IOSTAT?
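
Something like the below is what I'd imagine for the split -- just a rough
sketch with illustrative names, not final code. A dedicated iostat.h could
provide no-op stubs when the option is off, so call sites don't need any
CONFIG_* checks:

	/* fs/f2fs/iostat.h (sketch) */
	#ifdef CONFIG_F2FS_IOSTAT
	int f2fs_init_iostat(struct f2fs_sb_info *sbi);
	void f2fs_record_iostat_latency(struct f2fs_sb_info *sbi,
					int type, unsigned long lat_ms);
	#else
	/* stubs compile away when F2FS_IOSTAT is disabled */
	static inline int f2fs_init_iostat(struct f2fs_sb_info *sbi)
	{
		return 0;
	}
	static inline void f2fs_record_iostat_latency(struct f2fs_sb_info *sbi,
					int type, unsigned long lat_ms) {}
	#endif

That way, a single F2FS_IOSTAT entry in Kconfig can gate both the existing
iostat counters and the new latency tracepoint.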
>
> Thanks,