Message-ID: <ZgXPZ1uJSUCF79Ef@slm.duckdns.org>
Date: Thu, 28 Mar 2024 10:13:27 -1000
From: Tejun Heo <tj@...nel.org>
To: Kent Overstreet <kent.overstreet@...ux.dev>
Cc: Kemeng Shi <shikemeng@...weicloud.com>, akpm@...ux-foundation.org,
willy@...radead.org, jack@...e.cz, bfoster@...hat.com,
dsterba@...e.com, mjguzik@...il.com, dhowells@...hat.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v2 0/6] Improve visibility of writeback

Hello,

On Thu, Mar 28, 2024 at 03:55:32PM -0400, Kent Overstreet wrote:
> > On Thu, Mar 28, 2024 at 03:40:02PM -0400, Kent Overstreet wrote:
> > > Collecting latency numbers at various key places is _enormously_ useful.
> > > The hard part is deciding where it's useful to collect; that requires
> > > intimate knowledge of the code. Once you're defining those collection
> > > points statically, doing it with BPF is just another useless layer of
> > > indirection.
> >
> > Given how much flexibility helps with debugging, claiming it useless is a
> > stretch.
>
> Well, what would it add?

It depends on the case, but here's an example. If I'm seeing occasional
tail latency spikes, I'd want to know whether there's any correlation with
specific types or sizes of IOs, and if so, who's issuing them and why. With
BPF, you can detect those conditions, tag the offending IOs, capture
exactly where they're coming from, and aggregate the results however you
like across thousands of machines in production without anyone noticing.
That's useful, no?
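
To make that concrete, here's an untested sketch of the kind of program I
mean. The hook points (kprobes on submit_bio/bio_endio), the map layout and
the 50ms threshold are all made up for illustration, not taken from any
existing tool:

/* untested sketch: count slow IOs by issuing stack */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

#define SLOW_NS (50 * 1000 * 1000ULL)   /* arbitrary 50ms tail threshold */

struct start_info {
        u64 ts;
        int stackid;
};

struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 10240);
        __type(key, u64);                /* bio pointer */
        __type(value, struct start_info);
} start SEC(".maps");

struct {
        __uint(type, BPF_MAP_TYPE_STACK_TRACE);
        __uint(max_entries, 1024);
        __uint(key_size, sizeof(u32));
        __uint(value_size, 127 * sizeof(u64));
} stacks SEC(".maps");

struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 1024);
        __type(key, int);                /* stack id */
        __type(value, u64);              /* slow IO count */
} slow_by_stack SEC(".maps");

SEC("kprobe/submit_bio")
int BPF_KPROBE(on_submit, struct bio *bio)
{
        u64 key = (u64)bio;
        struct start_info si = {
                .ts = bpf_ktime_get_ns(),
                .stackid = bpf_get_stackid(ctx, &stacks, 0),
        };

        /* this is where you'd filter on op, size, bdi, cgroup, ... */
        bpf_map_update_elem(&start, &key, &si, BPF_ANY);
        return 0;
}

SEC("kprobe/bio_endio")
int BPF_KPROBE(on_endio, struct bio *bio)
{
        u64 key = (u64)bio, zero = 0, *cnt;
        struct start_info *si = bpf_map_lookup_elem(&start, &key);

        if (!si)
                return 0;
        if (bpf_ktime_get_ns() - si->ts > SLOW_NS) {
                /* attribute the slow completion to the issuing stack */
                bpf_map_update_elem(&slow_by_stack, &si->stackid, &zero,
                                    BPF_NOEXIST);
                cnt = bpf_map_lookup_elem(&slow_by_stack, &si->stackid);
                if (cnt)
                        __sync_fetch_and_add(cnt, 1);
        }
        bpf_map_delete_elem(&start, &key);
        return 0;
}

A userspace loader can then dump and symbolize slow_by_stack, and
aggregating those dumps across a fleet is straightforward.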

Also, an actual percentile distribution is almost always a lot more
insightful than more coarsely aggregated numbers. We can't add all that to
fixed infra. In most cases, not because the runtime overhead would be too
high but because the added interface, code complexity and maintenance
overhead aren't justifiable given how niche, ad hoc and varied these use
cases get.
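
E.g. keeping the same submission hook as in the sketch above but swapping
the completion handler for something like the following (again untested and
purely illustrative) gives you a log2 latency histogram which a loader can
turn into percentiles, filtered by whatever conditions you applied at
submission:

/* untested sketch: log2 latency histogram instead of slow-stack counts */
struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 64);
        __type(key, u32);
        __type(value, u64);
} lat_hist SEC(".maps");                 /* log2(latency in ns) buckets */

SEC("kprobe/bio_endio")
int BPF_KPROBE(on_endio_hist, struct bio *bio)
{
        u64 key = (u64)bio, delta, *cnt;
        struct start_info *si = bpf_map_lookup_elem(&start, &key);
        u32 slot = 0;

        if (!si)
                return 0;

        delta = bpf_ktime_get_ns() - si->ts;
        while (delta > 1 && slot < 63) {  /* floor(log2(delta)) */
                delta >>= 1;
                slot++;
        }
        cnt = bpf_map_lookup_elem(&lat_hist, &slot);
        if (cnt)
                __sync_fetch_and_add(cnt, 1);
        bpf_map_delete_elem(&start, &key);
        return 0;
}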
> > > The time stats stuff I wrote is _really_ cheap, and you really want this
> > > stuff always on so that you've actually got the data you need when
> > > you're bughunting.
> >
> > For some stats and some use cases, always being available is useful and
> > building fixed infra for them makes sense. For other stats and other use
> > cases, flexibility is pretty useful too (e.g. what if you want percentile
> > distribution which is filtered by some criteria?). They aren't mutually
> > exclusive and I'm not sure bdi wb instrumentation is on top of enough
> > people's minds.
> >
> > As for overhead, BPF instrumentation can be _really_ cheap too. We often run
> > these programs per packet.
>
> The main things I want are just
> - elapsed time since last writeback IO completed, so we can see at a
> glance if it's stalled
> - time stats on writeback io initiation to completion
>
> The main value of this one will be tracking down tail latency issues and
> finding out where in the stack they originate.

Yeah, I mean, if always keeping those numbers around is useful to a wide
enough set of users and use cases, sure, go ahead and add fixed infra. I'm
not quite sure bdi wb stats fall into that bucket given how little
attention they usually get.

Thanks.
--
tejun