Message-ID: <qv3vv6355aw5fkzw5yvuwlnyceypcsfl5kkcrvlipxwfl3nuyg@7cqwaqpxn64t>
Date: Thu, 28 Mar 2024 16:22:13 -0400
From: Kent Overstreet <kent.overstreet@...ux.dev>
To: Tejun Heo <tj@...nel.org>
Cc: Kemeng Shi <shikemeng@...weicloud.com>, akpm@...ux-foundation.org,
willy@...radead.org, jack@...e.cz, bfoster@...hat.com, dsterba@...e.com,
mjguzik@...il.com, dhowells@...hat.com, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v2 0/6] Improve visibility of writeback

On Thu, Mar 28, 2024 at 10:13:27AM -1000, Tejun Heo wrote:
> Hello,
>
> On Thu, Mar 28, 2024 at 03:55:32PM -0400, Kent Overstreet wrote:
> > > On Thu, Mar 28, 2024 at 03:40:02PM -0400, Kent Overstreet wrote:
> > > > Collecting latency numbers at various key places is _enormously_ useful.
> > > > The hard part is deciding where it's useful to collect; that requires
> > > > intimate knowledge of the code. Once you're defining those collection
> > > > points statically, doing it with BPF is just another useless layer of
> > > > indirection.
> > >
> > > Given how much flexibility helps with debugging, claiming it useless is a
> > > stretch.
> >
> > Well, what would it add?
>
> It depends on the case but here's an example. If I'm seeing occasional tail
> latency spikes, I'd want to know whether there's any correlation with
> specific types or sizes of IOs and if so who's issuing them and why. With
> BPF, you can detect those conditions to tag and capture where exactly those
> IOs are coming from and aggregate the result however you like across
> thousands of machines in production without anyone noticing. That's useful,
> no?
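
Concretely, the kind of throwaway filter being described might look
something like the libbpf-style sketch below; the attach points, map
layout and the 100ms cutoff are illustrative guesses on my part, not
anything from this thread:

/* slowio.bpf.c - illustrative sketch only: tag IOs with their issuer
 * at issue time, report only the tail-latency outliers at completion. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct issue_info {
	u64	ts;
	u32	pid;
	char	comm[16];
};

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 10240);
	__type(key, u64);			/* struct request * */
	__type(value, struct issue_info);
} inflight SEC(".maps");

SEC("kprobe/blk_mq_start_request")
int BPF_KPROBE(on_issue, struct request *rq)
{
	struct issue_info info = { .ts = bpf_ktime_get_ns() };
	u64 key = (u64)rq;

	/* capture who issued the IO, at issue time */
	info.pid = bpf_get_current_pid_tgid() >> 32;
	bpf_get_current_comm(&info.comm, sizeof(info.comm));
	bpf_map_update_elem(&inflight, &key, &info, BPF_ANY);
	return 0;
}

SEC("kprobe/__blk_mq_end_request")
int BPF_KPROBE(on_complete, struct request *rq)
{
	u64 key = (u64)rq;
	struct issue_info *info;
	u64 delta;

	info = bpf_map_lookup_elem(&inflight, &key);
	if (!info)
		return 0;

	delta = bpf_ktime_get_ns() - info->ts;
	/* only the outliers: completions that took more than 100ms */
	if (delta > 100ULL * 1000 * 1000)
		bpf_printk("slow IO: %llu us pid %u %s",
			   delta / 1000, info->pid, info->comm);

	bpf_map_delete_elem(&inflight, &key);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

The same skeleton extends to filtering by IO size or type, or
aggregating into per-comm histograms instead of printing.
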
That's cool, but really esoteric. We need to be able to answer basic
questions and build an overall picture of what the system is doing
without having to reach for the big stuff.

Most users are never going to touch tracing, let alone BPF; that's too
much setup. But I can and do regularly tell users "check this, this and
this" and debug things on that basis without ever touching their
machine.

And basic latency numbers are really easy for users to understand, which
makes them doubly worthwhile to collect and make visible.
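
For the sort of thing I mean, picture something like the sketch below: a
few counter updates per completed IO, cheap enough to leave always on.
The names here are invented for illustration, not the actual time stats
interface:

#include <linux/ktime.h>
#include <linux/spinlock.h>

/* hypothetical always-on collection point */
struct lat_stats {
	spinlock_t	lock;
	u64		count;
	u64		total_ns;
	u64		max_ns;
	u64		last_end_ns;	/* completion time of the last IO */
};

/* call at completion, with the timestamp taken at issue */
static void lat_stats_update(struct lat_stats *s, u64 issue_ns)
{
	u64 now = ktime_get_ns();
	u64 d = now - issue_ns;

	spin_lock(&s->lock);
	s->count++;
	s->total_ns += d;
	if (d > s->max_ns)
		s->max_ns = d;
	s->last_end_ns = now;
	spin_unlock(&s->lock);
}

Average, worst case and "how long since we last made progress" all fall
out of those four fields, which is exactly the "check this, this and
this" workflow.
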
> Also, actual percentile distribution is almost always a lot more insightful
> than more coarsely aggregated numbers. We can't add all that to fixed infra.
> In most cases not because runtime overhead would be too high but because
> the added interface and code complexity and maintenance overhead isn't
> justifiable given how niche, ad hoc and varied these use cases get.

You can't calculate percentiles accurately and robustly in one pass -
that only works if your input data obeys a nice statistical
distribution, and the cases we care about are the ones where it doesn't.
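
A toy userspace demonstration of why (the distribution and the numbers
are invented, but the shape is the usual tail-latency one): a one-pass
mean/variance fed into a normal-model p99 lands nowhere near the real
p99 of a bimodal population:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static int cmp_u64(const void *a, const void *b)
{
	unsigned long long x = *(const unsigned long long *)a;
	unsigned long long y = *(const unsigned long long *)b;

	return (x > y) - (x < y);
}

int main(void)
{
	enum { N = 100000 };
	static unsigned long long lat[N];	/* latencies, us */
	double mean = 0, m2 = 0;

	/* 98% fast (~110us), 2% slow (~55ms) */
	srand(42);
	for (int i = 0; i < N; i++) {
		lat[i] = (rand() % 50) ? 100 + rand() % 20
				       : 50000 + rand() % 10000;

		/* one-pass running mean/variance (Welford) */
		double d = lat[i] - mean;
		mean += d / (i + 1);
		m2 += d * (lat[i] - mean);
	}

	/* one-pass "p99" assuming normality: mean + 2.33 sigma */
	printf("normal-model p99: %.0f us\n",
	       mean + 2.33 * sqrt(m2 / (N - 1)));

	/* the real p99 needs all the samples */
	qsort(lat, N, sizeof(lat[0]), cmp_u64);
	printf("actual p99:       %llu us\n", lat[N * 99 / 100]);
	return 0;
}

The normal-model estimate comes out around 19ms here while the actual
p99 is around 55ms; the estimator is only as good as its distribution
assumption, and tail-latency hunts are precisely the cases where that
assumption is broken.
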
>
> > > > The time stats stuff I wrote is _really_ cheap, and you really want this
> > > > stuff always on so that you've actually got the data you need when
> > > > you're bughunting.
> > >
> > > For some stats and some use cases, always being available is useful and
> > > building fixed infra for them makes sense. For other stats and other use
> > > cases, flexibility is pretty useful too (e.g. what if you want percentile
> > > distribution which is filtered by some criteria?). They aren't mutually
> > > exclusive and I'm not sure bdi wb instrumentation is on top of enough
> > > people's minds.
> > >
> > > As for overhead, BPF instrumentation can be _really_ cheap too. We often run
> > > these programs per packet.
> >
> > The main things I want are just
> > - elapsed time since last writeback IO completed, so we can see at a
> > glance if it's stalled
> > - time stats on writeback io initiation to completion
> >
> > The main value of this one will be tracking down tail latency issues and
> > finding out where in the stack they originate.
>
> Yeah, I mean, if always keeping those numbers around is useful for wide
> enough number of users and cases, sure, go ahead and add fixed infra. I'm
> not quite sure bdi wb stats fall in that bucket given how little attention
> it usually gets.

I think it should be getting a lot more attention given that memory
reclaim and writeback are generally implicated whenever a user complains
about their system going out to lunch.
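
For what it's worth, surfacing those two numbers could be as simple as
the hypothetical read handler below, sitting on top of a stats struct
like the one sketched earlier; the field and file names are invented,
not what the patch series actually does:

#include <linux/seq_file.h>

/* answers "is writeback stalled?" and "what does issue->completion
 * latency look like?" at a glance */
static int wb_lat_show(struct seq_file *m, void *v)
{
	struct lat_stats *s = m->private;
	u64 now = ktime_get_ns();

	spin_lock(&s->lock);
	seq_printf(m, "ns_since_last_completion %llu\n",
		   s->last_end_ns ? now - s->last_end_ns : 0);
	seq_printf(m, "avg_issue_to_completion_ns %llu\n",
		   s->count ? s->total_ns / s->count : 0);
	seq_printf(m, "max_issue_to_completion_ns %llu\n", s->max_ns);
	spin_unlock(&s->lock);
	return 0;
}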