Message-ID: <CAJuCfpELZBoM8uG9prkra1sJ7tDiy_eF9TwetXSSN3XDssp8CQ@mail.gmail.com>
Date: Wed, 31 Aug 2022 14:38:08 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Kent Overstreet <kent.overstreet@...ux.dev>,
Michal Hocko <mhocko@...e.com>, Mel Gorman <mgorman@...e.de>,
Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Davidlohr Bueso <dave@...olabs.net>,
Matthew Wilcox <willy@...radead.org>,
"Liam R. Howlett" <liam.howlett@...cle.com>,
David Vernet <void@...ifault.com>,
Juri Lelli <juri.lelli@...hat.com>,
Laurent Dufour <ldufour@...ux.ibm.com>,
Peter Xu <peterx@...hat.com>,
David Hildenbrand <david@...hat.com>,
Jens Axboe <axboe@...nel.dk>, mcgrof@...nel.org,
masahiroy@...nel.org, nathan@...nel.org, changbin.du@...el.com,
ytcoode@...il.com, Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Benjamin Segall <bsegall@...gle.com>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>, 42.hyeyoo@...il.com,
Alexander Potapenko <glider@...gle.com>,
Marco Elver <elver@...gle.com>, dvyukov@...gle.com,
Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <songmuchun@...edance.com>, arnd@...db.de,
jbaron@...mai.com, David Rientjes <rientjes@...gle.com>,
Minchan Kim <minchan@...gle.com>,
Kalesh Singh <kaleshsingh@...gle.com>,
kernel-team <kernel-team@...roid.com>,
Linux-MM <linux-mm@...ck.org>, iommu@...ts.linux.dev,
kasan-dev@...glegroups.com, io-uring@...r.kernel.org,
linux-arch@...r.kernel.org, xen-devel@...ts.xenproject.org,
linux-bcache@...r.kernel.org, linux-modules@...r.kernel.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 00/30] Code tagging framework and applications
On Wed, Aug 31, 2022 at 1:56 PM Yosry Ahmed <yosryahmed@...gle.com> wrote:
>
> On Wed, Aug 31, 2022 at 12:02 PM Kent Overstreet
> <kent.overstreet@...ux.dev> wrote:
> >
> > On Wed, Aug 31, 2022 at 12:47:32PM +0200, Michal Hocko wrote:
> > > On Wed 31-08-22 11:19:48, Mel Gorman wrote:
> > > > Whatever the case, asking for an explanation as to why equivalent
> > > > functionality cannot be created from ftrace/kprobe/eBPF/whatever is
> > > > reasonable.
> > >
> > > Fully agreed and this is especially true for a change this size
> > > 77 files changed, 3406 insertions(+), 703 deletions(-)
> >
> > In the case of memory allocation accounting, you flat out cannot do this with
> > ftrace - you could maybe do a janky version that isn't fully accurate, but it
> > would be much slower and more complicated both for the developer to understand
> > and debug and for the end user.
> >
> > But please, I invite anyone who's actually been doing this with ftrace to
> > demonstrate otherwise.
> >
> > Ftrace just isn't the right tool for the job here - we're talking about adding
> > per callsite accounting to some of the fastest fast paths in the kernel.
> >
> > And the size of the changes for memory allocation accounting are much more
> > reasonable:
> > 33 files changed, 623 insertions(+), 99 deletions(-)
> >
> > The code tagging library should exist anyways, it's been open coded half a dozen
> > times in the kernel already.
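
For anyone trying to picture what the per-callsite tagging pattern looks like,
here is a rough userspace sketch of the general idea: each use of a macro drops
a static tag struct into a dedicated ELF section, and reporting just walks that
section. The names (alloc_tag, TAGGED_ALLOC, the "alloc_tags" section) are made
up for illustration and are not the actual API from the series:

/* Hypothetical userspace sketch of per-callsite tagging. */
#include <stdio.h>
#include <stdlib.h>

struct alloc_tag {
	const char *file;
	int line;
	unsigned long bytes;	/* the real thing would use a percpu counter */
};

/* Bounds of the custom section, generated by the linker. */
extern struct alloc_tag __start_alloc_tags[];
extern struct alloc_tag __stop_alloc_tags[];

/* Each expansion creates one static tag and charges the size to it. */
#define TAGGED_ALLOC(size) ({						\
	static struct alloc_tag __tag					\
		__attribute__((used, section("alloc_tags"))) =		\
		{ __FILE__, __LINE__, 0 };				\
	__tag.bytes += (size);						\
	malloc(size);							\
})

int main(void)
{
	void *a = TAGGED_ALLOC(128);
	void *b = TAGGED_ALLOC(4096);

	/* Reporting walks every callsite without any registration step. */
	for (struct alloc_tag *t = __start_alloc_tags; t < __stop_alloc_tags; t++)
		printf("%s:%d allocated %lu bytes\n", t->file, t->line, t->bytes);

	free(a);
	free(b);
	return 0;
}

The point of the pattern is that the per-callsite state exists at compile time,
so the fast-path cost is a plain counter update with no lookup or hashing.
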
> >
> > And once we've got that, the time stats code is _also_ far simpler than doing it
> > with ftrace would be. If anyone here has successfully debugged latency issues
> > with ftrace, I'd really like to hear it. Again, for debugging latency issues you
> > want something that can always be on, and that's not cheap with ftrace - and
> > never mind the hassle of correlating start and end wait trace events, building
> > up histograms, etc. - that's all handled here.
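
To make the "always on, nothing to correlate" point concrete, a minimal
userspace analogue of per-callsite wait-time stats could look like the sketch
below (illustrative names only, not the series' API): start and end are taken
at the same callsite and folded straight into a static stats struct, so there
are no separate trace events to pair up afterwards.

/* Hypothetical userspace sketch of always-on per-callsite time stats. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

struct time_stats {
	const char *name;
	unsigned long count;
	unsigned long total_ns;
	unsigned long max_ns;
};

static unsigned long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (unsigned long)ts.tv_sec * 1000000000UL + ts.tv_nsec;
}

static void time_stats_update(struct time_stats *s, unsigned long start)
{
	unsigned long d = now_ns() - start;

	s->count++;
	s->total_ns += d;
	if (d > s->max_ns)
		s->max_ns = d;
}

/* One static stats struct per measured callsite. */
#define MEASURE(stats, expr) do {				\
	unsigned long __start = now_ns();			\
	expr;							\
	time_stats_update(stats, __start);			\
} while (0)

int main(void)
{
	static struct time_stats sleep_stats = { .name = "usleep(1000)" };

	for (int i = 0; i < 5; i++)
		MEASURE(&sleep_stats, usleep(1000));

	printf("%s: count=%lu avg=%luns max=%luns\n",
	       sleep_stats.name, sleep_stats.count,
	       sleep_stats.total_ns / sleep_stats.count, sleep_stats.max_ns);
	return 0;
}
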
> >
> > Cheap, simple, easy to use. What more could you want?
> >
>
> This is very interesting work! Do you have any data about the overhead
> this introduces, especially in a production environment? I am
> especially interested in memory allocations tracking and detecting
> leaks.
I had the numbers for my previous implementation, before we started
using the lazy percpu counters, but those would not apply to the new
implementation. I'll rerun the measurements and post the exact numbers
in a day or so.
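
For context, a rough userspace sketch of what "lazy" percpu counters means
here, as I understand the approach: the counter starts as a single shared
atomic and only allocates per-CPU (per-thread in this toy) slots once it has
seen enough updates, so callsites that are rarely or never hit pay almost
nothing. The names and the threshold below are made up for illustration:

/* Hypothetical userspace sketch of a lazily-promoted percpu counter. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

#define NR_SLOTS	8	/* stand-in for nr_cpu_ids */
#define PROMOTE_AFTER	64	/* updates before going percpu */

struct lazy_counter {
	_Atomic long		fast;	/* used until promotion */
	_Atomic unsigned	hits;
	_Atomic(long *)		slots;	/* NULL until promoted */
};

static void lazy_add(struct lazy_counter *c, unsigned slot, long v)
{
	long *s = atomic_load(&c->slots);

	if (s) {			/* promoted: each slot has one writer */
		s[slot % NR_SLOTS] += v;
		return;
	}

	atomic_fetch_add(&c->fast, v);
	if (atomic_fetch_add(&c->hits, 1) == PROMOTE_AFTER) {
		long *n = calloc(NR_SLOTS, sizeof(*n));
		long *expected = NULL;

		/* First promoter installs the per-slot storage. */
		if (!atomic_compare_exchange_strong(&c->slots, &expected, n))
			free(n);
	}
}

static long lazy_read(struct lazy_counter *c)
{
	long sum = atomic_load(&c->fast);
	long *s = atomic_load(&c->slots);

	if (s)
		for (int i = 0; i < NR_SLOTS; i++)
			sum += s[i];
	return sum;
}

int main(void)
{
	static struct lazy_counter c;

	for (int i = 0; i < 1000; i++)
		lazy_add(&c, 0, 1);
	printf("total = %ld\n", lazy_read(&c));
	return 0;
}
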
> (Sorry if you already posted this kind of data somewhere that I missed)