Message-Id: <8EB7F2CE-2C8E-47EA-817F-6DE2D95F0A8B@gmail.com>
Date: Mon, 5 Sep 2022 11:44:55 -0700
From: Nadav Amit <nadav.amit@...il.com>
To: Mel Gorman <mgorman@...e.de>
Cc: Kent Overstreet <kent.overstreet@...ux.dev>,
Peter Zijlstra <peterz@...radead.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>,
Vlastimil Babka <vbabka@...e.cz>,
Johannes Weiner <hannes@...xchg.org>, roman.gushchin@...ux.dev,
dave@...olabs.net, Matthew Wilcox <willy@...radead.org>,
liam.howlett@...cle.com, void@...ifault.com, juri.lelli@...hat.com,
ldufour@...ux.ibm.com, Peter Xu <peterx@...hat.com>,
David Hildenbrand <david@...hat.com>,
Jens Axboe <axboe@...nel.dk>, mcgrof@...nel.org,
masahiroy@...nel.org, nathan@...nel.org, changbin.du@...el.com,
ytcoode@...il.com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, Steven Rostedt <rostedt@...dmis.org>,
bsegall@...gle.com, bristot@...hat.com, vschneid@...hat.com,
cl@...ux.com, penberg@...nel.org, iamjoonsoo.kim@....com,
42.hyeyoo@...il.com, glider@...gle.com,
Marco Elver <elver@...gle.com>, dvyukov@...gle.com,
Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <songmuchun@...edance.com>,
Arnd Bergmann <arnd@...db.de>, jbaron@...mai.com,
David Rientjes <rientjes@...gle.com>, minchan@...gle.com,
kaleshsingh@...gle.com, kernel-team@...roid.com,
Linux MM <linux-mm@...ck.org>, iommu@...ts.linux.dev,
kasan-dev@...glegroups.com, io-uring@...r.kernel.org,
linux-arch <linux-arch@...r.kernel.org>,
xen-devel@...ts.xenproject.org, linux-bcache@...r.kernel.org,
linux-modules@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 00/30] Code tagging framework and applications

On Aug 31, 2022, at 3:19 AM, Mel Gorman <mgorman@...e.de> wrote:
> On Wed, Aug 31, 2022 at 04:42:30AM -0400, Kent Overstreet wrote:
>> On Wed, Aug 31, 2022 at 09:38:27AM +0200, Peter Zijlstra wrote:
>>> On Tue, Aug 30, 2022 at 02:48:49PM -0700, Suren Baghdasaryan wrote:
>>>> ===========================
>>>> Code tagging framework
>>>> ===========================
>>>> A code tag is a structure, generated at compile time, that identifies a
>>>> specific location in the source code and can be embedded in an
>>>> application-specific structure. Several applications of code tagging are
>>>> included in this RFC, such as memory allocation tracking, dynamic fault
>>>> injection, latency tracking and improved error code reporting.
>>>>
>>>> Basically, it takes the old trick of "define a special ELF section for
>>>> objects of a given type so that we can iterate over them at runtime" and
>>>> creates a proper library for it.
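
For readers who have not seen the trick before, here is a minimal userspace
sketch of the "special ELF section" idea. The names (DEFINE_CODE_TAG, the
"code_tags" section) are illustrative only and are not the RFC's actual API:

/*
 * Each use of DEFINE_CODE_TAG() drops one descriptor into a dedicated
 * ELF section.  For any section whose name is a valid C identifier the
 * linker provides __start_<section>/__stop_<section> symbols, so the
 * descriptors can be walked at runtime.
 */
#include <stdio.h>

struct code_tag {
	const char *file;
	const char *func;
	int line;
};

#define DEFINE_CODE_TAG(name)						\
	static struct code_tag name					\
	__attribute__((used, section("code_tags"))) = {			\
		.file = __FILE__, .func = __func__, .line = __LINE__,	\
	}

extern struct code_tag __start_code_tags[];	/* linker-generated */
extern struct code_tag __stop_code_tags[];

static void annotated_function(void)
{
	/* The tag is emitted at compile time, whether or not this runs. */
	DEFINE_CODE_TAG(tag_here);
}

int main(void)
{
	annotated_function();

	/* Iterate over every tag in the binary. */
	for (struct code_tag *t = __start_code_tags;
	     t < __stop_code_tags; t++)
		printf("tag at %s:%d in %s()\n", t->file, t->line, t->func);
	return 0;
}

The kernel-side framework presumably wraps the same idea in macros and also
handles module load/unload, which the sketch above obviously does not.
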
>>>
>>> I might be super dense this morning, but what!? I've skimmed through the
>>> set and I don't think I get it.
>>>
>>> What does this provide that ftrace/kprobes don't already allow?
>>
>> You're kidding, right?
>
> It's a valid question. From the description, its main addition that would
> be hard to do with ftrace or kprobes is catching where an error code is
> returned. A secondary addition would be catching all historical state and
> not just state since the tracing started.
>
> It's also unclear *who* would enable this. It looks like it would mostly
> have value during the development stage of an embedded platform to track
> kernel memory usage on a per-application basis in an environment where it
> may be difficult to set up tracing and tracking. Would it ever be enabled
> in production? Would a distribution ever enable this? If it's enabled, the
> overhead cannot be disabled at run or boot time, so anyone enabling this
> would carry the cost without ever necessarily consuming the data.
>
> It might be an ease-of-use thing. Gathering the information from traces
> is tricky and would need combining multiple different elements, which is
> development effort, but not impossible.
>
> Whatever the case, asking for an explanation as to why equivalent
> functionality cannot be created from ftrace/kprobe/eBPF/whatever is
> reasonable.

I would note that I have a solution in the making (which pretty much works)
for this matter, and it does not require any kernel changes. It produces a
call stack that leads to the code that caused the syscall failure.

The way it works is by using seccomp to trap syscall failures, and then
setting ftrace function filters and kprobes on conditional branches,
indirect branch targets and function returns.

Using symbolic execution, backtracking is performed and the condition that
led to the failure is then pinpointed.
I hope to share the code soon.
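
To give a feel for the seccomp half, below is a simplified standalone
sketch: a filter marks the interesting syscall with SECCOMP_RET_TRACE and a
ptrace()-based parent reads the return value at syscall exit to catch the
failure. This is one way of doing it (x86-64 only as written, error handling
omitted); it is not the actual tool described above:

/* Build: gcc -o trapfail trapfail.c  (x86-64 only in this sketch) */
#include <fcntl.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <signal.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <sys/prctl.h>
#include <sys/ptrace.h>
#include <sys/syscall.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

static void install_filter(void)
{
	/* Mark openat() for tracing; let every other syscall through. */
	struct sock_filter filter[] = {
		BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
			 offsetof(struct seccomp_data, nr)),
		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_openat, 0, 1),
		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_TRACE),
		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
	};
	struct sock_fprog prog = {
		.len = sizeof(filter) / sizeof(filter[0]),
		.filter = filter,
	};

	prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
	prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
}

int main(void)
{
	pid_t pid = fork();

	if (pid == 0) {				/* tracee */
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);
		install_filter();
		raise(SIGSTOP);			/* let the tracer set options */
		openat(AT_FDCWD, "/no/such/file", O_RDONLY);	/* will fail */
		_exit(0);
	}

	int status;
	waitpid(pid, &status, 0);		/* initial SIGSTOP */
	ptrace(PTRACE_SETOPTIONS, pid, NULL,
	       (void *)(long)PTRACE_O_TRACESECCOMP);
	ptrace(PTRACE_CONT, pid, NULL, NULL);

	for (;;) {
		waitpid(pid, &status, 0);
		if (WIFEXITED(status))
			break;
		if ((status >> 8) ==
		    (SIGTRAP | (PTRACE_EVENT_SECCOMP << 8))) {
			/* Run the marked syscall to its exit stop ... */
			ptrace(PTRACE_SYSCALL, pid, NULL, NULL);
			waitpid(pid, &status, 0);

			/* ... and read the (negative-errno) return value. */
			struct user_regs_struct regs;
			ptrace(PTRACE_GETREGS, pid, NULL, &regs);
			long ret = (long)regs.rax;
			if (ret < 0 && ret > -4096)
				printf("syscall %lld failed: %s\n",
				       (long long)regs.orig_rax,
				       strerror((int)-ret));
		}
		ptrace(PTRACE_CONT, pid, NULL, NULL);
	}
	return 0;
}

The interesting half (the ftrace/kprobes instrumentation and the symbolic
backtracking) is not shown here.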