Message-ID: <20080504145430.GA23137@Krystal>
Date:	Sun, 4 May 2008 10:54:30 -0400
From:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To:	"H. Peter Anvin" <hpa@...or.com>
Cc:	Ingo Molnar <mingo@...e.hu>, akpm@...ux-foundation.org,
	linux-kernel@...r.kernel.org, "Frank Ch. Eigler" <fche@...hat.com>
Subject: Re: [patch 0/2] Immediate Values - jump patching update

* H. Peter Anvin (hpa@...or.com) wrote:
> Mathieu Desnoyers wrote:
>> I would also like to point out that maintaining a _separated_ piece of
>> code for each instrumentation site which would heavily depend on the
>> inner kernel data structures seems like a maintenance nightmare.
>
> Obviously doing this by hand is insane.  That was not my thought.
>

Great :)

>> I would be happy with a solution that doesn't depend on this gigantic
>> DWARF information and can be included in the kernel build process. See,
>> I think tracing is, primarily, a facility that the kernel should provide
>> to users so they can tune and find problems in their own applications.
>> From this POV, it would make sense to consider tracing as part of the
>> kernel code itself, not as a separated, kernel debugging oriented piece
>> of code. If you require per-site dynamic pieces of code, you are only
>> adding to the complexity of such a tracer. Actually, an active tracer
>> would trash the i-cache quite heavily due to these per-site pieces of
>> code. Given that users want a tracer that disturbs as little as
>> possible the normal system behavior, I don't think this "per-site"
>> pieces of code approach is that good.
>
> That's funny, given that's exactly what you have now.
>

The per-site pieces of code are only there to do the stack setup. I
really wonder if we could do this more efficiently from DWARF info.

> DWARF information is the way you get this stuff out of the compiler. That 
> is what it's *there for*.  If you don't want to keep it around you can 
> distill out the information you need and then remove it.  However, as I 
> have said about six times now:

About DWARF: I agree with Ingo that we might not want to depend on
information that is normally only expected to be correct for debug use
in a piece of infrastructure that is not limited to debugging
situations. Continuous performance monitoring is one of the use cases I
have in mind.

Moreover, depending on DWARF info would require us to write
architecture-specific code from the beginning. The markers are designed
so that any new architecture can start with the "architecture
agnostic" version of the markers and implement the optimizations
later. With about 27 architectures supported by the Linux kernel, I
think this approach makes sense. The number of years it took to port
something as "simple" as kprobes to 8 out of those 27 architectures
speaks for itself.

>
> THE RIGHT WAY TO DO THIS IS WITH COMPILER SUPPORT.
>

We totally agree on this for the jump-patching optimization. If the
jump-patching approach I proposed is too far-fetched, and if reading a
variable from memory at each tracing site is too expensive, I would
propose using the standard "immediate values" flavor until gcc gives us
that kind of support for patchable jump instructions.

> All these problems are because you're trying to do something behind the back 
> of the compiler, but not *completely* so.
>

Using the compiler for the markers (I am not talking about immediate
values, which are an optimization) is what gives us the ability to
provide an architecture-agnostic version. The 19 architectures which
still lack kprobes support tell me that it isn't such a bad way to go.

>> Instruction cache bloat inspection :
>> If a code region is placed with cache cold instructions (unlikely
>> branches), it should not increase the cache impact, since although we
>> might use one more cache line, it won't be often loaded in cache because
>> all the code that shares this cache line is unlikely.
>
> This is somewhat nice in theory; I've found that gcc tends to interlace 
> them pretty heavily and so the cache interference even of gcc "out of line" 
> code is sizable.

Following your own suggestion, why don't we fix gcc and make it
interleave unlikely blocks less heavily with hot blocks?

> Furthermore, modern CPUs often speculatively fetch *both* 
> branches of a conditional.
>
> This is actually the biggest motivation for patching static branches.
>

Agreed. I'd like to find some info about which microarchitectures you
have in mind. Intel Core 2?


>> I therefore think that looking only at code size is misleading when
>> considering the cache impact of markers, since they have been designed
>> to put the bytes as far away as possible from cache-hot memory.
>
> Nice theory.  Doesn't work in practice as long as you rely on gcc 
> unlikely().
>
> 	-hpa

Let's fix gcc! ;)

Cheers,

Mathieu

-- 
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
