Message-ID: <20170713131500.uj2kns7lvidjtnix@treble>
Date: Thu, 13 Jul 2017 08:15:00 -0500
From: Josh Poimboeuf <jpoimboe@...hat.com>
To: Andi Kleen <andi@...stfloor.org>
Cc: x86@...nel.org, linux-kernel@...r.kernel.org,
live-patching@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andy Lutomirski <luto@...nel.org>, Jiri Slaby <jslaby@...e.cz>,
Ingo Molnar <mingo@...nel.org>,
"H. Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>,
Mike Galbraith <efault@....de>
Subject: Re: [PATCH v3 00/10] x86: ORC unwinder (previously undwarf)
On Wed, Jul 12, 2017 at 09:29:17PM -0700, Andi Kleen wrote:
> On Wed, Jul 12, 2017 at 05:47:59PM -0500, Josh Poimboeuf wrote:
> > On Wed, Jul 12, 2017 at 03:30:31PM -0700, Andi Kleen wrote:
> > > Josh Poimboeuf <jpoimboe@...hat.com> writes:
> > > >
> > > > The ORC data format does have a few downsides compared to DWARF. The
> > > > ORC unwind tables take up ~1MB more memory than DWARF eh_frame tables.
> > > >
> > > Can we have an option to just use dwarf instead? For people
> > > who don't want to waste a MB+ to solve a problem that doesn't
> > > exist (as proven by many years of openSUSE kernel experience).
> > >
> > > As far as I can tell this whole thing has only downsides compared
> > > to the dwarf unwinder that was earlier proposed. I don't see
> > > a single advantage.
> >
> > Improved speed, reliability, maintainability. Are those not advantages?
>
> Ok. We'll see how it works out.
>
> The memory overhead is quite bad though. You're basically undoing many
> years of efforts to shrink kernel text. I hope this can be still
> done better.
If we're talking *text*, ORC actually shrinks it: text size drops by ~3%
because frame pointers can be disabled.
As far as the data size goes, is anyone *truly* impacted by that extra
1MB or so? If you're enabling a DWARF/ORC unwinder, you're already
signing up for a few extra megs anyway.
I do have a vague idea about how to reduce the data size, if/when the
size becomes a problem. Basically there's a *lot* of duplication in the
ORC data:
$ tools/objtool/objtool orc dump vmlinux | wc -l
311095
$ tools/objtool/objtool orc dump vmlinux |cut -d' ' -f2- |sort |uniq |wc -l
345
So that's over 300,000 6-byte entries, of which only 345 are unique.
There should be a way to compress that. However, it will probably
require sacrificing some combination of speed and simplicity.
--
Josh