Date:   Tue, 4 Jan 2022 12:51:56 -0500
From:   Arnd Bergmann <arnd@...db.de>
To:     Ingo Molnar <mingo@...nel.org>
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        linux-arch <linux-arch@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        "David S. Miller" <davem@...emloft.net>,
        Ard Biesheuvel <ardb@...nel.org>,
        Josh Poimboeuf <jpoimboe@...hat.com>,
        Jonathan Corbet <corbet@....net>,
        Al Viro <viro@...iv.linux.org.uk>
Subject: Re: [PATCH 0000/2297] [ANNOUNCE, RFC] "Fast Kernel Headers" Tree -v1:
 Eliminate the Linux kernel's "Dependency Hell"

On Mon, Jan 3, 2022 at 6:12 AM Ingo Molnar <mingo@...nel.org> wrote:
> * Greg Kroah-Hartman <gregkh@...uxfoundation.org> wrote:
> > > Before going into details about how this tree solves 'dependency hell'
> > > exactly, here's the current kernel build performance gain with
> > > CONFIG_FAST_HEADERS=y enabled, (and with CONFIG_KALLSYMS_FAST=y enabled as
> > > well - see below), using a stock x86 Linux distribution's .config with all
> > > modules built into the vmlinux:
> > >
> > >   #
> > >   # Performance counter stats for 'make -j96 vmlinux' (3 runs):
> > >   #
> > >   # (Elapsed time in seconds):
> > >   #
> > >
> > >   v5.16-rc7:            231.34 +- 0.60 secs, 15.5 builds/hour    # [ vanilla baseline ]
> > >   -fast-headers-v1:     129.97 +- 0.51 secs, 27.7 builds/hour    # +78.0% improvement
> > >
> > > Or in terms of CPU time utilized:
> > >
> > >   v5.16-rc7:            11,474,982.05 msec cpu-clock   # 49.601 CPUs utilized
> > >   -fast-headers-v1:      7,100,730.37 msec cpu-clock   # 54.635 CPUs utilized   # +61.6% improvement
> >
> > Speed up is very impressive, nice job!
>
> Thanks! :-)

I've done some work in this area in the past, but didn't take it far enough
to get results like this. The best I saw was a 30% improvement with clang,
which tends to be more sensitive to header file bloat than gcc, as it does
more detailed syntax checking before eliminating dead code.

Did you try both gcc and clang for this?

> > That issue aside, I took a glance at the tree, and overall it looks like
> > a lot of nice cleanups.  Most of these can probably go through the
> > various subsystem trees, after you split them out, for the "major" .h
> > cleanups.  Is that something you are planning on doing?
>
> Yeah, I absolutely plan on doing that too:
>
> - About ~70% of the commits can be split up & parallelized through
>   maintainer trees.
>
> - With the exception of the untangling of sched.h, per_task and the
>   "Optimize Headers" series, where a lot of patches are dependent on each
>   other. These are actually needed to get any measurable benefits from this
>   tree (!). We can do these through the scheduler tree, or through the
>   dedicated headers tree I posted.
>
> The latter monolithic series is pretty much unavoidable: it's the result of
> 30 years of coupling a lot of kernel subsystems to task_struct via embedded
> structs & other complex types, which needed quite a bit of effort to
> untangle, and that untangling needed to happen in order.
>
> Do these plans sound good to you?

I haven't had a chance to look at your tree yet, as I'm still on vacation
without access to my normal workstation. I would like to run my own
scripts for analyzing the header dependencies on it after I get back
next week.

From what I could tell, linux/sched.h was not the only such problem:
I saw similarly bad issues with linux/fs.h (which is what I posted
about in November/December), linux/mm.h and linux/netdevice.h at the
high level, and among the low-level headers there are huge issues with
linux/atomic.h, linux/mutex.h, linux/pgtable.h etc. I expect that you
have addressed these as well, but I'd like to make sure that your
changes are reasonably complete on arm32 and arm64, to avoid
having to do the big cleanup more than once.

My approach to the large mid-level headers is somewhat different:
rather than avoiding including them entirely, I would like to split
the structure definitions out from the inline functions. Linus didn't
really like my approach, but I suspect he'll have similar concerns
about your solution for linux/sched.h, especially if we end up
applying the same hack to other commonly used structures (sk_buff,
mm_struct, super_block). I should be able to come up with a less
handwavy reply after I've actually studied your approach more closely.
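
Roughly what I have in mind, sketched with a made-up "foo" subsystem
rather than any real kernel header (the names here are purely
illustrative):

/* foo_types.h - structure layout only, with minimal dependencies */
#ifndef _FOO_TYPES_H
#define _FOO_TYPES_H

#include <linux/types.h>        /* for u32 */

#define FOO_READY       0x1

struct foo_dev;                 /* forward declaration, no extra include */

struct foo_state {
        u32             flags;
        struct foo_dev  *dev;   /* pointer member, so the forward decl is enough */
};

#endif /* _FOO_TYPES_H */

/* foo.h - inline helpers, included only by code that actually calls them */
#ifndef _FOO_H
#define _FOO_H

#include "foo_types.h"
#include "foo_dev.h"            /* full struct foo_dev is only needed here */

static inline bool foo_is_ready(const struct foo_state *s)
{
        return s->flags & FOO_READY;
}

#endif /* _FOO_H */

Files that only embed or pass around a struct foo_state can then
include foo_types.h alone, and only the callers of the inline helpers
pay for foo.h and whatever foo_dev.h drags in.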

Most of the patches should be the same either way (adding back
missing includes to drivers, and cleaning up commonly included
headers to avoid the deep nesting); the interesting bit will be
how to properly define the larger structures without pulling
in the rest of the world.
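
The pattern I'd like to end up with there (again only a sketch with
hypothetical names, not code from either tree) is to keep nothing but
pointers to the heavyweight types in the widely included header, so
forward declarations are sufficient:

/* bar.h - widely included, so it must not pull in the full definitions */
#ifndef _BAR_H
#define _BAR_H

struct mm_struct;               /* forward declarations are enough for */
struct super_block;             /* pointer members and prototypes      */

struct bar {
        struct mm_struct        *mm;
        struct super_block      *sb;
};

void bar_attach(struct bar *b, struct mm_struct *mm);

#endif /* _BAR_H */

Only bar.c, which actually dereferences those pointers, then needs the
full linux/mm_types.h and linux/fs.h, instead of every user of
struct bar.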

         Arnd
