Message-ID: <20170206223808.GB6061@gmail.com>
Date:   Mon, 6 Feb 2017 23:38:08 +0100
From:   Ingo Molnar <mingo@...nel.org>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Mike Galbraith <efault@....de>,
        Oleg Nesterov <oleg@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 20/89] sched/headers, signals: Separate out
 task_struct::signal and task_struct::sighand types and accessors into
 <linux/sched/signal.h>


* Linus Torvalds <torvalds@...ux-foundation.org> wrote:

> On Mon, Feb 6, 2017 at 1:35 PM, Ingo Molnar <mingo@...nel.org> wrote:
> >
> >
> > BTW., most of the real work was in identifying and generating that "noise" - but
> > to reviewers those are obviously the least interesting bits.
> >
> > Also note that beyond the header splitup, the "noise" is actually what improves
> > kernel code the most all around.
> 
> Oh, absolutely agreed. The "noise" is obviously the most important
> part and the most challenging to actually generate (although maybe
> some automation tool could do a lot of this in a perfect world).

BTW., I used half-automation: an (imperfect...) script to recursively follow 
header dependencies to identify the places which I then changed manually, and a 
lot of testing to identify the cases the script missed (or where I made a manual 
mistake).
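
(A minimal sketch of such a recursive #include walk - not the actual script, 
and with the include/-only search path and fixed-size bookkeeping as 
simplifications of the illustration, not of the real thing - could look like 
this stand-alone program:)

    /*
     * hdrwalk.c - illustration only, not the script used for the series.
     *
     * Recursively follow #include lines starting from one header under
     * include/ and print every header reached, to get a first idea of
     * what a header drags in - and hence which users might be relying
     * on it only indirectly.
     */
    #include <stdio.h>
    #include <string.h>

    #define MAX_SEEN 4096

    static char seen[MAX_SEEN][128];
    static int nr_seen;

    static int already_seen(const char *name)
    {
            for (int i = 0; i < nr_seen; i++)
                    if (!strcmp(seen[i], name))
                            return 1;
            return 0;
    }

    static void walk(const char *name)
    {
            char path[256], line[512], inc[128];
            FILE *f;

            if (already_seen(name) || nr_seen == MAX_SEEN)
                    return;
            snprintf(seen[nr_seen++], sizeof(seen[0]), "%s", name);
            printf("%s\n", name);

            snprintf(path, sizeof(path), "include/%s", name);
            f = fopen(path, "r");
            if (!f)
                    return;         /* asm/ and generated headers: skipped */

            while (fgets(line, sizeof(line), f)) {
                    /* follow both #include <...> and #include "..." lines */
                    if (sscanf(line, " # include <%127[^>]>", inc) == 1 ||
                        sscanf(line, " # include \"%127[^\"]\"", inc) == 1)
                            walk(inc);
            }
            fclose(f);
    }

    int main(int argc, char **argv)
    {
            if (argc != 2) {
                    fprintf(stderr, "usage: %s linux/sched.h\n", argv[0]);
                    return 1;
            }
            walk(argv[1]);
            return 0;
    }

Something like './hdrwalk linux/sched.h' run from the tree root then dumps the 
transitive include set, as a starting point for the manual pass.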

I started out with a testing-based approach alone, but that did not scale very 
well, and I had the feeling it would not converge fast enough to lead to a 
palatable end result either.

Didn't want to invest too much into tooling, as I'd expect this to be a one-off 
effort for one of the worst header dependency problems we have in the kernel 
today. Fortunately, most of our other headers are in much better shape!

> But the noise is also the one where verification is pretty much purely
> about "does it still compile", so from a human angle, once the noise
> has been generated, it's not actually all that interesting.

Yeah, and the repetitive diffs over hundreds of files are actively hindering 
review.

> It's kind of like being NP-complete: verification of the noise is
> "trivial" and not all that interesting - once the solution has been
> found.
> 
> > Will post them in 4 groups of 40 patches each or so - would that work for you?
> 
> I suspect that will be a lot easier to look at (with at least one
> series basically being "there's no point in a human even looking at
> it, other than to verify that it's purely #include changes").
> 
> It would be good to make sure it also ends up bisecting nicely,
> because *if* some problem happens, it would be nice if the bisect then
> clearly points to either "oh, some mistake must have happened during
> code movement" vs "ooh, some really subtle issue with a missed include
> causing some odd fallback code".

Yeah, I tested it intensively as I was developing it, so barring 
merge/backmerge/conflict-resolution artifacts that happen during rebase (and they 
did happen ...) there shouldn't be any conceptual bisectability hiccups in the 
lineup. It wasn't a 'break it and make it work at the end of the series' 
development flow, it was a 'make it work at every commit' approach.

I'll do some bisection validation on the final result to make sure bisectability 
did not bitrot during all the restructuring. Problem is, even on a beefy 60-core 
server with 256 GB RAM a full cross-build test runs for half an hour just for 
defconfigs - so a 150-patch series bisectability test would run for over 3 days 
...

What I can do is some more careful manual review, plus re-testing a couple of 
random points in the middle of the series - and of course testing the final 
result. That should give reasonable confidence that the good bisectability 
validated in earlier versions of the patch-set carried over to the final 
series as well.

> Because we *do* end up having code in C files that does things like
> 
>    #ifndef ARCH_HAS_XYZ
>    static inline void my_generic_xyz_implementation(...)
>    ...
>    #endif
> 
> so you can end up in the situation that if some specific header file
> wasn't included correctly, the code will still work, but now it will
> use the generic definition rather than the specific one it was
> supposed to use.

Yeah, indeed.
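
To spell that failure mode out with a made-up example (ARCH_HAS_XYZ and
my_generic_xyz_implementation() are from your snippet; <asm/xyz.h>, xyz() and
some_driver_init() are hypothetical): if the header that used to arrive
indirectly no longer gets pulled in, the #ifndef silently selects the generic
fallback and the file still builds cleanly:

    /* some-driver.c */
    #include <linux/kernel.h>
    /*
     * #include <asm/xyz.h> - lost in the header split-up; it used to be
     * pulled in indirectly, and it is what defines ARCH_HAS_XYZ ...
     */

    #ifndef ARCH_HAS_XYZ
    /* ... so this generic fallback is now compiled in, silently: */
    static inline void my_generic_xyz_implementation(void)
    {
            /* slower or subtly different generic code path */
    }
    #define xyz()   my_generic_xyz_implementation()
    #endif

    void some_driver_init(void)
    {
            xyz();  /* builds fine, but no longer the arch-optimized version */
    }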

> So these re-organizations do have the potential to cause odd "silent"
> breakage. Most breakage by far should presumably be of the type "it
> doesn't compile and it's very obvious that the header file movement
> missed something", but we *could* have that kind of subtle "it
> compiles, and _almost_ even works, but has a cornercase that is
> broken" that could be fingered by a bisect.
> 
> That's when a good split of patches would be really nice to have too,
> where a patch does either code movement or does header file
> organization movement, but not both.
> 
> So it's not _just_ about actually looking at the patches and trying to
> make sense of them.

Ok, I'm sold on this! This bisectability argument makes me feel much better about 
the resulting 150+ patch-count.

Thanks,

	Ingo
