Date:   Fri, 24 Feb 2017 08:43:25 +0100
From:   Ingo Molnar <mingo@...nel.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Arjan van de Ven <arjan@...ux.intel.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        "H. Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org,
        torvalds@...ux-foundation.org, bp@...en8.de, jpoimboe@...hat.com,
        richard.weinberger@...il.com,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] x86: Implement __WARN using UD0


* Peter Zijlstra <peterz@...radead.org> wrote:

> 0000000000000016 <refcount_add_not_zero>:
>   16:   55                      push   %rbp
>   17:   8b 16                   mov    (%rsi),%edx
>   19:   41 83 c8 ff             or     $0xffffffff,%r8d
>   1d:   48 89 e5                mov    %rsp,%rbp
>   20:   85 d2                   test   %edx,%edx
>   22:   74 25                   je     49 <refcount_add_not_zero+0x33>
>   24:   83 fa ff                cmp    $0xffffffff,%edx
>   27:   74 1c                   je     45 <refcount_add_not_zero+0x2f>
>   29:   89 d1                   mov    %edx,%ecx
>   2b:   89 d0                   mov    %edx,%eax
>   2d:   01 f9                   add    %edi,%ecx
>   2f:   41 0f 42 c8             cmovb  %r8d,%ecx
>   33:   f0 0f b1 0e             lock cmpxchg %ecx,(%rsi)
>   37:   39 c2                   cmp    %eax,%edx
>   39:   74 04                   je     3f <refcount_add_not_zero+0x29>
>   3b:   89 c2                   mov    %eax,%edx
>   3d:   eb e1                   jmp    20 <refcount_add_not_zero+0xa>
>   3f:   ff c1                   inc    %ecx
>   41:   75 02                   jne    45 <refcount_add_not_zero+0x2f>
>   43:   0f ff                   (bad)  
>   45:   b0 01                   mov    $0x1,%al
>   47:   eb 02                   jmp    4b <refcount_add_not_zero+0x35>
>   49:   31 c0                   xor    %eax,%eax
>   4b:   5d                      pop    %rbp
>   4c:   c3                      retq  
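
( For reference, the C this roughly corresponds to -- a simplified sketch,
  not the exact lib/refcount.c source; refcount_t and the atomic helpers as
  in <linux/refcount.h> and <linux/atomic.h>: )

    bool refcount_add_not_zero(unsigned int i, refcount_t *r)
    {
            unsigned int old, new, val = atomic_read(&r->refs);

            for (;;) {
                    if (!val)                       /* 20/22: test + je    */
                            return false;
                    if (val == UINT_MAX)            /* 24/27: cmp -1 + je  */
                            return true;

                    new = val + i;                  /* 2d: add             */
                    if (new < val)                  /* 2f: cmovb: saturate */
                            new = UINT_MAX;

                    old = atomic_cmpxchg_relaxed(&r->refs, val, new);  /* 33: lock cmpxchg */
                    if (old == val)
                            break;
                    val = old;                      /* 3b/3d: retry loop   */
            }

            WARN_ON(new == UINT_MAX);               /* 3f/41/43: inc + jne + ud0 */

            return true;
    }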

BTW., one thing that is probably not represented fairly by this example is the 
much lower register-clobbering impact of trap-driven WARN_ON() versus function 
call driven WARN_ON(): the trap will preserve all registers, while a call driven 
slow path will clobber all the caller-saved registers.

In this example this does not show up much because the WARN_ON() is done at the 
end of the function. In functions where WARN()s are emitted earlier the size 
advantage of the slow path should be even bigger.
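
E.g. in a made-up early-WARN case like this (the function is purely
hypothetical, not from the patch):

    int f(int x, int y)
    {
            WARN_ON(x < 0);         /* early warning               */
            return x * y;           /* x and y are still live here */
    }

a call-based WARN_ON() slow path is an out-of-line call, so the compiler has 
to assume rax/rcx/rdx/rsi/rdi/r8-r11 are destroyed on that path and tends to 
park live values in callee-saved registers or on the stack, while the ud0 
variant does not constrain register allocation at all.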

In any case, I like your patch; I believe what counts is the .text size reduction:

  text            data     bss    dec              hex    filename                     size

  10503189        4442584  843776 15789549         f0eded defconfig-build/vmlinux.pre  25242904
  10483798        4442584  843776 15770158         f0a22e defconfig-build/vmlinux.post 25243504
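
That's:

  10503189 - 10483798 = 19391 bytes of .text saved, i.e. ~0.18%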

Put differently: for essentially zero RAM and runtime cost we've effectively 
bought ourselves a 0.2% larger (instruction) cache (!!). That's quite significant, 
IMHO, as lots of kernel-intensive workloads involve many small functions.

The only high-level question is whether we trust the trap machinery to generate 
WARN_ON()s. I believe we do.

BTW.: why not use INT3 instead of all these weird #UD opcodes? It's a single-byte 
opcode and we can do a quick exception table search in do_int3(). This way we'll 
also have irqs disabled, which might help get the message out before any irq 
handler comes in and muddies the waters.
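
Something like this on the handler side (a rough, untested sketch, assuming 
the WARN site still records its file/line in a __bug_table entry and only 
the trigger instruction changes from ud0 to int3):

    /* sketch only: in do_int3(), when nothing else (e.g. kprobes)
     * claims the trap.  #BP pushes the address of the *next*
     * instruction, so the bug table is searched at ip - 1: */
    if (report_bug(regs->ip - 1, regs) == BUG_TRAP_TYPE_WARN)
            return;         /* regs->ip already points past the int3 */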

In a sense WARN_ON()s and BUG_ON()s can be considered permanently installed 
in-line kprobes, with a special, built-in handler.

BTW. #2: side note, GCC generated crap code here. Why didn't it do:

>   3f:   ff c1                   inc    %ecx
>   41:   75 02                   jne    55 <refcount_add_not_zero+0x2f>
>   45:   b0 01                   mov    $0x1,%al
>   4b:   5d                      pop    %rbp
>   4c:   c3                      retq  
>
>   49:   31 c0                   xor    %eax,%eax
>   4b:   5d                      pop    %rbp
>   4c:   c3                      retq  
>
>   55:   0f ff                   (bad)  

?

It's hand-edited so the offsets are wrong, but the idea is that for the same 14 
bytes we get a straight fall-through fast path to the RETQ, and one JMP less 
executed in the 'return 1' case. Both the 'return 0' case and the #UD are in the 
tail part of the function.

Thanks,

	Ingo
