Message-ID: <aa139c18-1b04-2c20-2e22-89d74503b3cf@caviumnetworks.com>
Date:   Mon, 27 Feb 2017 11:59:50 -0800
From:   David Daney <ddaney@...iumnetworks.com>
To:     Jason Baron <jbaron@...mai.com>, rostedt@...dmis.org
Cc:     linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        linux-mips@...ux-mips.org, linuxppc-dev@...ts.ozlabs.org,
        Ingo Molnar <mingo@...nel.org>,
        Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Paul Mackerras <paulus@...ba.org>,
        Michael Ellerman <mpe@...erman.id.au>,
        Anton Blanchard <anton@...ba.org>,
        Rabin Vincent <rabin@....in>,
        Russell King <linux@...linux.org.uk>,
        Ralf Baechle <ralf@...ux-mips.org>,
        Chris Metcalf <cmetcalf@...lanox.com>,
        Zhigang Lu <zlu@...hip.com>
Subject: Re: [PATCH] jump_label: align jump_entry table to at least 4-bytes

On 02/27/2017 11:18 AM, Jason Baron wrote:
>
>
> On 02/27/2017 01:57 PM, David Daney wrote:
>> On 02/27/2017 10:49 AM, Jason Baron wrote:
>>> The core jump_label code makes use of the 2 lower bits of the
>>> static_key::[type|entries|next] field. Thus, ensure that the jump_entry
>>> table is at least 4-byte aligned.
>>>
>> [...]
>>> diff --git a/arch/mips/include/asm/jump_label.h b/arch/mips/include/asm/jump_label.h
>>> index e77672539e8e..243791f3ae71 100644
>>> --- a/arch/mips/include/asm/jump_label.h
>>> +++ b/arch/mips/include/asm/jump_label.h
>>> @@ -31,6 +31,7 @@ static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
>>>      asm_volatile_goto("1:\t" NOP_INSN "\n\t"
>>>          "nop\n\t"
>>>          ".pushsection __jump_table,  \"aw\"\n\t"
>>> +        ".balign 4\n\t"
>>>          WORD_INSN " 1b, %l[l_yes], %0\n\t"
>>>          ".popsection\n\t"
>>>          : :  "i" (&((char *)key)[branch]) : : l_yes);
>>> @@ -45,6 +46,7 @@ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch)
>>>      asm_volatile_goto("1:\tj %l[l_yes]\n\t"
>>>          "nop\n\t"
>>>          ".pushsection __jump_table,  \"aw\"\n\t"
>>> +        ".balign 4\n\t"
>>>          WORD_INSN " 1b, %l[l_yes], %0\n\t"
>>>          ".popsection\n\t"
>>>          : :  "i" (&((char *)key)[branch]) : : l_yes);
>>
>>
>> I will speak only for the MIPS part.
>>
>> If the section is not already properly aligned, this change will add
>> padding, which is probably not what we want.
>>
>> Have you ever seen a problem with misalignment in the real world?
>>
>
> Hi,
>
> Yes, there was a WARN_ON() reported on POWER here:
>
> https://lkml.org/lkml/2017/2/19/85
>
> The WARN_ON() triggers if either of the two low bits is set. More
> specifically, it's coming from 'static_key_set_entries()' in the
> following patch, which was recently added to linux-next:
>
> https://lkml.org/lkml/2017/2/3/558
>
> So this was not seen on mips, but I included in this patch all the
> arches that I thought might be affected.
>
>> If so, I think a better approach might be to set properties on the
>> __jump_table section to force the proper alignment, or do something in
>> the linker script.
>>
>
> So in include/asm-generic/vmlinux.lds.h, we are already setting
> 'ALIGN(8)', but that does not appear to be sufficient for POWER...

Yes, I just looked at that.  The ALIGN(8), combined with the fact that 
on MIPS, WORD_INSN will always be of size 4 or 8 (32-bit vs. 64-bit 
kernel), seems to make any additional .balign completely unnecessary.
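
For reference, the relevant bit of include/asm-generic/vmlinux.lds.h
is, give or take the VMLINUX_SYMBOL() wrappers, just:

	. = ALIGN(8);
	__start___jump_table = .;
	*(__jump_table)
	__stop___jump_table = .;

so the start of the merged table is already 8-byte aligned on every
architecture; only the contributions from the individual objects could
end up placed badly.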


Really, I think you need to figure out why the alignments are getting 
screwed up on powerpc.  The struct jump_entry entries are on a stride 
of 12 (for a 32-bit kernel) or 24 (for a 64-bit kernel); any alignment 
padding added between entries by .balign 4 would surely screw that up.
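
To spell that out, the MIPS entry layout is essentially:

	typedef u64 jump_label_t;	/* u32 on a 32-bit kernel */

	struct jump_entry {
		jump_label_t code;	/* address of the 1: label    */
		jump_label_t target;	/* address of the l_yes label */
		jump_label_t key;	/* address of the static_key  */
	};

Three words per entry, so 12 or 24 bytes, and the core code walks
__jump_table as a plain array of these.  Padding inserted between two
entries would shift every subsequent entry and the walk would read
garbage.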

This patch seems like it is papering over a bigger issue that is not 
understood.

I think the proper fix would be to track down whatever is causing the 
powerpc JUMP_ENTRY_TYPE to misbehave.
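
The only invariant the core code needs is that the address of each
struct jump_entry in the table has its two low bits clear, since those
bits are reused as flags.  If I am reading Jason's earlier patch right,
the check that fires is something like:

	static void static_key_set_entries(struct static_key *key,
					   struct jump_entry *entries)
	{
		WARN_ON_ONCE((unsigned long)entries & JUMP_TYPE_MASK);
		...
	}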


>
> Also, I checked the size of the vmlinux generated after this change on
> all 4 arches, and it was rather minimal. I think POWER increased the
> most, but the other arches increased by only a few bytes.

For me the size is not the important issue; it is the alignment of the 
struct jump_entry entries in the table.  I don't understand how your 
patch helps, and I cannot give an Acked-by unless I understand what is 
being done and can see that it is both correct and necessary.
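
If it helps, the alignment the toolchain actually chose is easy to
inspect: "readelf -S" on an object file that uses a static key shows
the __jump_table input section and its alignment, and
"nm vmlinux | grep __start___jump_table" shows where the merged table
starts.  That should tell us whether this is an assembler problem or a
linker one.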

David.



