Message-ID: <ba15bccd-9580-c20e-ae9c-b8d60f49fa07@suse.com>
Date:   Wed, 7 Jun 2023 16:38:58 +0300
From:   Nikolay Borisov <nik.borisov@...e.com>
To:     Thomas Gleixner <tglx@...utronix.de>, x86@...nel.org
Cc:     linux-kernel@...r.kernel.org, mhocko@...e.com, jslaby@...e.cz
Subject: Re: [PATCH 3/3] x86: Disable running 32bit processes if ia32_disabled
 is passed



On 7.06.23 at 15:53, Thomas Gleixner wrote:
> On Wed, Jun 07 2023 at 15:19, Nikolay Borisov wrote:
>> On 7.06.23 at 15:01, Thomas Gleixner wrote:
>>>
>>>> -	(elf_check_arch_ia32(x) ||					\
>>>> -	 (IS_ENABLED(CONFIG_X86_X32_ABI) && (x)->e_machine == EM_X86_64))
>>>> +	(!ia32_disabled && (elf_check_arch_ia32(x) ||			\
>>>> +	 (IS_ENABLED(CONFIG_X86_X32_ABI) && (x)->e_machine == EM_X86_64)))
>>>
>>> If I'm reading this correctly then ia32_disabled also prevents
>>> binaries with X32 ABI from being loaded.
>>>
>>> That might be intentional but I'm failing to find any explanation for
>>> this in the changelog.
>>>
>>> X32_ABI != IA32_EMULATION
>>
>> Right, however given the other changes (i.e. disabling sysenter/int
>> 0x80) can we really have a working X32 ABI when ia32_disabled is true?
>> Now I'm thinking: can we really have IA32_EMULATION && X32_ABI &&
>> ia32_disabled? I guess the answer is no?
> 
> X32_ABI is completely _independent_ from IA32_EMULATION.
> 
> It just shares some of the required compat code, but it does not use
> sysenter/int 0x80 at all. It uses the regular 64bit system call.
> 
> You can build a working kernel with X32_ABI=y and IA32_EMULATION=n.
> 
> So why would boottime disabling of IA32_EMULATION affect X32_ABI in any
> way?

In this case it shouldn't affect it, and the check should be:

((elf_check_arch_ia32(x) && !ia32_disabled) ||
 (IS_ENABLED(CONFIG_X86_X32_ABI) && (x)->e_machine == EM_X86_64))
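
For completeness, assuming the hunk above is the compat_elf_check_arch()
macro in arch/x86/include/asm/elf.h, the whole macro would then read
something like this (untested sketch; ia32_disabled being the boot flag
introduced earlier in this series):

/*
 * Gate only the IA-32 branch on ia32_disabled; the X32 branch stays
 * untouched, since X32 uses the regular 64bit syscall path and does
 * not depend on IA32 emulation.
 */
#define compat_elf_check_arch(x)					\
	((elf_check_arch_ia32(x) && !ia32_disabled) ||			\
	 (IS_ENABLED(CONFIG_X86_X32_ABI) && (x)->e_machine == EM_X86_64))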

> 
>>>
>>> This issues a SMP function call on all online CPUs to set these entries
>>> to 0 on _every_ CPU hotplug operation.
>>>
>>> I'm sure there is a reason why these bits need to be cleared over and
>>> over. It's just not obvious to me why clearing them _ONCE_ per boot is
>>> not sufficient. It's neither clear to me why CPU0 must do that ($NCPUS -
>>> 1) times, but for the last CPU it's enough to do it once.
>>
>> Actually, clearing them once per CPU is perfectly fine. Looking
>> around the code, I saw that arch_smt_update() is the only place where
>> an on_each_cpu() call is made, hence I put the code there. Another
>> aspect I was thinking of: what if a CPU gets onlined at a later stage
>> and a 32bit process is scheduled on it? If the GDT entry wasn't
>> cleared on that CPU, it would be possible to run 32bit processes on
>> it. I guess a better alternative is to use arch_initcall()?
> 
> Why do you need an on_each_cpu() function call at all? Why would you
> need an extra arch_initcall()?
> 
> The obvious place to clear this is when a CPU is initialized, no?
> 
>>> That aside, what's the justification for doing this in the first place
>>> and why is this again inconsistent vs. CONFIG_IA32_EMULATION=n?
>>
>> I'll put it under an ifdef CONFIG_IA32_EMULATION, unfortunately the
>> traps.h header can't be included in elf.h without causing build breakage.
> 
> You are not answering my question at all and neither the elf nor the
> traps header have anything to do with it. I'm happy to rephrase it:
> 
>    1) What is the justification for setting the 'present' bit of
>       GDT_ENTRY_DEFAULT_USER32_CS to 0?

This was suggested by Andrew Cooper on IRC. My understanding is that
without a 32bit-capable code descriptor in the GDT it's impossible to
run 32bit code at all. The scenario where it might be relevant is a
64bit process that tries to switch to 32bit code via inline assembly,
though that may be a far-fetched example, and the fact that
compat_elf_check_arch() forbids loading 32bit processes might already
be sufficient.
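
For reference, the kind of thing I have in mind for doing it once per
CPU (rough, untested sketch: the function name is made up, ia32_disabled
is the flag from this series, and the natural caller would be whatever
per-CPU init path we settle on, e.g. cpu_init()):

#include <linux/smp.h>
#include <asm/desc.h>
#include <asm/segment.h>

/*
 * Mark the 32bit user code segment not-present on the current CPU so
 * a far transfer to that selector faults instead of executing 32bit
 * code.  Running this from per-CPU init also covers CPUs onlined
 * later, without needing any on_each_cpu() call.
 */
static void ia32_clear_user32_cs(void)
{
	struct desc_struct *gdt = get_cpu_gdt_rw(smp_processor_id());

	if (!ia32_disabled)
		return;

	gdt[GDT_ENTRY_DEFAULT_USER32_CS].p = 0;
}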

> 
>    2) Why is #1 inconsistent with CONFIG_IA32_EMULATION=n?

Because I forgot to do it.

> 
> Thanks,
> 
>          tglx
> 
> 
> 
