Message-ID: <938f6eda-f62c-457f-bc42-b2d12fc6e2c7@gmx.de>
Date: Fri, 19 Apr 2024 22:47:06 +0200
From: Michael Schierl <schierlm@....de>
To: Michael Kelley <mhklinux@...look.com>, Jean DELVARE <jdelvare@...e.com>,
"K. Y. Srinivasan" <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>, Wei Liu <wei.liu@...nel.org>,
Dexuan Cui <decui@...rosoft.com>
Cc: "linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: Early kernel panic in dmi_decode when running 32-bit kernel on
Hyper-V on Windows 11
Hello,
Am 19.04.2024 um 18:36 schrieb Michael Kelley:
>> I still want to understand why 32-bit Linux is taking an oops during
>> boot while 64-bit Linux does not.
>
> The difference is in this statement in dmi_save_devices():
>
> count = (dm->length - sizeof(struct dmi_header)) / 2;
>
> On a 64-bit system, count is 0xFFFFFFFE. That's seen as a
> negative value, and the "for" loop does not do any iterations. So
> nothing bad happens.
>
> But on a 32-bit system, count is 0x7FFFFFFE. That's a big
> positive number, and the "for" loop iterates to non-existent
> memory as Michael Schierl originally described.
>
> I don't know the "C" rules for mixed signed and unsigned
> expressions, and how they differ on 32-bit and 64-bit systems.
> But that's the cause of the different behavior.
Probably some implementation-defined behaviour here (the width of size_t,
for a start). But when looking at gcc 12.2 for x86/amd64 (the version in
Debian), the mechanics are at least apparent from the assembly listing:
https://godbolt.org/z/he7MfcWfE
First of all (this gets me every time): sizeof(int) is 4 on both 32- and
64-bit, unlike sizeof(uintptr_t), which is 8 on 64-bit.
Both the 32-bit and the 64-bit version zero-extend the value of dm->length
from 8 bits to 32 bits (or rather to native width, as the upper 32 bits of
rax are cleared whenever eax is assigned). The subtraction and shifting
(the division) then happen in the native unsigned type, and only the
lowest 32 bits of the result are kept as the value of count. In the 64-bit
case one of the extra leading 1 bits from the subtraction gets shifted
into the MSB of the truncated result, while in the 32-bit case that bit
stays clear.
If count were declared long instead of int (a 64-bit signed integer, as I
assumed when looking at the code for the first time), the result on
64-bit would be 0x7FFF_FFFF_FFFF_FFFE, as no truncation happens, and the
behaviour would be the same as in the 32-bit case. This clearly shows
that I am mentally still in the 32-bit era; perhaps that explains why I
like 32-bit kernels over 64-bit ones so much :D
> Regardless of the 32-bit vs. 64-bit behavior, the DMI blob is malformed,
> almost certainly as created by Hyper-V. I'll see if I can bring this to
> the attention of one of my previous contacts on the Hyper-V team.
Thanks,
Michael