Message-Id: <7FB389D0-77D4-482E-8A21-8662DDB00268@amacapital.net>
Date: Sat, 18 Jul 2020 12:25:46 -0700
From: Andy Lutomirski <luto@...capital.net>
To: hpa@...or.com
Cc: Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Joerg Roedel <jroedel@...e.de>, linux-kernel@...r.kernel.org,
joro@...tes.org
Subject: Re: [PATCH] x86/idt: Make sure idt_table takes a whole page
> On Jul 18, 2020, at 10:57 AM, hpa@...or.com wrote:
>
> On July 9, 2020 3:33:55 AM PDT, Joerg Roedel <joro@...tes.org> wrote:
>> From: Joerg Roedel <jroedel@...e.de>
>>
>> On x86-32 the idt_table with 256 entries needs only 2048 bytes. It is
>> page-aligned, but the end of the .bss..page_aligned section is not
>> guaranteed to be page-aligned.
>>
>> As a result, symbols from other .bss sections may end up on the same
>> 4k page as the idt_table, and will accidentally get mapped read-only
>> during boot, causing unexpected page-faults when the kernel writes to
>> them.
>>
>> Avoid this by making the idt_table 4kb in size even on x86-32. On
>> x86-64 the idt_table is already 4kb large, so nothing changes there.
>>
>> Fixes: 3e77abda65b1c ("x86/idt: Consolidate idt functionality")
>> Signed-off-by: Joerg Roedel <jroedel@...e.de>
>> ---
>> arch/x86/kernel/idt.c | 12 ++++++++++--
>> 1 file changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
>> index 0db21206f2f3..83e24f837127 100644
>> --- a/arch/x86/kernel/idt.c
>> +++ b/arch/x86/kernel/idt.c
>> @@ -157,8 +157,13 @@ static const __initconst struct idt_data apic_idts[] = {
>>  #endif
>>  };
>>
>> -/* Must be page-aligned because the real IDT is used in the cpu entry area */
>> -static gate_desc idt_table[IDT_ENTRIES] __page_aligned_bss;
>> +/*
>> + * Must be page-aligned because the real IDT is used in the cpu entry area.
>> + * Allocate more entries than needed so that idt_table takes a whole page, so it
>> + * is safe to map the idt_table read-only and into the user-space page-table.
>> + */
>> +#define IDT_ENTRIES_ALLOCATED	(PAGE_SIZE / sizeof(gate_desc))
>> +static gate_desc idt_table[IDT_ENTRIES_ALLOCATED] __page_aligned_bss;
>>
>> struct desc_ptr idt_descr __ro_after_init = {
>> .size = IDT_TABLE_SIZE - 1,
>> @@ -335,6 +340,9 @@ void __init idt_setup_apic_and_irq_gates(void)
>> idt_map_in_cea();
>> load_idt(&idt_descr);
>>
>> + BUILD_BUG_ON(IDT_ENTRIES_ALLOCATED < IDT_ENTRIES);
>> + BUILD_BUG_ON(sizeof(idt_table) != PAGE_SIZE);
>> +
>> /* Make the IDT table read only */
>> set_memory_ro((unsigned long)&idt_table, 1);
>>
>
> NAK... this isn't the right way to fix this; it just kicks the can down the road. The reason is that you aren't fixing the module that actually has the problem.
>
> The Right Way[TM] is to figure out which module(s) lack the proper alignment for this section. A script using objdump -h or readelf -SW, run over the .o files and looking for section alignment less than 2**12, should spot the modules that are missing the proper .balign directives.
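The check hpa describes could be sketched roughly as follows (a hedged sketch, not hpa's actual script: it assumes binutils readelf, and keys off readelf -SW's one-line-per-section output, where the last column is the alignment):

```shell
#!/bin/sh
# Flag object files whose .bss..page_aligned section has an alignment
# below 2**12 (one page), per the check described in the thread.

check_alignment() {
    # $1: path to an object file
    readelf -SW "$1" 2>/dev/null | awk -v file="$1" '
        /\.bss\.\.page_aligned/ && ($NF + 0) < 4096 {
            # $NF is the Al (alignment) column of readelf -SW output
            printf "%s: %s\n", file, $0
        }'
}

# Demo on an empty scratch directory so the sketch is self-contained;
# in real use, point OBJDIR at a kernel build tree full of .o files.
OBJDIR=$(mktemp -d)
found=""
for f in $(find "$OBJDIR" -name '*.o'); do
    found="$found$(check_alignment "$f")"
done
printf 'offenders: %s\n' "$found"
```

Any file this prints would need a .balign (or matching C-side alignment) fix rather than padding at the idt_table definition.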
I don’t see the problem. If we are going to treat an object as though it’s 4096 bytes, making C think it’s 4096 bytes seems entirely reasonable to me.
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.