Date: Tue, 12 Dec 2017 09:58:58 -0800
From: Andy Lutomirski <luto@...nel.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>, X86 ML <x86@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Dave Hansen <dave.hansen@...el.com>,
Borislav Petkov <bpetkov@...e.de>,
Greg KH <gregkh@...uxfoundation.org>,
Kees Cook <keescook@...gle.com>,
Hugh Dickins <hughd@...gle.com>,
Brian Gerst <brgerst@...il.com>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Denys Vlasenko <dvlasenk@...hat.com>,
Boris Ostrovsky <boris.ostrovsky@...cle.com>,
Juergen Gross <jgross@...e.com>,
David Laight <David.Laight@...lab.com>,
Eduardo Valentin <eduval@...zon.com>, aliguori@...zon.com,
Will Deacon <will.deacon@....com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [patch 13/16] x86/ldt: Introduce LDT write fault handler
On Tue, Dec 12, 2017 at 9:32 AM, Thomas Gleixner <tglx@...utronix.de> wrote:
> From: Thomas Gleixner <tglx@...utronix.de>
>
> When the LDT is mapped RO, the CPU will write fault the first time it uses
> a segment descriptor in order to set the ACCESS bit (for some reason it
> doesn't always observe that it is already set). Catch the fault and set the
> ACCESS bit in the handler.
>
> Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
> ---
> arch/x86/include/asm/mmu_context.h | 7 +++++++
> arch/x86/kernel/ldt.c | 30 ++++++++++++++++++++++++++++++
> arch/x86/mm/fault.c | 19 +++++++++++++++++++
> 3 files changed, 56 insertions(+)
>
> --- a/arch/x86/include/asm/mmu_context.h
> +++ b/arch/x86/include/asm/mmu_context.h
> @@ -76,6 +76,11 @@ static inline void init_new_context_ldt(
> int ldt_dup_context(struct mm_struct *oldmm, struct mm_struct *mm);
> void ldt_exit_user(struct pt_regs *regs);
> void destroy_context_ldt(struct mm_struct *mm);
> +bool __ldt_write_fault(unsigned long address);
> +static inline bool ldt_is_active(struct mm_struct *mm)
> +{
> + return mm && mm->context.ldt != NULL;
> +}
> #else /* CONFIG_MODIFY_LDT_SYSCALL */
> static inline void init_new_context_ldt(struct task_struct *task,
> struct mm_struct *mm) { }
> @@ -86,6 +91,8 @@ static inline int ldt_dup_context(struct
> }
> static inline void ldt_exit_user(struct pt_regs *regs) { }
> static inline void destroy_context_ldt(struct mm_struct *mm) { }
> +static inline bool __ldt_write_fault(unsigned long address) { return false; }
> +static inline bool ldt_is_active(struct mm_struct *mm) { return false; }
> #endif
>
> static inline void load_mm_ldt(struct mm_struct *mm, struct task_struct *tsk)
> --- a/arch/x86/kernel/ldt.c
> +++ b/arch/x86/kernel/ldt.c
> @@ -82,6 +82,36 @@ static void ldt_install_mm(struct mm_str
> mutex_unlock(&mm->context.lock);
> }
>
> +/*
> + * __do_page_fault() already checked that an LDT is installed, so it is
> + * safe to access it here: interrupts are disabled and any IPI which
> + * would change it is blocked until this returns. The underlying page
> + * mapping cannot change as long as the LDT is the active one in the
> + * context.
> + *
> + * The fault error code is X86_PF_WRITE | X86_PF_PROT and was already
> + * checked in __do_page_fault(). The fault happens when a segment is
> + * selected and the CPU tries to set the accessed bit in
> + * desc_struct.type, because the LDT entries are mapped RO. Set the bit
> + * manually.
> + */
> +bool __ldt_write_fault(unsigned long address)
> +{
> + struct ldt_struct *ldt = current->mm->context.ldt;
> + unsigned long start, end, entry;
> + struct desc_struct *desc;
> +
> + start = (unsigned long) ldt->entries;
> + end = start + ldt->nr_entries * LDT_ENTRY_SIZE;
> +
> + if (address < start || address >= end)
> + return false;
> +
> + desc = (struct desc_struct *) ldt->entries;
> + entry = (address - start) / LDT_ENTRY_SIZE;
> + desc[entry].type |= 0x01;
You have another patch that unconditionally sets the accessed bit on
installation. What gives?
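
(To be concrete, here's a minimal sketch of the install-time variant I
mean -- ldt_fill_entry_accessed() is a name I just made up, but
fill_ldt() is the real helper from asm/desc.h:)

#include <asm/desc.h>		/* struct desc_struct, fill_ldt() */
#include <asm/ldt.h>		/* struct user_desc */

static void ldt_fill_entry_accessed(struct desc_struct *desc,
				    const struct user_desc *info)
{
	fill_ldt(desc, info);
	/* Bit 0 of desc_struct.type is the accessed bit.  Setting it at
	 * install time means the CPU never needs to write to the RO LDT
	 * page, so this write fault should never trigger at all. */
	desc->type |= 0x01;
}

If that's what the other patch does, this handler looks unreachable.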
Also, this patch is going to die a horrible death if IRET ever hits
this condition, or if a gs load does.
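
(For reference, since the fault.c hunk got trimmed above: going by the
changelog, I'd guess the hook looks roughly like this, with
fixup_ldt_write_fault() being my invented name, not the actual code:)

static bool fixup_ldt_write_fault(unsigned long error_code,
				  unsigned long address)
{
	/* Per the changelog, the accessed-bit write to the RO LDT
	 * faults with exactly this error code. */
	if (error_code != (X86_PF_WRITE | X86_PF_PROT))
		return false;
	if (!ldt_is_active(current->mm))
		return false;
	return __ldt_write_fault(address);
}

The problem is where that fault can land: if IRET or a gs load on the
exit path trips it, the #PF arrives in kernel mode at a point where we
can't sanely take a page fault, hence the horrible death.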