Date:	Tue, 3 Nov 2009 19:10:14 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Brian Gerst <brgerst@...il.com>
Cc:	x86@...nel.org, linux-kernel@...r.kernel.org,
	"H. Peter Anvin" <hpa@...or.com>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH] x86, 64-bit: Move K8 B step iret fixup to fault entry
	asm (v2)


* Brian Gerst <brgerst@...il.com> wrote:

> Move the handling of truncated %rip from an iret fault to the fault
> entry path.
> 
> This allows x86-64 to use the standard search_extable() function.
> 
> v2: Fixed jump to error_swapgs to be unconditional.

v1 is already in the tip:x86/asm topic tree. Mind sending a delta fix 
against:

  http://people.redhat.com/mingo/tip.git/README

?

Also, i'm having second thoughts about the change:

> Signed-off-by: Brian Gerst <brgerst@...il.com>
> ---
>  arch/x86/include/asm/uaccess.h |    1 -
>  arch/x86/kernel/entry_64.S     |   11 ++++++++---
>  arch/x86/mm/extable.c          |   31 -------------------------------
>  3 files changed, 8 insertions(+), 35 deletions(-)
> 
> diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
> index d2c6c93..abd3e0e 100644
> --- a/arch/x86/include/asm/uaccess.h
> +++ b/arch/x86/include/asm/uaccess.h
> @@ -570,7 +570,6 @@ extern struct movsl_mask {
>  #ifdef CONFIG_X86_32
>  # include "uaccess_32.h"
>  #else
> -# define ARCH_HAS_SEARCH_EXTABLE
>  # include "uaccess_64.h"
>  #endif
>  
> diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
> index b5c061f..1579a6c 100644
> --- a/arch/x86/kernel/entry_64.S
> +++ b/arch/x86/kernel/entry_64.S
> @@ -1491,12 +1491,17 @@ error_kernelspace:
>  	leaq irq_return(%rip),%rcx
>  	cmpq %rcx,RIP+8(%rsp)
>  	je error_swapgs
> -	movl %ecx,%ecx	/* zero extend */
> -	cmpq %rcx,RIP+8(%rsp)
> -	je error_swapgs
> +	movl %ecx,%eax	/* zero extend */
> +	cmpq %rax,RIP+8(%rsp)
> +	je bstep_iret
>  	cmpq $gs_change,RIP+8(%rsp)
>  	je error_swapgs
>  	jmp error_sti
> +
> +bstep_iret:
> +	/* Fix truncated RIP */
> +	movq %rcx,RIP+8(%rsp)
> +	jmp error_swapgs
>  END(error_entry)
>  
>  
> diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
> index 61b41ca..d0474ad 100644
> --- a/arch/x86/mm/extable.c
> +++ b/arch/x86/mm/extable.c
> @@ -35,34 +35,3 @@ int fixup_exception(struct pt_regs *regs)
>  
>  	return 0;
>  }
> -
> -#ifdef CONFIG_X86_64
> -/*
> - * Need to defined our own search_extable on X86_64 to work around
> - * a B stepping K8 bug.
> - */
> -const struct exception_table_entry *
> -search_extable(const struct exception_table_entry *first,
> -	       const struct exception_table_entry *last,
> -	       unsigned long value)
> -{
> -	/* B stepping K8 bug */
> -	if ((value >> 32) == 0)
> -		value |= 0xffffffffUL << 32;
> -
> -	while (first <= last) {
> -		const struct exception_table_entry *mid;
> -		long diff;
> -
> -		mid = (last - first) / 2 + first;
> -		diff = mid->insn - value;
> -		if (diff == 0)
> -			return mid;
> -		else if (diff < 0)
> -			first = mid+1;
> -		else
> -			last = mid-1;
> -	}
> -	return NULL;
> -}
> -#endif

is this the only way we can end up with a truncated 64-bit RIP passed 
in to search_exception_tables()/search_extable()? Before your commit we 
basically had a last-ditch safety net in 64-bit kernels that restored 
the upper bits of truncated RIPs - no matter how they got there (via 
known or unknown errata).

Thanks,

	Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
