Message-ID: <20190725172854.GL31381@hirez.programming.kicks-ass.net>
Date:   Thu, 25 Jul 2019 19:28:54 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Andy Lutomirski <luto@...nel.org>
Cc:     LKML <linux-kernel@...r.kernel.org>, x86@...nel.org,
        Borislav Petkov <bp@...en8.de>
Subject: Re: [PATCH] x86/hw_breakpoint: Prevent data breakpoints on
 cpu_entry_area

On Thu, Jul 25, 2019 at 09:37:15AM -0700, Andy Lutomirski wrote:
> A data breakpoint near the top of an IST stack will cause unrecoverable
> recursion.  A data breakpoint on the GDT, IDT, or TSS is terrifying.
> Prevent either of these from happening.
> 
> Co-developed-by: Peter Zijlstra <peterz@...radead.org>
> Signed-off-by: Andy Lutomirski <luto@...nel.org>

Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>

One small nit below.

> ---
> 
>  arch/x86/include/asm/cpu_entry_area.h | 10 ++++++++++
>  arch/x86/kernel/hw_breakpoint.c       | 17 +++++++++++++++++
>  2 files changed, 27 insertions(+)
> 
> diff --git a/arch/x86/include/asm/cpu_entry_area.h b/arch/x86/include/asm/cpu_entry_area.h
> index e23e2d9a92d7..3f50d4738487 100644
> --- a/arch/x86/include/asm/cpu_entry_area.h
> +++ b/arch/x86/include/asm/cpu_entry_area.h
> @@ -126,6 +126,16 @@ static inline struct entry_stack *cpu_entry_stack(int cpu)
>  	return &get_cpu_entry_area(cpu)->entry_stack_page.stack;
>  }
>  
> +/*
> + * Checks whether the range from addr to end, inclusive, overlaps the CPU
> + * entry area range.
> + */
> +static inline bool within_cpu_entry_area(unsigned long addr, unsigned long end)
> +{
> +	return end >= CPU_ENTRY_AREA_PER_CPU &&
> +		addr < (CPU_ENTRY_AREA_PER_CPU + CPU_ENTRY_AREA_TOT_SIZE);
> +}
> +
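
For anyone following along, a tiny userspace sketch of that inclusive-range
overlap test; CEA_START/CEA_SIZE below are made-up stand-ins for
CPU_ENTRY_AREA_PER_CPU / CPU_ENTRY_AREA_TOT_SIZE, not the real layout:

	#include <stdbool.h>
	#include <stdio.h>

	/* Illustrative constants only, not the real cpu_entry_area layout. */
	#define CEA_START 0x1000UL
	#define CEA_SIZE  0x0400UL

	/* Does [addr, end] overlap [CEA_START, CEA_START + CEA_SIZE) ? */
	static bool overlaps_area(unsigned long addr, unsigned long end)
	{
		return end >= CEA_START && addr < CEA_START + CEA_SIZE;
	}

	int main(void)
	{
		printf("%d\n", overlaps_area(0x0ff0, 0x0fff)); /* 0: ends just below the area */
		printf("%d\n", overlaps_area(0x0ff0, 0x1000)); /* 1: last byte inside */
		printf("%d\n", overlaps_area(0x13ff, 0x14ff)); /* 1: starts on the area's last byte */
		printf("%d\n", overlaps_area(0x1400, 0x1500)); /* 0: starts right past the end */
		return 0;
	}
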
>  #define __this_cpu_ist_top_va(name)					\
>  	CEA_ESTACK_TOP(__this_cpu_read(cea_exception_stacks), name)
>  
> diff --git a/arch/x86/kernel/hw_breakpoint.c b/arch/x86/kernel/hw_breakpoint.c
> index 218c8917118e..dc4581fe4b4e 100644
> --- a/arch/x86/kernel/hw_breakpoint.c
> +++ b/arch/x86/kernel/hw_breakpoint.c
> @@ -231,6 +231,23 @@ static int arch_build_bp_info(struct perf_event *bp,
>  			      const struct perf_event_attr *attr,
>  			      struct arch_hw_breakpoint *hw)
>  {
> +	unsigned long bp_end;
> +
> +	/* Ensure that bp_end does not overflow. */
> +	if (attr->bp_len >= ULONG_MAX - attr->bp_addr)
> +		return -EINVAL;
> +
> +	bp_end = attr->bp_addr + attr->bp_len - 1;

The alternative (and possibly more conventional) overflow test would be:

	if (bp_end < attr->bp_addr)
		return -EINVAL;
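
Just for illustration, the same wrap-around test in standalone form (values
are made up, and this assumes bp_len >= 1):

	#include <limits.h>
	#include <stdio.h>

	/* Illustrative only: made-up values, assumes bp_len >= 1. */
	static int bp_range_ok(unsigned long bp_addr, unsigned long bp_len)
	{
		unsigned long bp_end = bp_addr + bp_len - 1;

		/* If the addition wrapped past ULONG_MAX, bp_end lands below bp_addr. */
		return bp_end >= bp_addr;
	}

	int main(void)
	{
		printf("%d\n", bp_range_ok(0x1000UL, 8));      /* 1: no wrap */
		printf("%d\n", bp_range_ok(ULONG_MAX - 3, 8)); /* 0: wraps around */
		return 0;
	}
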

> +
> +	/*
> +	 * Prevent any breakpoint of any type that overlaps the
> +	 * cpu_entry_area.  This protects the IST stacks and also
> +	 * reduces the chance that we ever find out what happens if
> +	 * there's a data breakpoint on the GDT, IDT, or TSS.
> +	 */
> +	if (within_cpu_entry_area(attr->bp_addr, bp_end))
> +		return -EINVAL;
> +
>  	hw->address = attr->bp_addr;
>  	hw->mask = 0;
>  
