Message-Id: <1615339667.i88ve15v8a.astroid@bobo.none>
Date:   Wed, 10 Mar 2021 11:29:10 +1000
From:   Nicholas Piggin <npiggin@...il.com>
To:     Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Christophe Leroy <christophe.leroy@...roup.eu>,
        Michael Ellerman <mpe@...erman.id.au>,
        Paul Mackerras <paulus@...ba.org>
Cc:     linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH v2 28/43] powerpc/64e: Call bad_page_fault() from
 do_page_fault()

Excerpts from Christophe Leroy's message of March 9, 2021 10:09 pm:
> book3e/64 is the last one calling __bad_page_fault()
> from assembly.
> 
> Save non-volatile registers before calling do_page_fault()
> and modify do_page_fault() to call __bad_page_fault()
> for all platforms.
> 
> do_page_fault() can then be refactored to call bad_page_fault()
> instead, which avoids duplicating the exception table search.

This can go in with the 64e change after your series. I think it should
be ready for the next merge window as well.

Thanks,
Nick

> 
> Signed-off-by: Christophe Leroy <christophe.leroy@...roup.eu>
> ---
>  arch/powerpc/kernel/exceptions-64e.S |  8 +-------
>  arch/powerpc/mm/fault.c              | 17 ++++-------------
>  2 files changed, 5 insertions(+), 20 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
> index e8eb9992a270..b60f89078a3f 100644
> --- a/arch/powerpc/kernel/exceptions-64e.S
> +++ b/arch/powerpc/kernel/exceptions-64e.S
> @@ -1010,15 +1010,9 @@ storage_fault_common:
>  	addi	r3,r1,STACK_FRAME_OVERHEAD
>  	ld	r14,PACA_EXGEN+EX_R14(r13)
>  	ld	r15,PACA_EXGEN+EX_R15(r13)
> +	bl	save_nvgprs
>  	bl	do_page_fault
> -	cmpdi	r3,0
> -	bne-	1f
>  	b	ret_from_except_lite
> -1:	bl	save_nvgprs
> -	mr	r4,r3
> -	addi	r3,r1,STACK_FRAME_OVERHEAD
> -	bl	__bad_page_fault
> -	b	ret_from_except
>  
>  /*
>   * Alignment exception doesn't fit entirely in the 0x100 bytes so it
> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> index 2e54bac99a22..7bcff3fca110 100644
> --- a/arch/powerpc/mm/fault.c
> +++ b/arch/powerpc/mm/fault.c
> @@ -541,24 +541,15 @@ NOKPROBE_SYMBOL(___do_page_fault);
>  
>  static long __do_page_fault(struct pt_regs *regs)
>  {
> -	const struct exception_table_entry *entry;
>  	long err;
>  
>  	err = ___do_page_fault(regs, regs->dar, regs->dsisr);
>  	if (likely(!err))
> -		return err;
> -
> -	entry = search_exception_tables(regs->nip);
> -	if (likely(entry)) {
> -		instruction_pointer_set(regs, extable_fixup(entry));
>  		return 0;
> -	} else if (!IS_ENABLED(CONFIG_PPC_BOOK3E_64)) {
> -		__bad_page_fault(regs, err);
> -		return 0;
> -	} else {
> -		/* 32 and 64e handle the bad page fault in asm */
> -		return err;
> -	}
> +
> +	bad_page_fault(regs, err);
> +
> +	return 0;
>  }
>  NOKPROBE_SYMBOL(__do_page_fault);
>  
> -- 
> 2.25.0
> 
> 
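For reference, the bad_page_fault() helper that the refactored __do_page_fault() now falls back to performs the exception-table fixup that the deleted lines above used to open-code. A rough sketch of its shape, reconstructed from the removed hunk (not quoted from the patch or the kernel tree):

	void bad_page_fault(struct pt_regs *regs, int sig)
	{
		const struct exception_table_entry *entry;

		/* Is there a fixup handler registered for the faulting instruction? */
		entry = search_exception_tables(regs->nip);
		if (entry)
			/* Yes: redirect execution to the fixup and carry on. */
			instruction_pointer_set(regs, extable_fixup(entry));
		else
			/* No fixup: report the unrecoverable kernel fault. */
			__bad_page_fault(regs, sig);
	}

With the patch applied, this search lives in one place instead of being duplicated in __do_page_fault().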
