Date:   Thu, 13 Dec 2018 12:59:07 -0500
From:   Steven Rostedt <rostedt@...dmis.org>
To:     Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc:     linux-rt-users@...r.kernel.org, linux-kernel@...r.kernel.org,
        Thomas Gleixner <tglx@...utronix.de>, stable-rt@...r.kernel.org
Subject: Re: [PATCH RT] x86/mm/pat: disable preemption in __split_large_page()
 after spin_lock()

On Thu, 13 Dec 2018 17:44:31 +0100
Sebastian Andrzej Siewior <bigeasy@...utronix.de> wrote:

> Commit "x86/mm/pat: Disable preemption around __flush_tlb_all()" added a
> warning if __flush_tlb_all() is invoked in preemptible context. On !RT
> the warning does not trigger because a spin lock is acquired which
> disables preemption. On RT the spin lock does not disable preemption and
> so the warning is seen.
> 
> Disable preemption to avoid the warning in __flush_tlb_all().

I'm guessing the reason for the warning is that we don't want a task to
be scheduled in where we expect the TLB to have been flushed.
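
For reference, the check that commit added to __flush_tlb_all() is,
from memory, roughly:

	static inline void __flush_tlb_all(void)
	{
		/* Catch callers running in preemptible context. */
		VM_WARN_ON_ONCE(preemptible());

		if (boot_cpu_has(X86_FEATURE_PGE))
			__flush_tlb_global();
		else
			__flush_tlb();
	}

so anything that can schedule in between defeats the point of the flush.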


> 
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
> ---
>  arch/x86/mm/pageattr.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
> index e2d4b25c7aa44..abbe3e93ec266 100644
> --- a/arch/x86/mm/pageattr.c
> +++ b/arch/x86/mm/pageattr.c
> @@ -687,6 +687,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
>  	pgprot_t ref_prot;
>  
>  	spin_lock(&pgd_lock);

We should probably have a comment explaining why we have a
preempt_disable() here.
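
Something along these lines, say (just a sketch, exact wording up to
you):

	spin_lock(&pgd_lock);
	/*
	 * On RT, spin_lock() does not disable preemption, but the
	 * __flush_tlb_all() at the end of this function must not be
	 * preempted (it warns if it is), so disable it explicitly.
	 */
	preempt_disable();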

> +	preempt_disable();
>  	/*
>  	 * Check for races, another CPU might have split this page
>  	 * up for us already:
> @@ -694,6 +695,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
>  	tmp = _lookup_address_cpa(cpa, address, &level);
>  	if (tmp != kpte) {
>  		spin_unlock(&pgd_lock);
> +		preempt_enable();

Shouldn't the preempt_enable() be before the unlock?
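
That is, unwind in the reverse order of what was taken, e.g.:

	tmp = _lookup_address_cpa(cpa, address, &level);
	if (tmp != kpte) {
		preempt_enable();
		spin_unlock(&pgd_lock);
		return 1;
	}

(The last hunk below already does it in that order.)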

>  		return 1;
>  	}
>  
> @@ -727,6 +729,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
>  
>  	default:
>  		spin_unlock(&pgd_lock);
> +		preempt_enable();

Here too.

-- Steve

>  		return 1;
>  	}
>  
> @@ -764,6 +767,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
>  	 * going on.
>  	 */
>  	__flush_tlb_all();
> +	preempt_enable();
>  	spin_unlock(&pgd_lock);
>  
>  	return 0;
