Date:	Thu, 19 Jul 2012 02:47:37 +0530
From:	"Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
To:	Mandeep Singh Baines <msb@...omium.org>
CC:	Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
	Shaohua Li <shaohua.li@...el.com>,
	Yinghai Lu <yinghai@...nel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
	Tejun Heo <tj@...nel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Stephen Rothwell <sfr@...b.auug.org.au>,
	Christoph Lameter <cl@...two.org>,
	Olof Johansson <olofj@...omium.org>
Subject: Re: [PATCH v2] x86, mm: only wait for flushes from online cpus

On 06/23/2012 03:36 AM, Mandeep Singh Baines wrote:
> A cpu in the mm_cpumask could go offline before we send the invalidate
> IPI, causing us to wait forever. Avoid this by only waiting for online
> cpus.
> 
> We are seeing a softlockup being reported during shutdown. The stack
> trace shows us that we are inside default_send_IPI_mask_logical:
> 
[...]
> Changes in V2:
>   * bitmap_and is not atomic so use a temporary bitmask
> 

Looks like I posted my reply to v1. So I'll repeat the same suggestions in
this thread as well.

> ---
>  arch/x86/mm/tlb.c |    9 ++++++++-
>  1 files changed, 8 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index d6c0418..231a0b9 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -185,6 +185,8 @@ static void flush_tlb_others_ipi(const struct cpumask *cpumask,
>  	f->flush_mm = mm;
>  	f->flush_va = va;
>  	if (cpumask_andnot(to_cpumask(f->flush_cpumask), cpumask, cpumask_of(smp_processor_id()))) {
> +		DECLARE_BITMAP(tmp_cpumask, NR_CPUS);
> +
>  		/*
>  		 * We have to send the IPI only to
>  		 * CPUs affected.
> @@ -192,8 +194,13 @@ static void flush_tlb_others_ipi(const struct cpumask *cpumask,
>  		apic->send_IPI_mask(to_cpumask(f->flush_cpumask),
>  			      INVALIDATE_TLB_VECTOR_START + sender);
> 

This function is always called with preemption disabled, right?
In that case, _while_ this function is running, a CPU cannot go offline
because of stop_machine(). (I understand that it might go offline in between
calculating that cpumask and calling preempt_disable() - which is the race
you are trying to handle).

So, why not take the offline cpus out of the way even before sending that IPI?
That way, we need not modify the while loop below.

> -		while (!cpumask_empty(to_cpumask(f->flush_cpumask)))
> +		/* Only wait for online cpus */
> +		do {
> +			cpumask_and(to_cpumask(tmp_cpumask),
> +				    to_cpumask(f->flush_cpumask),
> +				    cpu_online_mask);
>  			cpu_relax();
> +		} while (!cpumask_empty(to_cpumask(tmp_cpumask)));
>  	}
> 
>  	f->flush_mm = NULL;
> 

That is, how about something like this:

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 5e57e11..9d387a9 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -186,7 +186,11 @@ static void flush_tlb_others_ipi(const struct cpumask *cpumask,
 
        f->flush_mm = mm;
        f->flush_va = va;
-       if (cpumask_andnot(to_cpumask(f->flush_cpumask), cpumask, cpumask_of(smp_processor_id()))) {
+
+       cpumask_and(to_cpumask(f->flush_cpumask), cpumask, cpu_online_mask);
+       cpumask_clear_cpu(smp_processor_id(), to_cpumask(f->flush_cpumask));
+
+       if (!cpumask_empty(to_cpumask(f->flush_cpumask))) {
                /*
                 * We have to send the IPI only to
                 * CPUs affected.
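
To make the mask arithmetic concrete, here is a minimal userspace sketch in
plain C (not kernel code: bare integers stand in for cpumasks, and the CPU
numbers, NR_CPUS value, and variable names are made up for illustration). It
models an offline CPU whose bit is still set in mm_cpumask: the original
andnot leaves that bit in the wait mask forever, while intersecting with the
online mask first leaves only CPUs that will actually clear their bits from
the IPI handler.

/*
 * Userspace sketch only; this is not the kernel cpumask API.
 */
#include <stdio.h>

#define NR_CPUS 8

int main(void)
{
	unsigned int self        = 1u << 0;                  /* CPU 0 runs the flush */
	unsigned int mm_cpumask  = (1u << 0) | (1u << 2) | (1u << 5);
	unsigned int online_mask = ~(1u << 5) & ((1u << NR_CPUS) - 1); /* CPU 5 is offline */

	/* Original code: flush_cpumask = mm_cpumask & ~self */
	unsigned int flush_naive = mm_cpumask & ~self;

	/* Suggested fix: drop offline CPUs before sending the IPI */
	unsigned int flush_fixed = (mm_cpumask & online_mask) & ~self;

	/*
	 * Each online target clears its own bit from the IPI handler.
	 * An offline CPU never runs the handler, so its bit stays set.
	 */
	unsigned int cleared_by_handlers = flush_naive & online_mask;

	unsigned int naive_after = flush_naive & ~cleared_by_handlers;
	unsigned int fixed_after = flush_fixed & ~cleared_by_handlers;

	printf("naive wait mask after handlers ran: %#x (non-zero -> waits forever)\n",
	       naive_after);
	printf("fixed wait mask after handlers ran: %#x (zero -> wait completes)\n",
	       fixed_after);
	return 0;
}

Since preemption stays disabled across the send-and-wait, stop_machine()
keeps cpu_online_mask stable underneath the loop, so filtering once before
sending the IPI is enough.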


Regards,
Srivatsa S. Bhat
IBM Linux Technology Center

