Date:	2 Feb 2010 17:42:41 -0500
From:	"George Spelvin" <linux@...izon.com>
To:	rostedt@...dmis.org
Cc:	linux@...izon.com, linux-kernel@...r.kernel.org
Subject: Re: [patch 2/3] scheduler: add full memory barriers upon task

> again:
>   tmp_mask = mm_cpumask(current->mm);
>   smp_mb();
>   rcu_read_lock(); /* ensures validity of cpu_curr(cpu) tasks */
>   for_each_cpu(cpu, tmp_mask) {
>     spin_lock_irq(&cpu_rq(cpu)->lock);
>     ret = current->mm == cpu_curr(cpu)->mm;
>     spin_unlock_irq(&cpu_rq(cpu)->lock);
>     if (ret)
>       smp_call_function_single(cpu, membarrier_ipi, NULL, 1);
>   }
>   rcu_read_unlock();
>   smp_mb();
>   if (tmp_mask != mm_cpumask(current->mm)) {
>     /* do check for signals here */
>     goto again;
>   }

How about this harder-to-livelock version, which avoids re-sending
the IPI to every processor when the retry condition hits?  It keeps a
mask of CPUs that have not yet been sent an IPI, so each retry only
signals the ones still outstanding.

(It also caches current->mm across the various barriers, as I think
the compiler will have difficulty inferring on its own that the value
cannot change.)


cpumask_t unsent_mask;

/* Every CPU except ourselves potentially needs an IPI. */
cpumask_setall(&unsent_mask);
cpumask_clear_cpu(smp_processor_id(), &unsent_mask);

/* Cache current->mm; it cannot change under us. */
struct mm_struct const *current_mm = current->mm;

for (;;) {
	cpumask_t const *tmp_mask = mm_cpumask(current_mm);
	int cpu = cpumask_next_and(-1, tmp_mask, &unsent_mask);

	/* cpumask_next_and() returns >= nr_cpu_ids when no bit is left. */
	if (cpu >= nr_cpu_ids)
		break;

	smp_mb();
	rcu_read_lock(); /* ensures validity of cpu_curr(cpu) tasks */
	do {
		struct mm_struct const *other_mm;

		spin_lock_irq(&cpu_rq(cpu)->lock);
		other_mm = cpu_curr(cpu)->mm;
		spin_unlock_irq(&cpu_rq(cpu)->lock);
		if (other_mm == current_mm) {
			smp_call_function_single(cpu, membarrier_ipi, NULL, 1);
			cpumask_clear_cpu(cpu, &unsent_mask);
		}
		cpu = cpumask_next_and(cpu, tmp_mask, &unsent_mask);
	} while (cpu < nr_cpu_ids);
	rcu_read_unlock();
	smp_mb();
	/* And now check again whether any more CPUs have joined the mm. */
}
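
(membarrier_ipi() itself isn't shown in this excerpt; presumably all
it has to do is execute a full barrier on the interrupted CPU, along
the lines of the following sketch:

/*
 * Assumed IPI handler: issue a full memory barrier on the target
 * CPU, ordering its memory accesses with respect to the caller of
 * sys_membarrier().
 */
static void membarrier_ipi(void *unused)
{
	smp_mb();
}

so the only cost on the remote side is the interrupt plus one
barrier.)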