Message-ID: <alpine.DEB.1.10.0901281841330.7408@gandalf.stny.rr.com>
Date:	Wed, 28 Jan 2009 18:50:07 -0500 (EST)
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Andrew Morton <akpm@...ux-foundation.org>
cc:	linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
	mingo@...e.hu, tglx@...utronix.de, peterz@...radead.org,
	arjan@...radead.org, rusty@...tcorp.com.au, jens.axboe@...cle.com
Subject: Re: Buggy IPI and MTRR code on low memory


On Wed, 28 Jan 2009, Andrew Morton wrote:
> > 
> > I don't think so. With ticket spinlocks and such, that just looks like it
> > is destined to crash. Also, spinlocks disable preemption, and we would need
> > to enable it again, otherwise we would have a dangling preempt_disable.
> 
> Well that sucks.
> 
> I think the model of taking a lock, passing the data across to another
> thread of control and having that thread of control release the lock
> once the data has been consumed is a good one.  It's faster and in this
> case it means that we can now (maybe) perform (wait=0) IPI calls with local
> interrupts disabled, which wasn't the case before.

I think that model is nice too. Just not for spinlocks ;-)

> 
> It's a shame that irrelevant-other-stuff prevents us from implementing
> such a thing.  Of course, we can still do it, via some ad-hoc locking
> thing using raw_spin_lock/atomic_t's/bitops/cpu_relax/etc.

It should not be too hackish. I don't even think we need raw spinlocks or 
atomics.

cpu caller:

 again:
	spin_lock(&data_lock);
	if (per_cpu(data, cpu).lock) {
		/* the callee still owns this cpu's data; wait for it */
		spin_unlock(&data_lock);
		cpu_relax();
		goto again;
	}

	/* claim the per-cpu data for this call */
	per_cpu(data, cpu).lock = 1;
	spin_unlock(&data_lock);

	call ipi functions


cpu callee:

	func();

	/* make sure func()'s writes are visible before releasing the data */
	smp_wmb();
	per_cpu(data, cpu).lock = 0;


Here we can only send an IPI for a function if the data lock is cleared.
If it is set, we wait for the previous IPI function to clear it, and then
we set it ourselves.

Thus we synchronize the callers, but the callers do not need to wait on
the callees. And since a caller can only send an IPI to a cpu that has
finished the previous call, we do not need to worry about corrupted data.
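
Roughly, a fuller sketch of how that handoff could look with a static
per-cpu slot (the struct layout, the names, and the send_single_ipi()
arch hook are just made-up stand-ins, not the final code):

struct call_slot {
	void (*func)(void *info);
	void *info;
	int lock;		/* 1 while the callee still owns the slot */
};

static DEFINE_PER_CPU(struct call_slot, call_slot);
static DEFINE_SPINLOCK(data_lock);

/* caller: claim the target cpu's slot, fill it in, then send the IPI */
static void queue_single_call(int cpu, void (*func)(void *), void *info)
{
	struct call_slot *slot = &per_cpu(call_slot, cpu);

 again:
	spin_lock(&data_lock);
	if (slot->lock) {
		/* a previous call to this cpu has not been consumed yet */
		spin_unlock(&data_lock);
		cpu_relax();
		goto again;
	}
	slot->lock = 1;
	slot->func = func;
	slot->info = info;
	spin_unlock(&data_lock);

	send_single_ipi(cpu);		/* made-up arch hook */
}

/* callee (in the IPI handler): run the function, then release the slot */
static void run_single_call(void)
{
	struct call_slot *slot = &__get_cpu_var(call_slot);

	slot->func(slot->info);

	/* make func()'s side effects visible before freeing the slot */
	smp_wmb();
	slot->lock = 0;
}

A caller only spins if it races with an earlier call to the same cpu, so
the common case is just the spinlock plus the IPI, and nothing here ever
allocates.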
	
> 
> > Of course this will only work with the single cpu function callers. I 
> > think the all cpu function callers still need to alloc.
> 
> I haven't really thought about the smp_call_function_many()
> common case.  

We can keep the original code that still does the kmalloc for the
many-cpu case, but fall back to the single-CPU version, which does not
allocate, if the kmalloc fails.


current code for smp_call_function_many():

        data = kmalloc(sizeof(*data) + cpumask_size(), GFP_ATOMIC);
        if (unlikely(!data)) {
                /* Slow path. */
                for_each_online_cpu(cpu) {
                        if (cpu == smp_processor_id())
                                continue;
                        if (cpumask_test_cpu(cpu, mask))
                                smp_call_function_single(cpu, func, info, wait);
                }
                return;
        }
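
With the static per-cpu slot, that slow path never needs memory, so it
could stay exactly as it is (or be pulled out into a small helper); a
sketch, assuming smp_call_function_single() is reworked along the lines
above so it no longer allocates for the wait=0 case:

/*
 * Fallback for smp_call_function_many() when the GFP_ATOMIC allocation
 * fails: degrade to one single call per target cpu.  Nothing on this
 * path allocates, so it cannot fail on low memory; at worst a caller
 * spins briefly on a busy slot.
 */
static void call_many_fallback(const struct cpumask *mask,
			       void (*func)(void *), void *info, int wait)
{
	int cpu;

	for_each_online_cpu(cpu) {
		if (cpu == smp_processor_id())
			continue;
		if (cpumask_test_cpu(cpu, mask))
			smp_call_function_single(cpu, func, info, wait);
	}
}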

I think this would work.

/me goes and tries it out

-- Steve
