Date:	Wed, 28 Jan 2009 20:56:01 -0500 (EST)
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Andrew Morton <akpm@...ux-foundation.org>
cc:	linux-kernel@...r.kernel.org, rusty@...tcorp.com.au,
	npiggin@...e.de, torvalds@...ux-foundation.org, mingo@...e.hu,
	tglx@...utronix.de, peterz@...radead.org, arjan@...radead.org
Subject: Re: [PATCH] use per cpu data for single cpu ipi calls


On Wed, 28 Jan 2009, Andrew Morton wrote:

> On Wed, 28 Jan 2009 19:52:16 -0500 (EST)
> Steven Rostedt <rostedt@...dmis.org> wrote:
> 
> > 
> > The smp_call_function can be passed a wait parameter telling it to
> > wait for all the functions running on other CPUs to complete before
> > returning, or to return without waiting. Unfortunately, this is
> > currently just a suggestion and not manditory. That is, the
> 
> "mandatory"

Some day I'll learn to use spell check.

> > 
> > Unfortunatly, some callers do no abide by this hint and act as if
> 
> "Unfortunately".

Unfortunately, not today.

> 
> Well that looks nice.

Thank you.

> 
> Can we make the spinlock a per-cpu thing as well?  Or is that
> over-optimising?  We'd need to initialise all those spinlocks at
> runtime.

I thought about it and decided it was over-optimizing (US spelling).
But I could be wrong.
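
For what it's worth, I picture the per-cpu lock looking roughly like the
sketch below. The structure and function names are made up for
illustration only (they are not from the patch), and the loop is the
runtime initialization you mention:

	struct csd_data {
		spinlock_t		lock;
		struct call_single_data	csd;
	};

	static DEFINE_PER_CPU(struct csd_data, csd_data);

	/* per your point above: set up each cpu's lock at runtime */
	void __init csd_data_init(void)
	{
		int cpu;

		for_each_possible_cpu(cpu)
			spin_lock_init(&per_cpu(csd_data, cpu).lock);
	}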

> 
> In generic_smp_call_function_single_interrupt(), did you consider
> releasing the "lock" _before_ calling the callback function?  That
> would reduce latencies a bit and allow more concurrency.  Maybe that's
> over-optimising too.

I thought about it too, but I wanted to avoid the copying of the data
fields. Currently the callee does:

                   data = list_entry(list.next, struct call_single_data,
                                           list);
                   list_del(&data->list);

                   /*
                    * 'data' can be invalid after this call if
                    * flags == 0 (when called through
                    * generic_exec_single()), so save them away before
                    * making the call.
                    */
                    data_flags = data->flags;

                    data->func(data->info);

I would need to have a stack item like I did in my first release, and copy 
it.

	d = *data;
	smp_wmb();
	data->flags &= ~CSD_FLAG_LOCK;

	d.func(d.info);
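
Roughly, combining your unlock-before-callback idea with the stack copy
would turn the handler loop into something like this (just a sketch of
the idea, not what the posted patch does):

	while (!list_empty(&list)) {
		struct call_single_data *data;
		struct call_single_data d;

		data = list_entry(list.next, struct call_single_data, list);
		list_del(&data->list);

		/* copy to the stack so the element can be reused right away */
		d = *data;

		/* release our claim before running the callback ... */
		smp_wmb();
		data->flags &= ~CSD_FLAG_LOCK;

		/* ... so a second caller can queue while the callback runs */
		d.func(d.info);
	}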

I figured that the contention would only happen with a second caller. The
first caller does not have to wait. And I doubt that there would be many
instances of two callers contending. It depends on how often
smp_call_function is called.

Probably not enough contention to worry about.

> 
> Can generic_smp_call_function_single_interrupt() ever see
> CSD_FLAG_ALLOC set now?  If not, that kfree can go away.

Probably not, but I kept it just in case. I see there's a way a caller
can pass in their own data through

       generic_exec_single()

A user could potentially set the ALLOC flag. But since the enum is defined
in this file, I find that highly unlikely. Then again, looking at some of
the code in the kernel, I would not be surprised if someone does.
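
For reference, the spot in question is just the tail of the handler loop,
roughly the following (this follows the existing flags handling and uses
the saved data_flags, since 'data' itself may already be gone once the
callback has run):

	/* if the caller kmalloc()ed the csd, freeing it is our job */
	if (data_flags & CSD_FLAG_ALLOC)
		kfree(data);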

> 
> And where do we now stand with the architectures which _don't_ use the
> kernel/smp.c code?  If someone writes code in generic kernel which
> relies upon the new capabilities, it will go bad on those
> architectures.  Makes davem sad.

I guess someone should flag the arch list.

-- Steve
