Message-ID: <Pine.LNX.4.64.0701301838110.3611@woody.linux-foundation.org>
Date:	Tue, 30 Jan 2007 18:46:19 -0800 (PST)
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Benjamin Herrenschmidt <benh@...nel.crashing.org>
cc:	Zach Brown <zach.brown@...cle.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-aio@...ck.org, Suparna Bhattacharya <suparna@...ibm.com>,
	Benjamin LaHaise <bcrl@...ck.org>, Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH 0 of 4] Generic AIO by scheduling stacks



On Wed, 31 Jan 2007, Benjamin Herrenschmidt wrote:

> > - We would now have some measure of task_struct concurrency.  Read that twice,
> > it's scary.  As two fibrils execute and block in turn they'll each be
> > referencing current->.  It means that we need to audit task_struct to make sure
> > that paths can handle racing as it's scheduled away.  The current implementation
> > *does not* let preemption trigger a fibril switch.  So one only has to worry
> > about racing with voluntary scheduling of the fibril paths.  This can mean
> > moving some task_struct members under an accessor that hides them in a struct
> > in task_struct so they're switched along with the fibril.  I think this is a
> > manageable burden.
> 
> That's the one scaring me in fact ... Maybe it will end up being an easy
> one but I don't feel too comfortable...

We actually have almost zero "interesting" data in the task-struct.

All the real meat of a task has long since been split up into structures 
that can be shared for threading anyway (ie signal/files/mm/etc).

Which is why I'm personally very comfy with just re-using task_struct 
as-is.

NOTE! This is with the understanding that we *never* do any preemption. 
The whole point of the microthreading as far as I'm concerned is exactly 
that it is cooperative. It's not preemptive, and it's emphatically *not* 
concurrent (ie you'd never have two fibrils running at the same time on 
separate CPU's).

If you want preemptive or concurrent CPU usage, you use separate threads. 
The point of AIO scheduling is very much inherent in its name: it's for 
filling up CPU's when there's IO.

So the theory (and largely practice) is that you want to use real threads 
to fill your CPU's, but then *within* those threads you use AIO to make 
sure that each thread actually uses the CPU efficiently and doesn't just 
block with nothing to do.

So with the understanding that this is neither CPU-concurrent nor 
preemptive (*within* a fibril group - obviously the thread itself gets 
both preempted and concurrently run with other threads), I don't worry at 
all about sharing "struct task_struct".

Does that mean that we might not have some cases where we'd need to make 
sure we do things differently? Of course not. Something might show up. But 
this actually makes it very clear what the difference between "struct 
thread_struct" and "struct task_struct" is. One is shared between 
fibrils, the other isn't.

			Linus
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
