Message-ID: <20071108010347.GP19691@waste.org>
Date:	Wed, 7 Nov 2007 19:03:47 -0600
From:	Matt Mackall <mpm@...enic.com>
To:	Andi Kleen <ak@...e.de>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Marin Mitov <mitov@...p.bas.bg>, linux-kernel@...r.kernel.org,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...e.hu>
Subject: Re: is minimum udelay() not respected in preemptible SMP kernel-2.6.23?

On Thu, Nov 08, 2007 at 01:31:00AM +0100, Andi Kleen wrote:
> On Thursday 08 November 2007 01:20, Matt Mackall wrote:
> > On Wed, Nov 07, 2007 at 12:30:45PM -0800, Andrew Morton wrote:
> > > Ow.  Yes, from my reading delay_tsc() can return early (or after
> > > heat-death-of-the-universe) if the TSCs are offset and if preemption
> > > migrates the calling task between CPUs.
> > >
> > > I suppose a lameo fix would be to disable preemption in delay_tsc().
> >
> > preempt_disable is lousy documentation here. This and other cases
> > (lots of per_cpu users, IIRC) actually want a migrate_disable() which
> > is a proper subset. We can simply implement migrate_disable() as
> > preempt_disable() for now and come back later and implement a proper
> > migrate_disable() that still allows preemption (and thus avoids the
> > latency).
> 
> We could actually do this right now: migrate_disable() could just
> change the cpu affinity of the current thread to the current cpu and
> restore it afterwards. That should even work from interrupt context.

Yes, that's one way. But we need somewhere to stash the old affinity mask.
Expanding the task struct sucks. Jamming another bit in the preempt
count sucks.
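
Concretely, that affinity-juggling version would look something like
this (untested sketch: migrate_disable()/migrate_enable() are invented
names, set_cpus_allowed() and cpumask_of_cpu() are the existing
interfaces, and the static mask is a stand-in for the per-task storage
we don't have):

	/* Sketch only: the saved mask really needs to live per-task. */
	static cpumask_t saved_mask;

	void migrate_disable(void)
	{
		/* raw_: set_cpus_allowed() may sleep, so we can't hold
		 * preemption off across it. */
		int cpu = raw_smp_processor_id();

		saved_mask = current->cpus_allowed;	/* the big copy */
		set_cpus_allowed(current, cpumask_of_cpu(cpu));
	}

	void migrate_enable(void)
	{
		set_cpus_allowed(current, saved_mask);
	}

Every call pays for shuffling a whole cpumask around, which is what
the next point is about.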

But I think we'd be best off stashing a single bit somewhere and
checking it at migrate time (relatively infrequent) rather than
copying and zeroing out a potentially enormous affinity mask every
time we disable migration (often, and in fast paths). Perhaps adding
TASK_PINNED to the task state flags would do it?
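
Something like this, say (pure sketch, nothing here exists; PF_PINNED
is an invented bit, pick a free one, and I'm using ->flags rather than
->state here but that's a detail; a real version would also have to
handle nesting and teach the migration code in kernel/sched.c to
honour the bit):

	#define PF_PINNED	0x80000000	/* invented: no migration */

	static inline void migrate_disable(void)
	{
		current->flags |= PF_PINNED;	/* current-only, no locking */
	}

	static inline void migrate_enable(void)
	{
		current->flags &= ~PF_PINNED;
	}

	/* ...and can_migrate_task(), or wherever we decide to move a
	 * task, grows a check: */
	if (p->flags & PF_PINNED)
		return 0;	/* pinned, don't touch */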

> get_cpu() etc. could be changed to use this then too.

Some users of get_cpu() might be relying on it to prevent actual
preemption, not just migration, so they can't all be converted
blindly. In other words, we should have introduced migrate_disable()
back when we first discovered the preempt/per_cpu conflict, so the
two classes of users would already be sorted.
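
For reference, get_cpu() today is literally preempt_disable() plus the
CPU lookup, with put_cpu() undoing it:

	#define get_cpu()	({ preempt_disable(); smp_processor_id(); })
	#define put_cpu()	preempt_enable()

so anything between the two is also protected from being scheduled
out, and a blind conversion to migrate_disable() would break callers
that depend on that.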

-- 
Mathematics is the supreme nostalgia of our time.
