Message-ID: <alpine.DEB.2.11.1411202315230.3950@nanos>
Date:	Thu, 20 Nov 2014 23:42:42 +0100 (CET)
From:	Thomas Gleixner <tglx@...utronix.de>
To:	Tejun Heo <tj@...nel.org>
cc:	Frederic Weisbecker <fweisbec@...il.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Dave Jones <davej@...hat.com>, Don Zickus <dzickus@...hat.com>,
	Linux Kernel <linux-kernel@...r.kernel.org>,
	the arch/x86 maintainers <x86@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Andy Lutomirski <luto@...capital.net>,
	Arnaldo Carvalho de Melo <acme@...stprotocols.net>
Subject: Re: frequent lockups in 3.18rc4

On Thu, 20 Nov 2014, Tejun Heo wrote:
> On Thu, Nov 20, 2014 at 10:58:26PM +0100, Thomas Gleixner wrote:
> > It's completely undocumented behaviour, whether it has been that
> > way forever or not. And I agree with Frederic that it is insane.
> > Actually it's beyond insane, really.
> 
> This is exactly the same for any address in the vmalloc space.

I know, but I really was not aware that dynamically allocated percpu
stuff is vmalloc based and therefore exposed to the same issues.

The normal vmalloc space simply does not have the problems generated
by percpu allocations, which come with no documented access
restrictions.
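
For reference, that lazy population works roughly like this (a
simplified sketch modeled on the historical x86 vmalloc_fault(), not
the exact kernel code): a fault on a vmalloc address copies the
missing PGD entry from the master kernel page table in init_mm into
the faulting task's PGD, which is exactly the fixup that cannot run
safely from NMI context.

#include <linux/mm.h>
#include <linux/sched.h>
#include <asm/pgtable.h>

/* Simplified sketch of lazy vmalloc PGD sync; not the exact kernel code. */
static int vmalloc_fault_sketch(unsigned long address)
{
	pgd_t *pgd_ref, *pgd;

	if (address < VMALLOC_START || address >= VMALLOC_END)
		return -1;	/* not a vmalloc address */

	/* Reference entry in the master kernel page table. */
	pgd_ref = pgd_offset_k(address);
	if (pgd_none(*pgd_ref))
		return -1;	/* genuinely unmapped, a real fault */

	/* Copy the missing entry into the faulting task's PGD. */
	pgd = pgd_offset(current->active_mm, address);
	if (pgd_none(*pgd))
		set_pgd(pgd, *pgd_ref);

	return 0;
}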

You created a special case, and that special case is clever, but it
is not well thought out considering the use cases of percpu variables
and the completely undocumented limitations you silently introduced.

Just admit it and don't try to educate me about trivial vmalloc
properties.

> ..
> >    So in the scheduler, if the same task gets reselected, you check
> >    that sequence count and update the PGD if it differs. If a task
> >    switch happens, you also need to check the sequence count and act
> >    accordingly.
> 
> That isn't enough, though.  What if the percpu-allocated pointer gets
> passed to another CPU without a task switch?  You'd at least need to
> send IPIs to all CPUs so that all the active PGDs get updated
> synchronously.

You obviously did not even take the time to carefully read what I
wrote:

   "Now after that increment the allocation side needs to wait for a
    scheduling cycle on all cpus (we have mechanisms for that)"

That states exactly what you claim is 'not enough'.
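
Spelled out, the scheme looks something like this (a rough sketch;
the names are invented for illustration and sync_pgd_from_init_mm()
is a hypothetical helper, but synchronize_sched() is the existing
mechanism for waiting out a scheduling cycle on all cpus):

#include <linux/atomic.h>
#include <linux/percpu.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>

static atomic_t percpu_map_seq = ATOMIC_INIT(0);  /* bumped by the allocator */
static DEFINE_PER_CPU(int, seen_map_seq);         /* last value synced per cpu */

/* Hypothetical helper: copy missing kernel PGD entries from init_mm. */
extern void sync_pgd_from_init_mm(struct mm_struct *mm);

/* Allocation side: after installing the new mappings in init_mm ... */
void percpu_map_done_sketch(void)
{
	atomic_inc(&percpu_map_seq);
	/* ... wait for a scheduling cycle on all cpus. */
	synchronize_sched();
}

/* Scheduler side, on a task switch and when the same task is reselected: */
void percpu_map_check_sketch(struct mm_struct *next_mm)
{
	int seq = atomic_read(&percpu_map_seq);

	if (this_cpu_read(seen_map_seq) != seq) {
		sync_pgd_from_init_mm(next_mm);
		this_cpu_write(seen_map_seq, seq);
	}
}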

> > What really frightens me is the well hidden fuckup potential which
> > lurks around the corner and the hard to debug once-in-a-while
> > fallout this might cause.
> 
> Lazy vmalloc population through fault is something we accepted as
> reasonable, as it works fine for most of the kernel.

Emphasis on most.

I'm well aware of the lazy vmalloc population, but I was definitely
not aware of the implications of the choices made in the dynamic
percpu allocator. I do not care about random discussion threads on
LKML or random slides you produced for a conference. All I care about
is that I cannot find a single word of documentation about this in
the source tree. Neither in the percpu implementation nor in
Documentation/.

> For the time being, we can make percpu accessors complain when
> called from nmi handlers so that the problematic ones can be easily
> identified.

You should have done that in the first place instead of letting other
people run into issues which you should have anticipated from the
very beginning.
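
For illustration, such a debug check could look roughly like this (a
sketch only, not actual kernel code; the wrapper name is made up):

#include <linux/bug.h>
#include <linux/hardirq.h>
#include <linux/mm.h>
#include <linux/percpu.h>

static inline void *percpu_ptr_nmi_check_sketch(void __percpu *ptr, int cpu)
{
	void *addr = per_cpu_ptr(ptr, cpu);

	/*
	 * Dynamically allocated percpu memory may be vmalloc-backed;
	 * touching it from NMI context can fault, which is unsafe, so
	 * complain loudly when that happens.
	 */
	WARN_ON_ONCE(in_nmi() && is_vmalloc_addr(addr));
	return addr;
}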

Thanks,

	tglx

