Date:	Tue, 11 Jun 2013 13:13:53 -0400
From:	Steven Rostedt <rostedt@...dmis.org>
To:	paulmck@...ux.vnet.ibm.com
Cc:	Waiman Long <waiman.long@...com>, linux-kernel@...r.kernel.org,
	mingo@...e.hu, laijs@...fujitsu.com, dipankar@...ibm.com,
	akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
	josh@...htriplett.org, niv@...ibm.com, tglx@...utronix.de,
	peterz@...radead.org, Valdis.Kletnieks@...edu, dhowells@...hat.com,
	edumazet@...gle.com, darren@...art.com, fweisbec@...il.com,
	sbw@....edu, torvalds@...ux-foundation.org
Subject: Re: [PATCH RFC ticketlock] Auto-queued ticketlock

On Tue, 2013-06-11 at 09:43 -0700, Paul E. McKenney wrote:

> > > I am a bit concern about the size of the head queue table itself. RHEL6, 
> > > for example, had defined CONFIG_NR_CPUS to be 4096 which mean a table 
> > > size of 256. Maybe it is better to dynamically allocate the table at 
> > > init time depending on the actual number of CPUs in the system.
> > 
> > Yeah, it can be allocated dynamically at boot.
> 
> But let's first demonstrate the need.  Keep in mind that an early-boot
> deadlock would exercise this code.

I think an early-boot deadlock has more problems than this :-)

Now if we allocate this before other CPUs are enabled, there's no need
to worry about accessing it before it is used. The table can only be
used on contention, and there can be no contention while we are running
on only one CPU.


>   Yes, it is just a check for NULL,
> but on the other hand I didn't get the impression that you thought that
> this code was too simple.  ;-)

I wouldn't change the code that uses it. It should never be hit, and if
it is triggered by an early-boot deadlock, then I think this would
actually be a plus. An early-boot deadlock would cause the system to
hang with no feedback whatsoever, costing the developer hours of
crying for mommy and pulling out their hair, because the system just
stops doing anything except showing them a blinking cursor that
blinks "haha, haha, haha".

But if an early-boot deadlock were to cause this code to be triggered
and do a NULL pointer dereference, then the system crashes, most
likely producing a backtrace that gives the developer a lot more
information about what is happening. Sure, it may confuse them at
first, but then they can say: "Why is this code triggering before we
have other CPUs? Oh, I have a deadlock here," and go fix the code in a
matter of minutes instead of hours.

Note, I don't even see this triggering with an early-boot deadlock. The
only way that can happen is if the task tries to take a spinlock it
already owns, or an interrupt goes off and grabs a spinlock that the
task currently holds without having disabled interrupts. The ticket
counter would be just 2, which is far below the threshold that triggers
the queuing.

-- Steve

