Date:	Mon,  3 Mar 2008 23:00:16 -0800 (PST)
From:	Roland McGrath <roland@...hat.com>
To:	Frank Mayhar <fmayhar@...gle.com>
Cc:	parag.warudkar@...il.com,
	Alejandro Riveira Fernández 
	<ariveira@...il.com>, Andrew Morton <akpm@...ux-foundation.org>,
	bugme-daemon@...zilla.kernel.org, linux-kernel@...r.kernel.org,
	Ingo Molnar <mingo@...e.hu>,
	Thomas Gleixner <tglx@...utronix.de>,
	Jakub Jelinek <jakub@...hat.com>
Subject: Re: [Bugme-new] [Bug 9906] New: Weird hang with NPTL and SIGPROF.

Thanks for the detailed explanation and for bringing this to my attention.

This is a problem we knew about when I first implemented posix-cpu-timers
and process-wide SIGPROF/SIGVTALRM.  I'm a little surprised it took this
long to become a problem in practice.  I originally expected to have to
revisit it sooner than this, but I certainly haven't thought about it for
quite some time.  I'd guess that HZ=1000 becoming common is what did it.

The obvious implementation for the process-wide clocks is to have the
tick interrupt increment shared utime/stime/sched_time fields in
signal_struct as well as the private task_struct fields.  The all-threads
totals accumulate in the signal_struct fields, which would be atomic_t.
It's then trivial for the timer expiry checks to compare against those
totals.
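
In rough pseudo-kernel C, the shape would be something like this (a
sketch only; the field and helper names here are illustrative, not the
real ones):

struct signal_struct {
        ...
        /* Shared all-threads totals, bumped from the tick path. */
        atomic_t        utime_total;    /* user ticks, whole group */
        atomic_t        stime_total;    /* system ticks, whole group */
};

/* Tick interrupt path, for whatever task is current on this CPU.
 * The private task_struct fields would still be bumped as before. */
static void account_group_tick(struct task_struct *p, int user)
{
        struct signal_struct *sig = p->signal;

        if (user)
                atomic_inc(&sig->utime_total);
        else
                atomic_inc(&sig->stime_total);

        /* The process-wide expiry check is now a trivial compare of
         * the shared totals against the armed expiration. */
        if (atomic_read(&sig->utime_total) +
            atomic_read(&sig->stime_total) >= sig->prof_expires)
                /* ... fire the process-wide SIGPROF ... */;
}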

The concern I had about this was multiple CPUs competing for the
signal_struct fields.  (That is, several CPUs all running threads in the
same process.)  If the ticks on each CPU are even close to synchronized,
then every single time all those CPUs will do an atomic_add on the same
word.  I'm not any kind of expert on SMP and cache effects, but I know
this is bad.  However bad it is, it's that bad on every single tick,
and it's that bad for that many CPUs even when there are as few as two
threads.
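
The usual trick for that kind of clash would be per-CPU counters, each
padded out to its own cache line, folded together only when a check
needs the total.  Again just a sketch with illustrative names (the
array would really hang off signal_struct, not be a global):

/* One counter per CPU, on its own cache line, so ticks on different
 * CPUs never fight over the same word. */
struct group_tick_count {
        unsigned long ticks;
} ____cacheline_aligned;

static struct group_tick_count prof_ticks[NR_CPUS];

/* Tick path: touches only the local CPU's line. */
static void account_local_tick(void)
{
        prof_ticks[smp_processor_id()].ticks++;
}

/* Check path: fold the pieces.  O(number of CPUs), but off the
 * per-tick fast path. */
static unsigned long total_prof_ticks(void)
{
        unsigned long total = 0;
        int cpu;

        for_each_online_cpu(cpu)
                total += prof_ticks[cpu].ticks;
        return total;
}

That trades the per-tick contention for an O(CPUs) sum at check time,
which may or may not be a win depending on how often the checks run.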

The implementation we have instead is obviously dismal for large numbers
of threads.  I always figured we'd replace that with something based on
more sophisticated thinking about the CPU-clash issue.  
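
What "dismal" means concretely: with no shared totals, a process-wide
expiry check has to walk the whole thread group and sum the per-thread
clocks every time, roughly like this (illustrative, not the literal
code):

/* Sum a process-wide CPU clock by walking every thread in the group,
 * under the appropriate locking.  O(threads) for every check, and the
 * checks run from the tick. */
static unsigned long group_prof_clock(struct task_struct *tsk)
{
        struct task_struct *t = tsk;
        unsigned long total = 0;

        do {
                total += t->utime + t->stime;
                t = next_thread(t);
        } while (t != tsk);

        return total;
}

At HZ=1000 with thousands of threads, that walk gets very expensive
very fast.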

I don't entirely follow your description of your patch.  It sounds like it
should be two patches, though.  The second of those patches (workqueue)
sounds like it could be an appropriate generic cleanup, or like it could
be a complication that might be unnecessary if we get a really good
solution to the main issue.

As for the first patch, I'm not sure I understood your description.
Can you elaborate?  Or just post the unfinished patch as an
illustration, marked as not for submission until it's finished.


Thanks,
Roland
