Date:	Wed, 18 Apr 2007 12:17:06 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Christoph Pfister <christophpfister@...il.com>
Cc:	S.Çağlar Onur <caglar@...dus.org.tr>,
	linux-kernel@...r.kernel.org,
	Michael Lothian <mike@...eburn.co.uk>,
	Christophe Thommeret <hftom@...e.fr>,
	Jurgen Kofler <kaffeine@....net>,
	Ulrich Drepper <drepper@...hat.com>
Subject: Re: Kaffeine problem with CFS


* Christoph Pfister <christophpfister@...il.com> wrote:

> It's nearly impossible for me to find out which mutex is deadlocking.

i've disassembled the xine_play function, and here are the function 
calls in it:

  <unresolved widget call?>
 pthread_mutex_lock()
 xine_log()
  <unresolved widget call?>
 function pointer call
 right after it: pthread_mutex_lock()

this second pthread_mutex_lock() is the one that deadlocks. It comes 
right after that function pointer call; maybe that identifies it?

[some time passes]

i rebuilt the library from source and, while the installed library 
differs from that build, looking at the disassembly i'm quite sure it's 
this pthread_mutex_lock() in xine_play_internal():

  pthread_mutex_lock( &stream->demux_lock );

src/xine-engine/xine.c:1201

the function pointer call was:

  stream->xine->port_ticket->acquire(stream->xine->port_ticket, 1);

right before the pthread_mutex_lock() call.
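
to make the suspected shape of the hang concrete, here is a minimal 
standalone sketch (none of it is xine code; the thread roles and lock 
names are only my assumption about what the deadlock partner might look 
like): one thread takes a lock standing in for the port ticket and then 
blocks on a second lock standing in for stream->demux_lock, while 
another thread takes the same two locks in the opposite order:

  /*
   * standalone illustration only - not xine-lib code.  lock_ticket and
   * lock_demux merely stand in for the port ticket and demux_lock; the
   * real deadlock partner inside xine is a guess at this point.
   */
  #include <pthread.h>
  #include <stdio.h>
  #include <unistd.h>

  static pthread_mutex_t lock_ticket = PTHREAD_MUTEX_INITIALIZER;
  static pthread_mutex_t lock_demux  = PTHREAD_MUTEX_INITIALIZER;

  static void *player_thread(void *arg)        /* plays the xine_play() role */
  {
          pthread_mutex_lock(&lock_ticket);    /* ->acquire(port_ticket, 1)  */
          sleep(1);                            /* widen the race window      */
          pthread_mutex_lock(&lock_demux);     /* hangs here                 */
          pthread_mutex_unlock(&lock_demux);
          pthread_mutex_unlock(&lock_ticket);
          return arg;
  }

  static void *other_thread(void *arg)         /* takes the locks reversed   */
  {
          pthread_mutex_lock(&lock_demux);
          sleep(1);
          pthread_mutex_lock(&lock_ticket);
          pthread_mutex_unlock(&lock_ticket);
          pthread_mutex_unlock(&lock_demux);
          return arg;
  }

  int main(void)
  {
          pthread_t a, b;

          pthread_create(&a, NULL, player_thread, NULL);
          pthread_create(&b, NULL, other_thread, NULL);
          pthread_join(a, NULL);               /* never returns on a bad run */
          pthread_join(b, NULL);
          printf("no deadlock this run\n");
          return 0;
  }

compile with -pthread; with the sleeps in place both threads grab their 
first lock before trying the second, so it hangs essentially every time.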

> It would be great if you could reproduce the same problem with a 
> xine-lib which has been compiled with debug support (so you'd get line 
> numbers in the back trace - that makes life _a lot_ easier and maybe I 
> could identify the problem that way) and the least optimization 
> possible ... :-)

ok, i'll try that too (though it will take some more time), but given 
how hard it was for me to trigger it, i wanted to get maximum info out 
of it before having to kill the threads.
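
one possible diagnostic for such a rebuild (a sketch only; 
debug_mutex_lock() is a made-up helper, not anything that exists in 
xine-lib) would be to swap the suspect lock call for a timed one, so the 
hang shows up as a log line instead of a silent block:

  /*
   * hypothetical debugging aid, not xine-lib code: in a local build the
   * suspect pthread_mutex_lock(&stream->demux_lock) could be replaced by
   * debug_mutex_lock(&stream->demux_lock, "demux_lock") so a deadlock
   * prints a message before blocking as before.
   */
  #define _GNU_SOURCE
  #include <pthread.h>
  #include <stdio.h>
  #include <time.h>
  #include <errno.h>

  static void debug_mutex_lock(pthread_mutex_t *m, const char *name)
  {
          struct timespec deadline;

          clock_gettime(CLOCK_REALTIME, &deadline);
          deadline.tv_sec += 5;                      /* arbitrary 5s budget */

          if (pthread_mutex_timedlock(m, &deadline) == ETIMEDOUT) {
                  fprintf(stderr, "possible deadlock: still waiting for %s\n",
                          name);
                  pthread_mutex_lock(m);             /* then block as before */
          }
  }

pthread_mutex_timedlock() is standard POSIX, so this only needs a local 
patch to the one call site and no new dependencies.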

	Ingo
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
