Message-ID: <20161024033852.quinlee4a24mb2e2@thunk.org>
Date:   Sun, 23 Oct 2016 23:38:52 -0400
From:   Theodore Ts'o <tytso@....edu>
To:     Jens Axboe <axboe@...com>
Cc:     Dave Chinner <david@...morbit.com>, linux-ext4@...r.kernel.org,
        fstests@...r.kernel.org, tarasov@...ily.name
Subject: Re: Test generic/299 stalling forever

I enabled some more debugging and it's become clearer what's going
on.  (See attached for the full log).

The main issue seems to be that once one of the fio jobs is done, it
kills off the other threads (actually, we're using processes):

process  31848 terminate group_id=0
process  31848 setting terminate on direct_aio/31846
process  31848 setting terminate on direct_aio/31848
process  31848 setting terminate on direct_aio/31849
process  31848 setting terminate on direct_aio/31851
process  31848 setting terminate on aio-dio-verifier/31852
process  31848 setting terminate on buffered-aio-verifier/31854
process  31851 pid=31851: runstate RUNNING -> FINISHING
process  31851 terminate group_id=0
process  31851 setting terminate on direct_aio/31846
process  31851 setting terminate on direct_aio/31848
process  31851 setting terminate on direct_aio/31849
process  31851 setting terminate on direct_aio/31851
process  31851 setting terminate on aio-dio-verifier/31852
process  31851 setting terminate on buffered-aio-verifier/31854
process  31852 pid=31852: runstate RUNNING -> FINISHING
process  31846 pid=31846: runstate RUNNING -> FINISHING
    ...
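
Roughly, the group-terminate behaviour in the log above amounts to
something like the sketch below; the structure and names here are
illustrative assumptions, not fio's actual code:

struct job {
    const char *name;
    int group_id;
    volatile int terminate;     /* polled by the job's I/O loop */
};

/* When one job finishes, it asks every other job in its group to stop. */
static void terminate_group(struct job *jobs, int njobs, int group_id)
{
    int i;

    for (i = 0; i < njobs; i++) {
        if (group_id == -1 || jobs[i].group_id == group_id)
            jobs[i].terminate = 1;  /* "setting terminate on <job>" */
    }
}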

but one or more of the threads don't exit within 60 seconds:

fio: job 'direct_aio' (state=5) hasn't exited in 60 seconds, it appears to be stuck. Doing forceful exit of this job.
process  31794 pid=31849: runstate RUNNING -> REAPED
fio: job 'buffered-aio-verifier' (state=5) hasn't exited in 60 seconds, it appears to be stuck. Doing forceful exit of this job.
process  31794 pid=31854: runstate RUNNING -> REAPED
process  31794 terminate group_id=-1

The main thread then prints all of the statistics, and calls stat_exit():

stat_exit called by tid: 31794       <---- debugging message which prints gettid()

Unfortunately, these processes aren't actually killed; they are
marked as reaped, but they are still in the process listing:

root@...tests:~# ps augxww | grep fio
root      1585  0.0  0.0      0     0 ?        S<   18:45   0:00 [dm_bufio_cache]
root      7191  0.0  0.0  12732  2200 pts/1    S+   23:05   0:00 grep fio
root     31849  1.5  0.2 407208 18876 ?        Ss   22:36   0:26 /root/xfstests/bin/fio /tmp/31503.fio
root     31854  1.2  0.1 398480 10240 ?        Ssl  22:36   0:22 /root/xfstests/bin/fio /tmp/31503.fio

And if you attach to them with gdb, they are spinning trying to grab
stat_mutex, which they can't get because the main thread has already
called stat_exit() and then exited.  So these two threads did
eventually come back, but only some time after the 60 seconds had
passed, and by then the main thread had already torn stat_mutex down
in stat_exit(), so they hang waiting for a mutex they will never get.
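
To make the ordering concrete, here is a minimal stand-alone sketch
of the same failure mode, using plain POSIX primitives rather than
fio's own mutex wrapper (so the details here are assumptions, not
fio's code): the main process gives up on a slow job, downs and
destroys the shared stats mutex, and exits; the job comes back later
and blocks on a mutex that will never be released.

#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Process-shared mutex in shared memory, standing in for stat_mutex. */
    pthread_mutex_t *stat_mutex = mmap(NULL, sizeof(*stat_mutex),
                                       PROT_READ | PROT_WRITE,
                                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(stat_mutex, &attr);

    if (fork() == 0) {
        /* "Job" process: its I/O outlives the 60 second reap timeout. */
        sleep(5);
        /*
         * The main process destroyed the mutex while holding it and then
         * exited without unlocking it, so this lock can never succeed;
         * the job hangs here, just like the two leftover fio processes
         * in the ps output above.
         */
        pthread_mutex_lock(stat_mutex);
        puts("job: folded stats in");           /* never reached */
        pthread_mutex_unlock(stat_mutex);
        _exit(0);
    }

    /* Main process: stand-in for the 60 second forceful-exit timeout. */
    sleep(1);
    puts("main: job looks stuck, marking it reaped and printing stats");

    /*
     * Equivalent of stat_exit(): down the mutex, then destroy it.
     * (Destroying a held pthread mutex is itself dubious; it just
     * mirrors the down-then-destroy pattern described above.)
     */
    pthread_mutex_lock(stat_mutex);
    pthread_mutex_destroy(stat_mutex);

    return 0;       /* main exits; the job process is left behind */
}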

This probably also explains why you had trouble reproducing it.  It
requires a disk whose performance is variable enough that under heavy
load, it might take more than 60 seconds for the direct_aio or
buffered-aio-verifier thread to close itself out.

And I suspect that once the main thread exited, it also closed out
the debugging channel, so the deadlock detector probably did trip,
but somehow we just never saw its output.

So I can imagine some possible fixes.  We could make the thread
timeout configurable, and/or increase it from 60 seconds to something
like 300 seconds.  We could make stat_exit() a no-op; after all, if
the main thread is exiting, there's no real point to down and then
destroy the stat_mutex.  And/or we could change the forced reap to
send a kill -9 to the stuck process instead of just marking it as
reaped.
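
For that last idea, a minimal sketch might look like the following;
the names here (struct job, reap_timed_out_job, TD_REAPED) are
hypothetical stand-ins, not fio's actual internals:

#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>

struct job {
    pid_t pid;          /* pid of the forked job process */
    int runstate;       /* RUNNING, FINISHING, REAPED, ... */
};

#define TD_REAPED 6     /* illustrative value only */

/* Called when a job has not exited within the (configurable) timeout. */
static void reap_timed_out_job(struct job *job)
{
    /*
     * Kill the stuck process outright so it can't come back later and
     * block on a stats mutex the main thread has already destroyed.
     */
    if (job->pid > 0) {
        kill(job->pid, SIGKILL);
        waitpid(job->pid, NULL, 0);     /* also collect the zombie */
    }
    job->runstate = TD_REAPED;
}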

Cheers,

						- Ted


Download attachment "299.full.gz" of type "application/gzip" (4711 bytes)
