Date:	Tue, 19 Jun 2012 15:51:10 +0200
From:	Emil Renner Berthing <erb@....com>
To:	John McCutchan <john@...nmccutchan.com>,
	Robert Love <rlove@...ve.org>,
	Eric Paris <eparis@...isplace.org>
CC:	linux-kernel@...r.kernel.org, Jesper Dahl Nyerup <nyerup@....com>,
	Anders Saaby <as@....com>
Subject: Inotify scalability issue

Hi,

We're running Dovecot mail servers and are experiencing problems similar 
to what is described here:
http://old.nabble.com/Very-High-Load-on-Dovecot-2-and-Errors-in-mail.err.-tt33856207.html#a33856207

I've written two small programs to expose the problem.

watcher.c:
This program reads a filename from the command line, creates a new 
inotify handle and sets it up to watch IN_CLOSE_WRITE and IN_DELETE on 
the file. It then writes a 'z' to stdout, and does a blocking read from 
inotify. After receiving an event from inotify the program prints an 'x' 
to stdout, closes the inotify handle and then prints a '.' to stdout 
before exiting.
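
For reference, a minimal sketch of what watcher.c does, pieced together
from the description above (the actual attachment may differ in details;
the unbuffered write() calls are my assumption, chosen so the
single-character outputs interleave across processes):

/* watcher.c -- minimal sketch, not the actual attachment */
#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

int main(int argc, char *argv[])
{
	char buf[4096];
	int fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	/* create a new inotify handle and watch the given file */
	fd = inotify_init();
	if (fd < 0) {
		perror("inotify_init");
		return 1;
	}
	if (inotify_add_watch(fd, argv[1],
	                      IN_CLOSE_WRITE | IN_DELETE) < 0) {
		perror("inotify_add_watch");
		return 1;
	}

	/* signal that the watch is set up, then block on inotify */
	write(STDOUT_FILENO, "z", 1);
	if (read(fd, buf, sizeof(buf)) < 0) {
		perror("read");
		return 1;
	}

	/* event received; the close() below is where the watchers
	 * pile up in D-state on many-core machines */
	write(STDOUT_FILENO, "x", 1);
	close(fd);
	write(STDOUT_FILENO, ".", 1);

	return 0;
}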

test.c:
This program creates 20 files and spawns 20 watchers, one to watch each 
of them. For each watcher it waits between 1 and 2 seconds before 
touching the file it watches (which should cause the watcher to wake up 
and exit), then spawns a new watcher on the file, again waits between 1 
and 2 seconds before touching the file, and so on.
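
And a rough sketch of the test driver, again reconstructed from the
description (the watcher binary path "./watcher", the "testfile-%d"
names, and the one-driver-process-per-file structure are all my
assumptions; the real test.c may be organized differently):

/* test.c -- rough sketch, not the actual attachment */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>

#define NFILES 20

/* open the file for writing and close it again, which generates
 * the IN_CLOSE_WRITE event the watcher is waiting for */
static void touch(const char *name)
{
	int fd = open(name, O_WRONLY | O_CREAT, 0644);
	if (fd >= 0)
		close(fd);
}

static void spawn_watcher(const char *name)
{
	if (fork() == 0) {
		execl("./watcher", "watcher", name, (char *)NULL);
		_exit(1);
	}
}

int main(void)
{
	int i;

	for (i = 0; i < NFILES; i++) {
		if (fork() == 0) {
			/* one driver process per file */
			char name[32];
			snprintf(name, sizeof(name), "testfile-%d", i);
			srand(getpid());
			touch(name);
			for (;;) {
				spawn_watcher(name);
				/* wait between 1 and 2 seconds, then
				 * touch the file to wake the watcher */
				usleep(1000000 + rand() % 1000001);
				touch(name);
				/* reap watchers that have exited */
				while (waitpid(-1, NULL, WNOHANG) > 0)
					;
			}
		}
	}

	/* run until interrupted */
	for (;;)
		wait(NULL);
}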

On my dual-core workstation, running the test program behaves as you'd 
expect. That is, it prints
zzzzzzzzzzzzzzzzzzzzx.zx.zx.zx.zx.zx.zx.zx.zx.zx.zx.zx.zx (etc.)

However on a 16-core server it behaves very differently:
zzzzzzzzzzzzzzzzzzzzxzxzxzxz.xzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxzxz......................................................................................................xzxzxzxzxz.xxzzxzxzxzxzxzxzxzxzxzxzxz.................xzxz.xz

(sorry about the long line)
That is, watchers are spawned to watch their files and are woken up by 
inotify as they should be, but then they pile up in D-state waiting for 
the close call to finish. Only at irregular intervals do they all return.

They seem to be sleeping on the synchronize_srcu() call in 
fsnotify_destroy_group() in fs/notify/group.c.

We've tested this on various machines running kernels from 3.0 and up, 
and the trend is very clear: the more processors, the worse it gets. 
However, I also tried it on a 48-core server running an old 2.6.32 
Debian kernel, and there the processes don't pile up.

/Emil

Attachments:
  watcher.c (text/x-c, 1157 bytes)
  test.c (text/x-c, 2346 bytes)
  Makefile (text/plain, 362 bytes)
