Date:	Wed, 20 May 2009 14:58:53 +0200
From:	Martin Steigerwald <ms@...mix.de>
To:	Marcin Krol <mrkafk@...il.com>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: inotify limits - thousands (tens of thousands?) of watches

On Wednesday, 20 May 2009, Marcin Krol wrote:
> Martin Steigerwald wrote:
> > Hmmm, I think you could just run rsync periodically. It might even be
> > faster at detecting changed files.
>
> I beg to differ on this: rsync does quite intensive (in terms of disk
> and CPU activity) comparisons at the beginning of each
> synchronization. It's pretty light later, true, but running rsync every
> few minutes on the entire /home is IMO out of the question.
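
(On the original subject of the thread: if running out of watches is the
only blocker, the inotify limits themselves are tunable via sysctl. A
rough sketch, assuming a kernel that exposes the usual inotify sysctls;
the value 524288 is just an illustrative choice:)

```shell
# Show the current per-user watch limit (the default is often 8192):
cat /proc/sys/fs/inotify/max_user_watches

# Raise it for the running system (needs root):
sysctl -w fs.inotify.max_user_watches=524288

# To make it persistent, add to /etc/sysctl.conf:
#   fs.inotify.max_user_watches = 524288
```

Each watch costs a bit of unswappable kernel memory, so the limit should
not be raised blindly on small machines.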

Another idea that might be applicable: 

We have a clustered setup - the same one where that inotify Ruby script runs - 
that uses LVM and software RAID 1 to provide mirroring between both locations.

In each location there is a hardware RAID array, i.e. redundant in 
itself. Each array is connected via FC to both cluster servers. Then we 
layer a software RAID 1 on top of them; both cluster servers see the 
software RAID 1 device. One server usually runs only NFS and the other 
usually only MySQL, so we made two volume groups: one is used by the NFS 
server only and the other by the MySQL server.

In the failover case the surviving server STONITHs (fences) the failed 
server and takes over its volume group.

This way one of the servers can fail and the remaining server will still be 
able to access the most recent data. One of the external RAID arrays could 
fail as well.

This worked remarkably well for more than a year, too. It won't work when you 
need to access the same volumes on both servers simultaneously, obviously.
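
For reference, the stacking described above could look roughly like this.
This is only a sketch - all device names, partition layout, and volume
group names are illustrative, not the actual configuration (the commands
need root and real block devices, so treat them as pseudocode):

```shell
# Mirror the two FC-attached hardware arrays with md RAID 1
# (sdc/sdd stand in for the two arrays; one md device per service):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2

# One volume group per service, so each can fail over independently:
pvcreate /dev/md0 /dev/md1
vgcreate vg_nfs   /dev/md0   # normally activated on the NFS server
vgcreate vg_mysql /dev/md1   # normally activated on the MySQL server

# On failover, after fencing the peer, the surviving node activates
# the other volume group and takes over the service:
vgchange -a y vg_mysql
```

The point of using one volume group per service is that activation is the 
unit of takeover: the cluster manager only has to fence the peer and run 
vgchange, rather than juggle individual logical volumes.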

-- 
Martin Steigerwald - team(ix) GmbH - http://www.teamix.de
gpg: 19E3 8D42 896F D004 08AC A0CA 1E10 C593 0399 AE90

