Date:	Mon, 20 Sep 2010 17:18:34 +0200
From:	Jan Kara <jack@...e.cz>
To:	Arnd Bergmann <arnd@...db.de>
Cc:	Jan Kara <jack@...e.cz>, Christoph Hellwig <hch@...radead.org>,
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH] BKL: Remove BKL from isofs

On Mon 20-09-10 13:13:11, Arnd Bergmann wrote:
> On Monday 20 September 2010, Jan Kara wrote:
> >   Hmm, looking through the code, I actually don't see a reason
> > why we should need any per-sb lock at all. The filesystem is always
> > read-only and we don't seem to have any global data structures that
> > change. But that needs some testing I guess - I'll try to do that.
> 
> Ok, great! The BKL was basically as wrong as the global mutex protecting
> the operations anyway, because it does not document what data is
> actually getting protected in any of the drivers that I'm converting
> to a private mutex.
> 
> Given more time for code inspection and some testing, you can probably
> come up with a good explanation of why the BKL is not needed in all those
> places, but since nobody ever bothered to do this for any of these drivers
> over the last decade, my approach was to simply prove (in a rather loose
> sense) that I can't make it worse by converting to a mutex.
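
The mechanical conversion Arnd refers to replaces each lock_kernel()/unlock_kernel()
pair with a mutex private to the driver, keeping the same (possibly overly coarse)
serialization without having to work out what the BKL was actually protecting. A
minimal sketch of the pattern, assuming a hypothetical driver; foo_mutex, foo_state
and foo_ioctl are made up for illustration and are not taken from any of the real
conversion patches:

#include <linux/fs.h>
#include <linux/mutex.h>

/*
 * Sketch of the BKL -> private mutex conversion pattern.  Hypothetical
 * driver: foo_mutex, foo_state and foo_ioctl are made-up names and do
 * not come from any of the actual conversion patches.
 */
static DEFINE_MUTEX(foo_mutex);	/* one mutex per converted driver */
static int foo_state;		/* whatever state the BKL happened to cover */

static long foo_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
	long ret;

	mutex_lock(&foo_mutex);		/* was: lock_kernel(); */
	/* same critical section as before, still serialized driver-wide */
	ret = foo_state;
	foo_state = cmd;
	mutex_unlock(&foo_mutex);	/* was: unlock_kernel(); */

	return ret;
}

Within the driver this is, if anything, stricter than the BKL (a mutex is not
dropped when the task sleeps), and it gives up the accidental serialization
against unrelated BKL users elsewhere in the kernel -- which is exactly the
trade-off the series accepts.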
  Attached is an obvious patch with some reasoning. I've also run 10 parallel
tar processes for about half an hour, continuously reading the isofs image in
a loop. In parallel, an infinite loop running "echo 2
>/proc/sys/vm/drop_caches" made sure directory entries were constantly reread
from the filesystem. The whole thing ran inside KVM, so the fs image was
actually cached in memory. The machine has 8 CPUs, so I think reasonable
parallelism happened and all seemed happy. So I think the patch should be
safe to put into some tree for testing and inclusion...

								Honza
-- 
Jan Kara <jack@...e.cz>
SUSE Labs, CR

View attachment "0001-isofs-Remove-BKL.patch" of type "text/x-patch" (4027 bytes)
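
For readers who don't open the attachment, the shape of the change is the
opposite of the mutex conversion sketched above: in isofs the lock is dropped
with no replacement, because the filesystem is always mounted read-only and
nothing in these paths touches shared mutable state. A minimal sketch of that
idea, assuming a hypothetical operation; isofs_example_getattr is made up for
illustration and is not an excerpt from the attached patch:

#include <linux/fs.h>
#include <linux/stat.h>

/*
 * Illustration only -- not from 0001-isofs-Remove-BKL.patch.  isofs is
 * always read-only and this path only reads the in-core inode, so the
 * old lock_kernel()/unlock_kernel() pair is simply deleted rather than
 * replaced by a per-sb mutex.
 */
static int isofs_example_getattr(struct inode *inode, struct kstat *stat)
{
	/* was: lock_kernel(); */
	generic_fillattr(inode, stat);	/* reads only the immutable in-core inode */
	/* was: unlock_kernel(); */
	return 0;
}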
