Message-Id: <200712131524.lBDFOHBv024206@agora.fsl.cs.sunysb.edu>
Date:	Thu, 13 Dec 2007 10:24:17 -0500
From:	Erez Zadok <ezk@...sunysb.edu>
To:	Nick Piggin <nickpiggin@...oo.com.au>
Cc:	Erez Zadok <ezk@...sunysb.edu>, hch@...radead.org,
	viro@....linux.org.uk, akpm@...ux-foundation.org,
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH 36/42] VFS: export drop_pagecache_sb 

In message <200712121638.35167.nickpiggin@...oo.com.au>, Nick Piggin writes:
> On Monday 10 December 2007 13:42, Erez Zadok wrote:
> > Needed to maintain cache coherency after branch management.
> >
> 
> Hmm, I'd much prefer to be able to sleep in invalidate_mapping_pages
> before this function gets exported.
> 
> As it is, it can cause massive latencies on preemption and the inode_lock
> so it is pretty much debug-only IMO. I'd rather it didn't escape into the
> wild as is.
> 
> Either that or rework your cache coherency somehow.

Nick, thanks for the advice.

We use a generation number after each successful branch configuration
command, so that ->d_revalidate later on can discover that change, and
rebuild the union of objects.  At ->remount time, I figured it'd be nice to
"encourage" that revalidation to happen sooner, by invalidating as many
upper pages as possible, thus causing ->d_revalidate/->readpage to take
place sooner.  So we used to call drop_pagecache_sb from our remount code:
it was the only caller of drop_pagecache_sb.  It wasn't too much of a
latency issue to call drop_pagecache_sb there: the VFS remount code path is
already pretty slow (dropping temporarily to readonly mode, and dropping
other caches), and remount isn't an operation used often, so a little bit
more latency would probably not have been noticed by users.
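
Just so the latency point is concrete for the archives: from memory, the
helper is roughly shaped like the sketch below (a paraphrase for
illustration, not the actual fs/drop_caches.c code).  The whole inode walk
runs under the global inode_lock spinlock, so nothing in the loop may
sleep, and a superblock with a huge number of cached inodes holds off
preemption and every other inode_lock user for the duration -- which is
exactly the problem with letting arbitrary modules call it.

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/writeback.h>	/* inode_lock */

/*
 * Paraphrased sketch only, not a verbatim copy of drop_pagecache_sb():
 * walk every inode of the superblock under inode_lock and invalidate
 * its clean page cache without ever sleeping.
 */
static void drop_pagecache_sb_sketch(struct super_block *sb)
{
	struct inode *inode;

	spin_lock(&inode_lock);
	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
		if (inode->i_state & (I_FREEING | I_WILL_FREE))
			continue;
		invalidate_mapping_pages(inode->i_mapping, 0, -1);
	}
	spin_unlock(&inode_lock);
}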

Nevertheless, it was not strictly necessary to call drop_pagecache_sb in
unionfs_remount, because the objects in question will have gotten
revalidated sooner or later anyway; the call to drop_pagecache_sb was just
an optimization (one which I wasn't 100% sure about anyway, as per my long
"XXX" comment above that call in unionfs_remount).

So I agree with you: if this symbol can be abused by modules and cause
problems, then exporting it to modules is too risky.  I've reworked my code
to avoid calling drop_pagecache_sb and I'll drop that patch.

Cheers,
Erez.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
