Date:	Sat, 30 Oct 2010 13:49:28 -0700
From:	Jeremy Fitzhardinge <jeremy@...p.org>
To:	Andrew Morton <akpm@...ux-foundation.org>
CC:	Dan Magenheimer <dan.magenheimer@...cle.com>,
	torvalds@...ux-foundation.org, linux-kernel@...r.kernel.org,
	Christoph Hellwig <hch@....de>,
	Chris Mason <chris.mason@...cle.com>,
	Nitin Gupta <nitingupta910@...il.com>
Subject: Re: Ping? RE: [GIT PULL] mm/vfs/fs:cleancache for 2.6.37 merge window

 On 10/30/2010 12:06 PM, Andrew Morton wrote:
> On Wed, 27 Oct 2010 11:37:47 -0700 (PDT) Dan Magenheimer <dan.magenheimer@...cle.com> wrote:
>
>> Ping?  I hope you are still considering this.  If not or if
>> there are any questions I can answer, please let me know.
> What's happened here is that the patchset has gone through its
> iterations and a few people have commented and then after a while,
> nobody had anything to say about the code so nobody said anything more.
>
> But silence doesn't mean acceptance - it just means that nobody had
> anything to say.
>
> I think I looked at the earlier iterations, tried to understand the
> point behind it all, made a few code suggestions and eventually tuned
> out.  At that time (and hence at this time) I just cannot explain to
> myself why we would want to merge this code.
>
> All new code is a cost/benefit decision.  The costs are pretty well
> known: larger codebase, more code for us and our "customers" to
> maintain and support, etc.  That the code pokes around in vfs and
> various filesystems does increase those costs a little.
>
> But the extent of the benefits to our users aren't obvious to me.  The
> code is still xen-specific, I believe?  If so, that immediately reduces
> the benefit side by a large amount simply because of the reduced
> audience.
>
> We did spend some time trying to get this wired up to zram so that the
> feature would be potentially useful to *all* users, thereby setting the
> usefulness multiplier back to 1.0.  But I don't recall that anything
> came of this?

Nitin was definitely working on this and made some constructive comments
as a result, but I don't know if there's any completed/usable code or not.

> I also don't know how useful the code is to its intended
> micro-audience: xen users!

The benefit is that it allows memory to be much more fluidly assigned
between domains as needed, rather than having to statically allocate big
chunks of memory.  The result is that it's possible to provision domains
with much smaller amounts of memory while still reducing the number of
real page-fault I/Os.  Dan's numbers are certainly very interesting (Dan,
perhaps you can repost those results).

However, I don't think it has been widely deployed yet, since most users
are using upstream/distro kernels.

> So can we please revisit all this from the top level?  Jeremy, your
> input would be valuable.

OK, I'll need to get myself up to speed on the issues again.  Will you
be about in Boston next week?

>   Christoph, I recall that you had technical
> objections - can you please repeat those?

I think (and I don't want to misrepresent or minimize Christoph's
concerns here) the most acute one was the need for a one-line addition
to each filesystem that wants to participate.
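For context, the per-filesystem change in the patchset amounts to a
single opt-in call during mount.  Roughly (an illustrative kernel
fragment, not compilable on its own; the exact call site varies by
filesystem, and "examplefs" is made up):

```c
/* Sketch of the one-line per-filesystem opt-in from the patchset. */
static int examplefs_fill_super(struct super_block *sb, void *data,
                                int silent)
{
        ...
        cleancache_init_fs(sb);   /* the one-line addition in question */
        return 0;
}
```

The objection is not to the size of the line but to the principle of
every participating filesystem having to carry it.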

> It's the best I can do to kick this along, sorry.

Thanks,
    J

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/