Message-ID: <f0890e7c-9f9f-4110-8da9-05d0fdf7f91c@default>
Date:	Mon, 24 Sep 2012 13:36:48 -0700 (PDT)
From:	Dan Magenheimer <dan.magenheimer@...cle.com>
To:	Mel Gorman <mgorman@...e.de>
Cc:	Seth Jennings <sjenning@...ux.vnet.ibm.com>,
	Konrad Wilk <konrad.wilk@...cle.com>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Nitin Gupta <ngupta@...are.org>,
	Minchan Kim <minchan@...nel.org>,
	Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>,
	Robert Jennings <rcj@...ux.vnet.ibm.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, devel@...verdev.osuosl.org
Subject: RE: [RFC] mm: add support for zsmalloc and zcache

> From: Mel Gorman [mailto:mgorman@...e.de]
> Subject: Re: [RFC] mm: add support for zsmalloc and zcache
> 
> On Sat, Sep 22, 2012 at 02:18:44PM -0700, Dan Magenheimer wrote:
> > > From: Mel Gorman [mailto:mgorman@...e.de]
> > > Subject: Re: [RFC] mm: add support for zsmalloc and zcache
> > >
> > > On Fri, Sep 21, 2012 at 01:35:15PM -0700, Dan Magenheimer wrote:
> > > > > From: Seth Jennings [mailto:sjenning@...ux.vnet.ibm.com]
> > > > > Subject: Re: [RFC] mm: add support for zsmalloc and zcache
> > > > The two proposals:
> > > > A) Recreate all the work done for zcache2 as a proper sequence of
> > > >    independent patches and apply them to zcache1. (Seth/Konrad)
> > > > B) Add zsmalloc back in to zcache2 as an alternative allocator
> > > >    for frontswap pages. (Dan)
> > >
> > > Throwing it out there but ....
> > >
> > > C) Merge both, but freeze zcache1 except for critical fixes. Only allow
> > >    future work on zcache2. Document limitations of zcache1 and
> > >    workarounds until zcache2 is fully production ready.
> >
> What would the impact be if zcache2 and zcache1 were mutually exclusive
> in Kconfig and the naming was as follows?
> 
> CONFIG_ZCACHE_DEPRECATED	(zcache1)
> CONFIG_ZCACHE			(zcache2)
> 
> That would make it absolutely clear to distributions which one they should
> be enabling and also make it clear that all future development happen
> on zcache2.
> 
> I know it looks insane to promote something that is instantly deprecated
> but none of the other alternatives seem to be gaining traction either.
> This would at least allow the people who are currently heavily behind
> zcache1 to continue supporting it and applying critical fixes until they
> move to zcache2.

Just wondering... how, in your opinion, is this different from
leaving zcache1 (or even both) in staging?  "Tainting" occurs
either way; it's just a matter of whether or not there is a message
logged by the kernel that it is officially tainted, right?
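
(For reference, and going from memory so the exact code may differ:
loading a staging driver sets TAINT_CRAP and logs a one-line warning,
roughly this, paraphrasing check_modinfo() in kernel/module.c:

  if (get_modinfo(info, "staging")) {
          add_taint_module(mod, TAINT_CRAP);
          printk(KERN_WARNING "%s: module is from the staging directory,"
                 " the quality is unknown, you have been warned.\n",
                 mod->name);
  }

so "deprecated in mainline" vs. "in staging" differs mainly in whether
that warning fires.)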

However, it _is_ another attempt at compromise and, if this
is the only solution that allows the debate to end, and it
is agreed on by whatever maintainer is committed to pull
both (be it you, or Andrew, or Konrad, or Linus), I would
agree to your "C-prime" proposal.
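
For concreteness, the mutual exclusion you describe might look like
this in Kconfig (a sketch only -- option names per your suggestion,
help text invented):

  config ZCACHE
          bool "Compressed caching of swap and file pages (zcache2)"
          depends on CRYPTO
          help
            Current codebase; all new development happens here.

  config ZCACHE_DEPRECATED
          bool "Original zcache (deprecated, critical fixes only)"
          depends on CRYPTO && !ZCACHE
          help
            Frozen except for critical fixes until ZCACHE is fully
            production ready.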
 
> > I use the terms "zcache1" and "zcache2" only to clarify which
> > codebase, not because they are dramatically different. I estimate
> > that 85%-90% of the code in zcache1 and zcache2 is identical, not
> > counting the allocator or comments/whitespace/janitorial!
> 
> If 85-90% of the code is identical then they really should be sharing
> the code rather than making copies. That will result in some monolithic
> patches but it's unavoidable. I expect it would end up looking like
> 
> Patch 1		promote zcache1
> Patch 2		promote zcache2
> Patch 3		move shared code for zcache1,zcache2 to common files
> 
> If the shared code is really shared and not copied it may reduce some of
> the friction between the camps.

This part I would object to... at least I would object to signing
up to do Patch 3 myself.  Seems like a lot of busywork if zcache1
is truly deprecated.

> zcache1 does appear to have a few snarls that would make me wary of having
> to support it. I don't know if zcache2 suffers the same problems or not
> as I have not read it.
> 
> Unfortunately, I'm not going to get the chance to review [zcache2] in the
> short-term. However, if zcache1 and zcache2 shared code in common files
> it would at least reduce the amount of new code I have to read :)

Understood, which re-emphasizes my point about how the presence
of both reduces the (to date, very limited) MM developer time available
for either.

> > Seth (and IBM) seems to have a bee in his bonnet that the existing
> > zcache1 code _must_ be promoted _soon_ with as little change as possible.
> > Other than the fact that he didn't like my patching approach [1],
> > the only technical objection Seth has raised to zcache2 is that he
> > thinks zsmalloc is the best choice of allocator [2] for his limited
> > benchmarking [3].
> 
> FWIW, I would fear that kernbench is not that interesting a benchmark for
> something like zcache. From an MM perspective, I would be wary that the
> data compresses too well and fits too neatly in the different buckets and
> makes zsmalloc appear to behave much better than it would for a more general
> workload.  Of greater concern is that the allocations for zcache would be
> too short lived to measure if external fragmentation was a real problem
> or not. This is pure guesswork as I didn't read zsmalloc but this is the
> sort of problem I'd be looking out for if I did review it. In practice,
> I would probably prefer to depend on zbud because it avoids the external
> fragmentation problem even if it wasted memory but that's just me being
> cautious.

Your well-honed intuition is IMHO exactly right.

But my compromise proposal would allow the allocator decision to be delayed
until a broader set of workloads is brought to bear.
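
To make that concrete: the compromise is essentially to hide the
allocator behind a small ops table, so each pool can be backed by
either zbud or zsmalloc and the default flipped later without touching
the rest of zcache.  A rough sketch (illustrative names, not code from
either tree):

  struct zpool_ops {
          void *(*create)(gfp_t gfp);
          unsigned long (*malloc)(void *pool, size_t size);
          void (*free)(void *pool, unsigned long handle);
          void *(*map)(void *pool, unsigned long handle);
          void (*unmap)(void *pool, unsigned long handle);
  };

  /* zbud: at most two compressed pages per physical page; wastes
   * space on highly compressible data but cannot fragment externally. */
  extern const struct zpool_ops zbud_ops;

  /* zsmalloc: size-class buckets, much denser packing, but long-lived
   * objects can pin pages -- the fragmentation worry above. */
  extern const struct zpool_ops zsmalloc_ops;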

> > I've offered to put zsmalloc back in to zcache2 as an optional
> > (even default) allocator, but that doesn't seem to be good enough
> > for Seth.  Any other technical objections to zcache2, or explanation
> > for his urgent desire to promote zcache1, Seth (and IBM) is keeping
> > close to his vest, which I find to be a bit disingenuous.
> 
> I can only guess what the reasons might be for this and none of the
> guesses will help resolve this problem.

Me too.  Given the amount of time already spent on this discussion
(and your time reviewing, IMHO, old code), I sure hope the reasons
are compelling.

It's awfully hard to reach a compromise when one side
refuses to budge for unspecified reasons.  And the difference
between deprecated and in-staging seems minor enough that
it's hard to believe your modified proposal will make that
side happy... but we are both shooting in the dark.

> > So, I'd like to challenge Seth with a simple question:
> >
> > If zcache2 offers zsmalloc as an alternative (even default) allocator,
> > what remaining _technical_ objections do you (Seth) have to merging
> > zcache2 _instead_ of zcache1?
> >
> > If Mel agrees that your objections are worth the costs of bifurcating
> > zcache and will still endorse merging both into core mm, I agree to move
> > forward with Mel's alternative (C) (and will then repost
> > https://lkml.org/lkml/2012/7/31/573).
> 
> If you go with C), please also add another patch on top *if possible*
> that actually shares any common code between zcache1 and zcache2.

Let's hear Seth's technical objections first, and discuss post-merge
follow-on steps later?

Thanks again, Mel, for wading into this.  Hopefully the disagreement
can be resolved and I will value your input on some of the zcache next
steps currently blocked by this unfortunate logjam.

Dan