Date:	Tue, 11 Sep 2007 14:35:07 -0700 (PDT)
From:	Christoph Lameter <clameter@....com>
To:	Mel Gorman <mel@...net.ie>
cc:	Nick Piggin <nickpiggin@...oo.com.au>, andrea@...e.de,
	torvalds@...ux-foundation.org, linux-fsdevel@...r.kernel.org,
	linux-kernel@...r.kernel.org, Christoph Hellwig <hch@....de>,
	William Lee Irwin III <wli@...omorphy.com>,
	David Chinner <dgc@....com>,
	Jens Axboe <jens.axboe@...cle.com>,
	Badari Pulavarty <pbadari@...il.com>,
	Maxim Levitsky <maximlevitsky@...il.com>,
	Fengguang Wu <fengguang.wu@...il.com>,
	swin wang <wangswin@...il.com>, totty.lu@...il.com,
	hugh@...itas.com, joern@...ybastard.org
Subject: Re: [00/41] Large Blocksize Support V7 (adds memmap support)

On Tue, 11 Sep 2007, Mel Gorman wrote:

> > Well, Christoph still seems to be spinning them as a solution for VM
> > scalability and first-class support for making contiguous I/Os, large
> > filesystem block sizes, etc.
> > 
> 
> Yeah, I can't argue with you there. I was under the impression that we
> would be dealing with this strictly as a second-class solution to see
> what it bought, to help steer the direction of fsblock.

I think we all have the same impression. But shouldn't second class be
okay for IO and FS in special situations?

> As you say, a difference is that if we fail to allocate a hugepage, the
> world does not end. It's been a well-known problem for years, and grouping
> pages by mobility is aimed at relaxing some of the more painful points. It
> has other uses as well, but each of them is expected to deal with failures
> of contiguous range allocation.

Note that this patchset only needs higher-order pages up to 64k, not 2M.
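
For concreteness, a minimal sketch (my illustration, not code from the
patchset; the helper name and flag choices are assumptions): with 4KB base
pages, 64k is an order-4 request (4KB << 4) while a 2M hugepage would be
order 9 (4KB << 9), and an opportunistic caller can try the higher order
without heavy reclaim and fall back to order 0 when contiguous memory is
not available:

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Illustrative only, not from the patchset: try a contiguous
 * higher-order allocation politely, then fall back to a base page.
 */
static struct page *alloc_block_pages(unsigned int preferred_order,
                                      unsigned int *got_order)
{
        struct page *page;

        /* No heavy retries, no allocation-failure warning. */
        page = alloc_pages(GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN,
                           preferred_order);
        if (page) {
                *got_order = preferred_order;
                return page;
        }

        /* Order-0 fallback; the caller handles the smaller unit. */
        *got_order = 0;
        return alloc_pages(GFP_KERNEL, 0);
}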

> > And I would have kept quiet this time too, except for the worrying idea
> > of using higher-order pages to fix the SLUB vs SLAB regression; I would
> > also have kept quiet if the rationale for this patchset had been more
> > realistic.
> > 
> 
> I don't agree with using higher-order pages to fix SLUB vs SLAB performance
> issues either. SLUB has to be able to compete with SLAB on its own terms. If
> SLUB gains x% over SLAB in specialised cases with high orders, then fair
> enough, but minimally SLUB has to perform the same as SLAB at order-0. Like
> you, I think that if we depend on SLUB using high orders to match SLAB, we
> are going to get kicked further down the line.

That issue is discussed elsewhere, and we have a patch in mm to address it.

> > In theory (and, again, for the filesystem guys who don't have to worry
> > about it). In practice, after seeing the patch, it's not a nice thing for
> > the VM to have to do.
> > 
> 
> That may be a good enough reason on its own to delay this. It's a
> technically provable point.

It would be good to know what is wrong with the patch. I was surprised by
how easy it was to implement mmap.

> I might regret saying this, but it would be easier to craft an attack
> using pagetable pages. It's woefully difficult to do, but it's probably
> doable. I say pagetables because, while SLUB targeted reclaim is on the
> cards and memory compaction exists for page cache pages, pagetables are
> currently pinned, with no prototype patch existing to deal with them.

Hmmm... I thought Peter had a patch to move page table pages?

> If we hit this problem at all, it'll be due to gradual, natural degradation.
> It used to be the case that jumbo ethernet users reported problems after
> running for weeks, and we might encounter something similar with large
> blocks while it lacks a fallback. We no longer see jumbo ethernet reports,
> but the fact is we don't know whether that's because we fixed it or because
> people gave up. Chances are people will be more persistent with large blocks
> than they were with jumbo ethernet.

I have recently seen a failure with jumbo frames and order-2 allocations on
2.6.22. But then, .22 has no lumpy reclaim.
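
As a rough illustration (a made-up helper, not from any driver in that
report): a ~9000-byte jumbo frame needs a 16KB contiguous receive buffer
with 4KB pages, i.e. get_order(9000) == 2, and without lumpy reclaim the
allocator cannot reclaim with contiguity in mind, so such requests fail
more easily once memory is fragmented:

#include <linux/gfp.h>
#include <linux/mm.h>

/* Illustrative only: each jumbo-frame receive buffer is an order-2
 * (16KB) contiguous allocation when the base page size is 4KB. */
static struct page *alloc_jumbo_rx_buffer(void)
{
        return alloc_pages(GFP_ATOMIC | __GFP_NOWARN, get_order(9000));
}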
