Date:	Fri, 6 Feb 2015 07:11:49 +1100
From:	Dave Chinner <david@...morbit.com>
To:	Steven Whitehouse <swhiteho@...hat.com>
Cc:	Oleg Drokin <green@...uxhacker.ru>, cluster-devel@...hat.com,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-mm@...ck.org
Subject: Re: [PATCH] gfs2: use __vmalloc GFP_NOFS for fs-related allocations.

On Wed, Feb 04, 2015 at 09:49:50AM +0000, Steven Whitehouse wrote:
> Hi,
> 
> On 04/02/15 07:13, Oleg Drokin wrote:
> >Hello!
> >
> >On Feb 3, 2015, at 5:33 PM, Dave Chinner wrote:
> >>>I also wonder if vmalloc is still very slow? That was the case some
> >>>time ago when I noticed a problem in directory access times in gfs2,
> >>>which made us change to use kmalloc with a vmalloc fallback in the
> >>>first place,
> >>Another of the "myths" about vmalloc. The speed and scalability of
> >>vmap/vmalloc is a long-solved problem - Nick Piggin fixed the worst
> >>of those problems 5-6 years ago - see the rewrite from 2008 that
> >>started with commit db64fe0 ("mm: rewrite vmap layer")....
> >This actually might be less true than one would hope. At least somewhat
> >recent studies by LLNL (https://jira.hpdd.intel.com/browse/LU-4008)
> >show that there's huge contention on vmlist_lock, so if you have
> >vmalloc-intense workloads, you get penalized heavily. Granted, this is a
> >RHEL6 kernel, but that is still (albeit heavily modified) 2.6.32, which
> >was released at the end of 2009, way after 2008.
> >I see that vmlist_lock is gone now, but e.g. vmap_area_lock, which is
> >heavily used, is still in place.
> >
> >So of course, with that in place, there's every incentive not to use
> >vmalloc if at all possible. But where it is used, one would still hope
> >it would at least be safe, even if somewhat slow.
> >
> >Bye,
> >     Oleg
> 
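
For reference, the kmalloc-first-with-vmalloc-fallback pattern mentioned
above looks roughly like this - a sketch only, with illustrative names,
not gfs2's actual helper:

#include <linux/slab.h>		/* kmalloc, kfree */
#include <linux/vmalloc.h>	/* __vmalloc, vfree */
#include <linux/mm.h>		/* is_vmalloc_addr */

/* Try physically contiguous memory first; fall back to vmalloc.
 * GFP_NOFS on both paths keeps reclaim from recursing into the fs. */
static void *fs_alloc(size_t size)
{
	void *ptr;

	/* __GFP_NOWARN suppresses the failure splat on the fast path. */
	ptr = kmalloc(size, GFP_NOFS | __GFP_NOWARN);
	if (ptr)
		return ptr;

	return __vmalloc(size, GFP_NOFS, PAGE_KERNEL);
}

static void fs_free(void *ptr)
{
	/* kmalloc and vmalloc memory are freed by different routines. */
	if (is_vmalloc_addr(ptr))
		vfree(ptr);
	else
		kfree(ptr);
}

The contention being discussed only ever affects the vmalloc leg of
that fallback.
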
> I was thinking back to this thread:
> https://lkml.org/lkml/2010/4/12/207
> 
> More recent than 2008, and although it resulted in a patch that
> apparently fixed the problem, I don't think it was ever applied on
> the basis that it was too risky and kmalloc was the proper solution
> anyway.... I've not tested recently, so it may have been fixed in
> the meantime,

IIUC, the problem was resolved with a different fix back in 2011 - a
lookaside cache that avoids searching the entire vmap list from the
bottom on every vmalloc (rough sketch below the commit message):

commit 89699605fe7cfd8611900346f61cb6cbf179b10a
Author: Nick Piggin <npiggin@...e.de>
Date:   Tue Mar 22 16:30:36 2011 -0700

    mm: vmap area cache

    Provide a free area cache for the vmalloc virtual address allocator, based
    on the algorithm used by the user virtual memory allocator.

    This reduces the number of rbtree operations and linear traversals over
    the vmap extents in order to find a free area, by starting off at the last
    point that a free area was found.
....
    After this patch, the search will start from where it left off, giving
    closer to an amortized O(1).

    This is verified to solve regressions reported by Steven in GFS2, and
    by Avi in KVM.
....
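
In outline, the cache just remembers where the previous search finished
so that the next allocation doesn't rescan the address space from the
bottom. Grossly simplified - this is not the real alloc_vmap_area(),
and range_is_free()/next_allocated_end() are stand-ins for the rbtree
walk:

/* Simplified sketch of the free-area cache idea from the commit above. */
static unsigned long cached_start;	/* where the last search ended */

static unsigned long find_free_area(unsigned long size, unsigned long align)
{
	unsigned long addr = cached_start ? cached_start : VMALLOC_START;
	bool retried = false;

	for (;;) {
		addr = ALIGN(addr, align);
		if (addr + size > VMALLOC_END) {
			/* Wrap once and rescan from the bottom before
			 * declaring the address space exhausted. */
			if (retried)
				return 0;
			retried = true;
			addr = VMALLOC_START;
			continue;
		}
		if (range_is_free(addr, size)) {
			/* Restart here next time: amortised O(1). */
			cached_start = addr + size;
			return addr;
		}
		/* Skip past the area blocking us and try again. */
		addr = next_allocated_end(addr);
	}
}

Per the commit message, the real code also resets the cache when an
area behind it is freed, or when the requested size or alignment
shrinks, so allocation patterns are unchanged.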

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com