Message-ID: <20071203192937.GK3604@webber.adilger.int>
Date:	Mon, 3 Dec 2007 12:29:37 -0700
From:	Andreas Dilger <adilger@....com>
To:	"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
Cc:	Alex Tomas <bzzz@....com>,
	ext4 development <linux-ext4@...r.kernel.org>,
	Eric Sandeen <sandeen@...hat.com>
Subject: Re: Understanding mballoc

On Dec 03, 2007  23:42 +0530, Aneesh Kumar K.V wrote:
> This is my attempt at understanding the multi-block allocator. I have
> a few questions marked as FIXME below. Can you help answer them?
> Most of this information is already in the patch queue as a commit message.
> I have updated some details regarding preallocation. Once we
> understand the details I will update the patch queue commit message.

Some comments below; Alex can answer more authoritatively.

> If we are not able to find blocks in the inode prealloc space and the
> group allocation flag is set, then we look at the locality group prealloc
> space. These are per-CPU prealloc lists, represented as
> 
> ext4_sb_info.s_locality_groups[smp_processor_id()]
> 
> /* FIXME!! 
> After getting the locality group for the current CPU we could be
> scheduled out and scheduled back in on a different CPU. So why do we
> make the locality group per-CPU?
> */

I think it is just to avoid contention between CPUs.  While it is possible
to get rescheduled at this point, it is definitely unlikely.  There does
still appear to be proper locking for the locality group, so at worst we get
contention between 2 CPUs for the preallocation instead of all of them.
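
To make that concrete, the pattern is roughly the following.  This is a
simplified user-space sketch, not the actual ext4 code; the structure and
function names (locality_group, alloc_from_group, etc.) are made up for
illustration:

#include <pthread.h>
#include <stddef.h>

#define NR_CPUS 8

struct prealloc_space {
        unsigned long pa_start;  /* first preallocated block */
        unsigned int  pa_free;   /* blocks still free in this PA */
};

struct locality_group {
        pthread_mutex_t        lg_lock;  /* protects lg_pa */
        struct prealloc_space *lg_pa;    /* this CPU's group preallocation */
};

static struct locality_group locality_groups[NR_CPUS];

static void init_locality_groups(void)
{
        for (int i = 0; i < NR_CPUS; i++)
                pthread_mutex_init(&locality_groups[i].lg_lock, NULL);
}

/*
 * 'cpu' was sampled earlier (think smp_processor_id()); we may have been
 * migrated to another CPU since then.  The lock is what keeps this correct:
 * the per-CPU split only reduces contention, it is not relied on for
 * exclusion.
 */
static unsigned long alloc_from_group(int cpu, unsigned int len)
{
        struct locality_group *lg = &locality_groups[cpu];
        unsigned long block = 0;

        pthread_mutex_lock(&lg->lg_lock);
        if (lg->lg_pa && lg->lg_pa->pa_free >= len) {
                block = lg->lg_pa->pa_start;
                lg->lg_pa->pa_start += len;
                lg->lg_pa->pa_free  -= len;
        }
        pthread_mutex_unlock(&lg->lg_lock);

        return block;   /* 0 means "no space here, allocate a new PA" */
}

Being migrated between sampling the CPU number and taking the lock only
means two CPUs briefly share one list; correctness comes from lg_lock.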

> /* FIXME: 
> We need to explain the normalization of the request length.
> What are the conditions we check the request length against?
> Why are group requests always made at 512 blocks?

There is probably no particular reason for 512 blocks = 2MB, other than
that a decent number of smaller requests can fit in there before we have
to look for another preallocation.
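
As a rough illustration of what the normalization does (a simplified
sketch, not the real ext4_mb_normalize_request() logic; the sizes and
helper names here are made up):

#include <stdio.h>

#define GROUP_PREALLOC_BLOCKS 512      /* 512 * 4KB blocks = 2MB */

/*
 * Round a small per-file request up to the next power of two so that
 * repeated appends can be served from one preallocation instead of
 * fragmenting the file.
 */
static unsigned int normalize_file_request(unsigned int len)
{
        unsigned int size = 16;         /* minimum prealloc, in blocks */

        while (size < len)
                size <<= 1;
        return size;
}

/*
 * Locality-group requests are simply normalized to one fixed chunk, so
 * many small allocations can be carved out of a single 2MB PA.
 */
static unsigned int normalize_group_request(void)
{
        return GROUP_PREALLOC_BLOCKS;
}

int main(void)
{
        printf("file request of 5 blocks -> %u blocks\n",
               normalize_file_request(5));
        printf("group request            -> %u blocks\n",
               normalize_group_request());
        return 0;
}

The point is that small file requests get rounded up to convenient sizes,
while group requests are all cut from one fixed-size chunk.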

One note on normalization: given the recent benchmarks showing an e2fsck
performance improvement from clustering indirect blocks, it would seem
that allocating the extent index blocks in the same preallocation group
could provide a similar improvement for mballoc+extents.

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.
