Message-Id: <C5662C0F-10E3-4AC3-8ACA-4CBBF4F36D70@dilger.ca>
Date: Mon, 21 Feb 2011 11:00:02 -0700
From: Andreas Dilger <adilger@...ger.ca>
To: Paweł Brodacki <pawel.brodacki@...glemail.com>
Cc: Amir Goldstein <amir73il@...il.com>, Ted Ts'o <tytso@....edu>,
Rogier Wolff <R.E.Wolff@...wizard.nl>,
linux-ext4@...r.kernel.org
Subject: Re: fsck performance.
On 2011-02-21, at 9:04 AM, Paweł Brodacki wrote:
> 2011/2/21 Amir Goldstein <amir73il@...il.com>:
>> One thing I am not sure I understand is (excuse my ignorance) why is the
>> swap space solution good only for 64bit processors?
>
> It's an address space limit on 32-bit processors. Even with PAE the
> user-space process still won't have access to more than 2^32 bytes,
> that is 4 GiB of address space. Due to practical limitations (e.g.
> the kernel needing some of that address space) a process usually
> can't use more than about 3 GiB.
Rogier,
are you using the icount allocation reduction patches previously posted? They won't help if you need more than 3GB of address space, but they definitely reduce the size of allocations and allow the icount data to be swapped. See the thread "[PATCH]: icount: Replace the icount list by a two-level tree".
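To illustrate the idea (this is a toy sketch in Python, not the actual e2fsprogs code from that patch): instead of keeping all inode counts in one contiguous array that has to be reallocated as it grows, briefly holding both the old and new copies in memory, a two-level scheme stores counts in fixed-size chunks that are allocated independently on demand. The chunk size below is an assumption for illustration.

```python
CHUNK_BITS = 16                 # 65536 counters per chunk (assumed size)
CHUNK_SIZE = 1 << CHUNK_BITS

class TwoLevelICount:
    """Toy two-level inode-count map, hypothetical illustration only."""

    def __init__(self):
        self.chunks = {}        # chunk index -> list of counters

    def _chunk(self, ino):
        idx = ino >> CHUNK_BITS
        if idx not in self.chunks:
            # Each allocation is small and independent, so no single
            # large contiguous region is ever required, and cold chunks
            # can be paged out to swap without touching the rest.
            self.chunks[idx] = [0] * CHUNK_SIZE
        return self.chunks[idx]

    def increment(self, ino):
        self._chunk(ino)[ino & (CHUNK_SIZE - 1)] += 1

    def fetch(self, ino):
        chunk = self.chunks.get(ino >> CHUNK_BITS)
        return chunk[ino & (CHUNK_SIZE - 1)] if chunk else 0
```

For example, incrementing inode 7 three times and inode 10000000 once touches only two chunks; every other inode reads back as zero without allocating anything.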
>> If it is common knowledge, do you know of an upper limit (depending on fs size,
>> no. of inodes, etc)?
>>
>
> I vaguely remember some estimation of memory requirements of fsck
> being given somewhere, but I'm not able to find the posts now :(.
My rule of thumb is about 1 byte of memory per block in the filesystem, for "normal" filesystems (i.e. mostly regular files, and a small fraction of directories). For a 3TB filesystem this would mean ~768MB of RAM. One problem is that the current icount implementation allocates almost 2x the peak usage when it is resizing the array, hence the patch mentioned above for filesystems with lots of directories and hard links.
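The arithmetic behind that rule of thumb can be sketched as follows (assuming a 4 KiB block size, which is typical but not stated above):

```python
def fsck_mem_estimate(fs_bytes, block_size=4096):
    """Rule of thumb: ~1 byte of e2fsck memory per filesystem block."""
    blocks = fs_bytes // block_size
    return blocks               # bytes of estimated memory

TiB = 1 << 40
MiB = 1 << 20

# 3 TiB / 4 KiB = 768 Mi blocks -> ~768 MiB of memory
print(fsck_mem_estimate(3 * TiB) // MiB, "MiB")   # 768 MiB
```

Note that with the current icount resizing behaviour the transient peak can approach twice this figure, which is what the two-level tree patch addresses.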
Cheers, Andreas
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html