Message-ID: <CAE0An6c40nF5v4km4tn-qzDZXptnfhPdYt=X=sBH-EKWB2=6RQ@mail.gmail.com>
Date:   Tue, 21 Mar 2017 16:28:47 -0700
From:   Manish Katiyar <mkatiyar@...il.com>
To:     Reindl Harald <h.reindl@...lounge.net>
Cc:     Andreas Dilger <adilger@...ger.ca>,
        ext4 <linux-ext4@...r.kernel.org>
Subject: Re: ext4 scaling limits ?

On Tue, Mar 21, 2017 at 2:59 PM, Reindl Harald <h.reindl@...lounge.net> wrote:
>
>
> Am 21.03.2017 um 22:48 schrieb Andreas Dilger:
>>
>> While it is true that e2fsck does not free memory during operation, in
>> practice this is not a problem. Even for large filesystems (say 32-48 TB)
>> it will only use around 8-12 GB of RAM, so that is very reasonable for a
>> server today.
>
>
> no, it's not reasonable even today for the whole physical machine to expose
> its total RAM to one of many virtual machines that is just running a Samba
> server for a 50 TB "datagrave" with a handful of users
>
> in reality it should not be a problem to attach even 100 TB of storage to a
> VM with 1-2 GB of RAM
>

Thanks, Andreas, for confirming.

If I understand correctly, the theoretical limit is really (RAM +
available swap space), right? It should only hurt if we aren't able to
page anything out to swap?
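
For reference, here is a quick back-of-envelope check of the 8-12 GB figure
above, using the 1-byte-per-block rule of thumb quoted further down. This is
only a rough sketch assuming 4 KiB blocks; the real e2fsck footprint also
depends on inode counts and which pass bitmaps are in use.

/* Rough estimate of e2fsck RAM usage from the "1 byte of RAM per block"
 * rule of thumb.  The 4 KiB block size is an assumption; adjust as needed. */
#include <stdio.h>

int main(void)
{
    const unsigned long long block_size = 4096;               /* assumed 4 KiB blocks */
    const unsigned long long fs_sizes_tib[] = { 32, 48, 100 };
    const int n = sizeof(fs_sizes_tib) / sizeof(fs_sizes_tib[0]);

    for (int i = 0; i < n; i++) {
        unsigned long long bytes  = fs_sizes_tib[i] << 40;    /* TiB -> bytes */
        unsigned long long blocks = bytes / block_size;
        /* ~1 byte of in-core tracking data per block */
        printf("%3llu TiB fs: ~%llu blocks -> ~%.1f GiB RAM for e2fsck\n",
               fs_sizes_tib[i], blocks, (double)blocks / (1ULL << 30));
    }
    return 0;
}

With 4 KiB blocks, 32 TiB works out to roughly 8.6 billion blocks and 48 TiB
to roughly 12.9 billion, which matches the 8-12 GB estimate above.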


Thanks -
Manish



>
>> The rough estimate that I use for e2fsck is 1 byte of RAM per block.
>>
>> Cheers, Andreas
>>
>>> On Mar 21, 2017, at 16:07, Manish Katiyar <mkatiyar@...il.com> wrote:
>>>
>>> Hi,
>>>
>>> I was looking at the e2fsck code to see whether there are any limits on
>>> running e2fsck on large ext4 filesystems. From the code it looks like all
>>> the metadata required while e2fsck is running is kept only in memory and
>>> is flushed to disk only when the corresponding problems are corrected
>>> (except in the undo-file case).
>>> There doesn't seem to be any code path that periodically flushes some of
>>> the tracking metadata while e2fsck is running just because there is too
>>> much in-core tracking data and we might run out of memory (it looks like
>>> the code will simply return failure if ext2fs_get_mem() fails; see the
>>> sketch below).
>>>
>>> I'd appreciate it if someone could confirm that my understanding is
>>> correct.
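
For illustration, here is a minimal sketch of the allocation pattern described
above: in-core tracking data comes from ext2fs_get_mem(), and an allocation
failure is simply returned to abort the run rather than being handled by
spilling state to disk. This is not actual e2fsck source; alloc_tracking_array()
is a hypothetical helper, and the snippet assumes the libext2fs development
headers from e2fsprogs are available.

#include <stdio.h>
#include <string.h>
#include <ext2fs/ext2fs.h>

/* Illustrative only -- not actual e2fsck code.  alloc_tracking_array() is a
 * made-up helper showing the pattern described in the question: allocate the
 * in-core tracking data with ext2fs_get_mem() and just propagate any failure
 * (e.g. EXT2_ET_NO_MEMORY); there is no fallback that writes state to disk. */
static errcode_t alloc_tracking_array(unsigned long nbytes, void **out)
{
    errcode_t retval;

    retval = ext2fs_get_mem(nbytes, out);   /* fills *out on success */
    if (retval)
        return retval;                      /* caller aborts the pass */

    memset(*out, 0, nbytes);                /* zero the tracking array */
    return 0;
}

int main(void)
{
    void *buf = NULL;

    /* 1 MiB here; e2fsck would size this from block/inode counts instead. */
    if (alloc_tracking_array(1UL << 20, &buf)) {
        fprintf(stderr, "out of memory -- e2fsck would abort here\n");
        return 1;
    }
    ext2fs_free_mem(&buf);                  /* frees the buffer and NULLs it */
    return 0;
}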
