Date:   Fri, 18 Aug 2023 13:53:49 -0600
From:   Andreas Dilger <adilger@...ger.ca>
To:     "Ritesh Harjani (IBM)" <ritesh.list@...il.com>
Cc:     Bobi Jam <bobijam@...mail.com>, linux-ext4@...r.kernel.org
Subject: Re: [PATCH 1/2] ext4: optimize metadata allocation for hybrid LUNs

On Aug 16, 2023, at 4:09 AM, Ritesh Harjani (IBM) <ritesh.list@...il.com> wrote:
> 
> Andreas Dilger <adilger@...ger.ca> writes:
> 
>> On Aug 3, 2023, at 6:10 AM, Ritesh Harjani (IBM) <ritesh.list@...il.com> wrote:
>>> 1. What happens when the hdd space for data gets fully exhausted? AFAICS,
>>> the allocation for data blocks will still succeed; however, we won't be
>>> able to make use of the optimized scanning any more, because we search
>>> within the iops lists only when EXT4_MB_HINT_METADATA is set in ac->ac_flags.
>> 
>> The intention for our usage is that data allocations should *only* come
>> from the HDD region of the device, and *not* from the IOPS (flash) region
>> of the device.  The IOPS region will be comparatively small (0.5-1.0% of
>> the total device size) so using or not using this space will be mostly
>> meaningless to the overall filesystem usage, especially with a 1-5%
>> reserved blocks percentage that is the default for new filesystems.
> 
> Yes, but once we give this functionality to non-enterprise users,
> everyone will want to take advantage of a faster-performing ext4 using
> one SSD and a few HDDs, or a smaller spare SSD and larger HDDs. Then
> the iops region might not strictly be less than 1-2% of the device and
> could be anywhere between 10-50% ;)
> 
> Shouldn't we still support this class of use case as well?
> So if the HDD space gets full, shouldn't data block allocations fall
> back to the SSD?

It's true that this is possible, and I've thought about optionally
allowing e.g. "small files" to be allocated in the IOPS space while
"large files" are allocated only in the HDD space.  This involves
"policy", which is always tricky to get right.  What is "small" and
what is "large"?  At what threshold do we *stop* putting small files
into the IOPS groups when they get too full, or increase the "small"
file size threshold if the IOPS space isn't filling up quickly enough
vs. the non-IOPS groups? ...

I'd prefer to get the basic infrastructure working, and then we can
have the long discussions about how the policy should work for the
*next* patches, because those decisions do not have a permanent effect
on the on-disk format. :-)
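
Just to make the policy question concrete, here is a rough sketch of
what such a decision hook might look like in the allocator (purely
illustrative, not from this patch series; the helper name and the
s_iops_* fields are made up for this example):

    /* Hypothetical policy hook: should this file's data be allocated
     * from the IOPS (flash) groups?  s_iops_small_file_limit would be
     * a tunable (e.g. 64KB), scaled as discussed below.
     */
    static bool ext4_use_iops_for_data(struct inode *inode, loff_t size)
    {
            struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);

            /* Only "small" files are candidates for the IOPS groups */
            if (size > sbi->s_iops_small_file_limit)
                    return false;

            /* Never let file data crowd out metadata in the IOPS groups */
            if (sbi->s_iops_free_clusters < sbi->s_iops_reserved_clusters)
                    return false;

            return true;
    }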

> Or we could have a policy knob, i.e. fallback_data_to_iops_region_thresh.
> So if the free space percentage in the iops region is above 1% (which can
> be changed by the user), then data allocations can fall back to the iops
> region when we are unable to allocate data blocks from the hdd region.
> 
>      echo %age_threshold > fallback_data_to_iops_region_thresh (default 1%)
> 
>        Fall back data allocations to the iops region as long as its free
>        space percentage stays above %age_threshold.

IMHO, a simple "too full" threshold is sub-optimal, because it means
the IOPS groups would suddenly fill up with regular file data, and in
the likely case that old files are later deleted to free up space, the
IOPS groups would remain filled with the new files.

My preference would be to have a "base small file size" (e.g. 64KB)
and a "fullness ratio" (free IOPS blocks / free non-IOPS blocks) and
use the fullness ratio to scale the "small file size".  In case the
free IOPS blocks are very few (e.g. my initial proposal of 1% of
the total volume size, most of which would be filled with static
metadata), then e.g. only files < 64 KB * 0.5% = 0.32 KB (probably *no*
files, since this is less than one block) would go into the IOPS groups.

If the ratio is more like 50% then files under 32KB could be allocated
into the IOPS groups, and if the non-IOPS groups end up filling faster
and the free space ratio is equal or even higher in the IOPS groups,
then 64KB or 128KB files can start to be allocated there.
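
Expressed as code, that scaling could look something like the following
(a minimal sketch under the assumptions above; the function and
parameter names are hypothetical, while div64_u64() and min_t() are
existing kernel helpers):

    /* Scale the "small file" threshold by the fullness ratio:
     * threshold = base_small_size * free_iops / free_non_iops,
     * capped at 2x the base (e.g. 128KB for a 64KB base).
     */
    static u64 iops_small_file_threshold(u64 base_small_size,
                                         u64 free_iops_blocks,
                                         u64 free_non_iops_blocks)
    {
            u64 threshold;

            if (free_non_iops_blocks == 0)  /* HDD space exhausted */
                    return 2 * base_small_size;

            threshold = div64_u64(base_small_size * free_iops_blocks,
                                  free_non_iops_blocks);

            return min_t(u64, threshold, 2 * base_small_size);
    }

With a 64KB base this gives ~0.3KB at a 0.5% ratio (so no files
qualify), 32KB at a 50% ratio, and caps at 128KB when the IOPS groups
have relatively more free space, matching the numbers above.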

> I am interested in knowing what you think the challenges will be in
> supporting resize with hybrid devices. For example, if someone would
> like to add an additional SSD and do a resize, do you think all later
> metadata allocations can be fulfilled from this iops region?
> 
> And what happens when someone adds HDDs to existing SSDs?
> I guess adding an HDD followed by a resize operation would still
> allocate the GDT, block/inode bitmaps, inode tables, etc. for these
> block groups on the resized HDD, right?
> 
> Are there any other challenges for such a use case?

At a high level, my expectation would be that resize would "work"
regardless of whether the IOPS groups have space or not, but how
optimal this is depends on how much free space is in the IOPS groups.
If the IOPS groups are over-provisioned, then it should be fine to
allocate bitmaps and inode table blocks in that space (with flex_bg).

It should also be possible to add more IOPS groups at the end of the
filesystem to help the resize to keep all metadata in the fast storage.
Allowing disjoint regions of flash storage is one of the reasons why
EXT4_BG_IOPS is a per-group flag and not just a "threshold" boundary
within the filesystem.
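
For reference, a sketch of how resize code could test that per-group
flag (EXT4_BG_IOPS is from this patch series; the wrapper function here
is hypothetical, while ext4_get_group_desc() and the bg_flags field are
existing ext4 interfaces):

    /* Does this block group live in the IOPS (flash) region? */
    static bool ext4_group_is_iops(struct super_block *sb,
                                   ext4_group_t group)
    {
            struct ext4_group_desc *gdp =
                    ext4_get_group_desc(sb, group, NULL);

            return gdp && (gdp->bg_flags & cpu_to_le16(EXT4_BG_IOPS));
    }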


I only realized yesterday that online resize is completely disabled
for filesystems with sparse_super2.  I think this is actually a mistake
because the problem looks like a bad interaction between sparse_super2
having only 2 backup groups, and the resize_inode feature expecting that
there are backup group descriptors in all of the usual places.

So I think it makes sense to change the "cannot do online resize" check
to apply only when sparse_super2 AND resize_inode are both enabled.  This
should be uncommon, since sparse_super2 is mostly useful for filesystems
over 16TiB in size, and resize_inode currently doesn't work in that case.
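
Concretely, the current check in fs/ext4/resize.c refuses online resize
whenever sparse_super2 is enabled; the change described above would be
roughly the following (a sketch, not a tested diff):

    /* Before: refuse online resize for any sparse_super2 filesystem.
     * After: only refuse the combination that actually misbehaves.
     */
    if (ext4_has_feature_sparse_super2(sb) &&
        ext4_has_feature_resize_inode(sb)) {
            ext4_msg(sb, KERN_ERR, "Online resizing not supported with "
                     "sparse_super2 and resize_inode");
            return -EOPNOTSUPP;
    }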

It does seem possible to fix resize_inode to work with sparse_super2 for
filesystems over 16TiB.  Originally, resize_inode was disallowed for
filesystems > 16TiB because of the 2^32 block-number limit for
non-extent files (with 4KiB blocks, 2^32 blocks is exactly 16TiB), and
because the growing number of backup groups means a large number of
blocks needs to be reserved.  However, with sparse_super2 there are only
2 backup groups, and since they can be located within the first 16TiB
(there is no reason that backup #2 has to be in the last group),
resize_inode would have enough space in it to reserve extra GDT blocks
for the online resize to work smoothly, whether the IOPS groups are in
use or not.  However, that is a separate project...

Cheers, Andreas