Message-Id: <E02E44A9-6206-4B73-B52F-C3A1BC4C7D1E@dilger.ca>
Date: Mon, 25 Nov 2019 14:39:59 -0700
From: Andreas Dilger <adilger@...ger.ca>
To: Alex Zhuravlev <azhuravlev@...mcloud.com>,
Благодаренко Артём
<artem.blagodarenko@...il.com>
Cc: "linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>
Subject: Re: [RFC] improve malloc for large filesystems
On Nov 21, 2019, at 7:41 AM, Alex Zhuravlev <azhuravlev@...mcloud.com> wrote:
>
> On 21 Nov 2019, at 12:18, Artem Blagodarenko <artem.blagodarenko@...il.com> wrote:
>> Assume we have one fragmented part of the disk and all other parts are quite
>> free.  The allocator will spend a lot of time going through this fragmented
>> part, because it will break cr0 and cr1 and get a range that satisfies cr3.
>
> Even at cr=3 we still search for the goal size.
>
> Thus breaking out of cr=0 and cr=1 shouldn’t really make us allocate bad
> chunks; we just stop looking for nice-looking groups and fall back to the
> regular (more expensive) search for free extents.
I think it is important to understand what the actual goal size is at this
point. The filesystems where we are seeing problems are _huge_ (650TiB and
larger) and are relatively full (70% or more) but take tens of minutes to
finish mounting. Lustre does some small writes at mount time, but it shouldn't
take so long to find some small allocations for the config log update.
The filesystems are automatically getting "s_stripe_size = 512" from mke2fs
(presumably from the underlying RAID), and I _think_ this is causing mballoc
to inflate the IO request to 8-16MB prealloc chunks, which would be much
harder to find, and unnecessary for a small allocation.
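Back-of-the-envelope, assuming 4KiB blocks: s_stripe = 512 blocks is 2MiB, so
the 8-16MB I mention above is only a handful of stripes' worth of rounding.
In other words, a few-KB config log write could get normalized into a
multi-MiB goal, which is exactly the kind of extent that is hard to find on a
70%-full filesystem.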
>> The cr3 requirement is quite simple: “get the first group that has enough
>> free blocks to allocate the requested range”.
>
> This is only group selection; after that we try to find the extent within
> that group, which can fail, and then we move on to the next group.
> EXT4_MB_HINT_FIRST is set outside of the main cr=0..3 loop.
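For context, the flow Alex describes looks roughly like the sketch below.
This is a simplified paraphrase of the criteria loop in
ext4_mb_regular_allocator() in fs/ext4/mballoc.c, not the actual code; in
particular ext4_mb_scan_group() is just a stand-in name for the per-cr scan
helpers, and the real loop starts from the goal group and wraps around:

    /* Sketch only: scan the groups at each criterion cr = 0..3. */
    for (cr = 0; cr < 4 && ac->ac_status != AC_STATUS_FOUND; cr++) {
        for (group = 0; group < ngroups; group++) {
            /* cheap check: does the group look usable at this cr? */
            if (!ext4_mb_good_group(ac, group, cr))
                continue;

            /* may need to read and init the buddy bitmap from disk */
            if (ext4_mb_load_buddy(sb, group, &e4b))
                continue;

            /* look for an extent of the goal size inside the group;
             * this can fail, and then we try the next group */
            ext4_mb_scan_group(ac, &e4b);   /* stand-in name */

            ext4_mb_unload_buddy(&e4b);
            if (ac->ac_status == AC_STATUS_FOUND)
                break;
        }
    }
    /* Only after this whole loop fails is EXT4_MB_HINT_FIRST set and
     * the allocation retried, taking the first free extent found. */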
>
>> With high probability the allocator finds such a group at the start of the
>> cr3 loop, so the goal (the allocator starts its search from the goal) will
>> not change significantly.  Thus the allocator goes through this fragmented
>> range in small steps.
>>
>> Without the suggested optimisation, the allocator skips this fragmented
>> range right away and continues to allocate blocks.
>
> 1000 groups * 5ms avg. time = 5 seconds to skip 1000 bad uninitialized
> groups.  This is the real problem.  You mentioned 4M groups...
Yes, these filesystems have 5M or more groups, which is a real problem.
Alex is working on a patch to do prefetch of the bitmaps, and to read them
in chunks of flex_bg size (256 blocks = 1MB) to cut down on the number of
seeks needed to fetch them from disk.
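The shape of that prefetch is roughly the sketch below.  This is only meant
to illustrate the idea, not Alex's actual patch: ext4_read_block_bitmap_nowait()
is the existing non-blocking bitmap read, while the wrapper function and its
name are made up for the example.

    /*
     * Sketch only: submit non-blocking reads for all block bitmaps in
     * one flex_bg, so they can be fetched as one sequential ~1MB chunk
     * (256 x 4KiB) instead of 256 individual seeks.
     */
    static void prefetch_flex_bg_bitmaps(struct super_block *sb,
                                         ext4_group_t start)
    {
        struct ext4_sb_info *sbi = EXT4_SB(sb);
        ext4_group_t ngroups = ext4_get_groups_count(sb);
        ext4_group_t nr = 1U << sbi->s_log_groups_per_flex;
        ext4_group_t group;

        for (group = start; group < start + nr && group < ngroups; group++) {
            struct buffer_head *bh;

            bh = ext4_read_block_bitmap_nowait(sb, group);
            if (!IS_ERR_OR_NULL(bh))
                brelse(bh);     /* read submitted; drop our reference */
        }
    }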
Using bigalloc would also help, and getting the number of block groups lower
will avoid the need for meta_bg (which puts each group descriptor into a
separate group, rather than packed contiguously) but we've had to fix a few
performance issues with bigalloc as well, and have not deployed it yet in
production.
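To put numbers on that, assuming 4KiB blocks: each group covers 32768 blocks
= 128MiB, so ~650TiB works out to over 5M groups.  With bigalloc and, say,
64KiB clusters, each group covers 32768 clusters = 2GiB, which drops that to
roughly 330K groups, comfortably below the ~2M groups where meta_bg becomes
necessary with 64-byte descriptors.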
Cheers, Andreas