Date:   Fri, 19 Jun 2020 08:39:53 -0500
From:   Eric Sandeen <>
To:     Lukas Czerner <>,
        Eric Sandeen <>
Cc:     "" <>
Subject: Re: [PATCH 1/1] ext4: fix potential negative array index in

On 6/19/20 1:41 AM, Lukas Czerner wrote:
> On Wed, Jun 17, 2020 at 02:19:04PM -0500, Eric Sandeen wrote:
>> If for any reason a directory passed to do_split() does not have enough
>> active entries to exceed half the size of the block, we can end up
>> iterating over all "count" entries without finding a split point.
>> In this case, count == move, and split will be zero, and we will
>> attempt a negative index into map[].
>> Guard against this by detecting this case, and falling back to
>> split-to-half-of-count instead; in this case we will still have
>> plenty of space (> half blocksize) in each split block.


>> +	/*
>> +	 * map index at which we will split
>> +	 *
>> +	 * If the sum of active entries didn't exceed half the block size, just
>> +	 * split it in half by count; each resulting block will have at least
>> +	 * half the space free.
>> +	 */
>> +	if (i > 0)
>> +		split = count - move;
>> +	else
>> +		split = count/2;
> Won't we have exactly the same problem as we did before your commit
> ef2b02d3e617cb0400eedf2668f86215e1b0e6af ? Since we do not know how much
> space we actually moved we might have not made enough space for the new
> entry ?

I don't think so - while we don't have the original reproducer, I assume that
it was the case where the block was very full, and splitting by count left us
with one of the split blocks still over half full (because ensuring that we
split in half by size seemed to fix it).

In this case, the sum of the active entries was <= half the block size.
So if we split by count, we're guaranteed to have >= half the block size free
in each side of the split.
> Also since we have the move == count when the problem appears then it's
> clear that we never hit the condition
> 1865		/* is more than half of this entry in 2nd half of the block? */
> 1866		if (size + map[i].size/2 > blocksize/2)
> 1867			break;
> in the loop. This is surprising, but it means that the entries must have
> gaps between them that are small enough that we can't fit the entry
> right in? Shouldn't we try to compact it before splitting, or is it
> the case that this should have been done somewhere else?

Yes, that's exactly what happened - see my 0/1 cover letter.  Maybe that should
be in the patch description itself.  Also, yes, compaction would help, but I was
unclear as to whether that should be done here, whether this is the side effect
of some other bug, etc.  In general, we do seem to do compaction elsewhere and
I don't know how we got to this point.

> If we really want to be fair and we want to split it right in the middle
> of the entries size-wise then we need to keep track of the sum of the
> entries and decide based on that, not blocksize/2. But maybe the problem
> could be solved by compacting the entries together because the condition
> seems to rely on that.

I thought about that as well, but it would take a bit more code to do; we could
make make_map() return both count and total size, for example.  But based on my
theory above that both sides of the split will have >= half block free, it
didn't seem necessary, particularly since this seems like an edge case?


> -Lukas
>> +
>>  	hash2 = map[split].hash;
>>  	continued = hash2 == map[split - 1].hash;
>>  	dxtrace(printk(KERN_INFO "Split block %lu at %x, %i/%i\n",
