Date: Sun, 5 Jun 2016 21:05:07 -0700
From: Brandon Philips <brandon@...p.co>
To: Vlastimil Babka <vbabka@...e.cz>,
Anthony Romano <anthony.romano@...eos.com>,
Hugh Dickins <hughd@...gle.com>,
Christoph Hellwig <hch@...radead.org>,
Cong Wang <amwang@...hat.com>, Kay Sievers <kay@...y.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Matthew Garrett <mjg59@...f.ucam.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] tmpfs: don't undo fallocate past its last page
On Mon, May 16, 2016 at 4:59 AM, Vlastimil Babka <vbabka@...e.cz> wrote:
> On 05/08/2016 03:16 PM, Anthony Romano wrote:
>>
>> When fallocate is interrupted it will undo a range that extends one byte
>> past its range of allocated pages. This can corrupt an in-use page by
>> zeroing out its first byte. Instead, undo using the inclusive byte range.
>
>
> Huh, good catch. So why is shmem_undo_range() adding +1 to the value in the
> first place? The only other caller is shmem_truncate_range() and all *its*
> callers do subtract 1 to avoid the same issue. So a nicer fix would be to
> remove all this +1/-1 madness. Or is there some subtle corner case I'm
> missing?
Bumping this thread as I don't think this patch has gotten picked up.
And cc'ing folks from 1635f6a74152f1dcd1b888231609d64875f0a81a.
Also, resending because I forgot to remove the HTML mime-type to make
vger happy.
Thank you,
Brandon
>> Signed-off-by: Anthony Romano <anthony.romano@...eos.com>
>
>
> Looks like a stable candidate patch. Can you point out the commit that
> introduced the bug, for the Fixes: tag?
>
> Thanks,
> Vlastimil
>
>
>> ---
>> mm/shmem.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index 719bd6b..f0f9405 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -2238,7 +2238,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>> /* Remove the !PageUptodate pages we added */
>> shmem_undo_range(inode,
>> (loff_t)start << PAGE_SHIFT,
>> - (loff_t)index << PAGE_SHIFT, true);
>> + ((loff_t)index << PAGE_SHIFT) - 1, true);
>> goto undone;
>> }
>>
>>
>