Date:	Tue, 26 Jun 2012 15:30:25 -0400
From:	Ric Wheeler <ricwheeler@...il.com>
To:	Fredrick <fjohnber@...o.com>
CC:	Ric Wheeler <rwheeler@...hat.com>, Theodore Ts'o <tytso@....edu>,
	linux-ext4@...r.kernel.org, Andreas Dilger <adilger@...ger.ca>,
	wenqing.lz@...bao.com, Eric Sandeen <sandeen@...hat.com>
Subject: Re: ext4_fallocate

On 06/26/2012 02:05 PM, Fredrick wrote:
>
>> Hi Ted,
>>
>> Has anyone made progress digging into the performance impact of running
>> without this patch? We should definitely see if there is some low
>> hanging fruit there, especially given that XFS does not seem to suffer
>> such a huge hit.
>>
>> I think that we need to get a good reproducer for the workload that
>> causes the pain and start to dig into this.
>>
>> Opening this security exposure is still something that is clearly a hack
>> and best avoided if we can fix the root cause :)
>>
>> Ric
>>
>
> Hi Ric,
>
> I ran perf stat on ext4 tracepoints across two runs of our program: one
> writing data to a file for the first time, and one writing the same
> amount of data to that file again (when the extents are already
> initialized). The amount of data written is the same in both runs.
> The output below compares the first run against the second.
>
>
> Counters that differ between the two runs (diff of the perf output;
> counters identical in both runs are omitted):
>
> First run (first write into the file):
>                 42 ext4:ext4_mb_bitmap_load
>                 42 ext4:ext4_mb_buddy_bitmap_load
>                642 ext4:ext4_mb_new_inode_pa
>                645 ext4:ext4_mballoc_alloc
>              9,596 ext4:ext4_mballoc_prealloc
>             10,240 ext4:ext4_da_update_reserve_space
>             10,241 ext4:ext4_allocate_blocks
>             10,241 ext4:ext4_request_blocks
>          1,310,720 ext4:ext4_da_reserve_space
>          1,331,288 ext4:ext4_ext_map_blocks_enter
>          1,331,288 ext4:ext4_ext_map_blocks_exit
>          1,341,467 ext4:ext4_mark_inode_dirty
>
> Second run (extents already initialized):
>              7,413 ext4:ext4_mark_inode_dirty
>          1,310,806 ext4:ext4_ext_map_blocks_enter
>          1,310,806 ext4:ext4_ext_map_blocks_exit
>
>
> Maybe the mballoc calls account for the overhead.
>
> I'll try to compare the numbers on XFS this week.
>
> -Fredrick
>
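Fredrick's two-run experiment can be sketched roughly as follows (the file path, sizes, and write loop are assumptions, not his actual test program; on Linux, each run could be wrapped in `perf stat -e 'ext4:*'` to collect tracepoint counts like those quoted above):

```python
import os
import tempfile

# Rough sketch of the two-run experiment (sizes and path are made up):
# write the same amount of data to one file twice. The first run triggers
# block allocation; the second run overwrites already-initialized extents.
def write_run(path, size=1 << 20, chunk=4096):
    mode = "r+b" if os.path.exists(path) else "wb"
    with open(path, mode) as f:
        for _ in range(size // chunk):
            f.write(b"\0" * chunk)
    return os.path.getsize(path)

path = os.path.join(tempfile.mkdtemp(), "testfile")
first = write_run(path)   # first write: blocks are allocated here
second = write_run(path)  # second write: extents already initialized
assert first == second == 1 << 20
```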

Thanks!  Eric is also running some tests to evaluate the impact of various 
techniques :)

ric
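As background on the "security exposure" mentioned above: preallocation marks ext4 extents as unwritten, so reads of preallocated space return zeros rather than whatever stale data the underlying blocks held, and the first write into such a region pays an unwritten-to-initialized conversion cost. A minimal illustration (the path is made up, and posix_fallocate stands in here for the fallocate call):

```python
import os
import tempfile

# Preallocate a file, then read from the preallocated region. The
# filesystem must return zeros, not stale disk data; on ext4 this is
# what unwritten extents guarantee, and converting them to the written
# state on first write is the overhead being measured in this thread.
def prealloc_and_read(path, size=1 << 20):
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        os.posix_fallocate(fd, 0, size)   # allocate; extents stay unwritten
        head = os.pread(fd, 4096, 0)      # must read back as zeros
        return os.fstat(fd).st_size, head
    finally:
        os.close(fd)

path = os.path.join(tempfile.mkdtemp(), "prealloc_demo")
size, head = prealloc_and_read(path)
assert size == 1 << 20
assert head == b"\0" * 4096
```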
