Message-Id: <57C5853F020000A100022641@gwsmtp1.uni-regensburg.de>
Date:   Tue, 30 Aug 2016 13:08:15 +0200
From:   "Ulrich Windl" <Ulrich.Windl@...uni-regensburg.de>
To:     "Ulrich Windl" <Ulrich.Windl@...uni-regensburg.de>,
        <linux-kernel@...r.kernel.org>
Subject: Antw: MBR partitions slow?

Update:

I found out that the bad performance was caused by partition alignment, and not by the partition per se (YaST had created the partition right next to the MBR). I compared two partitions, number 1 badly aligned and number 2 properly aligned, and got these results:

Disk /dev/disk/by-id/dm-name-FirstTest-32: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 16777216 bytes
Disk identifier: 0x00016340

                                Device Boot      Start         End      Blocks   Id  System
/dev/disk/by-id/dm-name-FirstTest-32-part1               1     5242879     2621439+  83  Linux
Partition 1 does not start on physical sector boundary.
/dev/disk/by-id/dm-name-FirstTest-32-part2         5242880    10485759     2621440   83  Linux
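
Given the I/O sizes reported above (minimum 16384 bytes, optimal 16777216 bytes), the misalignment can be read directly off the start sectors:

partition 1: start sector       1 ->       1 * 512 B =        512 B  (not a multiple of 16384 B)
partition 2: start sector 5242880 -> 5242880 * 512 B = 2684354560 B = 160 * 16777216 B  (aligned)

So every I/O to partition 1 is shifted by 512 bytes relative to the backend's 16 KiB units, which presumably turns each write there into a read-modify-write cycle.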
h10:~ # ./flashzap -f -s /dev/disk/by-id/dm-name-FirstTest-32_part1
time to open /dev/disk/by-id/dm-name-FirstTest-32_part1: 0.000021s
time for fstat(): 0.000060s
time to map /dev/disk/by-id/dm-name-FirstTest-32_part1 (size 2684.4MiB) at 0x7f826a8a1000: 0.000038s
time to zap 2684.4MiB: 11.734121s (228.76 MiB/s)
time to sync 2684.4MiB: 3.515991s (763.47 MiB/s)
time to unmap 2684.4MiB at 0x7f826a8a1000: 0.038104s
time to close /dev/disk/by-id/dm-name-FirstTest-32_part1: 0.673100s
h10:~ # ./flashzap -f -s /dev/disk/by-id/dm-name-FirstTest-32_part2
time to open /dev/disk/by-id/dm-name-FirstTest-32_part2: 0.000020s
time for fstat(): 0.000069s
time to map /dev/disk/by-id/dm-name-FirstTest-32_part2 (size 2684.4MiB) at 0x7fe18823e000: 0.000044s
time to zap 2684.4MiB: 4.861062s (552.22 MiB/s)
time to sync 2684.4MiB: 0.811360s (3308.47 MiB/s)
time to unmap 2684.4MiB at 0x7fe18823e000: 0.038380s
time to close /dev/disk/by-id/dm-name-FirstTest-32_part2: 0.265687s

So the correctly aligned partition is two to three times faster than the badly aligned one (in this write-only case), and it roughly matches the performance of the unpartitioned disk.
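
(For comparison: modern fdisk and parted default to starting the first partition at sector 2048, i.e. on a 1 MiB boundary, which is a multiple of the 16384-byte minimum I/O size reported here, though not of the 16 MiB optimal size; the bad case above comes from the partition starting at sector 1, directly behind the MBR.)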

Regards,
Ulrich

>>> Ulrich Windl <Ulrich.Windl@...uni-regensburg.de> wrote on 30.08.2016 at 11:32
in message <57C552B6.33D : 161 : 60728>:
> Hello!
> 
> (I'm not subscribed to this list, but I'm hoping to get a reply anyway)
> While testing a SAN storage system, I needed a utility to erase disks 
> quickly, so I wrote my own: it mmap()s the block device, memset()s the 
> area, msync()s the changes, and finally close()s the file descriptor.
> 
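Not the actual flashzap source, but a minimal sketch of such an mmap()-based wiper, assuming the device size is obtained via the BLKGETSIZE64 ioctl (fstat() does not report a size for block devices) and that the whole device fits in one mapping; flashzap's option handling and timing output are omitted:

/*
 * Minimal sketch of an mmap()-based block-device wiper (assumed
 * structure only): map the whole device, zero it with memset(),
 * write it back with msync().
 */
#include <fcntl.h>
#include <linux/fs.h>        /* BLKGETSIZE64 */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    unsigned long long size;
    void *map;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <block-device>\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* fstat() gives no size for block devices, so ask the kernel. */
    if (ioctl(fd, BLKGETSIZE64, &size) < 0) { perror("ioctl"); return 1; }

    map = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    memset(map, 0, size);               /* "zap": dirty every page */
    if (msync(map, size, MS_SYNC))      /* "sync": flush to device */
        perror("msync");
    munmap(map, size);
    close(fd);
    return 0;
}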
> On one disk I had a primary MBR partition spanning the whole disk, like this 
> (output from some of my obscure tools):
> disk /dev/disk/by-id/dm-name-FirstTest-32 has 20971520 blocks of size 512 
> (10737418240 bytes)
> partition 1 (1-20971520)
> Total Sectors     =   20971519
> 
> When wiping, I started (for no good reason) with partition 1, then I 
> wiped the whole disk. The disk is 4-way multipathed to an 8Gb FC-SAN, and the 
> disk system is all-SSD (32x2TB). I'm using kernel 3.0.101-80-default of SLES11 
> SP4.
> For the test I had reduced the amount of RAM via "mem=4G". The machine's RAM 
> bandwidth is about 9GB/s.
> 
> To my surprise I found out that the partition eats significant performance 
> (not quite 50%, but a lot):
> 
> ### Partition
> h10:~ # ./flashzap -f -s /dev/disk/by-id/dm-name-FirstTest-32_part1
> time to open /dev/disk/by-id/dm-name-FirstTest-32_part1: 0.000042s
> time for fstat(): 0.000017s
> time to map /dev/disk/by-id/dm-name-FirstTest-32_part1 (size 10.7Gib) at 0x7fbc86739000: 0.000039s
> time to zap 10.7Gib: 52.474054s (204.62 MiB/s)
> time to sync 10.7Gib: 4.148350s (2588.36 MiB/s)
> time to unmap 10.7Gib at 0x7fbc86739000: 0.052170s
> time to close /dev/disk/by-id/dm-name-FirstTest-32_part1: 0.770630s
> 
> ### Whole disk
> h10:~ # ./flashzap -f -s /dev/disk/by-id/dm-name-FirstTest-32
> time to open /dev/disk/by-id/dm-name-FirstTest-32: 0.000022s
> time for fstat(): 0.000061s
> time to map /dev/disk/by-id/dm-name-FirstTest-32 (size 10.7Gib) at 0x7fa2434cc000: 0.000037s
> time to zap 10.7Gib: 24.580162s (436.83 MiB/s)
> time to sync 10.7Gib: 1.097502s (9783.51 MiB/s)
> time to unmap 10.7Gib at 0x7fa2434cc000: 0.052385s
> time to close /dev/disk/by-id/dm-name-FirstTest-32: 0.290470s
> 
> Reproducible:
> h10:~ # ./flashzap -f -s /dev/disk/by-id/dm-name-FirstTest-32
> time to open /dev/disk/by-id/dm-name-FirstTest-32: 0.000039s
> time for fstat(): 0.000065s
> time to map /dev/disk/by-id/dm-name-FirstTest-32 (size 10.7Gib) at 
> 0x7f1cc17ab000: 0.000037s
> time to zap 10.7Gib: 24.624000s (436.06 MiB/s)
> time to sync 10.7Gib: 1.199741s (8949.79 MiB/s)
> time to unmap 10.7Gib at 0x7f1cc17ab000: 0.069956s
> time to close /dev/disk/by-id/dm-name-FirstTest-32: 0.327232s
> 
> So without a partition, the throughput is about twice as high! Why?
> 
> Regards
> Ulrich
> 