Date:	Mon, 28 Apr 2014 14:59:14 +0400
From:	Dmitry Monakhov <>
To:	ext4 development <>
Subject: [RFC] Defragmentation strategies

In ext4 we have the EXT4_IOC_MOVE_EXT ioctl, which allows
migrating data blocks. At the moment the only defragmentation
strategy we have in e4defrag(8) is defragmentation of big files.
But one can imagine different defragmentation strategies for
different file sizes and different purposes. I would like to start a
discussion about the list of strategies which could be useful for us:

* Big file defragmentation
  Well-known strategy for keeping big files contiguous.
** Example: in fact, fragmentation of big files tends to appear only in these cases:
   1) Creation of big files on an FS which has low free space
   2) A weird I/O pattern (multi-threaded small-chunk random I/O + fsync), or
      punch_hole/collapse_range, etc.

* Compact small old files into contiguous chunks.
**  Example:
     A news, mail, web, or cache server contains a lot of small files in
     each directory, and the files are periodically created and unlinked
     after some period of time. The files have different (unpredictable)
     lifetimes, which results in a fragmented fs: the block allocator tries
     to pack new files next to each other, but later unlinks leave holes
     behind. In the case of a thin-provisioning target this also results
     in a significant waste of space.
** Proposed strategy:
   Scan the directory and relocate small old files into contiguous chunks.
   The core idea is similar to the handling of block allocations smaller
   than s_mb_stream_request, but at this moment we have more information
   about the file's history: if mtime is close to ctime then an append is
   unlikely to happen in the future, so compaction is effective.
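The selection heuristic above can be sketched as a user-space scan; all thresholds here are illustrative values I made up, not part of the proposal:

```python
import os
import stat
import time

# Tunables -- illustrative assumptions only, not from the proposal:
SMALL_SIZE = 64 * 1024       # "small": under 64 KiB
STABLE_WINDOW = 60           # mtime within 60s of ctime => append unlikely
OLD_AGE = 7 * 24 * 3600      # not modified for a week => "old"

def compaction_candidates(directory, now=None):
    """Pick small, old, regular files whose mtime is close to their ctime,
    i.e. files that were written once and are unlikely to grow again."""
    now = time.time() if now is None else now
    picks = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        st = os.lstat(path)
        if not stat.S_ISREG(st.st_mode):
            continue
        if (st.st_size < SMALL_SIZE
                and st.st_mtime - st.st_ctime < STABLE_WINDOW
                and now - st.st_mtime > OLD_AGE):
            picks.append(path)
    return sorted(picks)
```

The files returned by such a scan would then be packed together, e.g. by moving each one onto blocks of a preallocated contiguous donor region.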

* Compact files according to the I/O access pattern.
  Various tracers may collect statistics about the I/O access pattern, so
  we can place such blocks close to each other and reduce the number of seeks.
** Example:
   1) The boot I/O pattern is almost identical across boots
   2) Firefox start-up speedup
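The placement step for this strategy is essentially "lay blocks out in first-access order". A trivial sketch (the function name and trace format are hypothetical, just to illustrate the idea):

```python
def placement_order(trace):
    """Given an I/O trace [(file, block), ...] in the order blocks were
    accessed, return each (file, block) once, ordered by first access,
    so blocks that are read together end up adjacent on disk."""
    seen = set()
    layout = []
    for key in trace:
        if key not in seen:
            seen.add(key)
            layout.append(key)
    return layout
```

A defragmenter could then walk this layout and relocate each block (e.g. via EXT4_IOC_MOVE_EXT) into one contiguous run, so a repeated workload such as boot or browser start-up becomes mostly sequential reads.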