Message-ID: <51385177.9030904@sx.jp.nec.com>
Date:	Thu, 07 Mar 2013 17:36:07 +0900
From:	Kazuya Mio <k-mio@...jp.nec.com>
To:	jack@...e.cz, akpm@...ux-foundation.org, adilger.kernel@...ger.ca
CC:	linux-ext4@...r.kernel.org
Subject: bio splits unnecessarily due to BH_Boundary in ext3 direct I/O

I found a performance problem: ext3 direct I/O submits an unnecessarily large
number of bios when the BH_Boundary flag is set on a buffer_head.

When we read/write a file sequentially, we read/write not only the data
blocks but also the indirect blocks, which may not be physically adjacent
to the data blocks. So ext3 sets the BH_Boundary flag to submit the
previous I/O before reading/writing an indirect block.

However, in the direct I/O case, a buffer_head can map more than one block.
dio_send_cur_page() checks the BH_Boundary flag and then calls submit_bio()
without calling dio_bio_add_page(). As a result, submit_bio() is called for
every single page, which causes high CPU usage.

The following patch fixes this problem for ext3 only. At least ext2/3/4 do
not need the BH_Boundary flag for direct I/O, because submit_bio() is called
anyway when a buffer_head's block offset is discontiguous with the previous
one.

---
@@ -926,7 +926,8 @@ int ext3_get_blocks_handle(handle_t *handle, struct inode *inode,
    set_buffer_new(bh_result);
 got_it:
    map_bh(bh_result, inode->i_sb, le32_to_cpu(chain[depth-1].key));
-   if (count > blocks_to_boundary)
+   /* set boundary flag for buffered I/O */
+   if (maxblocks == 1 && count > blocks_to_boundary)
        set_buffer_boundary(bh_result);
    err = count;
    /* Clean up and exit */
---

A simple performance test with and without the above patch shows reduced
CPU usage:

-------------------------------------------------
|        | I/O time(s)| CPU used(%)| mem used(%)|
-------------------------------------------------
|default |     41.304 |     74.658 |     21.528 |
|patched |     40.948 |     58.325 |     21.857 |
-------------------------------------------------

environment:
  kernel: 3.8.0-rc7
  CPU:    Xeon E3-1220
  Memory: 8GB

Test detail:
  (1) create a 48KB file
  (2) write 4096KB with O_DIRECT starting at file offset 48KB (so the
      write goes through indirect blocks only)
  (3) repeat (2) 1000 times

  I/O time is the elapsed time from (1) through (3); CPU and memory usage
  are monitored with the sar command.

When the BH_Boundary flag is set on a buffer_head, submit_bio() should be
called once per buffer_head, not once per page. However, I have not examined
the impact on other filesystems that use BH_Boundary.

Does anyone have any ideas about this problem?

Regards,
Kazuya Mio
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
