Date:	Mon, 1 Aug 2016 04:36:28 +0200
From:	Tomas Vondra <tomas@...ddict.com>
To:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Dirty/Writeback fields in /proc/meminfo affected by 20d74bf29c

Hi,

While investigating a strange OOM issue on the 3.18.x branch (which 
turned out to be already fixed by 52c84a95), I noticed a surprising 
difference in the Dirty/Writeback fields in /proc/meminfo depending on 
the kernel version. I'm wondering whether this is expected ...

I've bisected the change to 20d74bf29c, added in 3.18.22 (upstream 
commit 4f258a46):

     sd: Fix maximum I/O size for BLOCK_PC requests

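(For anyone who wants to retrace this: a plain "git bisect" over the 
stable tree is enough, something like

     git bisect start v3.18.22 v3.18.21
     # build + boot, run the dd test below, check /proc/meminfo, then
     git bisect good    # or "git bisect bad"

where "bad" means the inverted Dirty/Writeback numbers shown below.)
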
With /etc/sysctl.conf containing

     vm.dirty_background_bytes = 67108864
     vm.dirty_bytes = 1073741824

a simple "dd" example writing 10GB file

     dd if=/dev/zero of=ssd.test.file bs=1M count=10240

results in roughly the following on 3.18.21:

     Dirty:            740856 kB
     Writeback:         12400 kB

but on 3.18.22:

     Dirty:             49244 kB
     Writeback:        656396 kB

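(To watch the shift live while the dd runs, something like this works:

     watch -n 1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'

)
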
In other words, it seems to invert the relationship between the two 
fields. I haven't identified any performance impact, and for random 
writes the behavior apparently did not change at all (or at least I 
haven't managed to reproduce the difference).

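(By "random writes" I mean a random-write equivalent of the dd test, 
e.g. something along these lines with fio:

     fio --name=randwrite --filename=ssd.test.file \
         --rw=randwrite --bs=4k --size=10g

though any tool issuing random writes should do.)
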
But it's unclear to me why changing the maximum I/O size should affect 
this, and perhaps it has an impact that I don't see.

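(One thing that may be worth comparing between the two kernels is what 
the queue limits end up as, e.g. assuming the disk is sda:

     cat /sys/block/sda/queue/max_sectors_kb
     cat /sys/block/sda/queue/max_hw_sectors_kb

since the commit changes how sd caps the maximum request size.)
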
regards
Tomas
