Date:	Sat, 15 Dec 2007 13:33:54 +0100 (CET)
From:	Krzysztof Oledzki <olel@....pl>
To:	Peter Zijlstra <peterz@...radead.org>
cc:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Nick Piggin <nickpiggin@...oo.com.au>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Thomas Osterried <osterried@...se.de>,
	bugme-daemon@...zilla.kernel.org
Subject: Re: [Bug 9182] Critical memory leak (dirty pages)



On Thu, 13 Dec 2007, Krzysztof Oledzki wrote:

>
>
> On Thu, 13 Dec 2007, Peter Zijlstra wrote:
>
>> 
>> On Thu, 2007-12-13 at 16:17 +0100, Krzysztof Oledzki wrote:
>>> 
>> 
>>> BTW: Could someone please look at this problem? I feel a little
>>> ignored, and in my situation this is a critical regression.
>> 
>> I was hoping to get around to it today, but I guess tomorrow will have
>> to do :-/
>
> Thanks.
>
>> So, it's ext3: dirty some pages, sync, and Dirty doesn't fall to 0,
>> right?
>
> Not only does it not fall, it continuously grows.
>
>> Does it happen with other filesystems as well?
>
> Don't know. I generally only use ext3, and I'm afraid I'm not able to
> switch this system to another filesystem.
>
>> What are your ext3 mount options?
> /dev/root / ext3 rw,data=journal 0 0
> /dev/VolGrp0/usr /usr ext3 rw,nodev,data=journal 0 0
> /dev/VolGrp0/var /var ext3 rw,nodev,data=journal 0 0
> /dev/VolGrp0/squid_spool /var/cache/squid/cd0 ext3 rw,nosuid,nodev,noatime,data=writeback 0 0
> /dev/VolGrp0/squid_spool2 /var/cache/squid/cd1 ext3 rw,nosuid,nodev,noatime,data=writeback 0 0
> /dev/VolGrp0/news_spool /var/spool/news ext3 rw,nosuid,nodev,noatime,data=ordered 0 0
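
Regarding the question above about other filesystems: I cannot reformat
this box, but a loopback test should answer it without touching the
production volumes. A rough, untested sketch (image path and sizes
picked arbitrarily):

# dd if=/dev/zero of=/tmp/ext3test.img bs=1M count=128
# mkfs.ext3 -F /tmp/ext3test.img
# mount -o loop,data=journal /tmp/ext3test.img /mnt
# dd if=/dev/zero of=/mnt/testfile bs=1M count=64
# sync; sync; sleep 1; sync
# grep Dirt /proc/meminfo
# umount /mnt

Repeating this with data=ordered/data=writeback, and then with a
different filesystem instead of ext3, should show whether the leak
follows the filesystem or the mount options.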

BTW: this regression also exists in 2.6.24-rc5. I'll try to find when it 
was introduced, but that is hard to do on a highly critical production 
system, especially since it takes ~2h after a reboot to be sure.
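
If I do manage to bisect, the procedure would look roughly like this (a
sketch only: the last known good version below is a guess, and every
step costs a reboot plus the ~2h observation window):

# git bisect start
# git bisect bad v2.6.24-rc5
# git bisect good v2.6.22    (hypothetical last known good kernel)
(build and boot the suggested revision, watch Dirty for ~2h, then run
"git bisect good" or "git bisect bad" and repeat until it converges)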

However, 2h is quite a good turnaround; on other systems I have to wait 
~2 months to see 20MB of leaked memory:

# uptime
  13:29:34 up 58 days, 13:04,  9 users,  load average: 0.38, 0.27, 0.31

# sync;sync;sleep 1;sync;grep Dirt /proc/meminfo
Dirty:           23820 kB
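
To show that the counter really keeps growing rather than just sitting
at a non-zero value, a trivial loop like this one (the interval is
arbitrary) can log it over time:

# while true; do sync; grep Dirt /proc/meminfo; sleep 600; done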

Best regards,

 				Krzysztof Olędzki
