Date:	Sun, 5 Aug 2007 17:22:43 +0000 (UTC)
From:	Brice Figureau <brice+lklm@...sofwonder.com>
To:	linux-kernel@...r.kernel.org
Subject:  Re: [PATCH 00/23] per device dirty throttling -v8

Hi,

Ingo Molnar <mingo <at> elte.hu> writes:
> * Linus Torvalds <torvalds <at> linux-foundation.org> wrote:
> > On Fri, 3 Aug 2007, Peter Zijlstra wrote:
> > > 
> > > These patches aim to improve balance_dirty_pages() and directly address 
> > > three issues:
> > >   1) inter device starvation
> > >   2) stacked device deadlocks
> > >   3) inter process starvation
> > 
> > Ok, the patches certainly look pretty enough, and you fixed the only 
> > thing I complained about last time (naming), so as far as I'm 
> > concerned it's now just a matter of whether it *works* or not. I guess 
> > being in -mm will help somewhat, but it would be good to have people 
> > with several disks etc actively test this out.
> 
> There are positive reports in the never-ending "my system crawls like an 
> XT when copying large files" bugzilla entry:
> 
>  http://bugzilla.kernel.org/show_bug.cgi?id=7372
> 
>[ snipped part of the bug report ]
> 
> so the whole problem area seems to be a "perfect storm" created by a 
> combination of TCQ, IO scheduling and VM dirty handling weaknesses. Per 
> device dirty throttling is a good step forward and it makes a very 
> visible positive difference.


Foreword: I'm the OP of bug #7372. 

I just want to add that:
 1) I've been running the per-BDI patch for about 30 days on a master MySQL
server under a somewhat mild load, without any adverse effect that I could
notice (a toy sketch of what the patch changes follows this list).

 2) I _still_ don't get the performance of 2.6.17, but since this is the best
combination I could find, IMHO there is progress in the right direction
(compared to the lack of any progress since 2.6.18, that's better :-)).

To be honest, a vanilla 2.6.17 that is not tuned at all (i.e. with
vfs_cache_pressure and the other knobs in /proc/sys/vm, such as swappiness and
dirty_*, left at their defaults) is still better than any newer kernel I have
tested. So I still think 2.6.18 introduced a big regression (which,
unfortunately, I couldn't pin down).
See the full bug report for background information if needed.
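
In case it helps anyone reproduce the comparison, here is a minimal sketch of
how one could snapshot those knobs before a test run (plain Python over the
standard /proc/sys/vm files; the particular set of knobs listed is just an
example):

#!/usr/bin/env python
# Minimal sketch: snapshot the /proc/sys/vm knobs relevant to dirty page
# writeback, so two kernels can be compared knob for knob.
# The knob list below is an example set, not exhaustive.

import os

KNOBS = [
    "vfs_cache_pressure",
    "swappiness",
    "dirty_ratio",
    "dirty_background_ratio",
    "dirty_expire_centisecs",
    "dirty_writeback_centisecs",
]

def snapshot():
    values = {}
    for knob in KNOBS:
        path = os.path.join("/proc/sys/vm", knob)
        try:
            f = open(path)
            values[knob] = f.read().strip()
            f.close()
        except IOError:             # knob absent on this kernel version
            values[knob] = "<missing>"
    return values

if __name__ == "__main__":
    values = snapshot()
    for knob in KNOBS:
        print("%s = %s" % (knob, values[knob]))

Diffing two such snapshots taken under different kernels makes it easy to rule
out a silently changed default before blaming the kernel itself.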

Unfortunately, it isn't practical to git-bisect my issue, as the server is a
production machine that can't be rebooted or stopped whenever I want (and since
I found workarounds for the issue...).

Thanks for showing interest in this issue.

Please CC me on any replies, as I'm not subscribed to the list.

--
Brice Figureau

