Date:	Tue, 16 Oct 2007 17:34:15 +1000
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	"Eric W. Biederman" <ebiederm@...ssion.com>
Cc:	Rob Landley <rob@...dley.net>, Theodore Tso <tytso@....edu>,
	James Bottomley <James.Bottomley@...eleye.com>,
	Matthew Wilcox <matthew@....cx>, linux-kernel@...r.kernel.org,
	linux-scsi@...r.kernel.org, Jens Axboe <axboe@...e.de>,
	Suparna Bhattacharya <suparna@...ibm.com>,
	Nick Piggin <piggin@...erone.com.au>
Subject: Re: OOM killer gripe (was Re: What still uses the block layer?)

On Tuesday 16 October 2007 14:38, Eric W. Biederman wrote:
> Nick Piggin <nickpiggin@...oo.com.au> writes:
> > On Tuesday 16 October 2007 13:55, Eric W. Biederman wrote:

> > I don't follow your logic. We don't need SWAP > RAM in order to swap
> > effectively, IMO.
>
> The steady state of a system that is heavily and usably swapping but
> not thrashing is that all of the pages in RAM are in the swap cache,
> at least that used to be the case.

Yeah, it works better in 2.6 (and, IIRC, in later 2.4 kernels).
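
For what it's worth, a quick way to eyeball that steady state on a
live box is to watch how much of RAM is sitting in the swap cache.
A minimal sketch, assuming the usual 2.6 /proc/meminfo fields
(MemTotal and SwapCached, both reported in kB):

/* Rough check: how much of RAM is currently in the swap cache. */
#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];
        long memtotal = 0, swapcached = 0;

        if (!f) {
                perror("/proc/meminfo");
                return 1;
        }
        while (fgets(line, sizeof(line), f)) {
                sscanf(line, "MemTotal: %ld kB", &memtotal);
                sscanf(line, "SwapCached: %ld kB", &swapcached);
        }
        fclose(f);

        printf("SwapCached %ld kB of %ld kB RAM (%.1f%%)\n",
               swapcached, memtotal,
               memtotal ? 100.0 * swapcached / memtotal : 0.0);
        return 0;
}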


> > I don't know if there is a causal relationship there. I mean, I
> > think it's been a long time since thrashing was ever a viable mode
> > of operation, right?
>
> Right.  But swapping heavily has been a viable mode of operation,
> and the vast gap in disk random IO performance seems to have hurt
> it significantly.

Or, it just hasn't improved as fast as everything else is improving.
There isn't too much the kernel can do about that. It just shifts
the point at which you'd consider yourself to be "swapping heavily",
right?


> To be very clear, I used to be able to run a problem at a little
> below full speed with the disk pegged with swap traffic, and I did
> this regularly when I started out with linux.

I can do this now. In make -jhuge tests, for example, you can get a
4GB, 4-core machine to max out a disk with swapping and still have
zero idle time. Of course you can also go past that point, and then
your idle time comes up. That's not new, though.
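
Something along these lines is enough to watch it happen: a crude
once-a-second sampler that diffs the pswpin/pswpout counters from
/proc/vmstat (pages swapped in/out) against the idle and iowait
jiffies from /proc/stat. The field names assumed here are the 2.6
ones.

/* Crude once-a-second sampler: pages swapped in/out per second and
 * idle/iowait jiffies burnt per second. */
#include <stdio.h>
#include <unistd.h>

static void sample(unsigned long *swpin, unsigned long *swpout,
                   unsigned long long *idle, unsigned long long *iowait)
{
        FILE *f;
        char line[128];
        unsigned long long user, nice, sys;

        *swpin = *swpout = 0;
        *idle = *iowait = 0;

        f = fopen("/proc/vmstat", "r");
        if (f) {
                while (fgets(line, sizeof(line), f)) {
                        sscanf(line, "pswpin %lu", swpin);
                        sscanf(line, "pswpout %lu", swpout);
                }
                fclose(f);
        }

        f = fopen("/proc/stat", "r");
        if (f) {
                fscanf(f, "cpu %llu %llu %llu %llu %llu",
                       &user, &nice, &sys, idle, iowait);
                fclose(f);
        }
}

int main(void)
{
        unsigned long in0, out0, in1, out1;
        unsigned long long idle0, iow0, idle1, iow1;

        sample(&in0, &out0, &idle0, &iow0);
        for (;;) {
                sleep(1);
                sample(&in1, &out1, &idle1, &iow1);
                printf("swapin %lu pg/s  swapout %lu pg/s  "
                       "idle %llu  iowait %llu (jiffies/s)\n",
                       in1 - in0, out1 - out0,
                       idle1 - idle0, iow1 - iow0);
                in0 = in1; out0 = out1;
                idle0 = idle1; iow0 = iow1;
        }
}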


> > Maybe desktops just have less need for swapping now, so nobody sees
> > it much until something goes _really_ bad. When I'm using my 256MB
> > machine, unused stuff goes to swap.
>
> There is a bit of truth in the fact that there is less need for
> swapping now.  At the same time, however, swapping simply does not
> work well right now, and I'm not at all certain why.
>
> >> the disk for is very limited.   I wonder if we could figure out
> >> how to push and pull 1M or bigger chunks into and out of swap?
> >
> > Pulling in 1MB pages can really easily end up compounding the
> > thrashing problem unless you're very sure a significant amount
> > of it will be used.
>
> It's a hard call.  The I/O time for 1MB of contiguous disk data
> is about the I/O time of 512 bytes of contiguous disk data.

And if you're thrashing, then by definition you need to throw
out 1MB of your working set in order to read it in.
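
Back-of-envelope, with made-up but plausible disk numbers (say ~12ms
average access and ~60MB/s sequential transfer), the fixed access
cost is still most of either request, even if the 1MB case isn't
quite free:

/* Back-of-envelope only: assumed 12ms average access (seek +
 * rotational) and 60MB/s sequential transfer. */
#include <stdio.h>

int main(void)
{
        double access_ms = 12.0;        /* assumed seek + rotational */
        double mb_per_s  = 60.0;        /* assumed throughput */
        double ms_512b = access_ms + 512.0 / (mb_per_s * 1e6) * 1e3;
        double ms_1mb  = access_ms + 1.0 / mb_per_s * 1e3;

        printf("512B read: ~%.1f ms\n", ms_512b);       /* ~12.0 ms */
        printf("1MB  read: ~%.1f ms\n", ms_1mb);        /* ~28.7 ms */
        printf("2048x the data for ~%.1fx the time\n", ms_1mb / ms_512b);
        return 0;
}

So bigger chunks are cheap per byte; the real cost is what you have
to evict to make room for them, as above.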


> >> I don't know if swap has actually worked since we vmscan stopped
> >> going over the virtual addresses.
> >
> > I do, and it does ;)
>
> Really?  Not just the pushing of unused stuff into swap.

We had several bugs and other issues in early 2.6 that caused
swapping performance regressions versus 2.4. After those were fixed,
we're pretty competitive with 2.4 in some basic tests I was using. I
haven't run them for a fair while, so something might have broken
since then; I don't know.
