Date:	Mon, 17 Sep 2007 13:11:14 -0400
From:	Rik van Riel <riel@...hat.com>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	Anton Altaparmakov <aia21@....ac.uk>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Linux Memory Management List <linux-mm@...ck.org>,
	marc.smith@...ail.mcc.edu
Subject: Re: VM/VFS bug with large amount of memory and file systems?

Peter Zijlstra wrote:
> On Mon, 17 Sep 2007 15:04:05 +0100 Anton Altaparmakov <aia21@....ac.uk>
> wrote:
> 
>> The files are attached this time rather than inlined so people
>> don't complain about line wrapping!  (No doubt people will now
>> complain about them being attached!  )-:)
> 
> I switched mailers after I learnt about this format=flowed stuff.
> Still, appreciated.
> 
>> If I read it correctly it appears all of low memory is eaten up by  
>> buffer_heads.
>>
>> <quote>
>> # name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
>> buffer_head       12569528 12569535     56   67    1 : tunables  120   60    8 : slabdata 187605 187605      0
>> </quote>
>>
>> That is 671MiB of low memory in buffer_heads.
>>
>> But why is the kernel not reclaiming them by getting rid of the page  
>> cache pages they are attached to or even leaving the pages around but  
>> killing their buffers?
> 
> Well, you see, you have this very odd configuration where:
> 
> 11GB highmem
>  1GB normal
> 
> pagecache pages go into highmem
> buggerheads go into normal
> 
> I'm guessing there is no pressure at all on zone_highmem so the
> kernel will not try to reclaim pagecache. And because the pagecache
> pages are happily sitting there, the buggerheads are pinned and do not
> get reclaimed.
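
(Sanity check: 12,569,535 buffer_heads at 56 bytes each is indeed
~671MiB, so the figure above holds up.)

To make the pinning concrete, here is a minimal user-space model,
a sketch rather than the actual kernel code: each pagecache page
owns a circular ring of buffer_heads through its private field, in
the 2.6-era b_this_page style, and the only path that frees that
ring runs when reclaim scans the page itself.  No pressure on
highmem means the page is never scanned, so its lowmem buffer_heads
never go away.

#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-ins for the kernel structures. */
struct buffer_head {
	struct buffer_head *b_this_page;	/* circular ring over one page */
};

struct page {
	struct buffer_head *private;		/* head of the bh ring, if any */
};

/*
 * Model of buffer stripping (try_to_free_buffers() in the kernel):
 * the only thing that frees a page's buffer_heads, and it only runs
 * when reclaim visits *this* page.  Unscanned highmem pages keep
 * their lowmem buffer_heads forever.
 */
static void free_buffers(struct page *page)
{
	struct buffer_head *head = page->private;
	struct buffer_head *bh = head;

	if (!bh)
		return;
	do {
		struct buffer_head *next = bh->b_this_page;
		free(bh);
		bh = next;
	} while (bh != head);
	page->private = NULL;
}

int main(void)
{
	struct page page = { NULL };
	struct buffer_head *prev = NULL;

	/* 4KiB page on a 512-byte-block filesystem: 8 bhs per page */
	for (int i = 0; i < 8; i++) {
		struct buffer_head *bh = calloc(1, sizeof(*bh));

		if (!bh)
			return 1;
		if (prev)
			prev->b_this_page = bh;
		else
			page.private = bh;
		prev = bh;
	}
	prev->b_this_page = page.private;	/* close the ring */

	printf("page pins 8 lowmem buffer_heads until reclaim scans it\n");
	free_buffers(&page);
	return 0;
}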

I've got code for this in RHEL 3, but never bothered to
merge it upstream since I thought people with large memory
systems would be running 64 bit kernels by now.

Obviously I was wrong.  Andrew, are you interested in a
fix for this problem?

IIRC I simply kept a list of all buffer heads and walked
that to reclaim pages when the number of buffer heads got
too high (and we needed memory).  This list can be maintained
in places where we already hold the lock for the buffer head
freelist, so there should be no additional locking overhead
(again, IIRC).
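
From that description, a user-space sketch of the idea (not the
RHEL 3 code; the names and the one-buffer_head-per-page
simplification are mine):

#include <stdio.h>
#include <stdlib.h>

struct page;

struct buffer_head {
	struct buffer_head *b_all_next;	/* made-up link for the global list */
	struct page *b_page;
};

struct page {
	struct buffer_head *private;	/* one bh per page: blocksize == pagesize */
};

static struct buffer_head *all_bhs;	/* global list of every buffer_head */
static unsigned long nr_bhs;

static struct buffer_head *alloc_buffer_head(struct page *page)
{
	struct buffer_head *bh = calloc(1, sizeof(*bh));

	if (!bh)
		abort();
	bh->b_page = page;
	page->private = bh;
	/* in the kernel, threading onto the list would happen where the
	 * bh freelist lock is already held, so no new locking */
	bh->b_all_next = all_bhs;
	all_bhs = bh;
	nr_bhs++;
	return bh;
}

static void reclaim_buffer_heads(unsigned long target)
{
	/* walk the global list, stripping lowmem buffer_heads while the
	 * (highmem) pages themselves stay in the page cache */
	while (nr_bhs > target && all_bhs) {
		struct buffer_head *bh = all_bhs;

		all_bhs = bh->b_all_next;
		bh->b_page->private = NULL;
		free(bh);
		nr_bhs--;
	}
}

int main(void)
{
	struct page pages[16] = { { NULL } };

	for (int i = 0; i < 16; i++)
		alloc_buffer_head(&pages[i]);
	printf("before: %lu buffer_heads\n", nr_bhs);
	reclaim_buffer_heads(4);	/* arbitrary demo threshold */
	printf("after:  %lu buffer_heads\n", nr_bhs);
	return 0;
}

Run it and the count drops from 16 to 4; only the buffer_heads are
given back, the pages stay put.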

-- 
Politics is the struggle between those who want to make their country
the best in the world, and those who believe it already is.  Each group
calls the other unpatriotic.