Date:	Mon, 17 Sep 2007 15:09:24 +0100
From:	Anton Altaparmakov <aia21@....ac.uk>
To:	Andrew Morton <akpm@...ux-foundation.org>,
	Peter Zijlstra <peterz@...radead.org>
Cc:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Linux Memory Management List <linux-mm@...ck.org>,
	marc.smith@...ail.mcc.edu
Subject: Re: VM/VFS bug with large amount of memory and file systems?

On 17 Sep 2007, at 15:04, Anton Altaparmakov wrote:
> On 15 Sep 2007, at 11:52, Andrew Morton wrote:
>> On Sat, 15 Sep 2007 12:08:17 +0200 Peter Zijlstra  
>> <peterz@...radead.org> wrote:
>>> Anyway, looks like all of zone_normal is pinned in kernel  
>>> allocations:
>>>
>>>> Sep 13 15:31:25 escabot Normal free:3648kB min:3744kB low:4680kB
>>>> high:5616kB active:0kB inactive:3160kB present:894080kB
>>>> pages_scanned:5336 all_unreclaimable? yes
>>>
>>> Out of the 870-odd MB, only 3 is on the LRU.
>>>
>>> Would be grand if you could have a look at slabinfo and the like.
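(Unpacking the zone line quoted above: present:894080kB is roughly 873MiB
of ZONE_NORMAL, of which only active:0kB + inactive:3160kB, about 3MiB,
sits on the LRU lists; free:3648kB is already below the min:3744kB
watermark.)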
>>
>> Definitely.
>>
>>>> Sep 13 15:31:25 escabot free:1090395 slab:198893 mapped:988
>>>> pagetables:129 bounce:0
>>
>> 814,665,728 bytes of slab.
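(For reference, that figure is just the slab page count scaled by the page
size: slab:198893 pages at an assumed PAGE_SIZE of 4096 bytes gives
198893 * 4096 = 814,665,728 bytes, roughly 777MiB.)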
>
> Marc emailed me the contents of
> /proc/{slabinfo,meminfo,vmstat,zoneinfo} taken just a few seconds
> before the machine panic()ed due to running OOM completely...  The
> files are attached this time rather than inlined so people don't
> complain about line wrapping!  (No doubt people will not complain
> about them being attached!  )-:)
>
> If I read it correctly, it appears all of low memory is eaten up by
> buffer_heads.
>
> <quote>
> # name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
> buffer_head       12569528 12569535     56   67    1 : tunables   120   60    8 : slabdata 187605 187605      0
> </quote>
>
> That is 671MiB of low memory in buffer_heads.

I meant that is 732MiB of low memory in buffer_heads.  (12569535  
num_objs / 67 objperslab * 1 pagesperslab * 4096 PAGE_SIZE)
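That works out as 12569535 objects / 67 objects per slab = 187605 slabs of
one page each, and 187605 * 4096 = 768,430,080 bytes, about 732.9MiB.  A
quick standalone check (a sketch in plain C, assuming a 4096-byte
PAGE_SIZE; the numbers are copied from the buffer_head slabinfo line
above):

#include <stdio.h>

int main(void)
{
	/* fields from the buffer_head line in /proc/slabinfo */
	unsigned long num_objs     = 12569535;
	unsigned long objperslab   = 67;
	unsigned long pagesperslab = 1;
	unsigned long page_size    = 4096;	/* assumed PAGE_SIZE */

	unsigned long slabs = (num_objs + objperslab - 1) / objperslab;
	unsigned long bytes = slabs * pagesperslab * page_size;

	/* prints: 187605 slabs, 768430080 bytes (732.9 MiB) */
	printf("%lu slabs, %lu bytes (%.1f MiB)\n",
	       slabs, bytes, bytes / (1024.0 * 1024.0));
	return 0;
}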

> But why is the kernel not reclaiming them by getting rid of the
> page cache pages they are attached to, or even leaving the pages
> around but killing their buffers?
>
> I don't think I am doing anything in NTFS to cause this problem to
> happen...  Other than using buffer heads for my page cache pages,
> that is, but that is hardly a crime!  /-;
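
The "killing their buffers" path referred to above is essentially what
page reclaim does through try_to_release_page(): if the page is not under
writeback, it calls the filesystem's releasepage hook or falls back to
try_to_free_buffers() to strip the attached buffer_heads.  Roughly, as a
simplified sketch of the mechanism (not the actual kernel source):

#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/buffer_head.h>

/*
 * Simplified sketch of what try_to_release_page() does for a page
 * that still carries buffer_heads (PagePrivate) when reclaim wants
 * to free it; the real code lives in mm/filemap.c and fs/buffer.c.
 */
static int release_page_buffers_sketch(struct page *page, gfp_t gfp)
{
	struct address_space *mapping = page->mapping;

	if (PageWriteback(page))
		return 0;	/* still under I/O, cannot strip buffers */

	if (mapping && mapping->a_ops->releasepage)
		/* let the filesystem drop its private data / buffers */
		return mapping->a_ops->releasepage(page, gfp);

	/* generic fallback: frees the buffer_heads if all are clean */
	return try_to_free_buffers(page);
}

try_to_free_buffers() gives up if any buffer on the page is dirty, locked
or still referenced, which is one way this many buffer_heads could stay
pinned even though the pages themselves look reclaimable.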

Best regards,

	Anton
-- 
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge, CB2 3QH, UK
Linux NTFS maintainer, http://www.linux-ntfs.org/


