Message-ID: <482D4CC8.2050501@free.fr>
Date: Fri, 16 May 2008 10:58:48 +0200
From: Stéphane ANCELOT <sancelot@...e.fr>
To: Christoph Lameter <clameter@....com>
Cc: Pekka Enberg <penberg@...helsinki.fi>, linux-kernel@...r.kernel.org
Subject: Re: detecting kernel mem leak
Christoph Lameter wrote:
> On Tue, 13 May 2008, Stéphane ANCELOT wrote:
>
>
>> I kept my kernel running with a few applications for 5 days, doing
>> nothing more than backing up a few kB of data on disk and refreshing a few X apps.
>>
>> After five days, the global memory available went down from 24 MB to 8 MB ...
>>
>
> That is normal. Linux tries to put all memory to use and will free on
> demand.
>
>
>> There are some significant changes in slabinfo, but now I do not know where
>> to look.
>>
>
> Compile the slabinfo tool.
>
> gcc -o slabinfo linux/Documentation/vm/slabinfo.c
>
> Then you can do
>
> slabinfo -T
>
> to get an overview of how much is used by slabs. But I do not see that
> slabs are using an excessive amount. So toying around with slabinfo is
> not going to get you anywhere.
>
>
1) slabinfo tells me "SYSFS support for SLUB not active".
In the kernel there is either SLAB or SLUB; my kernel is currently
configured for the SLAB allocator.
The documentation says SLUB minimizes cache line usage.
Do you think I have to switch to SLUB?
2) Regarding memory debugging, your reply and some other messages said it is
normal for memory usage to grow (with ext3 buffer_heads...) and to be released
on demand.
This makes it VERY VERY difficult to tell whether my system
is STABLE or NOT. Is there a way to work around it?
I assume I have to write some kind of small program that tries to allocate
almost all of the remaining memory available at startup, to force the caches
to be emptied?
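Something along these lines is what I have in mind -- only an untested sketch
(the 4 MB headroom value is arbitrary, and on a 24 MB box the OOM killer may
still interfere). It reads MemFree, Buffers and Cached from /proc/meminfo,
then allocates and touches roughly that much memory so the kernel is forced
to reclaim the page cache and buffer_heads:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
        FILE *f;
        char line[128];
        char *p;
        unsigned long memfree = 0, buffers = 0, cached = 0, target_kb;

        f = fopen("/proc/meminfo", "r");
        if (!f)
                return 1;
        while (fgets(line, sizeof(line), f)) {
                sscanf(line, "MemFree: %lu kB", &memfree);
                sscanf(line, "Buffers: %lu kB", &buffers);
                sscanf(line, "Cached: %lu kB", &cached);
        }
        fclose(f);

        /* target: everything free or reclaimable, minus some headroom */
        target_kb = memfree + buffers + cached;
        if (target_kb > 4 * 1024)
                target_kb -= 4 * 1024;

        p = malloc(target_kb * 1024UL);
        if (!p)
                return 1;
        memset(p, 1, target_kb * 1024UL); /* touch every page so it is really allocated */
        printf("allocated and touched %lu kB\n", target_kb);
        free(p);
        return 0;
}

Or is it enough to simply count MemFree + Buffers + Cached from /proc/meminfo
as "really available" memory when watching the system, without allocating
anything?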
Best Regards
Steph
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/