Message-ID: <c4e36d110806121311n64890e17h67f49c32481fbb28@mail.gmail.com>
Date: Thu, 12 Jun 2008 22:11:32 +0200
From: "Zdenek Kabelac" <zdenek.kabelac@...il.com>
To: "Johannes Berg" <johannes@...solutions.net>
Cc: "Tomas Winkler" <tomasw@...il.com>,
"Rik van Riel" <riel@...hat.com>,
"Linux Kernel Mailing List" <linux-kernel@...r.kernel.org>,
yi.zhu@...el.com, reinette.chatre@...el.com,
linux-wireless@...r.kernel.org
Subject: Re: Problem: Out of memory after 2days with 2GB RAM
2008/6/12 Johannes Berg <johannes@...solutions.net>:
>
>> I'm not against it. You've decided that I'm fighting you because I gave
>> another solution.
>
> Ok, no, I'm not saying you shouldn't rewrite all the code to get rid of
> it, but I think you can use a patch like mine interim as such a rewrite
> is unlikely to go into 2.6.26, is it?
>
>> Frankly we probably don't need this allocation at all. maybe one skb
>> is just enough
>
> That would be nice, indeed.
>
>> even with my never-dying hope that all fragments end up in the skb fragment list.
>
> :)
>
>> This still probably won't solve the pci memory allocation problem
>
> Yeah, true, that one needs to be done, but it could probably be done
> only once when hw is probed rather than every time it is brought up.
> Most likely not something you'll get to fix in 2.6.26 either though.
Well - it's great that a few kB of never-used pointer allocations in the
iwl driver will be saved - but does this really solve the problem that
the kernel relatively quickly runs out of memory for allocations of
this size? I guess iwl isn't the only driver requesting 32 contiguous
pages.
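(Just to make the size concrete: 32 contiguous pages is an order-5 buddy
allocation, i.e. 32 * 4kB = 128kB of physically contiguous memory, so only
the 128kB-and-larger free lists can satisfy it. A quick Python sketch of
the arithmetic, assuming 4kB pages:)

```python
import math

PAGE_KB = 4  # page size on this box, matching the 4kB granularity below

def order_for_pages(pages):
    """Smallest buddy order whose block covers `pages` contiguous pages."""
    return max(0, math.ceil(math.log2(pages)))

order = order_for_pages(32)
print(order, (1 << order) * PAGE_KB)  # order-5, one contiguous 128kB block
```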
Is it possible to track how this memory gets fragmented/lost - who owns
each block and why it is not returned to the pool?
Btw, with 8 hours of uptime, at this moment I can see this:
DMA: 26*4kB 37*8kB 72*16kB 65*32kB 3*64kB 0*128kB 0*256kB 0*512kB
0*1024kB 0*2048kB 1*4096kB = 7920kB
DMA32: 203*4kB 79*8kB 26*16kB 11*32kB 6*64kB 9*128kB 3*256kB 2*512kB
2*1024kB 0*2048kB 0*4096kB = 7588kB
So at this moment I can see quite a lot of free DMA memory - but in my
trace at the beginning of the thread, after several suspend/resumes this
memory was gone....
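(The free lists quoted above can be cross-checked mechanically; a small
Python sketch, assuming the usual "ZONE: N*4kB ... = TOTALkB" show-mem /
/proc/buddyinfo-style format:)

```python
def parse_zone(line):
    """Parse one 'ZONE: N*4kB ... = TOTALkB' line into (zone, free, total)."""
    zone, rest = line.split(":", 1)
    counts, total = rest.split("=")
    free = {}  # block size in kB -> number of free blocks of that size
    for term in counts.split():
        n, size = term.split("*")
        free[int(size.rstrip("kB"))] = int(n)
    return zone.strip(), free, int(total.strip().rstrip("kB"))

dma = parse_zone("DMA: 26*4kB 37*8kB 72*16kB 65*32kB 3*64kB 0*128kB "
                 "0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 7920kB")
dma32 = parse_zone("DMA32: 203*4kB 79*8kB 26*16kB 11*32kB 6*64kB 9*128kB "
                   "3*256kB 2*512kB 2*1024kB 0*2048kB 0*4096kB = 7588kB")

for zone, free, total in (dma, dma32):
    computed = sum(size * n for size, n in free.items())
    # only blocks of 128kB or more can back a 32-page (order-5) request
    big = sum(n for size, n in free.items() if size >= 128)
    print(zone, computed == total, big)
```

Both totals check out against the printed sums, and in this snapshot DMA32
still has 16 blocks large enough for an order-5 request - so the allocation
failures seen after several suspend/resumes really do point at those large
blocks getting fragmented or leaked over time.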
Zdenek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/