Message-ID: <1357236046.21409.25385.camel@edumazet-glaptop>
Date: Thu, 03 Jan 2013 10:00:46 -0800
From: Eric Dumazet <eric.dumazet@...il.com>
To: Dave Jones <davej@...hat.com>
Cc: netdev@...r.kernel.org, h.reindl@...lounge.net,
Fedora Kernel Team <kernel-team@...oraproject.org>
Subject: Re: order 7 allocations from xt_recent
On Thu, 2013-01-03 at 12:26 -0500, Dave Jones wrote:
> On Thu, Jan 03, 2013 at 12:11:15PM -0500, Dave Jones wrote:
> > On Thu, Jan 03, 2013 at 08:55:04AM -0800, Eric Dumazet wrote:
> > > On Thu, 2013-01-03 at 11:43 -0500, Dave Jones wrote:
> > > > We had a report from a user that shows this code trying
> > > > to do enormous allocations, which isn't going to work too well..
> > > > ...
> > > > Which is initialised thus..
> > > >
> > > > ip_list_hash_size = 1 << fls(ip_list_tot);
> > > >
> > > > And ip_list_tot is 10000 in this case. Hmm ?
> > > >
> > > > Complete report and setup described in his bug report at https://bugzilla.redhat.com/show_bug.cgi?id=890715
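
[Back-of-the-envelope check of the numbers above: fls(10000) is 14, so the
module asks for a 16384-bucket hash table.  The userspace sketch below just
redoes that arithmetic; it assumes a 64-bit box where struct list_head is 16
bytes, 4kB pages, and that kmalloc rounds the request up to the next
power-of-two slab, so the ~256kB of bucket heads plus the table header land
in the 512kB slab, i.e. an order-7 allocation as seen in the report.]

#include <stdio.h>

/* fls() as in the kernel: 1-based index of the highest set bit. */
static int fls(unsigned int x)
{
	return x ? 32 - __builtin_clz(x) : 0;
}

int main(void)
{
	unsigned int ip_list_tot = 10000;
	unsigned int hash_size = 1u << fls(ip_list_tot);	/* 1 << 14 = 16384 */
	unsigned long bucket_bytes = hash_size * 16UL;		/* one list_head per bucket: 256kB */

	/* Plus the table header, rounded up by kmalloc to the next
	 * power-of-two slab: 512kB, i.e. 128 contiguous 4kB pages, order 7. */
	printf("hash_size = %u, bucket heads alone = %lukB\n",
	       hash_size, bucket_bytes / 1024);
	return 0;
}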
> > >
> > > Yes, we had a report and a patch :
> > >
> > > http://comments.gmane.org/gmane.linux.network/248216
> > >
> > > I'll send it in a more formal way.
> >
> > Ah! Excellent.
> >
> > That 'check size and vmalloc/kmalloc accordingly' thing seems to be a pattern
> > that comes up time and time again. Is it worth maybe making a more generic
> > version of that instead of open-coding it each time it comes up ?
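
[For reference, the open-coded pattern Dave means usually looks something
like the sketch below; the helper names big_alloc/big_free are made up for
illustration, not an existing kernel API.]

#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/* Sketch of the usual fallback: try the slab allocator first, and only
 * fall back to vmalloc when the request is too big to be satisfied with
 * physically contiguous pages.  Purely illustrative. */
static void *big_alloc(size_t size)
{
	void *p = kzalloc(size, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);

	if (!p)
		p = vzalloc(size);
	return p;
}

static void big_free(void *p)
{
	if (is_vmalloc_addr(p))
		vfree(p);
	else
		kfree(p);
}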
>
> Something else that I'm puzzled by.
>
> In the report above, it failed to allocate 512kb, but..
>
> Node 0 Normal: 2388*4kB 347*8kB 1029*16kB 3512*32kB 29*64kB 2*128kB 1*256kB 5*512kB 1*1024kB 0*2048kB 0*4096kB = 147128kB
> ^^^^^^^^^^^^^^^^
>
> Shouldn't the allocator have been able to satisfy that anyway ?
>
> Dave
>
Might be something related to CONFIG_COMPACTION=y and the removal of lumpy
reclaim?
Anyway, we keep a fraction of memory for ATOMIC allocations.
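
[On the buddy dump itself: a 512kB request is an order-7 allocation, i.e.
128 physically contiguous 4kB pages, and the per-order counts in the report
do show 5*512kB + 1*1024kB = 3584kB sitting in blocks that are large enough,
which is what prompted the question.  The sketch below only redoes that
arithmetic from the numbers in the report; whether the allocator actually
hands one of those blocks out then depends on the zone watermarks (the
reserve kept back for ATOMIC allocations mentioned above) and on how
reclaim/compaction behaves on that kernel.]

#include <stdio.h>

/* Redo the arithmetic from the "Node 0 Normal" line in the report:
 * free block counts per order, from 4kB (order 0) up to 4096kB (order 10).
 * 4kB pages are assumed. */
int main(void)
{
	const unsigned long nr_free[11] = { 2388, 347, 1029, 3512, 29, 2, 1, 5, 1, 0, 0 };
	int request_order = 7;		/* 512kB / 4kB = 128 pages = order 7 */
	unsigned long total_kb = 0, big_enough_kb = 0;
	int o;

	for (o = 0; o <= 10; o++) {
		unsigned long kb = nr_free[o] * (4UL << o);
		total_kb += kb;
		if (o >= request_order)
			big_enough_kb += kb;
	}
	/* Prints 147128kB total, 3584kB of it in order-7 or larger blocks. */
	printf("total free = %lukB, in blocks >= order %d = %lukB\n",
	       total_kb, request_order, big_enough_kb);
	return 0;
}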