Message-Id: <200610021930.32017.dada1@cosmosbay.com>
Date: Mon, 2 Oct 2006 19:30:31 +0200
From: Eric Dumazet <dada1@...mosbay.com>
To: Vadim Lobanov <vlobanov@...akeasy.net>
Cc: Andi Kleen <ak@...e.de>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/4] fdtable: Implement new pagesize-based fdtable allocation scheme.
On Monday 02 October 2006 19:04, Vadim Lobanov wrote:
> On Monday 02 October 2006 03:01, Andi Kleen wrote:
> > Vadim Lobanov <vlobanov@...akeasy.net> writes:
> > > The allocation algorithm sizes the fdarray in such a way that its
> > > memory usage increases in easy page-sized chunks. Additionally, it
> > > tries to account for the optimal usage of the allocators involved:
> > > kmalloc() for sizes less than a page, and vmalloc() with page
> > > granularity for sizes greater than a page.
> >
> > Best would be to avoid vmalloc() completely because it can be quite
> > costly
>
> It's possible. This switch between kmalloc() and vmalloc() was there in the
> original code, and I didn't feel safe ripping it out right now. We can
> always explore this approach too, however.
>
> What is the origin and history of this particular code? (It's been there
> since at least 2.4.x.) Who put in the switch between the two allocators,
> and for what reason? Is that reason still valid?
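For concreteness, the sizing rule described in the quoted patch can be sketched like this in userspace (the power-of-two rounding for the kmalloc case and all names here are illustrative assumptions, not the actual kernel code):

```c
#include <stddef.h>

#define PAGE_SIZE 4096u

struct file; /* opaque for this sketch */

/* Round n up to the next power of two (kmalloc's bucket sizes). */
static size_t round_up_pow2(size_t n)
{
	size_t p = 1;
	while (p < n)
		p <<= 1;
	return p;
}

/* Bytes actually consumed by an fdarray holding nr_fds pointers:
 * sub-page sizes go to kmalloc (power-of-two buckets), larger sizes
 * go to vmalloc (whole-page granularity). */
static size_t fdarray_bytes(size_t nr_fds)
{
	size_t bytes = nr_fds * sizeof(struct file *);

	if (bytes <= PAGE_SIZE)
		return round_up_pow2(bytes);
	/* round up to a whole number of pages */
	return (bytes + PAGE_SIZE - 1) & ~(size_t)(PAGE_SIZE - 1);
}
```

With 8-byte pointers, 100 fds cost a 1024-byte kmalloc bucket, while 1000 fds spill past one page and round up to two pages from vmalloc.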
I think Andi was suggesting using an indirection: a table of pointers to
PAGES, each PAGE containing PAGE_SIZE/sizeof(struct file *) pointers. Kind of
what vmalloc() does, but without the need for contiguous virtual memory
space.
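A userspace sketch of that two-level layout (all names are hypothetical; this illustrates the idea, not an actual kernel patch):

```c
#include <stdlib.h>

#define PAGE_SIZE 4096u

struct file { int dummy; };

/* Number of struct file * slots that fit in one page. */
#define FDS_PER_PAGE (PAGE_SIZE / sizeof(struct file *))

struct fd_dir {
	struct file ***pages;	/* directory: one pointer per page of slots */
	size_t npages;
};

static struct fd_dir *fd_dir_alloc(size_t max_fds)
{
	struct fd_dir *d = calloc(1, sizeof(*d));

	if (!d)
		return NULL;
	d->npages = (max_fds + FDS_PER_PAGE - 1) / FDS_PER_PAGE;
	d->pages = calloc(d->npages, sizeof(*d->pages));
	if (!d->pages) {
		free(d);
		return NULL;
	}
	for (size_t i = 0; i < d->npages; i++) {
		d->pages[i] = calloc(FDS_PER_PAGE, sizeof(struct file *));
		if (!d->pages[i])
			return NULL; /* cleanup of partial alloc omitted for brevity */
	}
	return d;
}

/* Address of the slot for fd: one extra load per lookup, but no large
 * contiguous allocation (physical or virtual) is ever needed. */
static struct file **fd_slot(struct fd_dir *d, size_t fd)
{
	return &d->pages[fd / FDS_PER_PAGE][fd % FDS_PER_PAGE];
}
```

Each leaf is a single-page allocation, so growing to a million fds only ever asks the allocator for page-sized pieces plus one directory array.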
You cannot just zap vmalloc() and use one kmalloc(), because some programs
open one million files. That's 8 MByte of memory on x86_64 (one million
8-byte pointers). kmalloc() cannot cope with that; it cannot even reliably
satisfy 32KB allocations except just after a reboot, before memory becomes
fragmented...
Eric