Message-ID: <m1fx07c59o.fsf@fess.ebiederm.org>
Date: Mon, 28 Jun 2010 12:43:15 -0700
From: ebiederm@...ssion.com (Eric W. Biederman)
To: Andi Kleen <andi@...stfloor.org>
Cc: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Nathan Fontenot <nfont@...tin.ibm.com>,
linux-kernel@...r.kernel.org, Greg Kroah-Hartman <gregkh@...e.de>
Subject: Re: [PATCH] memory hotplug disable boot option
Andi Kleen <andi@...stfloor.org> writes:
>> I have a prototype patch sitting around somewhere. I think ultimately
>> it makes sense to do something like extN's htree directory structure
>> in sysfs. I wanted to get the tagged sysfs support in before I worked
>> on scalability because that slightly changes the requirements.
>>
>> Improving the scalability here is certainly worth doing, but I am slightly
>> concerned there is something else algorithmically wrong if this is still
>> going to take 33 minutes to boot with 2TB.
>
> I don't think thousands of entries in sysfs is really a good idea. Even if you fix
> the insert algorithm issues, a simple ls will still be very slow, and there
> will likely be other issues too. And nobody can claim that's a good interface.
Yes. I am much more interested in fixing lookup and stat performance;
looking only at insert performance is a bit of a joke.
That said, there are cases of people using a lot of virtual network
devices, an essentially sane workload, that push sysfs hard.
This is only the second or third time sysfs scalability has come up
this year. So at some level I think it is sane to fix sysfs regardless
of what we do with memory hotplug.
Ensuring that our operations are O(log N) for insert, O(N) for readdir,
O(log N) for lookup, and O(1) for stat seems useful if we can do it
without other complications.
Eric
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/