Message-ID: <87vc8ldcy7.fsf@xmission.com>
Date: Wed, 20 Mar 2013 13:38:08 -0700
From: ebiederm@...ssion.com (Eric W. Biederman)
To: Benoit Lourdelet <blourdel@...iper.net>
Cc: Serge Hallyn <serge.hallyn@...ntu.com>,
"linux-kernel\@vger.kernel.org" <linux-kernel@...r.kernel.org>,
lxc-users <lxc-users@...ts.sourceforge.net>
Subject: Re: [Lxc-users] Containers slow to start after 1600
Benoit Lourdelet <blourdel@...iper.net> writes:
> Hello,
>
> The measurement has been done with kernel 3.8.2.
>
> Linux ieng-serv06 3.7.9 #3 SMP Wed Feb 27 02:38:58 PST 2013 x86_64 x86_64
> x86_64 GNU/Linux
Two different kernel versions?
> What information would you like to see on the kernel ?
The question is where the kernel is spending its time. So profiling
information should help us see that. Something like:
$ cat > test-script.sh << 'EOF'
#!/bin/bash
# Create 2000 veth pairs with explicit names on both ends.
for i in $(seq 1 2000) ; do
    ip link add a$i type veth peer name b$i
done
EOF
$ chmod +x test-script.sh
$ perf record -a -g ./test-script.sh
$ perf report
I don't do anywhere near enough work with perf to remember what the good
options are.
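For what it's worth, something along these lines is one plausible set of
options; the exact flags are a guess on my part, not something taken from
this thread:
# Record system-wide with call graphs while the script runs, then
# produce a plain-text report sorted by symbol.
$ perf record -a -g -- ./test-script.sh
$ perf report --stdio --sort symbol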
You definitely don't want to be timing anything where you are doing
something silly, like asking ip link add to generate device names, which
is O(N^2) when you create one device at a time.
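To make the distinction concrete, here is a rough sketch of the two forms
(the a$i/b$i names are just the ones from the script above):
# Kernel-generated names: each add has to search for the next free
# name, so a loop of these degrades towards O(N^2) overall.
$ ip link add type veth
# Explicit names: no search needed, each add stays cheap.
$ ip link add a$i type veth peer name b$i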
And of course there is the interesting discrepancy: why can I add 5000
veth pairs in 120 seconds while it takes you 1123 seconds? Do you have a
very slow cpu in your test environment? Or was your test asking the kernel
to generate names?
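If it helps to get comparable numbers, a simple wall-clock measurement of
the explicit-name loop (just a sketch) would be:
# Time the script that names both ends explicitly.
$ time ./test-script.sh
# Clean up so a repeated run starts from the same state; deleting
# one end of a veth pair removes its peer as well.
$ for i in $(seq 1 2000) ; do ip link del a$i ; done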
Once we know where the kernel is spending its time we can look to see
if there is anything that is easy to fix, and where to point you.
Both my timing and yours indicate that there is something taking O(N^2)
time in there. So it would at least be interesting to see what that
something is.
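One crude way to see that growth without perf is to create the devices in
batches and time each batch; the batch size and the c$n/d$n names below
are arbitrary:
# If creation cost were flat, each batch of 500 pairs would take
# about the same time; steadily growing per-batch times point at
# something O(N^2).
$ for batch in $(seq 0 3) ; do
>   time for i in $(seq 1 500) ; do
>     n=$((batch * 500 + i))
>     ip link add c$n type veth peer name d$n
>   done
> done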
Eric