Date:	Sun, 08 May 2011 16:17:42 +0100
From:	Alex Bligh <alex@...x.org.uk>
To:	paulmck@...ux.vnet.ibm.com
cc:	Eric Dumazet <eric.dumazet@...il.com>, netdev@...r.kernel.org,
	Alex Bligh <alex@...x.org.uk>
Subject: Re: Scalability of interface creation and deletion

Paul,

>> No, I waited a few minutes after boot for the system to stabilize, and
>> all CPUs were definitely online.
>>
>> The patch to the kernel I am running is below.
>
> OK, interesting...
>
> My guess is that you need to be using ktime_get_ts().  Isn't ktime_get()
> subject to various sorts of adjustment?

It's Eric's code, not mine, but:

kernel/time/timekeeping.c suggests they do the same thing
(offset xtime by wall_to_monotonic); one just returns a
struct timespec and the other a ktime_t.

>> >> There is nothing much going on on these systems (idle, no other users,
>> >> just normal system daemons).
>> >
>> > And normal system daemons might cause this, right?
>>
>> Yes. Everything is normal, except I did
>> service udev stop
>> unshare -n bash
>> which together stop the system running interface scripts when
>> interfaces are created (as upstart and upstart-udev-bridge are
>> now integrated, you can't kill upstart, so you have to rely on
>> unshare -n to stop the events being propagated). That's just
>> to avoid measuring the time it takes to execute the scripts.
>
> OK, so you really could be seeing grace periods started by these system
> daemons.

In 50% of 200 calls? That seems pretty unlikely. I think it's more
likely to be the 6 jiffies per call to ensure CPUs are idle,
plus the 3 such calls per interface destroy.

If 6 jiffies per call to ensure CPUs are idle is a fact of life,
then the question goes back to why interface removal waits
synchronously for RCU readers to be released, as opposed to
doing the update bits synchronously, then doing the reclaim
element (freeing the memory) afterwards using call_rcu.
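To make the distinction concrete, here is a kernel-style sketch of the
two patterns. It is not compilable standalone and is not the actual
netdev teardown path; the struct and function names are made up for
illustration:

```c
/* Hypothetical example only; struct, list, and function names are
 * invented, not the real net-device code. */

struct my_dev {
	struct list_head list;
	struct rcu_head rcu;
};

/* Synchronous reclaim: every destroy blocks for a full grace
 * period (the multi-jiffy wait being measured above). */
static void destroy_sync(struct my_dev *dev)
{
	list_del_rcu(&dev->list);	/* update: unpublish the entry */
	synchronize_rcu();		/* block until readers drain */
	kfree(dev);			/* reclaim */
}

/* Deferred reclaim: the update still happens synchronously, but
 * only the kfree() is pushed past the grace period, so destroy
 * returns without waiting. */
static void my_dev_free(struct rcu_head *head)
{
	kfree(container_of(head, struct my_dev, rcu));
}

static void destroy_async(struct my_dev *dev)
{
	list_del_rcu(&dev->list);	/* update: unpublish the entry */
	call_rcu(&dev->rcu, my_dev_free); /* reclaim runs later */
}
```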

-- 
Alex Bligh
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
