Date:	Wed, 17 Nov 2010 09:30:44 +0800
From:	Cypher Wu <cypher.w@...il.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	Chris Metcalf <cmetcalf@...era.com>,
	Américo Wang <xiyou.wangcong@...il.com>,
	linux-kernel@...r.kernel.org, netdev <netdev@...r.kernel.org>
Subject: Re: Kernel rwlock design, Multicore and IGMP

On Mon, Nov 15, 2010 at 7:31 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Monday, 15 November 2010 at 19:18 +0800, Cypher Wu wrote:
>> In that post I wanted to confirm another thing: if we join/leave on
>> different cores, every call will start the IGMP message timer using
>> the same in_dev->mc_list; could that be optimized?
>>
>
> Which timer exactly ? Is it a real scalability problem ?
>
> I believe RTNL would be the blocking point actually...
>

struct in_device::mr_ifc_timer. Every time a process joins/leaves a
multicast group, igmp_ifc_event() -> igmp_ifc_start_timer() starts the
timer on the core that issued the system call, and
igmp_ifc_timer_expire() is then called on that same core in the bottom
half of the timer interrupt.
If we call join/leave on multiple cores, timers will run on all of
those cores, but it seems only one or two of them actually generate an
IGMP message; the others just take the lock on the list and loop
through it without generating anything.
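
For reference, here is a rough sketch of the path I mean, paraphrased
from net/ipv4/igmp.c (simplified, not exact kernel source; details
differ between kernel versions):

	static void igmp_ifc_start_timer(struct in_device *in_dev, int delay)
	{
		int tv = net_random() % delay;

		/* mod_timer() arms the timer on the CPU it runs on, so the
		 * expiry handler fires in the timer softirq of whichever
		 * core issued the join/leave. */
		if (!mod_timer(&in_dev->mr_ifc_timer, jiffies + tv + 2))
			in_dev_hold(in_dev);
	}

	static void igmp_ifc_timer_expire(unsigned long data)
	{
		struct in_device *in_dev = (struct in_device *)data;

		/* Walks in_dev->mc_list under the lock and emits an IGMPv3
		 * change report; if no change is pending, the walk produces
		 * no packet at all. */
		igmpv3_send_cr(in_dev);
		if (in_dev->mr_ifc_count) {
			in_dev->mr_ifc_count--;
			igmp_ifc_start_timer(in_dev,
					IGMP_Unsolicited_Report_Interval);
		}
		__in_dev_put(in_dev);
	}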


-- 
Cyberman Wu
http://www.meganovo.com