Message-ID: <49E6CFB2.2020905@cosmosbay.com>
Date:	Thu, 16 Apr 2009 08:26:58 +0200
From:	Eric Dumazet <dada1@...mosbay.com>
To:	David Miller <davem@...emloft.net>
CC:	shemminger@...tta.com, kaber@...sh.net, jeff.chua.linux@...il.com,
	paulmck@...ux.vnet.ibm.com, paulus@...ba.org, mingo@...e.hu,
	torvalds@...ux-foundation.org, laijs@...fujitsu.com,
	jengelh@...ozas.de, r000n@...0n.net, linux-kernel@...r.kernel.org,
	netfilter-devel@...r.kernel.org, netdev@...r.kernel.org,
	benh@...nel.crashing.org
Subject: Re: [PATCH] netfilter: use per-cpu spinlock rather than RCU (v3)

David Miller wrote:
> From: Eric Dumazet <dada1@...mosbay.com>
> Date: Wed, 15 Apr 2009 23:07:29 +0200
> 
>> Well, it seems original patch was not so bad after all
>>
>> http://lists.netfilter.org/pipermail/netfilter-devel/2006-January/023175.html
>>
>> So change per-cpu spinlocks to per-cpu rwlocks 
>>
>> and use read_lock() in ipt_do_table() to allow recursion...
> 
> Grumble, one more barrier to getting rid of rwlocks in the whole
> tree. :-/
> 
> I really think we should entertain the idea where we don't RCU quiesce
> when adding rules.  That was dismissed as not workable because the new
> rule must be "visible" as soon as we return to userspace but let's get
> real, effectively it will be.

We had to RCU quiesce to be sure the old rules were no longer in use before
freeing them. The alternative is to defer the freeing via call_rcu(), but
that is subject to OOM.
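
For illustration, a minimal sketch of such deferred freeing (the
retired_table struct and helpers are hypothetical, not the actual x_tables
code paths): every pending callback pins a full vmalloc()ed table copy until
a grace period elapses, so a burst of rule replacements can exhaust memory.

#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/netfilter/x_tables.h>

/* Hypothetical: one of these is queued per replaced table. */
struct retired_table {
	struct rcu_head rcu;
	struct xt_table_info *info;
};

static void retired_table_free(struct rcu_head *head)
{
	struct retired_table *rt =
		container_of(head, struct retired_table, rcu);

	xt_free_table_info(rt->info);	/* vfree() of the per-cpu copies */
	kfree(rt);
}

static void retire_table(struct xt_table_info *old)
{
	struct retired_table *rt = kmalloc(sizeof(*rt), GFP_KERNEL);

	if (!rt) {
		/* allocation failed: fall back to synchronous quiescing */
		synchronize_net();
		xt_free_table_info(old);
		return;
	}
	rt->info = old;
	call_rcu(&rt->rcu, retired_table_free);
}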

With 200 basic rules, the rules table is about 40960 bytes (10 pages) per CPU.
Counting the guard page after each 10-page allocation, that is 8 x 11 = 88
pages of vmalloc virtual space taken on my 8-CPU machine:
0xfcaf8000-0xfcb03000   45056 xt_alloc_table_info+0xa8/0xd0 pages=10 vmalloc
0xfcb04000-0xfcb0f000   45056 xt_alloc_table_info+0xa8/0xd0 pages=10 vmalloc
0xfcb10000-0xfcb1b000   45056 xt_alloc_table_info+0xa8/0xd0 pages=10 vmalloc
0xfcb1c000-0xfcb27000   45056 xt_alloc_table_info+0xa8/0xd0 pages=10 vmalloc
0xfcb28000-0xfcb33000   45056 xt_alloc_table_info+0xa8/0xd0 pages=10 vmalloc
0xfcb34000-0xfcb3f000   45056 xt_alloc_table_info+0xa8/0xd0 pages=10 vmalloc
0xfcb40000-0xfcb4b000   45056 xt_alloc_table_info+0xa8/0xd0 pages=10 vmalloc
0xfcb4c000-0xfcb57000   45056 xt_alloc_table_info+0xa8/0xd0 pages=10 vmalloc
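
The dump shows one vmalloc area per CPU because xt_alloc_table_info()
duplicates the whole rule blob for every possible CPU. Simplified from the
current code (from memory, so details may differ slightly):

struct xt_table_info *xt_alloc_table_info(unsigned int size)
{
	struct xt_table_info *newinfo;
	int cpu;

	newinfo = kzalloc(XT_TABLE_INFO_SZ, GFP_KERNEL);
	if (!newinfo)
		return NULL;
	newinfo->size = size;

	/* one full copy of the rules per possible CPU */
	for_each_possible_cpu(cpu) {
		if (size <= PAGE_SIZE)
			newinfo->entries[cpu] = kmalloc_node(size, GFP_KERNEL,
							     cpu_to_node(cpu));
		else
			newinfo->entries[cpu] = vmalloc_node(size,
							     cpu_to_node(cpu));
		if (newinfo->entries[cpu] == NULL) {
			xt_free_table_info(newinfo);
			return NULL;
		}
	}
	return newinfo;
}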

This kind of huge monolithic object is hard to handle with RCU semantics.
RCU is better suited to managing sets of small objects (struct file, for
example), even though it can absorb a backlog of 10000 elements in its
queue...
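
To make the recursion point quoted above concrete, a rough sketch of the
per-cpu lock scheme (my illustration only, not Stephen's actual patch): the
packet path takes only its local CPU's lock, and a rwlock keeps that safe
when ipt_do_table() re-enters itself on the same CPU (a REJECT target
building a reply, for instance), while rule replacement write-locks every
CPU to quiesce all readers.

#include <linux/spinlock.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(rwlock_t, ipt_table_lock) =
	__RW_LOCK_UNLOCKED(ipt_table_lock);

/* packet path: read_lock() nests safely on same-CPU recursion */
static inline void ipt_table_rlock_bh(void)
{
	local_bh_disable();
	read_lock(&__get_cpu_var(ipt_table_lock));
}

static inline void ipt_table_runlock_bh(void)
{
	read_unlock(&__get_cpu_var(ipt_table_lock));
	local_bh_enable();
}

/* rule replacement: once all write locks are held, every CPU has
 * left ipt_do_table() and the old table can be swapped and freed */
static void ipt_table_wlock_all(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		write_lock_bh(&per_cpu(ipt_table_lock, cpu));
}

static void ipt_table_wunlock_all(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		write_unlock_bh(&per_cpu(ipt_table_lock, cpu));
}

The write side is O(num_cpus) and slow, but rule replacement is rare
compared to the packet path, which only ever touches its own CPU's lock.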

> 
> If there are any stale object reference issues, we can use RCU object
> destruction to handle that kind of thing.
> 
> I almost cringed when the per-spinlock idea was proposed, but per-cpu
> rwlocks just takes things too far for my tastes.


In my humble opinion, this is a reasonable compromise, and version 4 of
Stephen's patch is OK with me.


