Date:	Fri, 28 Nov 2008 23:43:18 +0100
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Al Viro <viro@...IV.linux.org.uk>
CC:	Ingo Molnar <mingo@...e.hu>, David Miller <davem@...emloft.net>,
	"Rafael J. Wysocki" <rjw@...k.pl>, linux-kernel@...r.kernel.org,
	kernel-testers@...r.kernel.org, Mike Galbraith <efault@....de>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Linux Netdev List <netdev@...r.kernel.org>,
	Christoph Lameter <cl@...ux-foundation.org>,
	Christoph Hellwig <hch@...radead.org>, rth@...ddle.net,
	ink@...assic.park.msu.ru
Subject: Re: [PATCH 6/6] fs: Introduce kern_mount_special() to mount special vfs

Eric Dumazet wrote:
> Al Viro wrote:
>> On Thu, Nov 27, 2008 at 12:32:59AM +0100, Eric Dumazet wrote:
>>> This function sets a flag (MNT_SPECIAL) on the vfsmount, to avoid
>>> refcounting on permanent system mounts.
>>> Use this function for sockets, pipes and anonymous fds.
>>
>> IMO that's pushing it past the point of usefulness; unless you can show
>> that this really gives a considerable win on pipes et al. *AND* that it
>> doesn't hurt other loads...
> 
> Well, if this is the last cache line that might be shared, then yes,
> numbers can talk.
> But going from 10 shared cache lines down to 1, instead of all the way
> down to 0, is OK I guess.
> 
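For context, the MNT_SPECIAL idea quoted above can be pictured with a small
userspace sketch: a flag marking a permanent object lets get/put skip the
atomic refcount write entirely, so the shared counter cache line is never
dirtied. This is only an illustration in C11 atomics; the struct, names and
flag value are invented here and are not the kernel's vfsmount code.

#include <stdatomic.h>
#include <stdio.h>

#define MNT_SPECIAL 0x1                 /* illustrative flag value */

struct mount_like {                     /* stand-in for struct vfsmount */
        unsigned int flags;
        atomic_int   mnt_count;
};

static void mnt_get(struct mount_like *m)
{
        if (m->flags & MNT_SPECIAL)
                return;                 /* permanent mount: no refcounting */
        atomic_fetch_add(&m->mnt_count, 1);
}

static void mnt_put(struct mount_like *m)
{
        if (m->flags & MNT_SPECIAL)
                return;                 /* permanent mount: never freed */
        if (atomic_fetch_sub(&m->mnt_count, 1) == 1)
                printf("last reference dropped, would free the mount here\n");
}

int main(void)
{
        struct mount_like sock_mnt = { .flags = MNT_SPECIAL, .mnt_count = 1 };

        mnt_get(&sock_mnt);             /* neither call writes the counter */
        mnt_put(&sock_mnt);
        return 0;
}
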
>>
>> dput() part: again, I want to see what happens on other loads; it's probably
>> fine (and the win is certainly more than from the mntput() change), but... The
>> thing is, atomic_dec_and_lock() in there is often done on dentries with
>> d_count > 1 and that's fairly cheap (and doesn't involve contention on
>> dcache_lock on sane targets).
>>
>> FWIW, unless there's a really good reason to do the alpha atomic_dec_and_lock()
>> in a special way, I'd try to compare with
> 
>>         if (atomic_add_unless(&dentry->d_count, -1, 1))
>>                 return;
> 
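For illustration, here is a minimal userspace approximation of that fast
path, using C11 atomics instead of the kernel's atomic_t API; the function
name dec_unless_last() and everything around it are made up for this sketch.
Note the plain read of the counter it starts with, which is what the next
paragraph objects to.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Decrement *v unless it is currently 1; true means the fast path was taken. */
static bool dec_unless_last(atomic_int *v)
{
        int c = atomic_load(v);         /* plain read of the counter first */

        while (c != 1) {
                if (atomic_compare_exchange_weak(v, &c, c - 1))
                        return true;    /* dropped one reference, done */
                /* c was refreshed by the failed compare-exchange; retry */
        }
        return false;                   /* last reference: take the slow path */
}

int main(void)
{
        atomic_int d_count = 3;

        while (dec_unless_last(&d_count))
                printf("fast path, count now %d\n", atomic_load(&d_count));
        printf("last reference, would take dcache_lock here (count = %d)\n",
               atomic_load(&d_count));
        return 0;
}
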
> I don't know, but *reading* d_count before trying to write it is expensive
> on modern CPUs. Oprofile clearly shows that on an Intel Core2.
> 
> Then, *testing* the flag before doing the atomic_something() has the same
> problem. Or we should put the flag in a different cache line.
> 
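The "different cache line" alternative mentioned above would just be a layout
change; a rough sketch with C11 alignas, field names invented here:

#include <stdalign.h>
#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>

struct obj_like {
        /* read-mostly fields: the flag tested before the atomic op lives here */
        unsigned int flags;
        char         name[52];

        /* hot, frequently written refcount kept on its own 64-byte line */
        alignas(64) atomic_int count;
};

int main(void)
{
        /* flags and count end up on different 64-byte cache lines */
        printf("flags at offset %zu, count at offset %zu\n",
               offsetof(struct obj_like, flags),
               offsetof(struct obj_like, count));
        return 0;
}
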
> I am lazy (time for some sleep here); maybe we are already smart here and
> use a trick like this?
> 
> /* Returns the current value of the counter, like atomic_read(), but the
>  * xadd of 0 also pulls the cache line in for writing. */
> int atomic_read_with_write_intent(atomic_t *v)
> {
>         int val = 0;
>
>         /*
>          * No LOCK prefix here, we only give a write intent hint to the cpu.
>          */
>         asm volatile("xaddl %0, %1"
>                      : "+r" (val), "+m" (v->counter)
>                      : : "memory");
>         return val;
> }

Forget it, it's wrong: without a LOCK prefix the xadd is not atomic, so its
write-back can lose a concurrent update of the counter... I really need to sleep :)


