Date:	Tue, 14 Apr 2009 01:27:58 -0700
From:	ebiederm@...ssion.com (Eric W. Biederman)
To:	Tejun Heo <tj@...nel.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-pci@...r.kernel.org,
	linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
	Al Viro <viro@...IV.linux.org.uk>,
	Hugh Dickins <hugh@...itas.com>,
	Alexey Dobriyan <adobriyan@...il.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Greg Kroah-Hartman <gregkh@...e.de>
Subject: Re: [RFC][PATCH 0/9] File descriptor hot-unplug support

Tejun Heo <tj@...nel.org> writes:

> Eric W. Biederman wrote:
>> Do you know of a case where we actually have multiple tasks accessing
>> a file simultaneously?
>
> I don't have anything at hand, but a multithreaded or multiprocess
> server accepting on the same socket comes to mind.  I don't think it
> would be a very rare thing.  If you confine the scope to character
> devices or sysfs, it could be quite rare, though.

Yes.  I think I can safely exclude sockets, and not bother with
reference counting them.
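
For concreteness, a minimal user-space sketch of the pattern Tejun
describes: several processes accept() on one listening fd, so the same
struct file is hit concurrently.  The port, backlog, and worker count
below are arbitrary, not anything from this thread.

/* Preforking server: parent and three children all accept() on the
 * same listening fd.  Illustration only; error handling trimmed. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int lfd = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in addr;

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(8080);		/* arbitrary port */
	bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
	listen(lfd, 128);

	for (int i = 0; i < 3; i++)		/* workers share one fd */
		if (fork() == 0)
			break;

	for (;;) {
		int cfd = accept(lfd, NULL, NULL); /* concurrent access */
		if (cfd >= 0) {
			write(cfd, "hi\n", 3);
			close(cfd);
		}
	}
}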

The only strong evidence I have that multi-threaded use of a single
file descriptor is likely to be common is that we have the pread and
pwrite syscalls.  At the same time, the number of races we have in
struct file when it is accessed by multiple threads simultaneously
suggests that, at least for the cases that go through the shared file
offset, it doesn't happen often.
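
As a user-space illustration of the pread point (the file name and
offsets below are arbitrary): each thread passes its own offset, so
nothing ever touches the shared f_pos in struct file.

/* Two threads read the same fd with pread(); since pread takes an
 * explicit offset, the shared file position is never read or written. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int fd;

static void *reader(void *arg)
{
	char buf[16];
	off_t off = (long)arg;
	ssize_t n = pread(fd, buf, sizeof(buf), off);

	printf("offset %ld: read %zd bytes\n", (long)off, n);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	fd = open("/etc/hostname", O_RDONLY);	/* arbitrary file */
	pthread_create(&a, NULL, reader, (void *)0L);
	pthread_create(&b, NULL, reader, (void *)4096L);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	close(fd);
	return 0;
}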

I cringe when I see per cpu counters for something like files, which
we are likely to have a lot of.  I keep imagining a quadratic
explosion in data size.  In practice we are likely to have a small
cpu count, 8-16 cpus, so it is probably ok, especially if we are only
allocating 8 bytes per cpu per file.  Even in a 16K-cpu worst case
that is 8 bytes * 16K cpus = 128KB per file.  Looking at it the other
way, with the default file-max on my systems ranging from 203871 to
705863, we would max out at between roughly 1.6MB and 5.6MB per cpu.
Still a lot, but survivable.
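
Spelling the arithmetic out (the cpu count and file-max figures are
the ones quoted above):

/* Back-of-the-envelope cost of an 8-byte per-cpu counter per file. */
#include <stdio.h>

int main(void)
{
	long bytes = 8;			/* counter size per cpu per file */
	long cpus = 16 * 1024;		/* 16K-cpu worst case */
	long fmax_lo = 203871;		/* default file-max, smaller box */
	long fmax_hi = 705863;		/* default file-max, larger box */

	printf("per file, worst case: %ld KB\n", bytes * cpus / 1024);
	printf("per cpu, low:  %.1f MB\n", bytes * fmax_lo / 1e6);
	printf("per cpu, high: %.1f MB\n", bytes * fmax_hi / 1e6);
	return 0;	/* prints 128 KB, ~1.6 MB, ~5.6 MB */
}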

Somewhere it all falls down, but only if you max out a very rare,
very large machine, and that seems to be the case with just about
everything.

All of which leads me to say: if we can avoid per cpu memory without
impacting performance, I want to do that.
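
One way to read that goal, sketched with C11 atomics in user space
(the names here are made up, and this is not the scheme the patch
series actually uses): a single shared counter needs no per-cpu
memory at all, at the cost of bouncing one cache line between cpus
on hot files.

/* Hypothetical sketch: one shared counter instead of per-cpu memory. */
#include <stdatomic.h>
#include <stdbool.h>

struct file_count {
	atomic_long count;	/* 8 bytes total, regardless of cpu count */
};

static inline void fc_get(struct file_count *fc)
{
	atomic_fetch_add_explicit(&fc->count, 1, memory_order_relaxed);
}

/* Returns true when the last reference is dropped. */
static inline bool fc_put(struct file_count *fc)
{
	return atomic_fetch_sub_explicit(&fc->count, 1,
					 memory_order_acq_rel) == 1;
}

That is the other side of the tradeoff weighed above: one contended
cache line versus 8 bytes per possible cpu per file.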

Eric
