Message-ID: <Pine.LNX.4.64.0802061517380.8470@blonde.site>
Date:	Wed, 6 Feb 2008 15:29:32 +0000 (GMT)
From:	Hugh Dickins <hugh@...itas.com>
To:	Tomasz Chmielewski <mangoo@...g.org>
cc:	LKML <linux-kernel@...r.kernel.org>,
	Mika Lawando <rzryyvzy@...shmail.net>
Subject: Re: What is the limit size of tmpfs /dev/shm ? 

On Wed, 6 Feb 2008, Tomasz Chmielewski wrote:
> > Hello Kernel Users,
> > 
> > is there a size limit for tmpfs for the /dev/shm filesystem?

There shouldn't be any artificial size limit on the /dev/shm filesystem:
it's an "internal" mount, and those are unlimited by default.  Which is
not to say that you magically receive unlimited memory along with it!

The practical limit would be somewhere round about the total size of
your swap plus, ooh, finger in the air, 75% of your RAM - though that
all depends on what else you'd be wanting to use RAM+swap for.
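That rule of thumb can be sketched in shell by reading the sizes from
/proc/meminfo (the 75% factor is the finger-in-the-air figure above,
not anything the kernel enforces):

```shell
# Rough sketch of the swap-plus-~75%-of-RAM rule of thumb.
# Values come from /proc/meminfo in kB; the 3/4 factor is a guess,
# not a kernel-enforced limit.
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
echo "practical tmpfs ceiling: $(( swap_kb + ram_kb * 3 / 4 )) kB"
```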

> > Normally its default size is set to 2 GB.

Hmm, where did that number come from?  Maybe I'm forgetting something.
The user-visible mounts are limited by default to half of ram, so if
you've 4GB of ram, then 2GB would be the default for those tmpfs mounts.
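One way to check that half-of-RAM default on a running system (assuming
a stock /dev/shm mount; some distros override the size in /etc/fstab,
so treat a mismatch as informative rather than a bug):

```shell
# Compare the size df reports for /dev/shm against half of MemTotal.
# On most distros these match, give or take rounding.
df -k /dev/shm | awk 'NR==2 {print "df size:  " $2 " kB"}'
awk '/^MemTotal:/ {print "half RAM: " $2/2 " kB"}' /proc/meminfo
```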

> > Is it possible to create a 2 TB (terabyte) filesystem with tmpfs?

Yes.

> > Or is there a maximum size defined in the linux kernel?
> 
> Depends on your arch.
> 
> If it's 32 bit, it's limited to 16TB:
> 
> # mount -o size=16383G -t tmpfs tmpfs /mnt/2
> # df -h
> (...)
> tmpfs                  16T     0   16T   0% /mnt/2
> 
> # umount /mnt/2
> 
> # mount -o size=16385G -t tmpfs tmpfs /mnt/2
> # df -h
> (...)
> tmpfs                 1.0G     0  1.0G   0% /mnt/2
> 
> So 16384G would mean the same as 0.

Hah!  Nice investigation.  That's more of a bug than anything,
though not one I feel urgently compelled to fix.
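The wrap-around is easy to see once the size is converted to a page
count: tmpfs keeps its size limit as a number of pages in an unsigned
long, which is 32 bits on a 32-bit arch.  A sketch of the arithmetic,
assuming 4 kB pages:

```shell
# 16384G in bytes is 2^44; divided by a 4096-byte page that is 2^32
# pages, which truncates to 0 in a 32-bit unsigned long.
bytes=$(( 16384 * 1024 * 1024 * 1024 ))  # 2^44
pages=$(( bytes / 4096 ))                # 2^32
echo $(( pages & 0xFFFFFFFF ))           # prints 0 after 32-bit truncation
```

That is also why size=16383G still works: it is just under 2^32 pages.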

> If you're 64 bit, you need to have really loads of storage and/or RAM to
> accumulate 16EB:
> 
> # mount -t tmpfs -o size=171798691839G tmpfs /mnt/2
> # df -h
> (...)
> tmpfs                  16E     0   16E   0% /mnt/2

Mika, you answered:

> Nice, I will try this out.
> I don't have the money for 16E of RAM! :-) lol
> Even providing 1 TB would cost at least 20,000 EUR for the memory
> alone, plus the cost of a server that supports it.

Don't forget that tmpfs overflows into swap, so you could save money
by adding more swap and cutting down on the RAM: though of course
that will perform very poorly once it's actually using the swap,
probably not the direction you want to go in.

Hugh
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
