Date:	Mon, 07 Nov 2011 15:49:17 +0100
From:	Davidlohr Bueso <dave@....org>
To:	Valdis.Kletnieks@...edu
Cc:	Hugh Dickins <hughd@...gle.com>,
	Lennart Poettering <lennart@...ttering.net>,
	Andrew Morton <akpm@...ux-foundation.org>,
	lkml <linux-kernel@...r.kernel.org>, linux-mm@...ck.org
Subject: Re: [RFC PATCH] tmpfs: support user quotas

On Mon, 2011-11-07 at 04:11 -0500, Valdis.Kletnieks@...edu wrote:
> On Sun, 06 Nov 2011 18:15:01 -0300, Davidlohr Bueso said:
> 
> > @@ -1159,7 +1159,12 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
> >  			struct page **pagep, void **fsdata)
> 
> > +	if (atomic_long_read(&user->shmem_bytes) + len > 
> > +	    rlimit(RLIMIT_TMPFSQUOTA))
> > +		return -ENOSPC;
> 
> Is this a per-process or per-user limit?  If it's per-process, it doesn't
> really do much good, because a user can use multiple processes to over-run the
> limit (either intentionally or accidentally).

This is a per-user limit.
> 
> > @@ -1169,10 +1174,12 @@ shmem_write_end(struct file *file, struct address_space *mapping,
> >  			struct page *page, void *fsdata)
> 
> > +	if (pos + copied > inode->i_size) {
> >  		i_size_write(inode, pos + copied);
> > +		atomic_long_add(copied, &user->shmem_bytes);
> > +	}
> If this is per-user, it's racy with shmem_write_begin() - two processes can hit
> the write_begin(), be under quota by (say) 1M, but by the time they both
> complete the user is 1M over the quota.
> 
I guess doing the check and the charge under a spinlock, instead of two
separate atomic operations, would close that window.
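
Something like the below (an untested sketch; shmem_bytes and
RLIMIT_TMPFSQUOTA are the names from this RFC patch, and the lock name is
made up) is what I have in mind, roughly the way ipc/mqueue.c seems to
charge user->mq_bytes under mq_lock:

static DEFINE_SPINLOCK(shmem_quota_lock);	/* made-up name, sketch only */

/*
 * Untested sketch: do the limit check and the charge under one lock so
 * two concurrent writers cannot both pass the check and together go
 * over RLIMIT_TMPFSQUOTA.
 */
static int shmem_charge_user(struct user_struct *user, long bytes)
{
	int ret = 0;

	spin_lock(&shmem_quota_lock);
	if (atomic_long_read(&user->shmem_bytes) + bytes >
	    rlimit(RLIMIT_TMPFSQUOTA))
		ret = -ENOSPC;
	else
		atomic_long_add(bytes, &user->shmem_bytes);
	spin_unlock(&shmem_quota_lock);

	return ret;
}

shmem_write_begin() would charge len up front and shmem_write_end() would
only have to give back whatever part was not actually copied; once
everything is serialized by the lock the counter could also become a
plain long.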

> >  @@ -1535,12 +1542,15 @@ static int shmem_unlink(struct inode *dir, struct dentry *dentry)
> > +	struct user_struct *user = current_user();
> > +	atomic_long_sub(inode->i_size, &user->shmem_bytes);
> 
> What happens here if user 'fred' creates a file on a tmpfs, and then logs out so he has
> no processes running, and then root does a 'find tmpfs -user fred -exec rm {} \;' to clean up?
> We just decremented root's quota, not fred's....
> 
Would the same occur with mqueues? I haven't tested it, but I don't
see anywhere that user->mq_bytes is decremented in this way.
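
One way to avoid this on the tmpfs side would be to remember which user
was charged in the shmem inode info and uncharge that user when the inode
goes away, no matter who ends up unlinking it. Untested sketch (the
'owner' field and the helper names are made up for illustration):

/*
 * Untested sketch: remember which user was charged when the inode was
 * first written and uncharge that same user when the inode is released,
 * instead of current_user() at unlink time.
 */
struct shmem_inode_info {
	/* ...existing fields... */
	struct user_struct *owner;	/* user charged for this inode */
};

static void shmem_charge_owner(struct inode *inode, long bytes)
{
	struct shmem_inode_info *info = SHMEM_I(inode);

	if (!info->owner)
		info->owner = get_uid(current_user());	/* take a reference */
	atomic_long_add(bytes, &info->owner->shmem_bytes);
}

static void shmem_uncharge_owner(struct inode *inode)
{
	struct shmem_inode_info *info = SHMEM_I(inode);

	if (info->owner) {
		atomic_long_sub(inode->i_size, &info->owner->shmem_bytes);
		free_uid(info->owner);			/* drop the reference */
		info->owner = NULL;
	}
}

shmem_unlink()/shmem_evict_inode() would then call shmem_uncharge_owner(),
so the 'find tmpfs -user fred -exec rm' case above would give the bytes
back to fred's quota, not root's.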

Thanks,
Davidlohr

