Message-ID: <4FA28230.5080507@redhat.com>
Date:	Thu, 03 May 2012 09:03:44 -0400
From:	Doug Ledford <dledford@...hat.com>
To:	Dan Carpenter <dan.carpenter@...cle.com>
CC:	linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
	sfr@...b.auug.org.au
Subject: Re: [Patch 1/4] ipc/mqueue: improve performance of send/recv

On 5/3/2012 5:21 AM, Dan Carpenter wrote:
> On Tue, May 01, 2012 at 01:50:52PM -0400, Doug Ledford wrote:
>> @@ -150,16 +241,25 @@ static struct inode *mqueue_get_inode(struct super_block *sb,
>>  			info->attr.mq_maxmsg = attr->mq_maxmsg;
>>  			info->attr.mq_msgsize = attr->mq_msgsize;
>>  		}
>> -		mq_msg_tblsz = info->attr.mq_maxmsg * sizeof(struct msg_msg *);
>> -		if (mq_msg_tblsz > PAGE_SIZE)
>> -			info->messages = vmalloc(mq_msg_tblsz);
>> -		else
>> -			info->messages = kmalloc(mq_msg_tblsz, GFP_KERNEL);
>> -		if (!info->messages)
>> -			goto out_inode;
>> +		/*
>> +		 * We used to allocate a static array of pointers and account
>> +		 * the size of that array as well as one msg_msg struct per
>> +		 * possible message into the queue size. That's no longer
>> +		 * accurate as the queue is now an rbtree and will grow and
>> +		 * shrink depending on usage patterns.  We can, however, still
>> +		 * account one msg_msg struct per message, but the nodes are
>> +		 * allocated depending on priority usage, and most programs
>> +		 * only use one, or a handful, of priorities.  However, since
>> +		 * this is pinned memory, we need to assume worst case, so
>> +		 * that means the min(mq_maxmsg, max_priorities) * struct
>> +		 * posix_msg_tree_node.
>> +		 */
>> +		mq_treesize = info->attr.mq_maxmsg * sizeof(struct msg_msg) +
>> +			min_t(unsigned int, info->attr.mq_maxmsg, MQ_PRIO_MAX) *
>> +			sizeof(struct posix_msg_tree_node);
> 
> "info->attr.mq_maxmsg" is a long, but the min_t() truncates it to an
> unsigned int.  I'm not familiar with this code so I don't know if
> that's a problem...

It's fine.  We currently cap mq_maxmsg at a hard limit of 65536, and
MQ_PRIO_MAX is 32768, so both are well within the limits of truncating
a long to an unsigned int.  For this ever to become a problem, we would
first have to change the accounting of mq bytes in the user struct from
a 32bit type to a 64bit type.  As long as it's still 32 bits, and as
long as mq_maxmsg * (sizeof(struct msg_msg) + mq_msgsize) must fit
within that 32bit struct, we will never have an mq_maxmsg large enough
to truncate in this situation.

> We do the same thing in mqueue_evict_inode() and mq_attr_ok().

All of the math in here would need an audit if we increased the maximum
mq bytes from 32bit to 64bit.


-- 
Doug Ledford <dledford@...hat.com>
              GPG KeyID: 0E572FDD
	      http://people.redhat.com/dledford

Infiniband specific RPMs available at
	      http://people.redhat.com/dledford/Infiniband