Message-ID: <4927BC9A.3000001@kernel.org>
Date: Sat, 22 Nov 2008 17:02:34 +0900
From: Tejun Heo <tj@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: linux-kernel@...r.kernel.org, fuse-devel@...ts.sourceforge.net,
miklos@...redi.hu, greg@...ah.com
Subject: Re: [PATCH 5/5] CUSE: implement CUSE - Character device in Userspace
Andrew Morton wrote:
>> The only notable lacking feature compared to in-kernel implementation
>> is mmap support.
>>
>> ...
>>
>> +struct cuse_conn {
>> + struct fuse_conn fc;
>> + struct cdev cdev;
>> + struct vfsmount *mnt;
>> + struct device *dev;
>> +
>> + /* init parameters */
>> + bool unrestricted_ioctl:1;
>
> I'd suggest removal of the :1 here. If someone later comes along and
> adds another bitfield next to it, locking will be needed to prevent
> races accessing the bitfields, and I see no appropriate lock here, nor
> any comment explaining the locking.
/* init parameters */ pretty much indicates that they're set once during
initialization, so there are no locking concerns. At this point it
doesn't matter anyway; I'll drop the :1.
>> +static int cuse_init_worker(void *data)
>> +{
>> + struct cuse_init_in iin = { };
>> + struct cuse_init_out iout = { };
>> + struct cuse_devinfo devinfo = { };
>
> You might want to check the generated code here. gcc has a habit of
> assembling a temp structure on the stack then memcpying it over, which
> is just junk. This will be gcc version dependent. Fixable by using an
> old-fashioned memset instead.
My gcc-4.3.1 20080507 does four movq's to initialize iin and iout on
stack during function preamble and one movq for devinfo later in the
function body.
At any rate, this code path runs only once per CUSE device
initialization, and those structures are small enough to live on the
stack. I would go for the simpler code any day here.
Thanks.
--
tejun