Message-ID: <20121006222104.GA4405@twin.jikos.cz>
Date: Sun, 7 Oct 2012 00:21:05 +0200
From: David Sterba <dave@...os.cz>
To: Jaegeuk Kim <jaegeuk.kim@...sung.com>
Cc: viro@...iv.linux.org.uk, "'Theodore Ts'o'" <tytso@....edu>,
gregkh@...uxfoundation.org, linux-kernel@...r.kernel.org,
chur.lee@...sung.com, cm224.lee@...sung.com,
jooyoung.hwang@...sung.com
Subject: Re: [PATCH 02/16] f2fs: add on-disk layout
On Fri, Oct 05, 2012 at 08:56:44PM +0900, Jaegeuk Kim wrote:
> +struct node_footer {
> + __le32 nid; /* node id */
> + __le32 ino; /* inode number */
> + __le32 cold:1; /* cold mark */
> + __le32 fsync:1; /* fsync mark */
> + __le32 dentry:1; /* dentry mark */
> + __le32 offset:29; /* offset in inode's node space */
A bitfield in an on-disk structure? This will have endianness issues
(though I don't know if you intend to support big-endian). It's not
enough to use cpu_to_le* as in
fill_node_footer(...) {
rn->footer.offset = cpu_to_le32(ofs);
}
because on a big-endian machine the compiler already lays out the
bitfields inside the structure in reverse order. The cpu_to_le macro
only converts the value of 'ofs'; it will still place it in different
bits than it would on a little-endian arch.
There are macros that define bitfields in an endian-neutral way (or you
can do it with #ifdefs, though that duplicates the item names).
Alternatively, use two structs, one for disk-only and one for
memory-only access: the disk struct stores a single __le32 combining all
the fields, while the in-memory struct is set up properly at load time
and can look like your current version of the structure.
(More about not using bitfields http://yarchive.net/comp/linux/bitfields.html)
> + __le64 cp_ver; /* checkpoint version */
> + __le32 next_blkaddr; /* next node page block address */
> +} __packed;
> +
david