Message-ID: <20130307103240.GA24132@dcvr.yhbt.net>
Date: Thu, 7 Mar 2013 10:32:40 +0000
From: Eric Wong <normalperson@...t.net>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Davide Libenzi <davidel@...ilserver.org>,
Al Viro <viro@...IV.linux.org.uk>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] epoll: trim epitem by one cache line on x86_64
Andrew Morton <akpm@...ux-foundation.org> wrote:
> It's going to be hard to maintain this - someone will change something
> sometime and break it. I suppose we could add a runtime check if we
> cared enough. Adding a big fat comment to struct epitem might help.
Thanks for looking at this patch. I'll send a patch with a comment
about keeping the epitem size in check. Also, would adding (with
comments):

	BUILD_BUG_ON(sizeof(struct epitem) > 128);

...be too heavy-handed? I used that in my testing. I'll also check
for sizeof(void *) <= 8, in case 128-bit machines appear...
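Roughly what I have in mind, combining the two checks (sketch only;
putting it in eventpoll_init() and the comment wording are just my
guesses at what would be acceptable):

	/*
	 * We can have many thousands of epitems, so keep struct epitem
	 * within 128 bytes on 64-bit (and smaller) machines.  The
	 * sizeof(void *) test is there so a hypothetical 128-bit arch
	 * would not trip over the limit.
	 */
	BUILD_BUG_ON(sizeof(void *) <= 8 && sizeof(struct epitem) > 128);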
> I don't see much additional room to be saved. We could probably remove
> epitem.nwait, but that wouldn't actually save anything because nwait
> nestles with ffd.fd.
If we remove nwait, we can move epoll_event up and have event.events
tucked in there (rough sketch below). I have more and more code
depending on epoll, so I'll be around to comment on future epoll
changes as they come up.
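Something like this, with made-up struct names and only the relevant
members shown (so not the real fs/eventpoll.c layout):

	/* today: ffd is 12 bytes when packed on 64-bit, and the
	 * 4-byte nwait fills the hole after it */
	struct epitem_today {
		struct epoll_filefd ffd;	/* 12 bytes (packed) */
		int nwait;			/* 4 bytes, pads to 16 */
		/* ... */
		struct epoll_event event;	/* later in the struct */
	};

	/* with nwait gone, moving event up lets event.events (a __u32)
	 * take those 4 bytes instead -- assuming epoll_event stays
	 * packed, as it is on x86_64 via EPOLL_PACKED */
	struct epitem_maybe {
		struct epoll_filefd ffd;	/* 12 bytes (packed) */
		struct epoll_event event;	/* event.events at offset 12 */
		/* ... */
	};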
> I tested your patch on powerpc and it reduced sizeof(epitem) from 136
> to 128 for that arch as well, so I suggest we run with
>
> --- a/fs/eventpoll.c~epoll-trim-epitem-by-one-cache-line-on-x86_64-fix
> +++ a/fs/eventpoll.c
> @@ -105,7 +105,7 @@
> struct epoll_filefd {
> struct file *file;
> int fd;
> -} EPOLL_PACKED;
> +} __packed;
Thanks for testing on ppc. Looks good to me. For what it's worth:
Acked-by: Eric Wong <normalperson@...t.net>
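
P.S. If we ever wanted the runtime check you mention instead of (or on
top of) the BUILD_BUG_ON, I picture something like this in
eventpoll_init() -- untested sketch, and the message wording is just a
placeholder:

	if (sizeof(void *) <= 8 && sizeof(struct epitem) > 128)
		pr_warn("epoll: struct epitem is %zu bytes, "
			"using an extra cache line per watched fd\n",
			sizeof(struct epitem));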
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/