Message-ID: <871tutol7e.fsf@rasmusvillemoes.dk>
Date: Thu, 12 Jun 2014 23:46:13 +0200
From: Rasmus Villemoes <linux@...musvillemoes.dk>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Oleg Nesterov <oleg@...hat.com>,
Rik van Riel <riel@...hat.com>,
David Rientjes <rientjes@...gle.com>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Davidlohr Bueso <davidlohr@...com>,
Michal Simek <michal.simek@...inx.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/2] Per-task wait_queue_t

Peter Zijlstra <peterz@...radead.org> writes:
> On Tue, Jun 10, 2014 at 02:29:17PM +0200, Rasmus Villemoes wrote:
>> This is an attempt to reduce the stack footprint of various functions
>> (those using any of the wait_event_* macros), by removing the need to
>> allocate a wait_queue_t on the stack and instead use a single instance
>> embedded in task_struct. I'm not really sure where the best place to
>> put it is; I just placed it next to other list bookkeeping fields.
>>
>> For now, there is a little unconditional debugging. This could later
>> be removed or maybe be made dependent on some CONFIG_* variable. The
>> idea of using ->flags is taken from Pavel [1] (I originally stored
>> (void*)1 into ->private).
>>
>> Compiles, but not actually tested.
>>
>
> Doesn't look too bad, would be good to be tested and have some numbers
> on the amount of stack saved etc..

Here are some numbers. The fact that only 28 functions see a positive
effect, together with Oleg's concerns, makes me think it's probably not
worth it.

(defconfig on x86_64, based on 3.15)

file     function                   old   new  delta
vmlinux  i915_pipe_crc_read         120   136    +16
vmlinux  try_to_free_pages          144   136     -8
vmlinux  mousedev_read              112    96    -16
vmlinux  sky2_probe                 192   176    -16
vmlinux  gss_cred_init              136   120    -16
vmlinux  do_coredump                296   280    -16
vmlinux  md_do_sync                 360   344    -16
vmlinux  save_image_lzo             200   168    -32
vmlinux  read_events                184   152    -32
vmlinux  loop_make_request           96    64    -32
vmlinux  i801_access                104    72    -32
vmlinux  tty_port_block_til_ready   104    72    -32
vmlinux  loop_thread                152   120    -32
vmlinux  evdev_read                 136   104    -32
vmlinux  __sb_start_write            96    64    -32
vmlinux  rfkill_fop_read            104    72    -32
vmlinux  intel_dp_aux_ch            152   120    -32
vmlinux  i801_transaction            96    64    -32
vmlinux  blk_mq_queue_enter          96    64    -32
vmlinux  cypress_send_ext_cmd       152   104    -48
vmlinux  rcu_gp_kthread             120    72    -48
vmlinux  locks_mandatory_area       264   216    -48
vmlinux  hub_thread                 232   184    -48
vmlinux  start_this_handle          120    72    -48
vmlinux  autofs4_wait               136    88    -48
vmlinux  fcntl_setlk                136    88    -48
vmlinux  load_module                264   216    -48
vmlinux  serport_ldisc_read         144    96    -48
vmlinux  sg_read                    160   112    -48
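
For reference, the change is conceptually just moving the wait entry off
the stack: wait_event_*() currently builds a wait_queue_t in the caller's
frame, and the patch instead reuses a single instance embedded in
task_struct. A rough before/after sketch (the field name ->wait_qe below
is made up for illustration; wq and condition are placeholders):

        /* Today: every waiter carries its own wait_queue_t on the stack. */
        DEFINE_WAIT(wait);

        for (;;) {
                prepare_to_wait(&wq, &wait, TASK_UNINTERRUPTIBLE);
                if (condition)
                        break;
                schedule();
        }
        finish_wait(&wq, &wait);

        /* With a per-task entry, the same loop reuses the task_struct copy: */
        init_wait(&current->wait_qe);

        for (;;) {
                prepare_to_wait(&wq, &current->wait_qe, TASK_UNINTERRUPTIBLE);
                if (condition)
                        break;
                schedule();
        }
        finish_wait(&wq, &current->wait_qe);

The per-caller saving is roughly sizeof(wait_queue_t) plus whatever
alignment and spill the compiler adds, which fits the mostly -32/-48
deltas above.
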
Rasmus