Message-ID: <1403027312.2464.5.camel@buesod1.americas.hpqcorp.net>
Date: Tue, 17 Jun 2014 10:48:32 -0700
From: Davidlohr Bueso <davidlohr@...com>
To: Jack Miller <millerjo@...ibm.com>
Cc: linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
miltonm@...ibm.com, anton@....ibm.com
Subject: Re: [RESEND] shm: shm exit scalability fixes
On Tue, 2014-06-17 at 12:27 -0500, Jack Miller wrote:
> [ RESEND note: Adding relevant CCs, fixed a couple of typos in commit message,
> patches unchanged. Original intro follows. ]
>
> All -
>
> This is a small set of patches our team has had kicking around internally for a
> few versions; it fixes tasks getting hung on shm_exit when many threads are
> hammering it at once.
>
> Anton wrote a simple test to cause the issue:
>
> http://ozlabs.org/~anton/junkcode/bust_shm_exit.c

I'm actually in the process of adding shm microbenchmarks to perf-bench
so I might steal this :-)
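
For anyone who doesn't want to fetch the file, the gist of that kind of
stress test is a pile of tasks racing shm attach against exit. A rough,
illustrative sketch (not Anton's actual code; the real thing is at the
URL above):

/*
 * Illustrative sketch only: repeatedly fork children that attach a
 * SysV shm segment and exit immediately, so many tasks hit the
 * kernel's shm exit path at the same time.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
        int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);

        if (id < 0) {
                perror("shmget");
                return 1;
        }

        for (int i = 0; i < 100000; i++) {
                pid_t pid = fork();

                if (pid == 0) {
                        /* Child: attach, then exit right away. */
                        shmat(id, NULL, 0);
                        _exit(0);
                }
                /* Reap in batches so we don't run out of processes. */
                if (i % 64 == 0)
                        while (waitpid(-1, NULL, WNOHANG) > 0)
                                ;
        }

        while (wait(NULL) > 0)
                ;
        shmctl(id, IPC_RMID, NULL);
        return 0;
}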
>
> Before applying this patchset, this test code will cause either hanging
> tracebacks or pthread out-of-memory errors.

Are you seeing this issue in any real-world setups? While the program
does stress the path you mention quite well, I fear it is very
unrealistic... how many shared memory segments do real applications
actually use/create for these scaling issues to appear?

I normally wouldn't mind optimizing synthetic cases like this, but a
quick look at patch 1/3 shows that we're adding an extra 16 bytes of
overhead to the task_struct.
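
For reference, 16 bytes on 64-bit is exactly one list_head (two
pointers), so my guess -- from the size alone, not from reading the
diff -- is that 1/3 embeds something like a per-task list of segments:

/*
 * Guess at the shape of the change, based only on the 16-byte figure;
 * the struct and field names here are hypothetical.
 */
struct list_head {
        struct list_head *next, *prev;  /* 2 * 8 bytes on 64-bit */
};

struct sysv_shm {
        struct list_head shm_clist;     /* segments tied to this task */
};

/* Presumably embedded somewhere in task_struct, hence the +16 bytes. */
_Static_assert(sizeof(struct sysv_shm) == 2 * sizeof(void *),
               "one list_head = two pointers");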

In any case, I will take a closer look at the set.

Thanks,
Davidlohr