Message-ID: <1403066487.32307.4.camel@concordia>
Date: Wed, 18 Jun 2014 14:41:27 +1000
From: Michael Ellerman <mpe@...erman.id.au>
To: Anton Blanchard <anton@...ba.org>
Cc: Davidlohr Bueso <davidlohr@...com>,
Jack Miller <millerjo@...ibm.com>,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
miltonm@...ibm.com
Subject: Re: [RESEND] shm: shm exit scalability fixes
On Wed, 2014-06-18 at 12:53 +1000, Anton Blanchard wrote:
> > I normally wouldn't mind optimizing synthetic cases like this, but a
> > quick look at patch 1/3 shows that we're adding an extra overhead (16
> > bytes) in the task_struct.
>
> > We have the shmmni limit (and friends) for that.
>
> If we want to use this to guard against the problem, we may need to
> drop shmmni. Looking at my notes, I could take down a box with 4096
> segments and 16 threads. This is where I got to before it disappeared:
>
> # ./bust_shm_exit 4096 16
> # uptime
> 03:00:50 up 8 days, 18:05, 5 users, load average: 6076.98, 2494.09, 910.37
I win, using 4096 segments and 16 threads:
# uptime
13:50:46 up 1 day, 19:41, 2 users, load average: 7621.57, 1718.39, 943.13
13:52:35 up 1 day, 19:43, 2 users, load average: 15422.64, 7409.90, 3156.82
That's on a 16 cpu box running 3.16-rc1.
In contrast, if you run it with 1 segment and 16 threads it maxes out at about:
# uptime
13:58:00 up 1 min, 2 users, load average: 1.81, 0.46, 0.15
And the box is entirely responsive.
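
For anyone who wants to poke at this without digging up Anton's test, here is a
rough sketch of a reproducer in the same spirit. The real bust_shm_exit isn't
in this thread, so the name, arguments and details below are my own guesses,
not the actual test: create a pile of SysV shm segments, then keep spawning
threads that exit immediately, so each exit has to deal with the whole segment
list.

/*
 * bust_shm_exit_sketch.c - hypothetical reproducer sketch, not Anton's test.
 *
 * Usage: ./bust_shm_exit_sketch <nr_segments> <nr_threads>
 *
 * Build with: gcc -O2 -pthread -o bust_shm_exit_sketch bust_shm_exit_sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

static void *thread_fn(void *arg)
{
	(void)arg;
	return NULL;	/* exit straight away; the exit path is what we stress */
}

int main(int argc, char *argv[])
{
	int nr_segs, nr_threads, i;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <segments> <threads>\n", argv[0]);
		return 1;
	}
	nr_segs = atoi(argv[1]);
	nr_threads = atoi(argv[2]);
	if (nr_segs < 1 || nr_threads < 1)
		return 1;

	/*
	 * Create the segments and keep them attached.  IPC_RMID marks them
	 * for destruction once the last attach goes away, so they clean up
	 * when this process dies but stay registered while it runs.
	 */
	for (i = 0; i < nr_segs; i++) {
		int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);

		if (id < 0) {
			perror("shmget");
			return 1;
		}
		if (shmat(id, NULL, 0) == (void *)-1) {
			perror("shmat");
			return 1;
		}
		shmctl(id, IPC_RMID, NULL);
	}

	/*
	 * Hammer thread creation and exit.  On the kernels being discussed,
	 * every thread exit ends up scanning the registered segments under
	 * the ipc namespace lock, which is what drives the load average up.
	 */
	for (;;) {
		pthread_t tids[nr_threads];
		int created = 0;

		for (i = 0; i < nr_threads; i++) {
			if (pthread_create(&tids[created], NULL, thread_fn, NULL))
				break;
			created++;
		}
		for (i = 0; i < created; i++)
			pthread_join(tids[i], NULL);
	}
	return 0;
}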
cheers