Message-ID: <CAMYGaxosaVXmpQQqpq+bGV9F7-i8APTpDq=ErWdhw2EHGEzmKg@mail.gmail.com>
Date: Fri, 4 May 2012 06:42:41 +0530
From: rajman mekaco <rajman.mekaco@...il.com>
To: Rik van Riel <riel@...hat.com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Paul Gortmaker <paul.gortmaker@...driver.com>,
Andrew Morton <akpm@...ux-foundation.org>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Minchan Kim <minchan.kim@...il.com>,
Christoph Lameter <cl@...two.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 1/1] mlock: split the shmlock_user_lock spinlock into per
user_struct spinlock
Thank you all for replying.
>
> Hold this ... while the patch is correct, Peter raised
> a valid concern about its usefulness, which should be
> sorted out first.
>
Can't the shmctl(SHM_LOCK) system call be made by a huge number of
user-mode processes?

The other place user_shm_lock() is called from is the hugetlb path,
reached from the shmget(SHM_HUGETLB) system call via ipcget().

As far as users are concerned, if even two user_structs hit this path
on two different CPUs, why should the processors waste any time
spinning, even for a single loop, when they belong to different
user_structs?

I completely agree that across whole workloads it probably wouldn't
matter much, because the number of users is low. But why should the
CPUs compete and spin on behalf of different users at all, when
nothing global is being protected?
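
To make that concrete, here is a minimal sketch of the per-user_struct
locking idea, purely for illustration: the field name shm_lock and the
simplified limit check are my assumptions here, not the actual patch.

/*
 * Sketch only: a per-user spinlock instead of the global
 * shmlock_user_lock.  Field name and simplified limit check
 * are illustrative assumptions.
 */
struct user_struct {
	/* ... existing fields ... */
	unsigned long	locked_shm;	/* pages of locked SysV shm */
	spinlock_t	shm_lock;	/* protects locked_shm for this user */
};

int user_shm_lock(size_t size, struct user_struct *user)
{
	unsigned long locked = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
	unsigned long lock_limit = rlimit(RLIMIT_MEMLOCK);
	int allowed = 0;

	spin_lock(&user->shm_lock);	/* was: spin_lock(&shmlock_user_lock) */
	if (lock_limit == RLIM_INFINITY ||
	    locked + user->locked_shm <= (lock_limit >> PAGE_SHIFT) ||
	    capable(CAP_IPC_LOCK)) {
		get_uid(user);
		user->locked_shm += locked;
		allowed = 1;
	}
	spin_unlock(&user->shm_lock);
	return allowed;
}

Since both callers (shmctl(SHM_LOCK) and the SHM_HUGETLB path) only
touch the accounting of a single user_struct, two different users
would never spin on each other's lock in a scheme like this.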