Message-ID: <553F82BE.7050808@surriel.com>
Date: Tue, 28 Apr 2015 08:53:18 -0400
From: Rik van Riel <riel@...riel.com>
To: Heiko Carstens <heiko.carstens@...ibm.com>
CC: linux-kernel@...r.kernel.org,
Andy Lutomirski <amluto@...capital.com>,
Frederic Weisbecker <fweisbec@...hat.com>,
Peter Zijlstra <peterz@...radead.org>, williams@...hat.com
Subject: Re: [PATCH v2] context_tracking: remove local_irq_save from __acct_update_integrals
On 04/27/2015 07:18 AM, Heiko Carstens wrote:
> On Sat, Apr 25, 2015 at 08:50:49AM -0400, Rik van Riel wrote:
>> On 04/25/2015 05:43 AM, Heiko Carstens wrote:
>>> ...the READ_ONCE() doesn't give you any guarantees about reading
>>> tsk->acct_timexpd in an atomic way.
>>> Well, actually you don't need atomic semantics; you only need to make sure that
>>> the read access happens in a single instruction, since you want to protect
>>> against interrupts.
>>> But still: if acct_timexpd is 64 bits wide, READ_ONCE() may still result
>>> in two instructions on 32 bit architectures.
>>> (Or is there no 32 bit architecture with a 64 bit cputime_t left anymore?)
>>
>> Even if there is (maybe some ARM system?), can we even guarantee
>> that a single instruction to read 64 bits exists on such a system?
>
> I wouldn't bet on it. I can only speak for s390, and there is an instruction
> available which would do that. But since s390 is now a 64 bit only architecture,
> it doesn't matter anyway.
> For other architectures I'd say: no, you can't rely on that.
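
To make the tearing concern concrete, here is a hypothetical user-space
illustration (not kernel code; the uint64_t stands in for a 64 bit
cputime_t). On a 32 bit target the load below is usually emitted as two
separate 32 bit loads, so an interrupt that updates the variable between
the two loads leaves the caller with a torn value, even through a
volatile/READ_ONCE()-style access:

#include <stdint.h>

/* Stand-in for tsk->acct_timexpd; assumes a 64-bit cputime_t. */
static volatile uint64_t acct_timexpd;

uint64_t read_timexpd(void)
{
	/*
	 * On an ILP32 architecture this volatile read typically becomes
	 * two 32-bit loads rather than one instruction, so it can race
	 * with an interrupt that modifies acct_timexpd in between.
	 */
	return acct_timexpd;
}
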
So what can I do to move forward with this patch?
It speeds up syscall entry/exit by 7% when nohz_full
is enabled on a CPU...

Should I have the local_irq_save/restore block compiled in only when
sizeof(cputime_t) > sizeof(long)?
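
For illustration, a rough sketch of that idea (not the actual patch; the
accounting math is elided and the structure is simplified):

static void __acct_update_integrals(struct task_struct *tsk,
				    cputime_t utime, cputime_t stime)
{
	unsigned long flags = 0;
	cputime_t time, dtime;

	if (!tsk->mm)
		return;

	/* Only pay for irq disabling where a cputime_t read could tear. */
	if (sizeof(cputime_t) > sizeof(long))
		local_irq_save(flags);

	time = stime + utime;
	dtime = time - tsk->acct_timexpd;
	/* ... convert dtime and update acct_rss_mem1 / acct_vm_mem1 ... */
	tsk->acct_timexpd = time;

	if (sizeof(cputime_t) > sizeof(long))
		local_irq_restore(flags);
}

On 64 bit architectures the sizeof() comparison is resolved at compile
time, so the irq block would disappear entirely and the fast path keeps
the 7% win.
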
--
All rights reversed.