Message-ID: <20151007232010.GA21142@mtj.duckdns.org>
Date: Wed, 7 Oct 2015 16:20:10 -0700
From: Tejun Heo <tj@...nel.org>
To: Dave Chinner <david@...morbit.com>
Cc: Waiman Long <waiman.long@....com>, Christoph Lameter <cl@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, xfs@....sgi.com,
	Scott J Norton <scott.norton@....com>, Douglas Hatch <doug.hatch@....com>
Subject: Re: [PATCH] percpu_counter: return precise count from __percpu_counter_compare()

Hello, Dave.

On Thu, Oct 08, 2015 at 10:04:42AM +1100, Dave Chinner wrote:
...
> As it is, the update race you pointed out is easy to solve with
> __this_cpu_cmpxchg rather than _this_cpu_sub (similar to mod_state()
> in the MM percpu counter stats code, perhaps).

A percpu cmpxchg is no different from sub or any other operation with
regard to cross-CPU synchronization.  These operations are safe iff they
are performed on the local CPU; the counters have to be made atomics if
they need to be manipulated from remote CPUs.

That said, while we can't manipulate the percpu counters directly, we
can add a separate global counter to cache the sum result from the
previous run, which gets automatically invalidated whenever any percpu
counter overflows.  That should give better precision than just
returning the global overflow counter, and pretty good precision for
back-to-back invocations.  Interface-wise, that'd be a lot easier to
deal with, although I have no idea whether it'd fit this particular use
case or whether this use case even exists.

Thanks.

--
tejun
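[Editorial illustration: a minimal, untested sketch of the cached-sum idea
described above, not the actual percpu_counter implementation.  The struct,
field names (cached_sum, cached_valid) and helper functions are hypothetical;
it only assumes the usual percpu_counter layout of a spinlock-protected
global count plus per-CPU deltas.]

#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/percpu.h>
#include <linux/cpumask.h>

struct cached_percpu_counter {
	raw_spinlock_t	lock;
	s64		count;		/* global overflow count */
	s64		cached_sum;	/* precise sum from the previous pass */
	bool		cached_valid;	/* cleared whenever a per-CPU delta is folded */
	s32 __percpu	*counters;
};

/*
 * Called from the add path once a per-CPU delta exceeds the batch; the
 * caller is assumed to have already reset its local counter.  Folding
 * the delta into the global count invalidates the cached sum.
 */
static void cached_counter_fold(struct cached_percpu_counter *fbc, s64 delta)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&fbc->lock, flags);
	fbc->count += delta;
	fbc->cached_valid = false;
	raw_spin_unlock_irqrestore(&fbc->lock, flags);
}

/*
 * Compare/sum path: reuse the cached sum if no fold happened since it
 * was computed; otherwise walk the per-CPU counters once and cache the
 * result.  Sub-batch local updates do not invalidate the cache, so the
 * returned value is only approximate between folds, which matches the
 * "pretty good precision for back-to-back invocations" behaviour above.
 */
static s64 cached_counter_sum(struct cached_percpu_counter *fbc)
{
	unsigned long flags;
	s64 sum;
	int cpu;

	raw_spin_lock_irqsave(&fbc->lock, flags);
	if (fbc->cached_valid) {
		sum = fbc->cached_sum;
	} else {
		sum = fbc->count;
		for_each_online_cpu(cpu)
			sum += *per_cpu_ptr(fbc->counters, cpu);
		fbc->cached_sum = sum;
		fbc->cached_valid = true;
	}
	raw_spin_unlock_irqrestore(&fbc->lock, flags);

	return sum;
}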