Date:	Mon, 16 May 2011 15:15:43 +0800
From:	Shaohua Li <shaohua.li@...el.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	Tejun Heo <tj@...nel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
	"cl@...ux.com" <cl@...ux.com>,
	"npiggin@...nel.dk" <npiggin@...nel.dk>
Subject: Re: [patch V3] percpu_counter: scalability works

On Mon, 2011-05-16 at 14:55 +0800, Eric Dumazet wrote:
> On Monday, 16 May 2011 at 14:37 +0800, Shaohua Li wrote:
> > On Mon, 2011-05-16 at 14:11 +0800, Eric Dumazet wrote:
> > > On Monday, 16 May 2011 at 08:58 +0800, Shaohua Li wrote:
> > > 
> > > > so if _sum starts and ends here, _sum can still see a deviation.
> > > 
> > > This makes no sense at all. If you have that many cpus 'here', right
> > > before you increment fbc->sum_cnt, then no matter how precise and
> > > super cautious your _sum() implementation is, as soon as you exit
> > > from _sum(), other cpus will already have changed the percpu
> > > counter's global value.
> > I don't agree here. The original implementation also has only a quite
> > small window where we can get a deviation; the window exists only
> > between these two lines:
> > 		atomic64_add(count, &fbc->count);
> > 	        __this_cpu_write(*fbc->counters, 0);
> > If you think we should ignore it, we'd better not use any protection
> > here.
> > 
> 
> Not at all. Your version didn't forbid a new cpu from coming into _add()
> and hitting the deviation problem.
If everybody agrees the deviation isn't a problem, I won't bother to
argue here.
But your patch does have the deviation issue, which Tejun dislikes.
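
To make the window being argued about concrete, here is a minimal sketch
of an atomic64-based _add path. Only the two lines quoted above are from
the actual patch; the struct layout and the batching logic around them
are assumptions for illustration:

#include <linux/atomic.h>
#include <linux/percpu.h>
#include <linux/preempt.h>

struct percpu_counter {			/* assumed patched layout */
	atomic64_t count;		/* global value, atomic64 in the patch */
	s32 __percpu *counters;		/* per-cpu deltas */
};

static void sketch_percpu_counter_add(struct percpu_counter *fbc,
				      s64 amount, s32 batch)
{
	s64 count;

	preempt_disable();
	count = __this_cpu_read(*fbc->counters) + amount;
	if (count >= batch || count <= -batch) {
		atomic64_add(count, &fbc->count);
		/*
		 * The deviation window: a _sum() running between the
		 * atomic64_add() above and the write below counts this
		 * cpu's old percpu contribution twice - once just folded
		 * into fbc->count and once still in the percpu slot.
		 */
		__this_cpu_write(*fbc->counters, 0);
	} else {
		__this_cpu_write(*fbc->counters, count);
	}
	preempt_enable();
}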


> There is a small difference, or else I wouldn't have bothered.
In _sum, set a bit; in _add, wait till the bit is unset. This could
solve the issue too, and much more simply.
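
A rough sketch of what I mean; every name here (sum_in_progress and the
sketch_* helpers) is made up to show the shape of the synchronization,
and memory-ordering details are deliberately glossed over:

#include <linux/atomic.h>
#include <linux/bitops.h>
#include <linux/percpu.h>

struct percpu_counter_sketch {
	atomic64_t count;
	s32 __percpu *counters;
	unsigned long sum_in_progress;	/* bit 0 set while a _sum() runs */
};

/* _sum side: publish that a sum is in flight, clear it when done. */
static void sketch_sum_begin(struct percpu_counter_sketch *fbc)
{
	set_bit(0, &fbc->sum_in_progress);
}

static void sketch_sum_end(struct percpu_counter_sketch *fbc)
{
	clear_bit(0, &fbc->sum_in_progress);
}

/* _add slow path: don't fold the percpu delta while a _sum() runs. */
static void sketch_add_fold(struct percpu_counter_sketch *fbc, s64 count)
{
	while (test_bit(0, &fbc->sum_in_progress))
		cpu_relax();
	atomic64_add(count, &fbc->count);
	__this_cpu_write(*fbc->counters, 0);
}

Whether this fully closes the window depends on the ordering between the
bit test and the fold, which the sketch does not address.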

> > As I wrote in the email, the atomic and cacheline issues can be
> > resolved with per-cpu data; I just didn't post the patch. I'm posting
> > it this time, please see below. There is no cacheline bouncing anymore.
> > 
> 
> I am afraid we are making no progress at all here if you just try to
> push your patch and ignore my comments.
I did try to push my patch, but I didn't ignore your comments. I pointed
out that your patch still has the deviation issue, and you didn't think
it was an issue, so actually you are the one ignoring my comments. On the
other hand, I pushed my patch because I thought mine doesn't have the
deviation.

> percpu_counter is a compromise; don't make it too slow for normal
> operations. It works well if most _add() operations only go through
> percpu data.
> 
> Please just move vm_committed_as to a plain atomic_t; this will solve
> your problem.
I can, but you can't prevent me from optimizing percpu_counter.
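
(For reference, the plain-atomic alternative would look roughly like the
sketch below on the caller side; the names are illustrative. Every writer
then hits one shared cacheline, which is exactly the cost percpu_counter
exists to avoid, but reads become exact:)

#include <linux/atomic.h>

/* Illustrative only: the committed-memory counter as a plain atomic. */
static atomic_long_t vm_committed_sketch = ATOMIC_LONG_INIT(0);

static inline void sketch_vm_acct_memory(long pages)
{
	atomic_long_add(pages, &vm_committed_sketch);	/* shared cacheline */
}

static inline long sketch_vm_committed(void)
{
	return atomic_long_read(&vm_committed_sketch);	/* exact read */
}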

Thanks,
Shaohua

