Date:   Tue, 13 Jun 2017 12:45:39 +0300
From:   Kirill Tkhai <ktkhai@...tuozzo.com>
To:     Cyrill Gorcunov <gorcunov@...il.com>
Cc:     avagin@...nvz.org, linux-kernel@...r.kernel.org, bcrl@...ck.org,
        viro@...iv.linux.org.uk, akpm@...ux-foundation.org,
        xemul@...tuozzo.com
Subject: Re: [PATCH] aio: Add command to wait completion of all requests

On 13.06.2017 11:14, Cyrill Gorcunov wrote:
> On Fri, Jun 09, 2017 at 12:49:34PM +0300, Kirill Tkhai wrote:
> ...
>> +static int aio_wait_all(struct kioctx *ctx)
>> +{
>> +	unsigned users, reqs = 0;
>> +	struct kioctx_cpu *kcpu;
>> +	int cpu, ret;
>> +
>> +	if (atomic_xchg(&ctx->dead, 1))
>> +		return -EBUSY;
>> +
>> +	users = atomic_read(&current->mm->mm_users);
>> +	if (users > 1) {
>> +		/*
>> +		 * Wait till concurrent threads and aio_complete() see
>> +		 * dead flag. Implies full memory barrier on all cpus.
>> +		 */
>> +		synchronize_sched();
>> +	} else {
>> +		/*
>> +		 * Sync with aio_complete() to be sure it puts reqs_available,
>> +		 * when dead flag is already seen.
>> +		 */
>> +		spin_lock_irq(&ctx->completion_lock);
>> +	}
>> +
>> +	for_each_possible_cpu(cpu) {
>> +		kcpu = per_cpu_ptr(ctx->cpu, cpu);
>> +		reqs += kcpu->reqs_available;
> 
> I'm not that familiar with AIO internals, but this snippet worries me:
> reqs_available is an unsigned int, and reqs is an unsigned int as well,
> but it is used as an accumulator over ALL cpus. Can't it overflow and
> give a modulo result? Shouldn't it be unsigned long or something?

All available reqs are initially contained in kioctx::reqs_available,
which is atomic_t, and then they are distributed over the percpu counters.
The sum over all cpus can therefore never exceed that initial int-sized
total, so the unsigned accumulator can't overflow. So this is OK.
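
For illustration, a minimal userspace sketch of that invariant (the names
are made up for the example, this is not the kernel code): "total" plays
the role of kioctx::reqs_available, the per-cpu caches only ever hold
requests taken from that pool, so summing them back can never exceed the
original int-sized total.

	#include <assert.h>

	#define NCPU 4

	static int total = 1024;	/* stands in for atomic_t kioctx::reqs_available */
	static unsigned cache[NCPU];	/* stands in for kioctx_cpu::reqs_available */

	/* Move n requests from the shared pool to one cpu's cache. */
	static void take(int cpu, int n)
	{
		assert(total >= n);
		total -= n;
		cache[cpu] += n;
	}

	/* What the loop in aio_wait_all() does: drain every cache. */
	static unsigned drain(void)
	{
		unsigned reqs = 0;
		int cpu;

		for (cpu = 0; cpu < NCPU; cpu++) {
			/* Each cache holds only what take() moved out of the
			 * pool, so the running sum stays bounded by the
			 * initial total and the unsigned accumulator cannot
			 * wrap. */
			reqs += cache[cpu];
			cache[cpu] = 0;
		}
		return reqs;
	}

	int main(void)
	{
		take(0, 100);
		take(1, 200);
		total += (int)drain();	/* give it all back, as the atomic_add does */
		assert(total == 1024);
		return 0;
	}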
 
>> +		kcpu->reqs_available = 0;
>> +	}
>> +
>> +	if (users == 1)
>> +		spin_unlock_irq(&ctx->completion_lock);
>> +
>> +	atomic_add(reqs, &ctx->reqs_available);
>> +
>> +	ret = wait_event_interruptible(ctx->wait, reqs_completed(ctx));
>> +
>> +	atomic_set(&ctx->dead, 0);
>> +
>> +	return ret;
>> +}
