Date:	Wed, 20 Feb 2008 09:08:25 +0100
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Mike Travis <travis@....com>
Cc:	Paul Jackson <pj@....com>, Alan.Brunelle@...com,
	linux-kernel@...r.kernel.org, npiggin@...e.de, dgc@....com,
	arjan@...ux.intel.com
Subject: Re: IO queueing and complete affinity w/ threads: Some results

On Tue, Feb 19 2008, Mike Travis wrote:
> Paul Jackson wrote:
> > Jens wrote:
> >> My main worry with the current code is the ->lock in the per-cpu
> >> completion structure.
> > 
> > Drive-by-comment here:  Does the patch posted later this same day by Mike Travis:
> > 
> >   [PATCH 0/2] percpu: Optimize percpu accesses v3
> > 
> > help with this lock issue any?  (I have no real clue here -- just connecting
> > up the pretty colored dots ;).
> > 
> 
> I'm not sure of the context here, but a big motivation for doing the
> zero-based per_cpu variables was to optimize access to local per-cpu
> variables down to one instruction, reducing the need for locks.
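
(For illustration only: a minimal sketch of the kind of lock-free local
per-cpu access being described. The variable and helper names below are
made up; only DEFINE_PER_CPU() and get_cpu_var()/put_cpu_var() are real
kernel interfaces.)

        #include <linux/percpu.h>

        /* purely local counter; no lock is needed because get_cpu_var()
         * disables preemption and hands back this CPU's own copy */
        static DEFINE_PER_CPU(unsigned long, local_count);

        static void bump_local_count(void)
        {
                get_cpu_var(local_count)++;
                put_cpu_var(local_count);
        }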

I'm afraid the two things aren't related, although faster access to
per-cpu data is of course a benefit here as well. The concern I
expressed was this:

        spin_lock(&bc->lock);
        was_empty = list_empty(&bc->list);
        list_add_tail(&req->donelist, &bc->list);
        spin_unlock(&bc->lock);

where 'bc' may be the per-cpu data of another CPU.
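
For reference, the structure implied by that snippet looks roughly like
the sketch below. Only the 'lock' and 'list' fields, req->donelist and
the list operations come from the code above; the struct, variable and
function names are assumptions.

        #include <linux/blkdev.h>
        #include <linux/list.h>
        #include <linux/percpu.h>
        #include <linux/spinlock.h>

        /* per-cpu completion structure (names assumed) */
        struct blk_completion {
                spinlock_t lock;
                struct list_head list;
        };
        static DEFINE_PER_CPU(struct blk_completion, completion_data);

        /*
         * Queue a finished request on the completion list of 'cpu'.
         * 'cpu' is not necessarily the local CPU, which is why the
         * spinlock is needed at all.
         */
        static void queue_completion(struct request *req, int cpu)
        {
                struct blk_completion *bc = &per_cpu(completion_data, cpu);
                int was_empty;

                spin_lock(&bc->lock);
                was_empty = list_empty(&bc->list);
                list_add_tail(&req->donelist, &bc->list);
                spin_unlock(&bc->lock);

                if (was_empty) {
                        /* kick the completion thread on 'cpu' here */
                }
        }

When 'bc' belongs to another CPU, the cache lines holding bc->lock and
bc->list can bounce between CPUs, which is the kind of overhead being
worried about here.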

-- 
Jens Axboe

