Date:	Fri, 29 May 2015 11:46:29 -0500 (CDT)
From:	Christoph Lameter <cl@...ux.com>
To:	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Paul Turner <pjt@...gle.com>, Andrew Hunter <ahh@...gle.com>,
	Ben Maurer <bmaurer@...com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...hat.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Josh Triplett <josh@...htriplett.org>,
	Lai Jiangshan <laijs@...fujitsu.com>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC PATCH] percpu system call: fast userspace percpu critical
 sections

There are some interesting things one could do with a similar system at
the kernel level.

If we had a table of IP ranges in the kernel that specifies critical
sections together with their restart points, then the kernel could
consult that table when preempting a kernel thread and set the thread's
IP to the restart point whenever it falls within one of the ranges.
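
Roughly, the table and the preemption-time lookup could look like the
sketch below. This is only an illustration in plain C: the names
(restart_range, restart_ranges, fixup_preempted_ip) are made up, and the
real table would presumably be generated per critical section, e.g. into
a dedicated ELF section, and consulted from the scheduler's preemption
path.

#include <stddef.h>

/* One entry per in-kernel critical section (hypothetical layout). */
struct restart_range {
	unsigned long start;	/* first IP of the critical section */
	unsigned long end;	/* first IP past the critical section */
	unsigned long restart;	/* IP to resume at after preemption */
};

/* Imagined to be emitted at build time, one entry per section. */
extern const struct restart_range restart_ranges[];
extern const size_t nr_restart_ranges;

/*
 * Would be called from the preemption path with the preempted thread's
 * saved IP; returns the IP the thread should resume at.
 */
unsigned long fixup_preempted_ip(unsigned long ip)
{
	size_t i;

	for (i = 0; i < nr_restart_ranges; i++) {
		const struct restart_range *r = &restart_ranges[i];

		if (ip >= r->start && ip < r->end)
			return r->restart;	/* roll back to the restart point */
	}
	return ip;				/* not inside a critical section */
}

A linear scan is of course only for the sketch; a real table would likely
be sorted and binary searched, much like the existing exception tables.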

If we had something like that, then per-cpu atomic sections would become
much simpler (could we avoid the preempt accounting and other preemption
overhead there?) and the per-cpu atomic operations would also be easier
to generalize across architectures. It would solve a lot of the issues
introduced by the preempt logic.
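
For example, a trivial per-cpu update that today has to bracket the
read-modify-write with preempt_disable()/preempt_enable() could then be a
plain load/modify/store between two markers. The sketch below is
illustrative only: the labels mirror the start/end markers used further
down, DEFINE_PER_CPU/this_cpu_ptr are the usual per-cpu primitives, and
my_counter is just an example variable.

#include <linux/percpu.h>
#include <linux/preempt.h>

static DEFINE_PER_CPU(unsigned long, my_counter);

/* Today: preemption has to be off around the read-modify-write. */
static void counter_inc_today(void)
{
	preempt_disable();
	__this_cpu_inc(my_counter);
	preempt_enable();
}

/*
 * With restart points (hypothetical): a preemption anywhere between the
 * two labels rewinds the IP to restart_point, so the load and increment
 * are simply redone on the new cpu; the single store at the end is the
 * commit.  No preempt count manipulation, no cmpxchg.
 */
static void counter_inc_restart(void)
{
	unsigned long *p;
	unsigned long v;

restart_point:
	p = this_cpu_ptr(&my_counter);
	v = *p + 1;
end_critical_section:
	*p = v;
}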

I think this could lead to a further simplification of the allocator
fastpaths and get rid of a lot of strange code. Look at how the fastpath
simplifies once we can guarantee that a section of code is executed
without preemption.

mm/slub.c::slab_alloc_node()

redo:
        /*
         * Pair the transaction id (tid) with the per-cpu slab data: retry
         * until both reads came from the same cpu.  The loop only matters
         * on preemptible kernels, where we can migrate between the two
         * reads.
         */
        do {
                tid = this_cpu_read(s->cpu_slab->tid);
                c = raw_cpu_ptr(s->cpu_slab);
        } while (IS_ENABLED(CONFIG_PREEMPT) &&
                 unlikely(tid != READ_ONCE(c->tid)));
        barrier();

        object = c->freelist;
        page = c->page;
        if (unlikely(!object || !node_match(page, node))) {

               ... slow path...

        } else {
                void *next_object = get_freepointer_safe(s, object);

                /*
                 * Commit: swap in the new freelist head and advance the tid
                 * in one atomic per-cpu operation.  If anything else ran on
                 * this cpu in the meantime the tid no longer matches, the
                 * cmpxchg fails and we redo the whole fastpath.
                 */
                if (unlikely(!this_cpu_cmpxchg_double(
                                s->cpu_slab->freelist, s->cpu_slab->tid,
                                object, tid,
                                next_object, next_tid(tid)))) {

                        note_cmpxchg_failure("slab_alloc", s, tid);
                        goto redo;
                }
        }



-------------------------------------------------
		Using the restart point system
-------------------------------------------------


start_critical_section:
restart_point:

	/*
	 * If we are preempted anywhere in this section the kernel rewinds
	 * the IP to restart_point, so c, object and page are simply
	 * re-read on the new cpu.  No tid, no barrier, no cmpxchg.
	 */
	c = this_cpu_ptr(s->cpu_slab);
	object = c->freelist;
	page = c->page;
	if (unlikely(!object || !node_match(page, node))) {

		... slow path ...

	} else {

		void *next_object = get_freepointer_safe(s, object);

end_critical_section:
		/* The commit is a single plain store to the per-cpu freelist. */
		c->freelist = next_object;

	}


