Message-ID: <20161125191034.165b9203@vento.lan>
Date:   Fri, 25 Nov 2016 19:10:34 -0200
From:   Mauro Carvalho Chehab <mchehab@...pensource.com>
To:     Silvio Fricke <silvio.fricke@...il.com>
Cc:     linux-doc@...r.kernel.org, Jonathan Corbet <corbet@....net>,
        Ming Lei <ming.lei@...onical.com>,
        "Luis R . Rodriguez" <mcgrof@...nel.org>,
        Mauro Carvalho Chehab <mchehab@...nel.org>,
        linux-kernel@...r.kernel.org,
        Jani Nikula <jani.nikula@...ux.intel.com>
Subject: Re: [PATCH v3 3/4] Documentation/local_ops.txt: convert to ReST
 markup

On Fri, 25 Nov 2016 15:59:46 +0100
Silvio Fricke <silvio.fricke@...il.com> wrote:

> ... and move to core-api folder.
> 
> Signed-off-by: Silvio Fricke <silvio.fricke@...il.com>

Reviewed-by: Mauro Carvalho Chehab <mchehab@...pensource.com>

> ---
>  Documentation/core-api/index.rst                                    |   1 +-
>  Documentation/local_ops.txt => Documentation/core-api/local_ops.rst | 273 +++----
>  2 files changed, 145 insertions(+), 129 deletions(-)
> 
> diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
> index f3e5f5e..25b4e4a 100644
> --- a/Documentation/core-api/index.rst
> +++ b/Documentation/core-api/index.rst
> @@ -9,6 +9,7 @@ Kernel and driver related documentation.
>  
>     assoc_array
>     atomic_ops
> +   local_ops
>     workqueue
>  
>  .. only::  subproject
> diff --git a/Documentation/local_ops.txt b/Documentation/core-api/local_ops.rst
> similarity index 57%
> rename from Documentation/local_ops.txt
> rename to Documentation/core-api/local_ops.rst
> index 407576a..1062ddb 100644
> --- a/Documentation/local_ops.txt
> +++ b/Documentation/core-api/local_ops.rst
> @@ -1,191 +1,206 @@
> -	     Semantics and Behavior of Local Atomic Operations
>  
> -			    Mathieu Desnoyers
> +.. _local_ops:
>  
> +=================================================
> +Semantics and Behavior of Local Atomic Operations
> +=================================================
>  
> -	This document explains the purpose of the local atomic operations, how
> +:Author: Mathieu Desnoyers
> +
> +
> +This document explains the purpose of the local atomic operations, how
>  to implement them for any given architecture and shows how they can be used
>  properly. It also stresses the precautions that must be taken when reading
>  those local variables across CPUs when the order of memory writes matters.
>  
> -Note that local_t based operations are not recommended for general kernel use.
> -Please use the this_cpu operations instead unless there is really a special purpose.
> -Most uses of local_t in the kernel have been replaced by this_cpu operations.
> -this_cpu operations combine the relocation with the local_t like semantics in
> -a single instruction and yield more compact and faster executing code.
> +.. note::
>  
> +    Note that ``local_t`` based operations are not recommended for general
> +    kernel use. Please use the ``this_cpu`` operations instead unless there is
> +    really a special purpose. Most uses of ``local_t`` in the kernel have been
> +    replaced by ``this_cpu`` operations. ``this_cpu`` operations combine the
> +    relocation with the ``local_t`` like semantics in a single instruction and
> +    yield more compact and faster executing code.
>  
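Just as an aside (nothing this patch needs to change): the this_cpu
alternative the note recommends would look roughly like this, assuming a
plain per-cpu long is used instead of a local_t:

    #include <linux/percpu.h>

    static DEFINE_PER_CPU(long, counters);

    /* combines the per-cpu address relocation and the update in one
     * operation, so no explicit preempt_disable() is needed here */
    this_cpu_inc(counters);
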
> -* Purpose of local atomic operations
> +
> +Purpose of local atomic operations
> +==================================
>  
>  Local atomic operations are meant to provide fast and highly reentrant per CPU
>  counters. They minimize the performance cost of standard atomic operations by
>  removing the LOCK prefix and memory barriers normally required to synchronize
>  across CPUs.
>  
> -Having fast per CPU atomic counters is interesting in many cases : it does not
> +Having fast per CPU atomic counters is interesting in many cases: it does not
>  require disabling interrupts to protect from interrupt handlers and it permits
>  coherent counters in NMI handlers. It is especially useful for tracing purposes
>  and for various performance monitoring counters.
>  
>  Local atomic operations only guarantee variable modification atomicity wrt the
>  CPU which owns the data. Therefore, care must be taken to make sure that only one
> -CPU writes to the local_t data. This is done by using per cpu data and making
> -sure that we modify it from within a preemption safe context. It is however
> -permitted to read local_t data from any CPU : it will then appear to be written
> -out of order wrt other memory writes by the owner CPU.
> +CPU writes to the ``local_t`` data. This is done by using per cpu data and
> +making sure that we modify it from within a preemption safe context. It is
> +however permitted to read ``local_t`` data from any CPU: it will then appear to
> +be written out of order wrt other memory writes by the owner CPU.
>  
>  
> -* Implementation for a given architecture
> +Implementation for a given architecture
> +=======================================
>  
> -It can be done by slightly modifying the standard atomic operations : only
> +It can be done by slightly modifying the standard atomic operations: only
>  their UP variant must be kept. It typically means removing the LOCK prefix (on
>  i386 and x86_64) and any SMP synchronization barrier. If the architecture does
> -not have a different behavior between SMP and UP, including asm-generic/local.h
> -in your architecture's local.h is sufficient.
> +not have a different behavior between SMP and UP, including
> +``asm-generic/local.h`` in your architecture's ``local.h`` is sufficient.
>  
> -The local_t type is defined as an opaque signed long by embedding an
> -atomic_long_t inside a structure. This is made so a cast from this type to a
> -long fails. The definition looks like :
> +The ``local_t`` type is defined as an opaque ``signed long`` by embedding an
> +``atomic_long_t`` inside a structure. This is made so a cast from this type to
> +a ``long`` fails. The definition looks like::
>  
> -typedef struct { atomic_long_t a; } local_t;
> +    typedef struct { atomic_long_t a; } local_t;
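
FWIW, for an architecture with no SMP/UP difference the advice above boils
down to something like this (a simplified sketch, not the verbatim
asm-generic header):

    /* arch/<your arch>/include/asm/local.h */
    #include <asm-generic/local.h>

    /* asm-generic/local.h then roughly just wraps the embedded
     * atomic_long_t member, e.g.: */
    #define local_read(l)   atomic_long_read(&(l)->a)
    #define local_set(l, i) atomic_long_set(&(l)->a, (i))
    #define local_inc(l)    atomic_long_inc(&(l)->a)
    #define local_add(i, l) atomic_long_add((i), &(l)->a)
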
>  
>  
> -* Rules to follow when using local atomic operations
> +Rules to follow when using local atomic operations
> +==================================================
>  
> -- Variables touched by local ops must be per cpu variables.
> -- _Only_ the CPU owner of these variables must write to them.
> -- This CPU can use local ops from any context (process, irq, softirq, nmi, ...)
> -  to update its local_t variables.
> -- Preemption (or interrupts) must be disabled when using local ops in
> -  process context to   make sure the process won't be migrated to a
> +* Variables touched by local ops must be per cpu variables.
> +* *Only* the CPU owner of these variables must write to them.
> +* This CPU can use local ops from any context (process, irq, softirq, nmi, ...)
> +  to update its ``local_t`` variables.
> +* Preemption (or interrupts) must be disabled when using local ops in
> +  process context to make sure the process won't be migrated to a
>    different CPU between getting the per-cpu variable and doing the
>    actual local op.
> -- When using local ops in interrupt context, no special care must be
> +* When using local ops in interrupt context, no special care must be
>    taken on a mainline kernel, since they will run on the local CPU with
>    preemption already disabled. I suggest, however, explicitly
>    disabling preemption anyway to make sure it will still work correctly on
>    -rt kernels.
> -- Reading the local cpu variable will provide the current copy of the
> +* Reading the local cpu variable will provide the current copy of the
>    variable.
> -- Reads of these variables can be done from any CPU, because updates to
> -  "long", aligned, variables are always atomic. Since no memory
> +* Reads of these variables can be done from any CPU, because updates to
> +  "``long``", aligned, variables are always atomic. Since no memory
>    synchronization is done by the writer CPU, an outdated copy of the
> -  variable can be read when reading some _other_ cpu's variables.
> +  variable can be read when reading some *other* cpu's variables.
> +
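As an aside on the interrupt-context rule above: the explicit preemption
disabling it suggests for -rt would look something like this (using the
counters variable from the example further down):

    /* in an interrupt handler: preemption is already disabled on a
     * mainline kernel, but doing it explicitly keeps this correct on
     * -rt kernels, where handlers run in preemptible threads */
    preempt_disable();
    local_inc(this_cpu_ptr(&counters));
    preempt_enable();
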
>  
> +How to use local atomic operations
> +==================================
>  
> -* How to use local atomic operations
> +::
>  
> -#include <linux/percpu.h>
> -#include <asm/local.h>
> +    #include <linux/percpu.h>
> +    #include <asm/local.h>
>  
> -static DEFINE_PER_CPU(local_t, counters) = LOCAL_INIT(0);
> +    static DEFINE_PER_CPU(local_t, counters) = LOCAL_INIT(0);
>  
>  
> -* Counting
> +Counting
> +========
>  
>  Counting is done on all the bits of a signed long.
>  
> -In preemptible context, use get_cpu_var() and put_cpu_var() around local atomic
> -operations : it makes sure that preemption is disabled around write access to
> -the per cpu variable. For instance :
> +In preemptible context, use ``get_cpu_var()`` and ``put_cpu_var()`` around
> +local atomic operations: it makes sure that preemption is disabled around write
> +access to the per cpu variable. For instance::
>  
> -	local_inc(&get_cpu_var(counters));
> -	put_cpu_var(counters);
> +    local_inc(&get_cpu_var(counters));
> +    put_cpu_var(counters);
>  
>  If you are already in a preemption-safe context, you can use
> -this_cpu_ptr() instead.
> +``this_cpu_ptr()`` instead::
>  
> -	local_inc(this_cpu_ptr(&counters));
> +    local_inc(this_cpu_ptr(&counters));
>  
>  
>  
> -* Reading the counters
> +Reading the counters
> +====================
>  
>  Those local counters can be read from foreign CPUs to sum the count. Note that
>  the data seen by local_read across CPUs must be considered to be out of order
> -relatively to other memory writes happening on the CPU that owns the data.
> +relative to other memory writes happening on the CPU that owns the data::
>  
> -	long sum = 0;
> -	for_each_online_cpu(cpu)
> -		sum += local_read(&per_cpu(counters, cpu));
> +    long sum = 0;
> +    for_each_online_cpu(cpu)
> +            sum += local_read(&per_cpu(counters, cpu));
>  
>  If you want to use a remote local_read to synchronize access to a resource
> -between CPUs, explicit smp_wmb() and smp_rmb() memory barriers must be used
> +between CPUs, explicit ``smp_wmb()`` and ``smp_rmb()`` memory barriers must be used
>  respectively on the writer and the reader CPUs. It would be the case if you use
> -the local_t variable as a counter of bytes written in a buffer : there should
> -be a smp_wmb() between the buffer write and the counter increment and also a
> -smp_rmb() between the counter read and the buffer read.
> -
> -
> -Here is a sample module which implements a basic per cpu counter using local.h.
> -
> ---- BEGIN ---
> -/* test-local.c
> - *
> - * Sample module for local.h usage.
> - */
> -
> -
> -#include <asm/local.h>
> -#include <linux/module.h>
> -#include <linux/timer.h>
> -
> -static DEFINE_PER_CPU(local_t, counters) = LOCAL_INIT(0);
> -
> -static struct timer_list test_timer;
> -
> -/* IPI called on each CPU. */
> -static void test_each(void *info)
> -{
> -	/* Increment the counter from a non preemptible context */
> -	printk("Increment on cpu %d\n", smp_processor_id());
> -	local_inc(this_cpu_ptr(&counters));
> -
> -	/* This is what incrementing the variable would look like within a
> -	 * preemptible context (it disables preemption) :
> -	 *
> -	 * local_inc(&get_cpu_var(counters));
> -	 * put_cpu_var(counters);
> -	 */
> -}
> -
> -static void do_test_timer(unsigned long data)
> -{
> -	int cpu;
> -
> -	/* Increment the counters */
> -	on_each_cpu(test_each, NULL, 1);
> -	/* Read all the counters */
> -	printk("Counters read from CPU %d\n", smp_processor_id());
> -	for_each_online_cpu(cpu) {
> -		printk("Read : CPU %d, count %ld\n", cpu,
> -			local_read(&per_cpu(counters, cpu)));
> -	}
> -	del_timer(&test_timer);
> -	test_timer.expires = jiffies + 1000;
> -	add_timer(&test_timer);
> -}
> -
> -static int __init test_init(void)
> -{
> -	/* initialize the timer that will increment the counter */
> -	init_timer(&test_timer);
> -	test_timer.function = do_test_timer;
> -	test_timer.expires = jiffies + 1;
> -	add_timer(&test_timer);
> -
> -	return 0;
> -}
> -
> -static void __exit test_exit(void)
> -{
> -	del_timer_sync(&test_timer);
> -}
> -
> -module_init(test_init);
> -module_exit(test_exit);
> -
> -MODULE_LICENSE("GPL");
> -MODULE_AUTHOR("Mathieu Desnoyers");
> -MODULE_DESCRIPTION("Local Atomic Ops");
> ---- END ---
> +the ``local_t`` variable as a counter of bytes written in a buffer: there should
> +be a ``smp_wmb()`` between the buffer write and the counter increment and also a
> +``smp_rmb()`` between the counter read and the buffer read.
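
The barrier pairing described here is perhaps easier to see as a small
fragment (hypothetical names, not taken from the document):

    static DEFINE_PER_CPU(local_t, bytes_written) = LOCAL_INIT(0);

    /* writer, on the CPU that owns the data, preemption-safe context */
    /* ... write the new data into this CPU's buffer ... */
    smp_wmb();      /* buffer write visible before the counter update */
    local_inc(this_cpu_ptr(&bytes_written));

    /* reader, possibly on another CPU */
    long n = local_read(&per_cpu(bytes_written, cpu));
    smp_rmb();      /* counter read ordered before the buffer reads */
    /* ... the first n bytes of that CPU's buffer are now safe to read ... */
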
> +
> +
> +Here is a sample module which implements a basic per cpu counter using
> +``local.h``::
> +
> +    /* test-local.c
> +     *
> +     * Sample module for local.h usage.
> +     */
> +
> +
> +    #include <asm/local.h>
> +    #include <linux/module.h>
> +    #include <linux/timer.h>
> +
> +    static DEFINE_PER_CPU(local_t, counters) = LOCAL_INIT(0);
> +
> +    static struct timer_list test_timer;
> +
> +    /* IPI called on each CPU. */
> +    static void test_each(void *info)
> +    {
> +            /* Increment the counter from a non preemptible context */
> +            printk("Increment on cpu %d\n", smp_processor_id());
> +            local_inc(this_cpu_ptr(&counters));
> +
> +            /* This is what incrementing the variable would look like within a
> +             * preemptible context (it disables preemption) :
> +             *
> +             * local_inc(&get_cpu_var(counters));
> +             * put_cpu_var(counters);
> +             */
> +    }
> +
> +    static void do_test_timer(unsigned long data)
> +    {
> +            int cpu;
> +
> +            /* Increment the counters */
> +            on_each_cpu(test_each, NULL, 1);
> +            /* Read all the counters */
> +            printk("Counters read from CPU %d\n", smp_processor_id());
> +            for_each_online_cpu(cpu) {
> +                    printk("Read : CPU %d, count %ld\n", cpu,
> +                            local_read(&per_cpu(counters, cpu)));
> +            }
> +            del_timer(&test_timer);
> +            test_timer.expires = jiffies + 1000;
> +            add_timer(&test_timer);
> +    }
> +
> +    static int __init test_init(void)
> +    {
> +            /* initialize the timer that will increment the counter */
> +            init_timer(&test_timer);
> +            test_timer.function = do_test_timer;
> +            test_timer.expires = jiffies + 1;
> +            add_timer(&test_timer);
> +
> +            return 0;
> +    }
> +
> +    static void __exit test_exit(void)
> +    {
> +            del_timer_sync(&test_timer);
> +    }
> +
> +    module_init(test_init);
> +    module_exit(test_exit);
> +
> +    MODULE_LICENSE("GPL");
> +    MODULE_AUTHOR("Mathieu Desnoyers");
> +    MODULE_DESCRIPTION("Local Atomic Ops");



Thanks,
Mauro
