Message-ID: <Y2jWAR1QESe3OrhH@debian.me>
Date:   Mon, 7 Nov 2022 16:55:13 +0700
From:   Bagas Sanjaya <bagasdotme@...il.com>
To:     "Paul E. McKenney" <paulmck@...nel.org>
Cc:     Stephen Rothwell <sfr@...b.auug.org.au>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux Next Mailing List <linux-next@...r.kernel.org>,
        Frederic Weisbecker <frederic@...nel.org>,
        Neeraj Upadhyay <quic_neeraju@...cinc.com>,
        Josh Triplett <josh@...htriplett.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Lai Jiangshan <jiangshanlai@...il.com>,
        Joel Fernandes <joel@...lfernandes.org>,
        Jonathan Corbet <corbet@....net>, rcu@...r.kernel.org,
        linux-doc@...r.kernel.org
Subject: [PATCH] Documentation: RCU: use code blocks with autogenerated line
 (was: Re: linux-next: build warning after merge of the rcu tree)

On Sun, Nov 06, 2022 at 09:02:12PM -0800, Paul E. McKenney wrote:
> On Mon, Nov 07, 2022 at 02:26:41PM +1100, Stephen Rothwell wrote:
> > Hi all,
> > 
> > After merging the rcu tree, today's linux-next build (htmldocs)
> > produced this warning:
> > 
> > Documentation/RCU/rcubarrier.rst:205: WARNING: Literal block ends without a blank line; unexpected unindent.
> > 
> > Introduced by commit
> > 
> >   21c2e3909721 ("doc: Update rcubarrier.rst")
> 
> Huh.  I guess that numbered code samples are not supposed to have more
> than nine lines?  Ah well, easy to fix by going back to left-justified
> numbers.  I was wondering about that!
> 

I think the proper fix is to just let Sphinx generate the line numbers:

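For instance, the short snippet near the top of the file could be written
as follows (illustrative only; the actual conversions are in the patch
below), and Sphinx then numbers the lines itself no matter how long the
snippet later grows:

   .. code-block:: c
      :linenos:

      rcu_barrier();
      srcu_barrier(&srcu_struct_1);
      srcu_barrier(&srcu_struct_2);
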
---- >8 ----

From 5cea54b61b2dbf2ec5e00479c6c8f9e190b49091 Mon Sep 17 00:00:00 2001
From: Bagas Sanjaya <bagasdotme@...il.com>
Date: Mon, 7 Nov 2022 15:41:31 +0700
Subject: [PATCH] Documentation: RCU: use code blocks with autogenerated line
 numbers in RCU barrier doc

Recently Stephen Rothwell reported an htmldocs warning in
Documentation/RCU/rcubarrier.rst when merging the rcu tree [1], which has
since been fixed by left-justifying the culprit line numbers. However,
similar issues can recur whenever line numbers are added manually without
caution.

Instead, use the code-block directive with the :linenos: option, which
automatically generates line numbers for code snippets.

Link: https://lore.kernel.org/linux-next/20221107142641.527396ea@canb.auug.org.au/ [1]
Signed-off-by: Bagas Sanjaya <bagasdotme@...il.com>
---
 Documentation/RCU/rcubarrier.rst | 213 +++++++++++++++++--------------
 1 file changed, 114 insertions(+), 99 deletions(-)

diff --git a/Documentation/RCU/rcubarrier.rst b/Documentation/RCU/rcubarrier.rst
index 5a643e5233d5f6..79adf39838653e 100644
--- a/Documentation/RCU/rcubarrier.rst
+++ b/Documentation/RCU/rcubarrier.rst
@@ -70,81 +70,87 @@ If your module uses multiple srcu_struct structures, then it must also
 use multiple invocations of srcu_barrier() when unloading that module.
 For example, if it uses call_rcu(), call_srcu() on srcu_struct_1, and
 call_srcu() on srcu_struct_2, then the following three lines of code
-will be required when unloading::
+will be required when unloading:
 
- 1 rcu_barrier();
- 2 srcu_barrier(&srcu_struct_1);
- 3 srcu_barrier(&srcu_struct_2);
+.. code-block:: c
+   :linenos:
+
+   rcu_barrier();
+   srcu_barrier(&srcu_struct_1);
+   srcu_barrier(&srcu_struct_2);
 
 If latency is of the essence, workqueues could be used to run these
 three functions concurrently.
 
 An ancient version of the rcutorture module makes use of rcu_barrier()
-in its exit function as follows::
+in its exit function as follows:
 
- 1  static void
- 2  rcu_torture_cleanup(void)
- 3  {
- 4    int i;
- 5
- 6    fullstop = 1;
- 7    if (shuffler_task != NULL) {
- 8     VERBOSE_PRINTK_STRING("Stopping rcu_torture_shuffle task");
- 9     kthread_stop(shuffler_task);
- 10   }
- 11   shuffler_task = NULL;
- 12
- 13   if (writer_task != NULL) {
- 14     VERBOSE_PRINTK_STRING("Stopping rcu_torture_writer task");
- 15     kthread_stop(writer_task);
- 16   }
- 17   writer_task = NULL;
- 18
- 19   if (reader_tasks != NULL) {
- 20     for (i = 0; i < nrealreaders; i++) {
- 21       if (reader_tasks[i] != NULL) {
- 22         VERBOSE_PRINTK_STRING(
- 23           "Stopping rcu_torture_reader task");
- 24         kthread_stop(reader_tasks[i]);
- 25       }
- 26       reader_tasks[i] = NULL;
- 27     }
- 28     kfree(reader_tasks);
- 29     reader_tasks = NULL;
- 30   }
- 31   rcu_torture_current = NULL;
- 32
- 33   if (fakewriter_tasks != NULL) {
- 34     for (i = 0; i < nfakewriters; i++) {
- 35       if (fakewriter_tasks[i] != NULL) {
- 36         VERBOSE_PRINTK_STRING(
- 37           "Stopping rcu_torture_fakewriter task");
- 38         kthread_stop(fakewriter_tasks[i]);
- 39       }
- 40       fakewriter_tasks[i] = NULL;
- 41     }
- 42     kfree(fakewriter_tasks);
- 43     fakewriter_tasks = NULL;
- 44   }
- 45
- 46   if (stats_task != NULL) {
- 47     VERBOSE_PRINTK_STRING("Stopping rcu_torture_stats task");
- 48     kthread_stop(stats_task);
- 49   }
- 50   stats_task = NULL;
- 51
- 52   /* Wait for all RCU callbacks to fire. */
- 53   rcu_barrier();
- 54
- 55   rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
- 56
- 57   if (cur_ops->cleanup != NULL)
- 58     cur_ops->cleanup();
- 59   if (atomic_read(&n_rcu_torture_error))
- 60     rcu_torture_print_module_parms("End of test: FAILURE");
- 61   else
- 62     rcu_torture_print_module_parms("End of test: SUCCESS");
- 63 }
+.. code-block:: c
+   :linenos:
+
+   static void
+   rcu_torture_cleanup(void)
+   {
+     int i;
+   
+     fullstop = 1;
+     if (shuffler_task != NULL) {
+      VERBOSE_PRINTK_STRING("Stopping rcu_torture_shuffle task");
+      kthread_stop(shuffler_task);
+     }
+     shuffler_task = NULL;
+   
+     if (writer_task != NULL) {
+       VERBOSE_PRINTK_STRING("Stopping rcu_torture_writer task");
+       kthread_stop(writer_task);
+     }
+     writer_task = NULL;
+   
+     if (reader_tasks != NULL) {
+       for (i = 0; i < nrealreaders; i++) {
+         if (reader_tasks[i] != NULL) {
+           VERBOSE_PRINTK_STRING(
+             "Stopping rcu_torture_reader task");
+           kthread_stop(reader_tasks[i]);
+         }
+         reader_tasks[i] = NULL;
+       }
+       kfree(reader_tasks);
+       reader_tasks = NULL;
+     }
+     rcu_torture_current = NULL;
+   
+     if (fakewriter_tasks != NULL) {
+       for (i = 0; i < nfakewriters; i++) {
+         if (fakewriter_tasks[i] != NULL) {
+           VERBOSE_PRINTK_STRING(
+             "Stopping rcu_torture_fakewriter task");
+           kthread_stop(fakewriter_tasks[i]);
+         }
+         fakewriter_tasks[i] = NULL;
+       }
+       kfree(fakewriter_tasks);
+       fakewriter_tasks = NULL;
+     }
+   
+     if (stats_task != NULL) {
+       VERBOSE_PRINTK_STRING("Stopping rcu_torture_stats task");
+       kthread_stop(stats_task);
+     }
+     stats_task = NULL;
+   
+     /* Wait for all RCU callbacks to fire. */
+     rcu_barrier();
+   
+     rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
+   
+     if (cur_ops->cleanup != NULL)
+       cur_ops->cleanup();
+     if (atomic_read(&n_rcu_torture_error))
+       rcu_torture_print_module_parms("End of test: FAILURE");
+     else
+       rcu_torture_print_module_parms("End of test: SUCCESS");
+   }
 
 Line 6 sets a global variable that prevents any RCU callbacks from
 re-posting themselves. This will not be necessary in most cases, since
@@ -191,21 +197,24 @@ queues. His implementation queues an RCU callback on each of the per-CPU
 callback queues, and then waits until they have all started executing, at
 which point, all earlier RCU callbacks are guaranteed to have completed.
 
-The original code for rcu_barrier() was roughly as follows::
+The original code for rcu_barrier() was roughly as follows:
 
- 1   void rcu_barrier(void)
- 2   {
- 3     BUG_ON(in_interrupt());
- 4     /* Take cpucontrol mutex to protect against CPU hotplug */
- 5     mutex_lock(&rcu_barrier_mutex);
- 6     init_completion(&rcu_barrier_completion);
- 7     atomic_set(&rcu_barrier_cpu_count, 1);
- 8     on_each_cpu(rcu_barrier_func, NULL, 0, 1);
- 9     if (atomic_dec_and_test(&rcu_barrier_cpu_count))
- 10       complete(&rcu_barrier_completion);
- 11    wait_for_completion(&rcu_barrier_completion);
- 12    mutex_unlock(&rcu_barrier_mutex);
- 13  }
+.. code-block:: c
+   :linenos:
+
+   void rcu_barrier(void)
+   {
+     BUG_ON(in_interrupt());
+     /* Take cpucontrol mutex to protect against CPU hotplug */
+     mutex_lock(&rcu_barrier_mutex);
+     init_completion(&rcu_barrier_completion);
+     atomic_set(&rcu_barrier_cpu_count, 1);
+     on_each_cpu(rcu_barrier_func, NULL, 0, 1);
+     if (atomic_dec_and_test(&rcu_barrier_cpu_count))
+        complete(&rcu_barrier_completion);
+     wait_for_completion(&rcu_barrier_completion);
+     mutex_unlock(&rcu_barrier_mutex);
+   }
 
 Line 3 verifies that the caller is in process context, and lines 5 and 12
 use rcu_barrier_mutex to ensure that only one rcu_barrier() is using the
@@ -230,18 +239,21 @@ This code was rewritten in 2008 and several times thereafter, but this
 still gives the general idea.
 
 The rcu_barrier_func() runs on each CPU, where it invokes call_rcu()
-to post an RCU callback, as follows::
+to post an RCU callback, as follows:
 
- 1  static void rcu_barrier_func(void *notused)
- 2  {
- 3    int cpu = smp_processor_id();
- 4    struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
- 5    struct rcu_head *head;
- 6
- 7    head = &rdp->barrier;
- 8    atomic_inc(&rcu_barrier_cpu_count);
- 9    call_rcu(head, rcu_barrier_callback);
- 10 }
+.. code-block:: c
+   :linenos:
+
+   static void rcu_barrier_func(void *notused)
+   {
+     int cpu = smp_processor_id();
+     struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
+     struct rcu_head *head;
+   
+     head = &rdp->barrier;
+     atomic_inc(&rcu_barrier_cpu_count);
+     call_rcu(head, rcu_barrier_callback);
+   }
 
 Lines 3 and 4 locate RCU's internal per-CPU rcu_data structure,
 which contains the struct rcu_head that needed for the later call to
@@ -252,13 +264,16 @@ the current CPU's queue.
 
 The rcu_barrier_callback() function simply atomically decrements the
 rcu_barrier_cpu_count variable and finalizes the completion when it
-reaches zero, as follows::
+reaches zero, as follows:
 
- 1 static void rcu_barrier_callback(struct rcu_head *notused)
- 2 {
- 3   if (atomic_dec_and_test(&rcu_barrier_cpu_count))
- 4     complete(&rcu_barrier_completion);
- 5 }
+.. code-block:: c
+   :linenos:
+
+   static void rcu_barrier_callback(struct rcu_head *notused)
+   {
+     if (atomic_dec_and_test(&rcu_barrier_cpu_count))
+       complete(&rcu_barrier_completion);
+   }
 
 .. _rcubarrier_quiz_3:
 

base-commit: 2c9ea2f2e0a56cbf6931992812bffe47506f23d0

Thanks.

-- 
An old man doll... just what I always wanted! - Clara
