Message-ID: <YBfzfs9zU8rgHl6y@alley>
Date: Mon, 1 Feb 2021 13:26:38 +0100
From: Petr Mladek <pmladek@...e.com>
To: John Ogness <john.ogness@...utronix.de>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH printk-rework 07/12] printk: add syslog_lock
On Tue 2021-01-26 22:21:46, John Ogness wrote:
> The global variables @syslog_seq, @syslog_partial, @syslog_time
> and write access to @clear_seq are protected by @logbuf_lock.
> Once @logbuf_lock is removed, these variables will need their
> own synchronization method. Introduce @syslog_lock for this
> purpose.
> --- a/kernel/printk/printk.c
> +++ b/kernel/printk/printk.c
> @@ -390,8 +390,12 @@ DEFINE_RAW_SPINLOCK(logbuf_lock);
> printk_safe_exit_irqrestore(flags); \
> } while (0)
>
> +/* syslog_lock protects syslog_* variables and write access to clear_seq. */
> +static DEFINE_RAW_SPINLOCK(syslog_lock);
I am not an expert on RT code, but I think it prefers the generic
spinlocks. syslog_lock seems to be used only in normal context, so
IMHO it does not need to be a raw spinlock.
Note that using a normal spinlock would require switching the locking
order: logbuf_lock is a raw lock, and a normal spinlock must not be
taken under a raw spinlock.
Or we could switch syslog_lock to a normal spinlock later, after
logbuf_lock is removed.
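For illustration only, the non-raw variant would look like this
(untested sketch; the do_syslog() snippet just shows how the lock
would be taken from reader context, never under logbuf_lock):

/* Untested sketch: syslog_lock as a normal (sleeping on PREEMPT_RT) spinlock. */
static DEFINE_SPINLOCK(syslog_lock);

        /* in do_syslog(), reader context, must not nest under logbuf_lock */
        spin_lock_irq(&syslog_lock);
        seq = syslog_seq;
        spin_unlock_irq(&syslog_lock);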
> +
> #ifdef CONFIG_PRINTK
> DECLARE_WAIT_QUEUE_HEAD(log_wait);
> +/* All 3 protected by @syslog_lock. */
> /* the next printk record to read by syslog(READ) or /proc/kmsg */
> static u64 syslog_seq;
> static size_t syslog_partial;
> @@ -1631,6 +1643,7 @@ int do_syslog(int type, char __user *buf, int len, int source)
> bool clear = false;
> static int saved_console_loglevel = LOGLEVEL_DEFAULT;
> int error;
> + u64 seq;
This allows removing the definition of the same temporary variable
in the SYSLOG_ACTION_SIZE_UNREAD case.
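Just to spell it out, that case could then reuse the function-level
variable, e.g. (untested sketch, not the exact code from the patch):

        case SYSLOG_ACTION_SIZE_UNREAD:
                /* the per-case "u64 seq;" declaration goes away */
                raw_spin_lock_irq(&syslog_lock);
                seq = syslog_seq;
                /* ... count unread chars from @seq and @syslog_partial ... */
                raw_spin_unlock_irq(&syslog_lock);
                break;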
>
> error = check_syslog_permissions(type, source);
> if (error)
> @@ -1648,8 +1661,14 @@ int do_syslog(int type, char __user *buf, int len, int source)
> return 0;
> if (!access_ok(buf, len))
> return -EFAULT;
> +
> + /* Get a consistent copy of @syslog_seq. */
> + raw_spin_lock_irq(&syslog_lock);
> + seq = syslog_seq;
> + raw_spin_unlock_irq(&syslog_lock);
> +
> error = wait_event_interruptible(log_wait,
> - prb_read_valid(prb, syslog_seq, NULL));
> + prb_read_valid(prb, seq, NULL));
Hmm, this will not detect when syslog_seq gets cleared in parallel.
I hope that nobody relies on this behavior, but who knows?
A solution might be to latch syslog_seq as well, but I am not sure
it is worth it.
I am for taking the risk and keeping the patch as it is now. We could
always switch to a latched variable if anyone complains. Just keep it
in mind.
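If it ever becomes a problem, one simple variant (not a real latch,
just re-reading the value under the lock on every wakeup) could look
like this untested sketch; syslog_seq_read_valid() is a made-up
helper name:

/* Untested sketch: re-read @syslog_seq each time the condition is checked. */
static bool syslog_seq_read_valid(void)
{
        u64 seq;

        raw_spin_lock_irq(&syslog_lock);
        seq = syslog_seq;
        raw_spin_unlock_irq(&syslog_lock);

        return prb_read_valid(prb, seq, NULL);
}

        error = wait_event_interruptible(log_wait, syslog_seq_read_valid());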
> if (error)
> return error;
> error = syslog_print(buf, len);
Otherwise, the patch looks good to me.
Best Regards,
Petr