Message-ID: <20190709090609.shx7j2mst7wlkbqm@pathway.suse.cz>
Date: Tue, 9 Jul 2019 11:06:09 +0200
From: Petr Mladek <pmladek@...e.com>
To: John Ogness <john.ogness@...utronix.de>
Cc: Andrea Parri <andrea.parri@...rulasolutions.com>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Steven Rostedt <rostedt@...dmis.org>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH POC] printk_ringbuffer: Alternative implementation of lockless printk ringbuffer

On Tue 2019-07-09 03:34:43, John Ogness wrote:
> On 2019-07-08, Petr Mladek <pmladek@...e.com> wrote:
> >> 1. The code claims that the cmpxchg(seq_newest) in prb_reserve_desc()
> >> guarantees that "The descriptor is ours until the COMMITTED bit is
> >> set." This is not true if in that wind seq_newest wraps, allowing
> >> another writer to gain ownership of the same descriptor. For small
> >> descriptor arrays (such as in my test module), this situation is
> >> quite easy to reproduce.
> >
> Let me inline the function we are talking about and add commentary to
> illustrate what I am saying:
>
> static int prb_reserve_desc(struct prb_reserved_entry *entry)
> {
> 	unsigned long seq, seq_newest, seq_prev_wrap;
> 	struct printk_ringbuffer *rb = entry->rb;
> 	struct prb_desc *desc;
> 	int err;
>
> 	/* Get descriptor for the next sequence number. */
> 	do {
> 		seq_newest = READ_ONCE(rb->seq_newest);
> 		seq = (seq_newest + 1) & PRB_SEQ_MASK;
> 		seq_prev_wrap = (seq - PRB_DESC_SIZE(rb)) & PRB_SEQ_MASK;
>
> 		/*
> 		 * Remove conflicting descriptor from the previous wrap
> 		 * if ever used. It might fail when the related data
> 		 * have not been committed yet.
> 		 */
> 		if (seq_prev_wrap == READ_ONCE(rb->seq_oldest)) {
> 			err = prb_remove_desc_oldest(rb, seq_prev_wrap);
> 			if (err)
> 				return err;
> 		}
> 	} while (cmpxchg(&rb->seq_newest, seq_newest, seq) != seq_newest);
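>
> 	/*
> 	 * <-- the point referred to below: seq_newest has already
> 	 *     been advanced, but the descriptor itself still looks
> 	 *     unused.
> 	 */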
>
> I am referring to this point in the code, after the
> cmpxchg(). seq_newest has been incremented but the descriptor is still
> in the unused state and seq is still 1 wrap behind. If an NMI occurs
> here and the NMI (or some other CPU) inserts enough entries to wrap the
> descriptor array, this descriptor will be reserved again, even though it
> has already been reserved.

Not really, the NMI will not reach the cmpxchg() in this case.
prb_remove_desc_oldest() will return an error: it will not be able
to remove the conflicting descriptor because the slot will still be
occupied by the two-wraps-old descriptor.
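
To make it concrete with toy numbers (a 4-entry descriptor array,
plain userspace C; the simple seq match below just stands in for the
real committed-state check in prb_remove_desc_oldest()):

#include <stdio.h>

#define DESC_SIZE 4UL	/* toy descriptor-array size (power of two) */

/*
 * Sequence number visible in each slot: seq 8 was reserved
 * (seq_newest moved 7 -> 8) but slot 0 still shows seq 4 because
 * the descriptor has not been initialized yet; the NMI then
 * filled seqs 9..11.
 */
static unsigned long slot_seq[DESC_SIZE] = { 4, 9, 10, 11 };

/* Stand-in for prb_remove_desc_oldest(): succeeds only when the
 * slot really holds the entry being removed. */
static int remove_desc_oldest(unsigned long seq)
{
	if (slot_seq[seq % DESC_SIZE] != seq)
		return -1;	/* still occupied by an older wrap */
	return 0;
}

int main(void)
{
	unsigned long seq = 12;
	unsigned long seq_prev_wrap = seq - DESC_SIZE;	/* 8 */

	/* The NMI must remove seq 8 before it may reuse slot 0. */
	if (remove_desc_oldest(seq_prev_wrap))
		printf("cannot remove seq %lu: slot 0 still holds seq %lu\n",
		       seq_prev_wrap, slot_seq[0]);
	return 0;
}

The removal fails, so the NMI's reserve loop returns the error
before ever reaching the cmpxchg().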

BTW: I did meet these problems in some early variants. But everything
started working at some point. I always looked at how you solved
a particular situation in the link-based approach. Then I
somehow translated it into the pure-array variant.

> > BTW: There is one potential problem with my alternative approach.
> >
> > The descriptors and the related data blocks might get reserved
> > in different order. Now, a descriptor might get reused only
> > when the related data block has moved outside the valid range.
> > This operation might also move other data blocks outside
> > the range and invalidate descriptors that were reserved later.
> > As a result, we might need to invalidate more messages in
> > the log buffer than would really be necessary.
> >
> > If I understand it correctly, this problem does not exist with
> > the implementation using links, because the descriptors are
> > linked in the same order as the reserved data blocks.
>
> Descriptors in the committed list are ordered in commit order (not the
> reserve order). However, if there are not enough descriptors
> (i.e. avgdatabits is higher than the true average), this problem
> exists with the list approach as well.

Thanks for the explanation.

> > I am not sure how big the problem with more invalidated
> > messages would be in practice. Nor am I sure whether it would
> > be worth a more complicated implementation.
>
> I am also not sure how big the problem is in a practical sense. However
> to help avoid this issue, I will increase the descbits:avgdatabits ratio
> for v3.

Yup, it sounds reasonable. The number of reserved but not yet
committed descriptors is basically limited by the number of CPUs.
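
Each context (task, irq, NMI) keeps at most one reservation open
between prb_reserve() and prb_commit(). If I read the API in your
patches correctly, the window looks roughly like this (rb is some
struct printk_ringbuffer; the size and message are made up):

	struct prb_reserved_entry e;
	char *s;

	s = prb_reserve(&e, &rb, 32);	/* reservation opens here */
	if (s) {
		snprintf(s, 32, "hello");
		prb_commit(&e);		/* ...and closes here */
	}

No context sleeps between the two calls, so the worst case is one
open reservation per nesting level on each CPU.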

The only unknown variable is the length of the messages. Which
brings up another problem: we might need a solution for continuation
lines. People will want it, and storing one line across many entries
would be quite inefficient. But let's discuss this later, once
printk() gets converted to the lockless ring buffer.

> > Anyway, I still need to fully understand the linked approach
> > first.
>
> You may want to wait for v3. I've now split the ringbuffer into multiple
> generic data structures (as unintentionally suggested[0] by PeterZ),
> which helps to clarify the role of each data structure and also isolates
> the memory barriers so that it is clear which data structure requires
> which memory barriers.

Sure, I am interested to see v3.

Best Regards,
Petr