Message-ID: <20200902122125.GB9496@alley>
Date: Wed, 2 Sep 2020 14:21:28 +0200
From: Petr Mladek <pmladek@...e.com>
To: John Ogness <john.ogness@...utronix.de>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Steven Rostedt <rostedt@...dmis.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Andrea Parri <parri.andrea@...il.com>,
Paul McKenney <paulmck@...nel.org>, kexec@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: misc: was: [PATCH next v3 6/8] printk: ringbuffer: add
finalization/extension support
On Mon 2020-08-31 03:16:56, John Ogness wrote:
> Add support for extending the newest data block. For this, introduce
> a new finalization state flag (DESC_FINAL_MASK) that denotes when a
> descriptor may not be extended, i.e. is finalized.
>
> Three new memory barrier pairs are introduced. Two of them are not
> significant because they are alternate path memory barriers that
> exactly correspond to existing memory barriers.
>
> But the third (_prb_commit:B / desc_reserve:D) is new and guarantees
> that descriptors will always be finalized, either because a
> descriptor setting DESC_COMMIT_MASK sees that there is a newer
> descriptor and so finalizes itself or because a new descriptor being
> reserved sees that the previous descriptor has DESC_COMMIT_MASK set
> and finalizes that descriptor.
Just for the record: all the barriers look good to me.

There is always a full barrier when DESC_COMMIT_MASK is changed,
which guarantees consistency of the descriptor and data rings.
The only relaxed modification is adding DESC_FINAL_MASK, and that
is OK: it can be done only after DESC_COMMIT_MASK has been set
using a full barrier, so all changes to the rings are already
visible at that point.
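To illustrate, the two transitions look roughly like this (a
simplified sketch only, with the id handling in state_var left
out; this is not the real code):

	/*
	 * Commit: cmpxchg() implies a full barrier, so all changes
	 * to the rings made under exclusive access are visible
	 * before DESC_COMMIT_MASK can be observed.
	 */
	atomic_long_cmpxchg(&d->state_var, prev_state_val,
			    prev_state_val | DESC_COMMIT_MASK);

	/*
	 * Finalize: a relaxed cmpxchg is enough here because the
	 * full barrier above has already ordered all ring changes;
	 * only the extra flag gets added.
	 */
	atomic_long_cmpxchg_relaxed(&d->state_var,
		prev_state_val | DESC_COMMIT_MASK,
		prev_state_val | DESC_COMMIT_MASK | DESC_FINAL_MASK);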
> --- a/kernel/printk/printk_ringbuffer.c
> +++ b/kernel/printk/printk_ringbuffer.c
> +
> +/**
> + * prb_reserve_in_last() - Re-reserve and extend the space in the ringbuffer
> + *                         used by the newest record.
> + *
> + * @e: The entry structure to setup.
> + * @rb: The ringbuffer to re-reserve and extend data in.
> + * @r: The record structure to allocate buffers for.
> + * @caller_id: The caller ID of the caller (reserving writer).
> + *
> + * This is the public function available to writers to re-reserve and extend
> + * data.
> + *
> + * The writer specifies the text size to extend (not the new total size) by
> + * setting the @text_buf_size field of @r. Dictionaries cannot be extended so
It might be better to say that extending dictionaries is not
supported.
> + * @dict_buf_size of @r should be set to 0. To ensure proper initialization of
> + * @r, prb_rec_init_wr() should be used.
> + *
> + * This function will fail if @caller_id does not match the caller ID of the
> + * newest record. In that case the caller must reserve new data using
> + * prb_reserve().
> + *
> + * Context: Any context. Disables local interrupts on success.
> + * Return: true if text data could be extended, otherwise false.
> + *
> + * On success:
> + *
> + * - @r->text_buf points to the beginning of the entire text buffer.
> + *
> + * - @r->text_buf_len is set to the new total size of the buffer.
s/buf_len/buf_size/
> + * - @r->dict_buf and @r->dict_buf_len are cleared because extending
> + *   the dict buffer is not supported.
Same here.
> + *
> + * - @r->info is not touched so that @r->info->text_len could be used
> + *   to append the text.
> + *
> + * - prb_record_text_space() can be used on @e to query the new
> + *   actually used space.
> + *
> + * Important: All @r->info fields will already be set with the current values
> + *            for the record. I.e. @r->info->text_len will be less than
> + *            @text_buf_size and @r->info->dict_len may be set, even though
> + *            @dict_buf_size is 0. Writers can use @r->info->text_len to know
> + *            where concatenation begins and writers should update
> + *            @r->info->text_len after concatenating.
> + */
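By the way, to double check that I understand the intended calling
pattern: a writer extending the newest record would do something
like this, right? (A hypothetical example; "more_text", "len", and
"caller_id" are made up and error handling is trimmed.)

	struct prb_reserved_entry e;
	struct printk_record r;

	prb_rec_init_wr(&r, len, 0);

	if (prb_reserve_in_last(&e, rb, &r, caller_id)) {
		/* Append after the existing text. */
		memcpy(&r.text_buf[r.info->text_len], more_text, len);
		r.info->text_len += len;
		prb_commit(&e);
	} else {
		/* The record is already finalized. Reserve a new one. */
		prb_rec_init_wr(&r, len, 0);
		if (prb_reserve(&e, rb, &r)) {
			memcpy(&r.text_buf[0], more_text, len);
			r.info->text_len = len;
			prb_commit(&e);
		}
	}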
> +bool prb_reserve_in_last(struct prb_reserved_entry *e, struct printk_ringbuffer *rb,
> +			 struct printk_record *r, u32 caller_id)
> +{
> +	unsigned int data_size;
> +	struct prb_desc *d;
> +	unsigned long id;
> +
> +	local_irq_save(e->irqflags);
> +
> +	/* Transition the newest descriptor back to the reserved state. */
> +	d = desc_reopen_last(&rb->desc_ring, caller_id, &id);
> +	if (!d) {
> +		local_irq_restore(e->irqflags);
> +		goto fail_reopen;
> +	}
> +
> +	/* Now the writer has exclusive access: LMM(prb_reserve_in_last:A) */
> +
> +	/*
> +	 * Set the @e fields here so that prb_commit() can be used if
> +	 * anything fails from now on.
> +	 */
> +	e->rb = rb;
> +	e->id = id;
> +
> +	/*
> +	 * desc_reopen_last() checked the caller_id, but there was no
> +	 * exclusive access at that point. The descriptor may have
> +	 * changed since then.
> +	 */
> +	if (caller_id != d->info.caller_id)
> +		goto fail;
> +
> +	if (BLK_DATALESS(&d->text_blk_lpos)) {
We do not clear d->info when reopening the descriptor. I would at
least add a consistency check here, similar to the one below:

	if (WARN_ON_ONCE(d->info.text_len)) {
		pr_warn_once("wrong data size (%u, expecting 0) for data\n",
			     d->info.text_len);
		d->info.text_len = 0;
	}
> +		r->text_buf = data_alloc(rb, &rb->text_data_ring, r->text_buf_size,
> +					 &d->text_blk_lpos, id);
> +	} else {
> +		if (!get_data(&rb->text_data_ring, &d->text_blk_lpos, &data_size))
> +			goto fail;
> +
> +		/*
> +		 * Increase the buffer size to include the original size. If
> +		 * the meta data (@text_len) is not sane, use the full data
> +		 * block size.
> +		 */
> +		if (WARN_ON_ONCE(d->info.text_len > data_size)) {
> +			pr_warn_once("wrong data size (%u, expecting >=%hu) for data\n",
> +				     data_size, d->info.text_len);
> +			d->info.text_len = data_size;
> +		}
> +		r->text_buf_size += d->info.text_len;
> +
> +		if (!data_check_size(&rb->text_data_ring, r->text_buf_size))
> +			goto fail;
> +
> +		r->text_buf = data_realloc(rb, &rb->text_data_ring, r->text_buf_size,
> +					   &d->text_blk_lpos, id);
> +	}
> +	if (r->text_buf_size && !r->text_buf)
> +		goto fail;
[...]
> @@ -1261,6 +1615,59 @@ void prb_commit(struct prb_reserved_entry *e)
> 	local_irq_restore(e->irqflags);
> }
>
> +/**
> + * prb_commit() - Commit (previously reserved) data to the ringbuffer.
> + *
> + * @e: The entry containing the reserved data information.
> + *
> + * This is the public function available to writers to commit data.
> + *
> + * Note that the data is not yet available to readers until it is finalized.
> + * Finalizing happens automatically when space for the next record is
> + * reserved.
> + *
> + * See prb_final_commit() for a version of this function that finalizes
> + * immediately.
> + *
> + * Context: Any context. Enables local interrupts.
> + */
> +void prb_commit(struct prb_reserved_entry *e)
> +{
> +	struct prb_desc_ring *desc_ring = &e->rb->desc_ring;
> +	unsigned long head_id;
> +
> +	_prb_commit(e, 0);
> +
> +	/*
> +	 * If this descriptor is no longer the head (i.e. a new record has
> +	 * been allocated), extending the data for this record is no longer
> +	 * allowed and therefore it must be finalized.
> +	 */
> +	head_id = atomic_long_read(&desc_ring->head_id); /* LMM(prb_commit:A) */
> +	if (head_id != e->id)
> +		desc_make_final(desc_ring, e->id);
Nice trick! I wonder why I suggested doing it the complicated
way via desc_state.
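For reference, what I had in mind was re-reading our own descriptor
and finalizing only when it is still seen as committed, something
like this (hypothetical, from memory; the desc_read() parameters
might differ):

	if (desc_read(desc_ring, e->id, &desc) == desc_committed)
		desc_make_final(desc_ring, e->id);

The head_id comparison avoids touching the descriptor data at all.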
> +}
> +
I have finished reviewing this patch.
Best Regards,
Petr