Message-ID: <1334982460.28106.90.camel@gandalf.stny.rr.com>
Date:	Sat, 21 Apr 2012 00:27:40 -0400
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Vaibhav Nagarnaik <vnagarnaik@...gle.com>
Cc:	Frederic Weisbecker <fweisbec@...il.com>,
	Ingo Molnar <mingo@...hat.com>,
	Michael Rubin <mrubin@...gle.com>,
	David Sharp <dhsharp@...gle.com>,
	Justin Teravest <teravest@...gle.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 2/4] trace: Make removal of ring buffer pages atomic

On Thu, 2012-02-02 at 12:00 -0800, Vaibhav Nagarnaik wrote:
> This patch adds the capability to remove pages from a ring buffer
> without destroying any existing data in it.
> 
> This is done by removing the pages after the tail page. This makes sure
> that first all the empty pages in the ring buffer are removed. If the
> head page is one in the list of pages to be removed, then the page after
> the removed ones is made the head page. This removes the oldest data
> from the ring buffer and keeps the latest data around to be read.
> 
> To do this in a non-racy manner, tracing is stopped for a very short
> time while the pages to be removed are identified and unlinked from the
> ring buffer. The pages are freed after the tracing is restarted to
> minimize the time needed to stop tracing.
> 
> The context in which the pages from the per-cpu ring buffer are removed
> runs on the respective CPU. This limits the untraced events to only
> those occurring in NMI contexts.

Can't do this. (see below)



> +rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned int
> nr_pages)
>  {
> -       struct buffer_page *bpage;
> -       struct list_head *p;
> -       unsigned i;
> +       unsigned int nr_removed;
> +       int page_entries;
> +       struct list_head *tail_page, *to_remove, *next_page;
> +       unsigned long head_bit;
> +       struct buffer_page *last_page, *first_page;
> +       struct buffer_page *to_remove_page, *tmp_iter_page;
> 
Also, please use the "upside down x-mas tree" for the declarations:

ie.

       struct list_head *tail_page, *to_remove, *next_page;
       struct buffer_page *to_remove_page, *tmp_iter_page;
       struct buffer_page *last_page, *first_page;
       unsigned int nr_removed;
       unsigned long head_bit;
       int page_entries;

See, it looks easier to read than what you had.

> +		/* fire off all the required work handlers */
> +		for_each_buffer_cpu(buffer, cpu) {
> +			cpu_buffer = buffer->buffers[cpu];
> +			if (!cpu_buffer->nr_pages_to_update)
> +				continue;
> +			schedule_work_on(cpu, &cpu_buffer->update_pages_work);

This locks up. I just tried the following, and it hung the task.

Here:

# cd /sys/kernel/debug/tracing
# echo 1 > events/enable
# sleep 10
# echo 0 > events/enable
# echo 0 > /sys/devices/system/cpu/cpu1/online
# echo 100 > buffer_size_kb

<locked!>

I guess you could test if the cpu is online, and only then do the
schedule_work_on(). You will need to call get_online_cpus() first so the
cpu can't go offline underneath you.

If the cpu is offline, just update its buffer directly; nothing can be
writing to it.
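
Something along these lines (completely untested sketch; rb_update_pages()
here stands in for whatever helper your series uses to do the actual
per-cpu resize):

	get_online_cpus();
	for_each_buffer_cpu(buffer, cpu) {
		cpu_buffer = buffer->buffers[cpu];
		if (!cpu_buffer->nr_pages_to_update)
			continue;
		if (cpu_online(cpu))
			/* run the update on the CPU that owns the buffer */
			schedule_work_on(cpu, &cpu_buffer->update_pages_work);
		else
			/* offline CPU: no writers, safe to update directly */
			rb_update_pages(cpu_buffer);
	}
	/* ... wait for the scheduled works to complete ... */
	put_online_cpus();
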

-- Steve

PS. The first patch looks good, but I think you need to add some more
blank lines in your patches. You like to bunch a lot of text together,
and that causes some eye strain.

