Date:   Sat, 15 Dec 2018 16:38:41 -0200
From:   Helen Koike <helen.koike@...labora.com>
To:     Mauro Carvalho Chehab <mchehab+samsung@...nel.org>,
        Lucas A. M. Magalhães <lucmaga@...il.com>
Cc:     linux-media@...r.kernel.org, hverkuil@...all.nl,
        lkcamp@...ts.libreplanetbr.org, linux-kernel@...r.kernel.org
Subject: Re: [Lkcamp][PATCH] media: vimc: Add vimc-streamer for stream control

Hi Mauro,

On 12/15/18 4:01 PM, Mauro Carvalho Chehab wrote:
> Hi Lucas,
> 
> 
> On Sat, 15 Dec 2018 14:46:31 -0200,
> Lucas A. M. Magalhães <lucmaga@...il.com> wrote:
> 
>> The previous code pipeline used the stack to walk the graph and
>> process a frame. Basically, the vimc-sensor entity starts a thread
>> that generates the frames and calls the propagate_process function to
>> send each frame to every entity linked with a sink pad. The
>> propagate_process will call the process_frame of those entities,
>> which will call propagate_frame for each one of their sink pads. This
>> cycle continues until it reaches a vimc-capture entity, which finally
>> returns and unwinds the stack.
> 
> I haven't reviewed the code yet, but I have a few comments about the
> way you're describing this patch.
> 
> When you mention a "previous code pipeline" in the main body of the
> patch description, reviewers will expect that you're referring to an
> implementation that has already reached upstream.
> 
> I suspect that this is not the case here, as I don't think we ever
> merged any recursive algorithm that walks the graph on the stack.
> That is something we shouldn't do in kernel space, as a 4K stack is
> usually not OK with recursive algorithms.
> 
> So, it seems that this entire patch description (as-is) is bogus[1].
> 
> [1] If this is not the case and a recursive approach was indeed
> sneaked into the kernel, this is a bug. So, you should really use the
> "Fixes:" meta-tag indicating what changeset this patch is fixing, and
> a "Cc: stable@...r.kernel.org" tag, in order to hint to the stable
> maintainers that this requires backports.

Just FYI, a recursive approach did indeed make it into mainline: the
current implementation is bogus (implemented by me when I was starting
kernel development; sorry about that, and thanks Lucas for sending a
fix). It recurses not only when propagating the frame [1] but also when
activating the pipeline [2].

But in any case, this should be spelled out more clearly in the commit
message.


[1]
Every entity calls vimc_propagate_frame():
https://git.linuxtv.org/media_tree.git/tree/drivers/media/platform/vimc/vimc-debayer.c#n506
which calls the process_frame() of each directly connected entity,
which in turn calls vimc_propagate_frame() again:
https://git.linuxtv.org/media_tree.git/tree/drivers/media/platform/vimc/vimc-common.c#n237
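
To make the call pattern concrete, here is a minimal sketch of that
recursion (plain C with made-up types; these are not the actual vimc
structures):

/* Sketch of the recursive propagation pattern. Illustrative only. */
struct ent {
	struct ent **sinks;	/* entities linked to our source pads */
	int num_sinks;
	/* returns the frame to push downstream, or NULL to stop */
	const void *(*process_frame)(struct ent *self, const void *frame);
};

static void propagate_frame(struct ent *src, const void *frame)
{
	int i;

	/* One extra stack frame per hop: each linked entity processes
	 * the frame and then recurses into its own links. A capture
	 * entity has no sinks, so the chain unwinds there. */
	for (i = 0; i < src->num_sinks; i++) {
		struct ent *sink = src->sinks[i];
		const void *out = sink->process_frame(sink, frame);

		if (out)
			propagate_frame(sink, out);
	}
}

So the stack depth grows linearly with the length of the pipeline.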

[2]
.s_stream calls the .s_stream of the directly connected subdevices:
https://git.linuxtv.org/media_tree.git/tree/drivers/media/platform/vimc/vimc-debayer.c#n355
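
Roughly this shape (a sketch, not the exact code; v4l2_subdev_call() is
the standard helper, but next_sd is a placeholder I made up for the
subdev linked to this entity's sink pad):

#include <media/v4l2-subdev.h>

static struct v4l2_subdev *next_sd;	/* hypothetical upstream subdev */

static int ent_s_stream(struct v4l2_subdev *sd, int enable)
{
	/* ... start/stop this entity's own processing ... */

	/* Recursive step: each subdev kicks the next one itself,
	 * instead of a central streamer loop doing it. */
	return v4l2_subdev_call(next_sd, video, s_stream, enable);
}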


I was actually wondering whether it is worth sending this to stable, as
this implementation is not a real problem in practice: the topology in
vimc is hardcoded and limited, and according to
https://www.kernel.org/doc/Documentation/process/stable-kernel-rules.rst
a patch "must fix a real bug that bothers people".

Since the topology is fixed (in the current implementation), the
maximum number of nested calls is 4 (along the
sensor->debayer->scaler->capture path), so this doesn't trigger any bug
for users. But it will become a problem once we have the configfs API
in vimc, which will let userspace build arbitrarily long pipelines.

You could argue that under low-memory conditions this could be a
problem even in the current implementation, but a system that cannot
afford four nested function calls would not have enough stack for the
kernel to work at all.

Mauro, with that said, do you still think we should send this to stable?

Thanks
Helen

> 
> Please notice that the patch description will be stored forever
> at the git tree. Mentioning something that were never merged
> (and that, years from now people will hardly remember, and will
> have lots of trouble to seek as you didn't even mentioned any
> ML archive with the past solution) shouldn't be done.
> 
> So, you should rewrite the entire patch description to explain what
> the approach taken by this patch does. Then, in order to make it
> easier for reviewers to compare it with a previous implementation,
> you can add a "---" line followed by a description of why this
> approach is better than the first version, e.g. something like:
> 
> 	[PATCH v2] media: vimc: Add vimc-streamer for stream control
> 
> 	Add logic that creates a linear pipeline by walking
> 	backwards on the graph. When the stream starts, it simply
> 	loops through the pipeline, calling the respective process_frame
> 	function of each entity on the pipeline.
> 
> 	Signed-off-by: Your Name <your@...il>
> 
> 	---
> 
> 	v2: The previous approach was a recursive function that used
> 	the stack to walk the graph and process a frame. Basically,
> 	the vimc-sensor entity starts a thread that generates the
> 	frames and calls the propagate_process function to send each
> 	frame to every entity linked with a sink pad. The
> 	propagate_process will call the process_frame of those
> 	entities, which will call propagate_frame for each one of
> 	their sink pads. This cycle continues until it reaches a
> 	vimc-capture entity, which finally returns and unwinds the
> 	stack.
> 	...
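
Purely for illustration, the loop-based design described in the
suggested changelog above could look roughly like this (hypothetical
names and types; the actual vimc-streamer code may differ):

#define MAX_PIPE_ENTS 16	/* arbitrary bound for the sketch */

struct pipe_ent {
	const void *(*process_frame)(struct pipe_ent *self,
				     const void *frame);
};

struct pipeline {
	/* ents[0] is the capture entity and ents[num - 1] the sensor,
	 * filled in by walking the graph backwards at stream-on time */
	struct pipe_ent *ents[MAX_PIPE_ENTS];
	int num;
};

static void stream_one_frame(struct pipeline *p)
{
	const void *frame = NULL;
	int i;

	/* Flat loop: constant stack depth, however long the pipeline */
	for (i = p->num - 1; i >= 0; i--)
		frame = p->ents[i]->process_frame(p->ents[i], frame);
}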
> 
> If the past approach was written by somebody else (or if you sent it
> a long time ago), please add a URL (if possible from the
> https://lore.kernel.org/linux-media/ archive) pointing to the previous
> approach, to help us check what you're referring to.
> 
> Regards,
> Mauro
> 
