Date:	Thu, 27 Nov 2014 16:18:06 +0100
From:	Luis de Bethencourt <luis@...ethencourt.com>
To:	Greg KH <gregkh@...uxfoundation.org>
Cc:	linux-kernel@...r.kernel.org, eunb.song@...sung.com,
	gulsah.1004@...il.com, paul.gortmaker@...driver.com,
	devel@...verdev.osuosl.org
Subject: Re: [PATCH] staging: octeon: Fix checkpatch warning

On Wed, Nov 26, 2014 at 06:34:10PM -0800, Greg KH wrote:
> On Thu, Nov 27, 2014 at 12:35:23AM +0000, Luis de Bethencourt wrote:
> > On Wed, Nov 26, 2014 at 01:45:23PM -0800, Greg KH wrote:
> > > On Tue, Nov 25, 2014 at 01:26:14PM +0000, Luis de Bethencourt wrote:
> > > > This patch fixes the checkpatch.pl warnings:
> > > > 
> > > > WARNING: line over 80 characters
> > > > +                       int cores_in_use = core_state.baseline_cores - atomic_read(&core_state.available_cores);
> > > > 
> > > > WARNING: line over 80 characters
> > > > +                       skb->data = skb->head + work->packet_ptr.s.addr - cvmx_ptr_to_phys(skb->head);
> > > > 
> > > > Signed-off-by: Luis de Bethencourt <luis@...ethencourt.com>
> > > > ---
> > > >  drivers/staging/octeon/ethernet-rx.c | 6 ++++--
> > > >  1 file changed, 4 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/drivers/staging/octeon/ethernet-rx.c b/drivers/staging/octeon/ethernet-rx.c
> > > > index 44e372f..bd83f55 100644
> > > > --- a/drivers/staging/octeon/ethernet-rx.c
> > > > +++ b/drivers/staging/octeon/ethernet-rx.c
> > > > @@ -295,7 +295,8 @@ static int cvm_oct_napi_poll(struct napi_struct *napi, int budget)
> > > >  			 */
> > > >  			union cvmx_pow_wq_int_cntx counts;
> > > >  			int backlog;
> > > > -			int cores_in_use = core_state.baseline_cores - atomic_read(&core_state.available_cores);
> > > > +			int cores_in_use = core_state.baseline_cores -
> > > > +				atomic_read(&core_state.available_cores);
> > > >  			counts.u64 = cvmx_read_csr(CVMX_POW_WQ_INT_CNTX(pow_receive_group));
> > > >  			backlog = counts.s.iq_cnt + counts.s.ds_cnt;
> > > >  			if (backlog > budget * cores_in_use && napi != NULL)
> > > > @@ -324,7 +325,8 @@ static int cvm_oct_napi_poll(struct napi_struct *napi, int budget)
> > > >  		 * buffer.
> > > >  		 */
> > > >  		if (likely(skb_in_hw)) {
> > > > -			skb->data = skb->head + work->packet_ptr.s.addr - cvmx_ptr_to_phys(skb->head);
> > > > +			skb->data = skb->head + work->packet_ptr.s.addr -
> > > > +				cvmx_ptr_to_phys(skb->head);
> > > >  			prefetch(skb->data);
> > > >  			skb->len = work->len;
> > > >  			skb_set_tail_pointer(skb, skb->len);
> > > > -- 
> > > > 2.1.3
> > > 
> > > No longer applies to my tree :(
> > 
> > I'm confused.
> > 
> > I just tried applying it to what I think is your tree and it worked.
> > https://git.kernel.org/cgit/linux/kernel/git/gregkh/staging.git/log/?h=staging-next
> > 
> > Do I have this wrong?
> 
> These days I'm applying patches first to staging-testing, to get some
> 0-day buildbot testing before merging them to staging-next, as I've
> been burned by common problems too many times.  I took some other
> octeon patches that were sent before yours, and those caused the
> conflict.  If you look at staging-testing right now you can see that.
> 
> Hope this helps,
> 
> greg k-h

This is very helpful!

I am about to send a new version of the patch, this time against staging-testing.
Thanks for taking the time to explain this, and sorry about the previous patch
not applying.
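
For reference, this is roughly how I plan to rebase and re-check it (just a
sketch: the branch name "octeon-checkpatch" is my own, the fetch URL is your
staging tree, and the checkpatch run assumes a kernel source checkout):

  git fetch git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git staging-testing
  git checkout -b octeon-checkpatch FETCH_HEAD
  # re-apply the line-wrapping changes, then verify both warnings are gone:
  ./scripts/checkpatch.pl -f drivers/staging/octeon/ethernet-rx.c
  git format-patch -1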

Luis
