Message-Id: <20120105.135751.2230446777137893740.davem@davemloft.net>
Date: Thu, 05 Jan 2012 13:57:51 -0500 (EST)
From: David Miller <davem@...emloft.net>
To: cascardo@...ux.vnet.ibm.com
Cc: venkat.x.venkatsubra@...cle.com, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, dledford@...hat.com,
Jes.Sorensen@...hat.com, rds-devel@....oracle.com
Subject: Re: [PATCH] rds_rdma: don't assume infiniband device is PCI
From: Thadeu Lima de Souza Cascardo <cascardo@...ux.vnet.ibm.com>
Date: Thu, 5 Jan 2012 15:05:24 -0200
> On Thu, Jan 05, 2012 at 08:56:34AM -0800, Venkat Venkatsubra wrote:
>> Hi Cascardo,
>>
>> Your changes look good to me.
>> But our latest code doesn't use the rdsibdev_to_node macro anywhere.
>> I'm checking with the people in my group who know the history of the
>> NUMA feature to find out whether the call to kzalloc_node() can be
>> replaced by kzalloc(), in which case this macro can be removed.
>>
>> I will keep you posted.
>>
>> Venkat
>>
>
> Hi, Venkat.
>
> Do you have any public tree where we can track the latest changes in RDS?
> Note that I have changed ibdev_to_node, which rdsibdev_to_node makes
> use of. Replacing kzalloc_node() with kzalloc() did cross my mind, but
> since I was not sure whether that would affect RDS latency in any use
> cases, I kept the NUMA-aware allocation and switched to a better
> function to get the node from the device (see the sketch below).
> dev_to_node() has been available since 2.6.20, so using it should not
> be a problem.
>
> If possible, keep everyone copied and avoid top posting.
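
[For reference, a minimal sketch of the kind of change being discussed,
assuming the macros live in net/rds/ib.h; the exact form of the posted
patch may differ:]

	/* Sketch: the old ibdev_to_node() assumed the IB device's parent was
	 * a PCI device and went through to_pci_dev()/pcibus_to_node(), which
	 * breaks on non-PCI InfiniBand hardware.  dev_to_node(), available
	 * since 2.6.20, returns the NUMA node of any struct device. */
	#define ibdev_to_node(ibdev)       dev_to_node((ibdev)->dma_device)
	#define rdsibdev_to_node(rdsibdev) ibdev_to_node((rdsibdev)->dev)

	/* The macro matters because allocations stay NUMA-local to the device,
	 * e.g. (illustrative call site): */
	rds_ibdev = kzalloc_node(sizeof(struct rds_ib_device), GFP_KERNEL,
				 ibdev_to_node(device));
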
Indeed, otherwise it's impossible for anyone to follow the progress on
this patch. If anything, you should never remove netdev from the CC:
list when discussing a patch. Otherwise the followups don't make it into
our patch tracking system at:
http://patchwork.ozlabs.org/project/netdev/list/