Message-Id: <200804151333.47395.vgusev@openvz.org>
Date: Tue, 15 Apr 2008 13:33:47 +0400
From: Vitaliy Gusev <vgusev@...nvz.org>
To: Andi Kleen <andi@...stfloor.org>
Cc: David Miller <davem@...emloft.net>, kuznet@....inr.ac.ru,
linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH][NET] Fix never pruned tcp out-of-order queue
On 15 April 2008 12:30:34 Andi Kleen wrote:
> Vitaliy Gusev wrote:
> > On 15 April 2008 12:18:10 David Miller wrote:
> >> From: Andi Kleen <andi@...stfloor.org>
> >> Date: Tue, 15 Apr 2008 10:14:56 +0200
> >>
> >>> The main difference seems to be that
> >>> sk_rmem_schedule/__sk_mem_schedule is called more often, but it is
> >>> unclear how this affects the ooo pruning which only checks
> >>> the queue length anyways.
> >> tcp_data_queue() would not do the tcp_prune_ofo_queue() in some
> >> cases, it's the whole point of the patch.
>
> I still think the guards are pretty much the same as before, sorry:)
>
> > Yes, if the second sk_rmem_schedule() fails, then tcp_prune_ofo_queue() is force-called
> > and sk_rmem_schedule() is tried again.
>
> Yes but that doesn't affect the ooo prune guards at all, they only check
> rmem_alloc and neither sk_rmem_schedule() nor __sk_mem_schedule
> change that. Also the two callers are the same too in their checks.
>
> But why not repeat the whole prune for all cases in this case then?
>
> e.g. you should probably at least repeat the third step (setting
> pred_flags to 0) too.
Did you mean merely adding a check for tcp_memory_allocated < prot->sysctl_mem[2]
to tcp_prune_queue()?

That is not enough, as __sk_mem_schedule() can also fail because memory
pressure is on and there are too many open sockets. In other words, I want
to avoid duplicating the checks from __sk_mem_schedule().
>
> -Andi
--
Thanks,
Vitaliy Gusev