Date:	Fri, 01 Dec 2006 23:41:37 +0800
From:	Wensong Zhang <wensong@...ux-vs.org>
To:	home_king <home_king@....com>
Cc:	Horms <horms@...ge.net.au>, netdev@...r.kernel.org,
	David Miller <davem@...emloft.net>,
	Julian Anastasov <ja@....bg>, Joseph Mack NA3T <jmack@...d.net>
Subject: Re: [PATCH] [IPVS] transparent proxying


Hi Jinhua,

home_king wrote:
> Hi, Wensong. Thanks for your appraisal.
>
> > I see that this patch probably makes the IPVS code a bit more
> > complicated and packet traversal less efficient.
>
> In my opinion, there is no need to worry about a side effect on packet
> throughput. First, marked packets rarely appear on the NF_IP_FORWARD
> chain; people who mark packets for network administration usually do
> so on the NF_IP_LOCAL_IN or NF_IP_OUTPUT chain. Second, the new hook
> function is called after the IPVS SNAT hook function, and it passes
> over packets already handled by that hook simply by checking the
> ipvs_property flag, so it does not disturb the SNAT job. Third, the
> new hook function is just a thin wrapper around ip_vs_in(). Given that
> all packets going through NF_IP_LOCAL_IN are already checked in full
> by ip_vs_in(), whether they are related to a virtual server or not,
> why should we mind that a comparatively small number of packets going
> through NF_IP_FORWARD get checked as well?
>
I see that every firewall-marked packet will be checked by ip_vs_in(),
no matter whether the packet is related to IPVS or not. It's a bit
less efficient.
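
A rough sketch of the FORWARD-side wrapper described above, using the
2.6-era netfilter hook signature and field names; the function name
ip_vs_in_fwd, the nfmark test, and the priority value are illustrative
guesses for this discussion, not code taken from the actual patch:

#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>

/* Sketch only; this would live in ip_vs_core.c next to ip_vs_in(). */
static unsigned int
ip_vs_in_fwd(unsigned int hooknum, struct sk_buff **pskb,
	     const struct net_device *in, const struct net_device *out,
	     int (*okfn)(struct sk_buff *))
{
	/* Skip packets the IPVS SNAT hook (ip_vs_out) already handled. */
	if ((*pskb)->ipvs_property)
		return NF_ACCEPT;

	/* Only firewall-marked packets are candidates for scheduling. */
	if (!(*pskb)->nfmark)
		return NF_ACCEPT;

	/* Thin wrapper: hand everything else to the normal IPVS entry. */
	return ip_vs_in(hooknum, pskb, in, out, okfn);
}

/* Registered with nf_register_hook() alongside the existing IPVS hooks. */
static struct nf_hook_ops ip_vs_fwd_ops = {
	.hook		= ip_vs_in_fwd,
	.owner		= THIS_MODULE,
	.pf		= PF_INET,
	.hooknum	= NF_IP_FORWARD,
	.priority	= 101,	/* after the IPVS SNAT hook at priority 100 */
};
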
> > If I remember correctly, policy-based routing has worked with IPVS in
> > kernels 2.2 and 2.4 for transparent cache clusters for a long time. It
> > should work in kernel 2.6 too.
>
> Indeed, policy routing can help too, but the patch provides a native
> way to deploy a transparent proxy, and this way does not disturb the
> surrounding network setup, such as policy routing settings, iptables
> rules, etc.
I am afraid that the method used in the patch is not native, because it
breaks on IP fragments. IPVS is a kind of layer-4 switching; it routes
packets by checking layer-4 information such as addresses and port
numbers. ip_vs_in() is hooked at NF_IP_LOCAL_IN, so all the packets it
receives are already defragmented. On the NF_IP_FORWARD hook there may
be IP fragments, and ip_vs_in() cannot handle those fragments.
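
To make the fragment point concrete: ip_local_deliver() runs ip_defrag()
before the NF_IP_LOCAL_IN hook, so ip_vs_in() can always read a complete
layer-4 header, while a caller on NF_IP_FORWARD may see raw fragments
that carry no TCP/UDP ports at all. A wrapper there would need at least
a guard along these lines (my illustration with 2.6-era field names, not
code from the patch):

#include <linux/ip.h>
#include <linux/skbuff.h>
#include <net/ip.h>

/*
 * True if skb is an IP fragment.  Non-first fragments carry no TCP/UDP
 * header, so there are no ports for IPVS to schedule on; a FORWARD-side
 * caller of ip_vs_in() would have to let such packets pass untouched
 * (or defragment them first).
 */
static inline int skb_is_ip_fragment(const struct sk_buff *skb)
{
	return skb->nh.iph->frag_off & htons(IP_MF | IP_OFFSET);
}

The wrapper sketched earlier would have to return NF_ACCEPT for anything
this test matches before calling ip_vs_in(), so fragmented requests would
simply bypass the virtual service.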

I think that in the design it is probably better to let each part do
its own thing.

Cheers,

Wensong

