I've been deploying DirectAccess and have created a mixed IPv6 and IPv4 infrastructure on the internal side. The external side is IPv4.
In a single-server installation I have inbound access and manage-out access working perfectly...
When I introduce another node for load balancing, the new node doesn't get a different client IPv6 prefix, so the entire load-balanced cluster uses the same client IPv6 prefix. This means I can't route manage-out traffic, or even return traffic, correctly.
This is using IP-HTTPS. The external network scope is 172.28.242.0/24, with a Citrix NetScaler load balancing the inbound traffic. The internal network scope is 172.28.246.0/24 (IPv4) and fd11:1:1:246::/64 (IPv6); the next hop on the IPv6 network is fd11:1:1:246::1, a Cisco ASA, which routes off to the rest of the network quite happily.
If each node in the cluster had a different client IPv6 prefix, then manage-out/return traffic would be very simple to organise.
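To illustrate what I mean, here's a minimal sketch of the return-routing logic that distinct per-node prefixes would make possible. The node names and the two /64 prefixes are made up for the example; they are not values DirectAccess actually assigned in my setup:

```python
from ipaddress import IPv6Address, IPv6Network

# Hypothetical per-node client prefixes -- purely illustrative values,
# not anything DirectAccess hands out by default.
NODE_PREFIXES = {
    "DA-NODE1": IPv6Network("fd11:1:1:1000::/64"),
    "DA-NODE2": IPv6Network("fd11:1:1:2000::/64"),
}

def return_path_node(client_addr: str):
    """Pick which DA node return traffic for a client should go back
    through, by matching the client address against each node's prefix."""
    addr = IPv6Address(client_addr)
    for node, prefix in NODE_PREFIXES.items():
        if addr in prefix:
            return node
    return None  # address isn't in any known client prefix
```

With one prefix per node, the ASA could carry an equivalent static route per /64 and return/manage-out traffic would always land on the right node. With a single shared prefix, as now, there is no address-based way to tell which node owns a given client tunnel.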
Does anyone know how to make each node use a different client IPv6 prefix?