They also make configuration easier to script, since we use the same IP address on all rbridges. On the other hand, we still need to configure IP routing within our VCS fabric.
The Hardware Switch Controller (HSC) component of the overlay-gateway subsystem on VCS uses the Virtual IP for Management and Control plane communications, namely for talking to the NSX-v Controllers.
Therefore, you will need to make sure you have a route in your VCS's Management VRF toward your NSX infrastructure Management network, in particular one covering the IP addresses of your Controller Cluster.
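As a rough sketch, such a route could be added under the Management VRF on each rbridge. The addresses and rbridge ID below are purely illustrative, and the exact syntax varies between NOS releases, so treat this as a starting point and check against the documentation for your version:

```
! Illustrative only: 10.10.10.0/24 stands in for the NSX Management
! network and 192.168.1.1 for the next-hop router in mgmt-vrf.
rbridge-id 1
 vrf mgmt-vrf
  address-family ipv4 unicast
   ip route 10.10.10.0/24 192.168.1.1
```

If your Controllers sit on the same subnet as the Management VRF interface, no extra route is needed; this only matters when they are one or more hops away.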
This tidbit will become relevant later, when you're figuring out why your Management/Control plane isn't working: VCS talks to the NSX Controllers from this IP, so things like OVSDB are also handled on the rbridge currently associated with the Virtual IP.
Next up, we need to deal with the VTEP on our VCS, which is where it gets interesting.
After you’ve deployed and configured your VCS fabric, the first test to conduct is to check if your VCS Virtual IP can reach your Controllers.
Since we're starting to run some commands now, it's time to introduce our lab environment. All devices are also connected to two IP networks: Management (dark gray) and VTEP (black), both with Router elements in them.
Please refer to earlier posts for the meaning of this, especially the Router (Data) one. With the above in mind, let's check whether our VCS fabric's Virtual IP (1.100) can reach the Controllers (1.150–152). The Brocade HW VTEP implementation supports two configurations for the interfaces backing the VTEP. This brings us to a choice we need to make: which way to go?
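A quick reachability check can be run from the VCS CLI. The full addresses below are an assumption (I'm expanding the shorthand 1.100/1.150 into a hypothetical 192.168.1.0/24 Management network), and the `vrf` keyword placement may differ by NOS release:

```
! Run from the rbridge currently holding the Virtual IP, so the
! ping is sourced the same way the HSC's control traffic will be.
ping 192.168.1.150 vrf mgmt-vrf
ping 192.168.1.151 vrf mgmt-vrf
ping 192.168.1.152 vrf mgmt-vrf
```

If these fail, revisit the static route in the Management VRF before troubleshooting anything OVSDB-related; no control channel will come up without basic IP reachability to the Controller Cluster.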
Both ways, according to my contacts in the know, deliver the same functionality, so there are no benefits or drawbacks to help us decide.
They can get ridiculously convoluted as in the case above and, according to the specification, are often too strict anyway.