
16 December 2020

Network infrastructure planning

The modern, interoperable DC - Part 3: Seamless and secure connectivity for edge computing and hybrid clouds (video)

109 minutes reading


In the third part of our “Modern DC” webinar series we focus on connectivity with resources located beyond the local data center, somewhere on the Internet. This includes extending Layer 2 and Layer 3 tunnels to public clouds (such as AWS, Azure and GCP), branch offices, edge computing servers and IoT devices based on single-board computers.

As we explain in the webinar, five mechanisms are required to make all this possible:

  • routing between overlay networks
  • forwarding traffic between the overlay and underlay networks
  • extending VXLAN tunnels over the Internet
  • encrypting traffic forwarded over untrusted networks
  • using Cloud-init to pre-provision VMs on public cloud

By using these functionalities, along with features we presented in the preceding two webinars, you will be able to provide secure and seamless connectivity to remote resources, even if they are located behind a NAT device.

In the first part of the video we discuss various considerations regarding tunneling traffic to public clouds as well as inside of them; using consumer-grade routers to enable EVPN at branch locations; employing small computers such as the Raspberry Pi to act as EVPN-enabled IoT devices; and, finally, how we can interconnect with more traditional VPNs such as L3VPN based on MPLS-over-GRE tunneling.
The second part of the webinar shows a live demo that includes the following topics:

  • Overlay <> Underlay/Internet communication

    • Making use of an L3 VXLAN Gateway, EVPN Type-5 routes and NAT
  • Extending overlay to public cloud:

    • Service migration from a local DC to AWS Cloud using VRRP
    • Service advertisement on AWS EC2 using BGP
    • Accessing AWS native resources (RDS)
    • AWS EC2 deployment using ZTP paradigm and Cloud-init
  • Branch office EVPN:

    • Extending VLANs to interconnect at L2 with different overlay networks:

      • WiFi, SSID ‘cameras’—ONVIF camera access from DC on L2
      • LAN, ‘users’—Internet access forced to go through a DC where a central firewall filters traffic
    • Support for consumer-grade devices that do not natively support EVPN (using MikroTik as an example)

  • Extending overlay to IoT:

    • Sending a command from the local DC to the IoT device to get a reading from a sensor
    • Sending another command to perform an action on a device connected to the IoT board

Finally, in the last part of the presentation we go over a short list of the solution’s benefits and drawbacks and give a summary of what we have learned.


Good evening and welcome to today's webinar. For those who have missed the previous parts: my name is Adam and I work in the smartNIC department, and my name is Jurek and I work in the Professional Services department. Today we will continue extending our EVPN topology and we'll show you how it can be used to interconnect clouds, IoT devices or branch offices. Please note that in the YouTube description there is a Slido link where you can ask questions; those questions will be answered at the end of the webinar. OK, first, a few words about our company. CodiLime was founded in 2011, and now we have about 200 people on board and we are still growing. We are no longer a startup, obviously, but we continue to keep its spirit: a culture of agility, innovation and adaptability. Most of our team is located in Warsaw and Gdansk; however, this year we have been working mostly from home. So, all that said, we frequently work with the modern EVPN-based DC, often with some SDN on top of it. The experience in our team got us thinking: how can we improve this architecture? Can we speed up changes in the DC? Can it be extended, and what can be done better? The previous and today's presentations are all about that. So far, we have shown you how to use BGP auto-discovery with IPv6 connectivity to create a fully automated and scalable DC. Then we used that DC to interconnect different resources such as Kubernetes, VMs and containers, all together, using EVPN and FRR. What was left is the world outside of the DC, meaning public clouds, branch offices and IoT. We'll try to cover those during this webinar. Yes, but first, let's take a short recap and see what we talked about in the first and the second webinar. We have shown that we can use IPv6 link-local addresses, so we do not have to manually configure addressing on the links between the networking devices, or on the connections to servers.
We have also shown that we can use neighbor discovery, so a BGP session can be automatically established even if we do not explicitly configure neighbors. We have used FRR, a routing daemon capable of running BGP, on the servers, and thanks to that we are able to automatically advertise all of the IP addresses that are configured on the servers in our data center. And even though we were using IPv6 addresses on the connections, we are still able to use IPv4 for communication with applications, with services, with whatever we are running in the data center. Also thanks to BGP, we have achieved load balancing through ECMP, and very fast failover thanks to the BFD protocol. Now, the BGP protocol was also required for us to provide EVPN, and this was the point of the second webinar. There we had an IP fabric which provides Layer 3-only connectivity, and we were still able to allow Layer 2 communication between VMs and containers through the use of VXLAN tunnels. However, we did not want to configure VXLAN tunnels manually. We wanted them to be automatically configured and established, and for the reachable MAC addresses and IP addresses to be advertised through some automated means as well. This was the role of EVPN. At the same time we were also able to provide multitenancy. So we are able to separate applications and services from each other, and it is perfectly possible to use VRFs to have several customers using our infrastructure without them being able to communicate with each other, unless we want them to. This way we are able to create a heterogeneous data center in which containers, VMs, physical servers, laptops and other kinds of resources are able to speak with each other, and still are. OK, today we'll try to show what we can do next: how to connect various resources outside of the data center.
To do that we'll create a hub, and we will use that hub along with WireGuard tunnels to attach different resources from the Internet. Of course, we'll also cover how to connect from the inside of the data center, from the VMs, to the Internet, and how to route between those networks via a firewall which knows nothing about EVPN. We'll try to extend the EVPN to IoT devices as well as to branch offices, with layer two or layer three connectivity. OK, so first let's get started with a technical topic, that is, routing between overlay networks. In truth, this is something that we already used in the second webinar during the demonstration; however, we haven't talked about it in detail. Routing between overlay networks is very similar to what we know about routing between VLANs. However, here we have a device that is connected to two or more overlays: it has some interfaces attached to it and it has IP addresses configured, in this example in the green and in the blue overlay. So, if it receives a packet in a VXLAN tunnel that belongs to the green overlay, it will be able to decapsulate it, look at the IP header and see the destination IP, which in this case belongs to the blue overlay. It will then encapsulate the packet again in the other VXLAN tunnel and forward it to the destination server or destination virtual machine. Now, this can be done in software on Linux without much trouble. However, in the case of hardware devices this capability is not available on every data center switch, so remember to check the data sheets to see whether a switch can act as a VXLAN layer three gateway, because this is required for this routing capability. Now, what would happen if a server that is connected to only one of the overlays received a packet destined for the other overlay? Here, for example, server three would have to check its routing table, and it could actually find a route to the destination overlay.
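The L3 gateway behaviour just described (decapsulate, look at the inner destination IP, re-encapsulate into the overlay that contains it) can be sketched in a few lines. The VNI numbers, subnets and overlay names below are made-up illustration values, not taken from the webinar topology.

```python
# Minimal sketch of an L3 VXLAN gateway lookup. All VNIs/subnets are illustrative.
from ipaddress import ip_address, ip_network

# The gateway is attached to two overlays: "green" (VNI 100) and "blue" (VNI 200).
overlay_routes = {
    100: ip_network("10.0.1.0/24"),  # green overlay subnet
    200: ip_network("10.0.2.0/24"),  # blue overlay subnet
}

def route_between_overlays(ingress_vni, inner_dst):
    """Decapsulate, inspect the inner destination IP, and decide whether to
    bridge (same overlay), re-encapsulate (other overlay), or drop."""
    dst = ip_address(inner_dst)
    for vni, subnet in overlay_routes.items():
        if dst in subnet:
            if vni == ingress_vni:
                return ("bridge", vni)       # same overlay: plain L2 forwarding
            return ("re-encapsulate", vni)   # other overlay: L3 gateway function
    return ("drop", None)                    # not locally attached: needs a type-5 route

print(route_between_overlays(100, "10.0.2.25"))  # ('re-encapsulate', 200)
```

A real gateway does this in the kernel datapath rather than per-packet Python, but the decision logic is the same.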
This is because EVPN allows it to advertise not only individual MAC and IP addresses; it can also use type-5 routes, which allow it to advertise whole prefixes. So server one, server two and server four will advertise that they have reachability to this subnetwork, 10.0.2, and this information will be available at server three. So if server three received such a packet, it would simply forward it to one of those servers using a VXLAN tunnel. What you can also see in this example is that the servers have interfaces configured with the same MAC address and the same IP address, every one of them. This allows the resources connected to an overlay network, for example virtual machines, to use the same MAC and IP address as the default gateway. So, for example, if a virtual machine is migrated from one server to another, it can still use the same gateway, without any ARP requests, as soon as it is migrated. OK, and one last thing on this slide is the fact that communication between overlay networks can also be restricted. We do not need to provide connectivity between each and every overlay; just like in VRFs, we can use routing policies to filter some of the routes. We might also use stateless ACLs or some stateful firewall policies on our devices in order to restrict the traffic, provided that the layer three gateway is capable of doing that. And finally, we can also use some advanced configurations to steer the traffic through, for example, a physical firewall or some dedicated VM that acts as a firewall, and in this way do some deeper inspection. All right. So we can route between the overlays. Now, what about communication with the physical network? Well, by default it is not possible. What we need to do is use either a layer two gateway or a layer three gateway.
Now, a layer two gateway is a device that is able to stretch an overlay network onto a VLAN. This is something we have used before as well: it was the situation where we connected a bare-metal server that didn't have any routing daemon running, through leaf switches, to an overlay network. We can take the same concept and extend an overlay, for example, to a firewall, where this firewall might be further connected to the rest of the physical network or the Internet. The firewall, or whatever device we connect to in the underlay, doesn't need to know anything about VXLAN, EVPN and so on. Now, the issue with this L2 gateway is the fact that usually the mapping between an overlay network and a VLAN is one-to-one. So if we want to extend 50 overlay networks to have connectivity with the physical network, we would need 50 VLANs. Basically, this is not a very scalable approach, and in those situations we can use the VXLAN L3 gateway, the thing we have shown on the previous slide. The only difference here is that the L3 gateway will be connected to a physical network through one of its interfaces; it could be multiple interfaces if we wanted to provide redundancy. OK, so looking at this scenario, we want to provide connectivity with the Internet. In order to do that, the L3 gateway will advertise a default route, and this route will be sent to all of the other routers in the overlay (server four, server three and server one) using EVPN type-5 routes. Thanks to that, the servers will know that whenever they need to forward traffic in the direction of the Internet through the default route, they will send it using VXLAN to the spine, and the spine will decapsulate the packet and send it in native format to the underlay network. Now, in the other direction, connectivity from the spine to the overlay network is also based on these type-5 routes.
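Putting the pieces together, next-hop selection across the advertised default route, a type-5 prefix and individual host routes is plain longest-prefix matching. A sketch with illustrative addresses and server names (none of these values come from the actual lab topology):

```python
# Longest-prefix-match sketch: a type-5 /24 advertised by several servers,
# a /32 host route for one VM, and the default route from the L3 gateway.
from ipaddress import ip_address, ip_network

rib = [
    (ip_network("10.0.2.0/24"),  ["server1", "server2", "server4"]),  # type-5 prefix
    (ip_network("10.0.2.10/32"), ["server1"]),                        # VM1 host route
    (ip_network("0.0.0.0/0"),    ["spine"]),                          # default from gateway
]

def lookup(dst):
    """Return the next hops of the most specific matching prefix."""
    dst = ip_address(dst)
    matches = [(net, nh) for net, nh in rib if dst in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.0.2.10"))  # ['server1']: the host route wins, optimized flow
print(lookup("10.0.2.99"))  # ECMP across the servers advertising the /24
print(lookup("8.8.8.8"))    # ['spine']: default route toward the Internet
```

The host-route case is exactly how the "optimized traffic flows" mentioned in the next section come about: a /32 is always more specific than the shared prefix.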
So the servers will advertise information that they are connected to this prefix, but if appropriately configured, they can also advertise individual IP addresses of devices that are connected to them. VM one is connected to server one, so server one will advertise the IP address assigned to this virtual machine. Thanks to that, if the spine device wants to send packets to VM one, it will know that the best route to this virtual machine goes through server one. And this way we can achieve optimized traffic flows. The last thing on the slide is network address translation. If we want to reach the Internet from a private subnetwork, obviously we need to use NAT, and this NAT can be done on the spine device, on the L3 gateway, as long as this router supports it. Or it can be done on a device somewhere along the way to the Internet, for example on a firewall. All right. We can also restrict the connectivity using routing policies, ACLs or a firewall, just as in the scenario where we are routing between overlays. OK, now the third topic that we need to mention before we move on is tunneling traffic through untrusted networks such as the Internet. In theory, this shouldn't be a problem, because VXLAN is based on UDP, and UDP is generally allowed on the Internet without many restrictions. The problem is the size of the packets. The maximum transmission unit (MTU) on the Internet is 1500 bytes. This means that if we have a large frame that is encapsulated inside of a VXLAN tunnel, we need to add the overhead of an IP header, a UDP header and the VXLAN header, and quite often with larger frames the size of the encapsulated packet will be over this maximum. This would be especially true if we are using jumbo frames within our data center locations. So basically, in order to forward such a big packet through the Internet, one of the devices will need to perform fragmentation.
It will need to split the packet into several fragments that fit inside the MTU and forward them in the direction of the destination. This is something that is natively supported by IPv4 and IPv6, and it is quite commonly used, but there is a restriction introduced by the VXLAN standard, which basically says that VTEPs, unfortunately, cannot fragment VXLAN packets. Also, the destination VTEP may silently drop any fragmented VXLAN packets that it receives. The solution here comes from this sentence of the RFC: intermediate routers may fragment encapsulated VXLAN packets. This basically means that if we want the solution to work, we need to perform fragmentation and reassembly on devices somewhere between the VTEPs, somewhere between the VXLAN endpoints. And this is perfectly doable. Now, to have the full picture, we also need to mention that fragmentation in general is not recommended. This is because it requires some extra resources on the devices performing the fragmentation and reassembly. It will also lead to slightly lower throughput: if we have, for example, 10 fragments, then we need to attach an IP header to each of these fragments, whereas if we had only one big packet, we would have one IP header. So we will have extra overhead, which will lead to slightly lower throughput. We might also see additional latency, mainly because of reassembly, where a device needs to wait for all of the fragments of a packet to arrive, and only then can it put the packet back together and send it to the destination. This takes a little bit of time, which might be visible. There are also some more elusive issues that are summarized in RFC 8900, and if you do decide to use this solution, that would be a good read, just to make sure that you do not fall into some corner cases where fragmentation might be problematic. But we do not want to discourage you from the solution.
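The MTU arithmetic behind all of this can be sanity-checked in a few lines. The header sizes are the standard IPv4/UDP/VXLAN ones; the frame sizes in the examples are illustrative.

```python
# Back-of-the-envelope MTU arithmetic for VXLAN over the Internet (IPv4).
OUTER_IP, UDP, VXLAN = 20, 8, 8   # encapsulation overhead in bytes
MTU = 1500                         # typical Internet path MTU

def vxlan_packet_size(inner_frame):
    """Size of the outer IP packet carrying one inner Ethernet frame."""
    return OUTER_IP + UDP + VXLAN + inner_frame

def fragments_needed(inner_frame, mtu=MTU):
    """How many IPv4 fragments an intermediate router must produce."""
    total = vxlan_packet_size(inner_frame)
    if total <= mtu:
        return 1
    # Each fragment carries its own IP header, and fragment payloads
    # must be multiples of 8 bytes (IPv4 fragmentation rule).
    payload_per_frag = (mtu - OUTER_IP) // 8 * 8
    payload = total - OUTER_IP
    return -(-payload // payload_per_frag)   # ceiling division

print(vxlan_packet_size(1514))  # 1550: a full-size frame no longer fits in 1500
print(fragments_needed(1514))   # 2 fragments on the way
print(fragments_needed(9014))   # 7 fragments for a jumbo frame
```

This is also why the 10-fragments example above costs throughput: every extra fragment repeats the 20-byte outer IP header.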
It is actually quite possible to transmit VXLAN packets through the Internet, and fragmentation is very commonly used on the Internet right now. A prime example of that would be IPsec, which also relies on fragmentation to tunnel large packets through the Internet. OK, one last thing on this topic is the fact that VXLAN doesn't in any way obfuscate the transmitted data. The information is tunneled in plain text, and anyone who is able to capture the traffic would be able to read it. So, if we want to secure the traffic, we need to use some form of encryption. Now, in data center scenarios, where we want to encrypt traffic within a data center or between data centers, we might want to use layer one or layer two encryption; this would be possible. However, when we are talking about encryption through the Internet, we need to use layer three encryption or some higher-layer form of tunneling, so we could use, for example, IPsec, we could use WireGuard, we could use OpenVPN, and there are several other solutions that could work here as well. What we want from this layer three or higher encryption is that it is usable over the Internet and does not require specialized hardware: we would like to be able to run it on a normal CPU, without any ASICs. We would also want it to suit a wide range of connectivity scenarios; for example, we want this encrypted tunnel to work over network address translation, which is relevant for IoT and edge computing. And finally, the encrypted tunnel solution needs to be able to handle the fragmentation and reassembly, because usually these devices will be performing these tasks. OK. And with that, we have all the tools required to talk about the further topics, which are the cloud, the branch, the IoTs, and connectivity to them.
OK, let's start with the cloud. Before we start describing our approach, which will be based on AWS, let's summarize why we think it's a good idea to do it this way. I mean, why not use the native cloud resources such as AWS VPN with IPsec and BGP? In theory, they are doing exactly the same thing: they're stretching the data center to the cloud. But they are doing just that and nothing more, and the deeper you get into the networking side of the cloud products, the more issues you will find. Let's summarize the issues that we are aware of right now. For example, in a public cloud there is no layer two working properly, which means you will find proxy ARPs, and you will find that no custom routing between the VMs will work. For example, on GCP you will find that the interface has a /32 IP mask assigned, which means there will be no ARPs at all and all traffic will be routed to the gateway. There will be no multicast, in layer two or layer three. There will be very limited IPv6 support: some clouds do not support it at all, some support it only in selected locations. So IPv6 is also an issue. The native VPN solution can be very pricey; on Azure, for example, it's quite expensive. There are limits on which protocols can be run inside the cloud; for example, GRE tunnels were dropped not long ago on GCP, and there were some issues with them on AWS as well. When you are using the native solution, you will find there is no BFD support, so the failover is pretty slow, up to 90 seconds. You can play with the BGP timers, but that will not get you far. And in the end, each cloud has different issues and different approaches to the network, which means each cloud has to be treated differently. OK, so how can we overcome those issues? Well, we can't peer directly with the cloud using EVPN BGP, and we can't use the cloud as the underlay switching layer; that just does not work.
What we can do is create an additional overlay in the cloud using UDP tunnels, because GRE won't work, and use some kind of control plane, again EVPN, which we know very well, to control them in the cloud. However, tunnels and control planes often require custom images, which is a no-go for us as well. So let's sum up the requirements that we have set. First, we require seamless integration with the existing VMs, so no custom images; we must be using stock images from the cloud, be it Red Hat or Ubuntu. The zero-touch provisioning should be done using cloud-init, as all of those instances support it. We require full network support, from layer two to layer seven, with all the protocols in the middle. We should also have access to the cloud-native resources, such as AWS Lambda or AWS RDS. However, these requirements limit our solution: for now only Linux hosts, though BSD should work as well, with Windows at the end, as it has pretty limited cloud-init support. So let's keep it concentrated on Linux for now, OK? We also require that the gateway in the cloud has access to the rest of the topology using existing tools such as FRR and EVPN, so it can integrate seamlessly with the rest of our topology. OK, so with all the above considerations in mind, and after some R&D, we came up with the following solution. We'll use cloud-init to configure the virtual machine in the cloud; this way we'll avoid modifying the virtual machine image for each deployment. For each VPC we'll create a gateway. The gateway will be built from WireGuard, FRR with EVPN, and a custom zero-touch provisioning mechanism.
I will cover that a little bit later. To connect a newly created virtual machine with the gateway, we'll use a generic encapsulation protocol, which will form a tunnel between the virtual machine and the gateway. So, whenever we create a new virtual machine using the AWS UI, we will add static user data, which is basically four lines of code, static and the same for all the virtual machines in one location. So it's a simple copy and paste. Then, during each boot, the layer two interface will be created by a boot-time unit, and during the first boot only, the zero-touch provisioning will be performed: the virtual machine will call the gateway, and the gateway will create its own tunnel endpoint pointing to the virtual machine. The virtual machine will get its configuration from the gateway, so it will remove its default route, configure a static IP on its physical interface and create the DHCP configuration on the tunnel. Finally, on the gateway we'll periodically launch a script that cleans up endpoints that are no longer in use: we'll ask AWS for the IP addresses in the current VPC, and if we have tunnels to different IP addresses, those tunnels will be removed. We have built that solution, and we have found that the ZTP we created works pretty well on Ubuntu, Red Hat, SUSE and Amazon Linux, so we have covered most of the current distributions. OK. All right, so the next thing we want to talk about, the next connectivity scenario, is connectivity with a branch office using EVPN, and the technology stack consists of the mechanisms we have talked about before. We've got EVPN, we've got VXLAN, and we also have an encrypted tunnel such as WireGuard. For our demo we'll be using WireGuard, and for one of the branches we'll be using IPsec, so both of these technologies are in use.
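Coming back to the gateway clean-up script mentioned a moment ago: its core is a set difference between the gateway's tunnel endpoints and the addresses AWS reports for the VPC. Below is a sketch with stubbed data sources; a real script would query the AWS API for the instance addresses and the local tunnel state instead of using hard-coded lists, and would delete the tunnel interfaces rather than print.

```python
# Sketch of the periodic endpoint clean-up on the cloud gateway.
# Data sources are stubbed with illustrative RFC 1918 addresses.
def stale_endpoints(tunnel_endpoints, vpc_private_ips):
    """Return tunnel endpoints that no longer match any VM in the VPC."""
    return sorted(set(tunnel_endpoints) - set(vpc_private_ips))

tunnels = ["172.31.0.10", "172.31.0.11", "172.31.0.99"]  # tunnels currently on the gateway
vpc_ips = ["172.31.0.10", "172.31.0.11"]                 # addresses AWS reports for the VPC

for ep in stale_endpoints(tunnels, vpc_ips):
    # In the real script: tear down the tunnel interface for this endpoint.
    print(f"removing stale tunnel endpoint {ep}")
```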
If we have this technology stack, we can provide layer two and layer three connectivity between branch offices, and connectivity with headquarters and data centers. We can connect to the cloud, we can connect to IoTs, basically to any devices and resources that also use this technology stack. Now, in the case of branch offices, we can enable it on, for example, enterprise-grade networking devices. So we might have an expensive router to run EVPN and VXLAN, and a separate firewall to perform the encryption. However, in many branches such high-performance devices are not really needed, and in many cases we would do fine with a server that runs these solutions on a CPU, in software; in some cases we might actually do fine with a mini PC with some extra connectivity enabled. But we can go one step further and use consumer-grade routers, such as MikroTik, D-Link or Asus, with an open-source Linux distribution that has drivers for all kinds of interfaces and wireless devices and that also allows us to install software such as FRR and WireGuard. And this is exactly what we did, and we'll show it during the demo, where we installed OpenWrt on a MikroTik router. Now, keep in mind that the connections between the locations, between the branches, use a WireGuard tunnel or some other encryption, and inside of it there is a BGP session running. This means that we can use the load balancing and redundancy features that we presented in the first webinar. So, for example, if we have two connections to the Internet, we can use two encrypted tunnels and load-balance traffic between them; if one of them fails, the traffic will fail over very quickly thanks to BFD. Now, why would we want to stretch layer two to a branch office or between branch offices? Well, to be honest, it is mostly about flexibility. It is true that we can do a lot with layer three only.
But in some scenarios layer two will simplify our topology and our configuration quite a bit. For example, if we would like multicast traffic to be forwarded between branch locations in some overlay network, we can do that: we just stretch layer two. If we want to use services which base their high availability on layer two connectivity, we can again stretch layer two between the branches. If we would like, for example, to group users into a single network and not care which location a user connects to, stretching layer two will also help with that. And finally, we can use VRFs to create all kinds of logical topologies where we can, for example, force some of the traffic to flow through a specific device. Here we do not allow direct connectivity between the blue and green overlays; we force the traffic to go through a physical firewall in the central location and only then to the destination overlay network. So we've got many more possibilities that can be very useful in various connectivity scenarios, and this layer two extension to the branch is something that in many cases is very beneficial. OK, but what if we have a device that does not support EVPN or WireGuard, or worse, a device that cannot be flashed with OpenWrt? Sometimes the device is locked, sometimes the device is lacking features. What can we do then? Well, in most cases that device already supports MPLS plus GRE plus BGP, and we know that FRR supports those protocols as well. We can add IPsec to protect the payload, as the endpoint device will not support WireGuard either. So we have the same solution: on the hub, we redistribute between the MPLS-based layer three VPN and EVPN.
So everything that comes from the legacy device will be redistributed into EVPN with VXLAN encapsulation, and the same happens in the other direction. However, there is one drawback: Linux does not support VPLS, the layer two MPLS VPN, so we won't be able to stretch layer two networks towards legacy devices. We could exchange the Linux hub for a BSD hub, which does support the layer two VPN (VPLS), but that is outside the scope of this webinar. The plus of this solution is that since we do all the redistribution on the hub, there is no need to change the configuration of the different endpoints; everything is done on the hub. The original endpoints still support either only EVPN and VXLAN, or layer 3 VPN and MPLS. We tested the solution and we know it works on Juniper SRX, on MikroTik and on other systems. Now, the last connectivity scenario we want to talk about is connectivity to edge servers and to IoT devices, and the solution here is again the same technology stack, with EVPN, VXLAN and some form of encrypted tunnel. The thing is that these technologies can be supported inside a CPU using only software, and they can run on different CPU architectures. For example, in the case of edge computing servers we can use Intel and AMD CPUs; in the case of IoTs we can use ARM or MIPS CPUs. So it is perfectly possible to run the solution on a single-board computer such as a Raspberry Pi. Now, in the case of IoTs there will obviously be some limitations. A single-board computer such as the Raspberry Pi will be limited by the CPU, which usually isn't very fast, but this will mainly limit the throughput through the encrypted tunnel, and usually IoT devices aren't very talkative, or at least they shouldn't be. A low amount of RAM, in turn, will limit the number of routes that can be processed by the BGP protocol.
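One way to keep the routing table small on such a RAM-limited device is to filter the BGP feed down to the few prefixes the device actually needs. A minimal route-map-style sketch, with made-up prefixes (in practice this would be a prefix list or route map in FRR, not Python):

```python
# Accept a route only if it falls inside one of the prefixes the IoT
# device actually needs to reach. All prefixes are illustrative.
from ipaddress import ip_network

ALLOWED = [ip_network("10.0.5.0/24"), ip_network("10.0.9.8/32")]

def accept(route):
    """Prefix-list style test: keep a route only if it is contained
    in one of the allowed destination prefixes."""
    net = ip_network(route)
    return any(net.subnet_of(allowed) for allowed in ALLOWED)

advertised = ["10.0.5.0/24", "10.0.9.8/32", "10.0.7.0/24", "0.0.0.0/0"]
print([r for r in advertised if accept(r)])  # ['10.0.5.0/24', '10.0.9.8/32']
```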
Now, again, in the case of IoTs, they usually do not have to talk to every possible MAC and IP address in the data center, which means we can use route aggregation and route filtering to send the IoTs, via BGP, information only about those machines, only about those resources, that the IoT needs to work correctly. Obviously, in the case of edge servers we do not have these limitations with CPU and RAM, but it is still a good idea to filter the routes, for better security. All right. There is one more thing here we need to remember, and it is that the encrypted tunnel we are using must support client-to-site VPNs, because the IoTs and the edge servers will be the devices initiating the creation of the tunnel. The tunnel also needs to be able to work from behind network address translation. This is because if, for example, we are using a 3G or 4G modem to connect the IoT device to the Internet, in many cases the ISP will assign a private IP address, and only then will the packets be translated to a public IP address and forwarded to the gateway. So NAT traversal is a must in these kinds of scenarios. Now, there is one more thing on the logical diagram here. It is not directly connected to IoTs or edge computing, but it shows that we can use route manipulation and ACLs in order to restrict connectivity between devices that might be located in the same overlay. Through route manipulation we can, for example, allow IoTs and edge servers to talk to resources in the data center, but prohibit them from talking to each other. This is one use case that might be handy, quite similar to private VLANs on Ethernet switches. OK, now it's time for the demo. Let's talk about the demo agenda. First, the demo will be a little bit longer than usual, because we have a lot to cover. We'll start with a presentation of the topology.
We will describe the communication between the underlay and the overlay, and communication towards the Internet. We will also discuss how to extend the overlay to the public cloud, and we will discuss service migration from the local DC to the AWS cloud using the VRRP protocol. Then we will show how a service can be advertised using BGP. We'll show how resources in the data center can access native AWS resources such as RDS. And at the end of the first part we'll show how an AWS VM can be deployed using the solution described earlier, with ZTP and cloud-init. Right. During the second part of the demo we will show that we can use a MikroTik router with OpenWrt already installed on it. We will have an IP camera wirelessly connecting to this router, and this wireless camera should be reachable over layer two from the data center, so we should be able to use ONVIF multicast discovery to see the IP address of the camera. We will also simulate a LAN user connected to this router who is trying to access the Internet. However, the communication with the Internet will not be direct: the traffic will be forced to travel through the data center, where there is a central firewall with traffic filtering configured on it. And finally we'll show an example with a legacy device, a MikroTik running its original software, that is able to establish an IPsec tunnel with an MPLS-based L3VPN to the hub location. OK. And the last connectivity scenario will be extending the overlay to an IoT device, which in our case is an Orange Pi Zero connected to the Internet through a 4G modem, with an OLED screen attached to it. So, we'll try to get a reading from this Orange Pi, and we'll also try to send a command that displays something on the OLED screen. OK, so this is the topology in detail. I agree it's pretty complicated; we'll split it into parts on the next slides.
But just a quick summary. On the left, we can see the resources that we created in the earlier webinars. So, we have a bare metal server with a virtual machine, with both VRFs, red and green. Then we have the server with containers, also with the two VRFs, red and green; those containers will later be used to access cloud resources. Then we have resources placed in the AWS cloud: we have a hub, we have a gateway, we have a machine with an HTTP service running VRRP as well, and we have a native AWS resource, an RDS database. On the right we have the IoT devices and the branch office devices: there is an OpenWrt device serving a branch office, there is an IoT device, and we have a legacy MikroTik device that will be using MPLS over GRE. At the bottom we have, as before, two leaves and one spine, and we have added an SRX device to allow Internet connectivity. OK, so this is a simplified view. We have one network in the red VRF that is stretched across the data center, AWS, as well as the branch office on the OpenWrt. We have a native network on AWS, which will be used to access RDS, and in the red VRF we also have a MikroTik device with the original software, serving one network which will be advertised using an L3VPN. At the bottom we have VRF green, which is also distributed to the branch office with the help of the OpenWrt. The VRF green is also present in the data center, and in the Orange Pi, which is acting as an IoT device. All right. So on this slide, we have all of the relevant containers, virtual machines, desktops, cameras and other resources that are connected in all of these locations. And in the first connectivity example, what we want is to be able to access the Internet from a virtual machine running in the data center in the red VRF. So, using a default gateway, it will send the packet to the FRR router running on the same host as this virtual machine.
Then the FRR router will check its routing table and see that a default route is reachable through the spine device, through a VXLAN tunnel. Further on, the spine device should also have a default route in its routing table and see that it can forward the packet to the SRX device, to the firewall, which will forward it on to the Internet. Now, this all relies on the fact that the default route is present in all these routing tables along the way, and we do not want to use static routes. So, instead of that, we are going to use EVPN Type 5 routes in order to advertise this default route. This is the thing that we want to show: this advertisement, and then the connectivity, pinging from the VM to the Internet. So, let's start by showing you the screen where we have the overlay scenario. Here we've got the Alpine red, which is the virtual machine. Here we've got the SRX firewall, to which we'll need to reconnect real quick. All right. And on the right we also have the spine device. Now, what we would like to check in the beginning is whether or not we've got this automatic route advertisement configured. So we might start on the SRX device and see what is in the routing table there. Yeah, show route. So basically we are checking the routing table of the firewall, and we can see that it indeed has a static default route, and that is the only static route in this example. Now, the SRX should advertise this route using dynamic routing to the spine device. It has a direct connection to the spine device using one of its interfaces, and we can see that it is indeed advertising the default route into OSPF. OK, so let's take a look at the spine device, whether or not it receives this advertisement. Right. So, what we are going to check is the red VRF routing table, which has the OSPF protocol enabled in it. And indeed there is a route from the OSPF protocol that can be used to reach the Internet. So, everything is fine so far.
Now, the spine device has the responsibility of taking this default route and advertising it further to every router running our overlay, so to all of the FRR servers. This advertisement will be done using the EVPN Type 5 routes that we talked about during the presentation. So, we are looking at all of the routes that are being advertised to the FRR router where the VM is located. Alright. And this is the route that we are interested in: it is a Type 5 route and it is the default 0/0 prefix. So, yes, it does perform the advertisement, which in turn means that we should also be able to see this route on the compute server itself, where the virtual machine is running. All right, so let's see the routing table in the red VRF. Ah, we are inside the virtual machine, so let's try that again; I was inside the virtual machine, but I want to check the route on the host. So, ip route over here, and what we should be able to see is the default route going through this IP address, which is the IP address of a loopback on the spine router. So, everything is in order here. Let's go back into the virtual machine and verify whether or not it is able to ping the Internet. We are going to try to ping the Google DNS server, and we should see that indeed it is able to reach it. And just to be sure, we can also verify whether this traffic is actually flowing through our SRX firewall. So, we are going to view all of the stateful firewall sessions that have the destination prefix of 8.8.4.4. And indeed we do see traffic from this IP address; this is the IP address of the virtual machine. We can actually see that some form of network address translation is being performed here, source network address translation. Usually we would see a public IP address, but in our lab it is translated to another private address and receives a public address somewhere further along the way. But basically we can see that this connectivity is working as expected.
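On the FRR side, originating such a Type-5 default route from the spine's VRF can be sketched roughly like this. The AS number and VRF name are assumptions; the demo's exact configuration was not shown:

```
! Sketch: pick up the OSPF-learned routes in VRF red and advertise them,
! plus a default route, into EVPN as Type-5 prefix routes.
router bgp 65000 vrf red
 address-family ipv4 unicast
  redistribute ospf
 exit-address-family
 address-family l2vpn evpn
  advertise ipv4 unicast
  default-originate ipv4       ! originate 0.0.0.0/0 as an EVPN Type-5 route
 exit-address-family
```

Every FRR server that imports VRF red then installs the default route pointing at the spine's VTEP, which is exactly what we saw in the VM host's routing table.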
OK, so let's move to the next part of the presentation. Now we'll talk about the service migration from the data center to the cloud. We'll cover two examples. The first one is with VRRP: we have an HTTP server running inside a container in the DC, and it runs VRRP with a virtual machine in the cloud, where there is an HTTP server as well. This one is the master, this one is the backup. So from here, we'll request the web page, we'll see it being served, and then we'll start migrating the service to the cloud. Great. First, let's ensure that the VRRP service is running. Yes, it's running. Let's check if the virtual IP address is present. Yes, we can see that it is; this is the virtual IP address. So VRRP is active in the data center, on the container. Now, let's start watching the web page, and we can see that the WWW is being served from the container. Now, let's start watching the logs on the cloud. The cloud is in the backup state, so on the master, let's stop the VRRP service. And we can see that, immediately, the IP address has been moved to the cloud and we are serving the web page from the cloud as well. OK, but before we move to BGP, let's try the opposite direction. It's done; we have moved the service back to the data center. OK, but VRRP is not a state-of-the-art protocol. It is still being used, but a lot of people have moved to BGP-as-a-service solutions. We can do that as well. Let's first ensure there is no BGP running on the cloud. And let's say that we want to serve a web page using an IP address. So we are doing a cURL here, trying to connect to that IP address. OK, so since we are not advertising anything, there is a timeout; there is no communication. But what would happen if we first started BGP and established BGP peering with the gateway on the cloud? OK, the state is established, and we have already received some routes.
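Stepping back to the VRRP example for a second: on Linux, such a master/backup pair is typically built with keepalived. A minimal sketch, with illustrative interface names and addresses (the demo's actual configuration was not shown):

```
# /etc/keepalived/keepalived.conf on the DC container (sketch)
vrrp_instance WEB {
    state MASTER            # the cloud VM would use state BACKUP
    interface eth0          # interface attached to the stretched L2 network
    virtual_router_id 51
    priority 200            # backup side gets a lower priority, e.g. 100
    advert_int 1
    virtual_ipaddress {
        10.0.1.100/24       # the virtual IP the HTTP service is reached on
    }
}
```

This only works across the DC and the cloud because the overlay stretches the Layer 2 segment that VRRP advertisements travel over.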
And now let's advertise the IP address from the cloud toward the gateway, and we can see that the cURL running in the DC is getting responses. What we have done here is run a GoBGP instance, peer with the gateway on the cloud and advertise a loopback route. So, here a request goes via the hub, via the VPN, towards the gateway, and from there towards the AWS resource. OK, so this is BGP as a service. Now let us move to accessing native resources. First, let's take a look at the topology. OK, this one is not being updated. We have an LXC container in the data center, which has a MySQL client, and we have created a serverless database on AWS in two zones. We would like to access that resource from the data center, via the hub, via the gateway of the cloud. So, first, let's check if the instance is up and running. OK, we have a database, we have a cluster; it's fine here. On the container, let's check if we can resolve the database endpoint in DNS. It has two IP addresses in a private range, so it is not going over the Internet. Now let's launch the MySQL client and see if we can get a response. Well, we do. So we ran MySQL, we connected to the resource on the cloud, we did an SQL query, SELECT VERSION(), and we got the response. OK, so we can now access native resources, such as RDS, without much effort. Everything is being taken care of by FRR and our topology. Now, the next part is deploying an AWS VM on the cloud. What we will do is go to the UI and click to create an EC2 instance. The EC2 instance will be deployed with the cloud configuration: it will first create the interface we talked about earlier, the Layer 2 interface, then it will hit the zero-touch provisioning on the gateway. The zero-touch provisioning will create the other endpoint of the tunnel here. Then there will be a network configuration on the virtual machine, which will remove the default route toward the physical interface and add the DHCP configuration toward the VXLAN interface.
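Before moving on: the BGP-as-a-service step just shown can be sketched with GoBGP roughly as follows. The ASNs, addresses and advertised prefix are made-up illustrations, not values from the demo:

```
# Sketch: peer GoBGP with the cloud gateway, then advertise the service IP.
cat > gobgpd.conf <<'EOF'
[global.config]
  as = 65100
  router-id = "10.0.2.10"
[[neighbors]]
  [neighbors.config]
    neighbor-address = "10.0.2.1"   # the gateway on the cloud
    peer-as = 65000
EOF
gobgpd -f gobgpd.conf &

# put the service address on a loopback and inject it into the BGP RIB
ip addr add 192.0.2.80/32 dev lo
gobgp global rib add 192.0.2.80/32 -a ipv4
```

Withdrawing the route (or stopping gobgpd) moves the traffic away again, which is what makes this pattern attractive for service migration.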
So, OK, let's see how it will work out. Now we are in the AWS UI and we can create the instance. To speed things up, we'll use an existing template. Again, this template does nothing special. Let's select Red Hat here, not the default Ubuntu. The template is just predefined: there is an SSH key, of course, and there is a VPC assigned. The most important part is at the bottom. We have the user data here, which pretty much means that we create the Layer 2 interface, we assign a MAC address that is based on the cloud instance ID, just to find it easily later, and then we run the zero-touch provisioning. OK, we launch that instance. OK, we're seeing it's being created; it's in the pending state. Now let's go back to the logs. Here on compute one we have the DHCP server running in the network that is stretched to the cloud, so as soon as the instance boots up, it should ask for a DHCP address. And when it gets the DHCP address, we'll try to SSH to it from the LXC container on a different bare metal server in the DC. Now, it can take a little bit of time, so we just have to wait. OK, we see that we have a DHCP discover; the MAC address ending in 5A28 is the same as the instance ID. So the IP address has been assigned. OK, so let's try to SSH there; the endpoint ends in 28. And we are here. Let's see if we have connectivity and how the network is configured on the Red Hat machine. We can see we have the default eth0 connection, as well as the VXLAN virtual interface that we created during the boot-up scripts. The ip route output shows us the default gateway: the default route is going through our gateway, not through the physical interface. So we should be able to access all DC resources, as well as to go to the Internet through the DC gateway, through the DC firewall. OK, so this sums up the cloud part of the demo. Right, and we have two things left to show here.
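The user-data flow described above can be sketched as a boot script like the one below. Everything here is an assumption for illustration: the MAC derivation, the VNI, the gateway address, and especially the ZTP endpoint, which is hypothetical (the real one was not shown in the demo):

```
#!/bin/bash
# Sketch of the EC2 user-data flow: derive a MAC from the instance ID,
# create the L2 (VXLAN) interface, then call the gateway's ZTP hook.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
MAC="52:54:00:${INSTANCE_ID: -6:2}:${INSTANCE_ID: -4:2}:${INSTANCE_ID: -2:2}"

ip link add vxlan100 type vxlan id 100 dstport 4789 \
    remote 10.0.0.1 dev eth0          # gateway address is an assumption
ip link set vxlan100 address "$MAC"
ip link set vxlan100 up

# hypothetical zero-touch-provisioning call to the gateway
curl -s "http://10.0.0.1:8080/ztp?instance=${INSTANCE_ID}&mac=${MAC}"
```

Deriving the MAC from the instance ID is what let us match the DHCP discover on compute one to the freshly launched instance.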
Now, the first part will be about OpenWrt, where we have an IP camera wirelessly connected to the branch router. This wireless camera receives its IP address from a DHCP server, but the DHCP server is located in our data center. What we will be trying to do is use ONVIF discovery, which requires multicast, to see what the IP address of the camera is. Then we'll also try to see what the camera is viewing, whether or not we can actually communicate with it at Layer 3 without problems. We will also show some commands through which we will see that the MikroTik router is visible in the overlay network and that the routing tables are properly filled out. So let's start this example by logging onto the OpenWrt device. Yes, we are already here. And let's see, really quickly, what its configuration is. First off, we'll check what stations are connected to the Wi-Fi network. So the Wi-Fi still works, even though we've installed different software than the original MikroTik operating system, and there is a single station connected: this is the Wi-Fi camera. All right. Let's check out the CPU information, because we want to verify whether or not this is indeed a MikroTik router. And yes, originally it was MikroTik; now it runs OpenWrt, with a MIPS CPU architecture. What is also interesting is the amount of memory on this device, which is 60 megabytes. Even with not a lot of memory, we are able to run FRR with EVPN and WireGuard encrypted tunneling without problems. OK, so we talked about encryption using WireGuard, so we can verify that indeed we have a WireGuard tunnel established to a location in Amazon; the hub for our VPN, for the encrypted tunnels, is in Amazon. And finally, one last thing.
The MTU configuration on the WireGuard tunnel: we can see that we have set it to nine thousand bytes in order for it to be able to encrypt large packets, which it will then have to fragment for them to be sent through the Internet. OK, and one more thing that we want to check before we try to use multicast is whether or not we are actually using EVPN over here. So, I'm accessing the FRR console and checking whether we have some BGP sessions established. And we can see that we indeed have some sessions for unicast addresses through the WireGuard tunnel, and some iBGP sessions for Layer 2 connectivity, for EVPN, directly through the tunnel to the spine device, which acts as the route reflector. So, everything seems to be in order here. OK. And the first thing that we are going to show, actually, is the connectivity scenario from a user connected to this branch router to the Internet. Now, we do not have a laptop connected to this router, so instead we are going to use an IP address that is configured on one of the interfaces in the green VRF. What we are going to do is try to ping, from the green VRF, from the bridged user interface, a location somewhere on the Internet. And we can see that indeed we are able to reach this IP address. However, remember that this traffic is actually flowing through a firewall. There is a central firewall on which we have security policies configured, and there is a security policy that should block traffic to one specific address. And indeed, this traffic is not flowing. We can verify it on the SRX device just to make sure; we can see the configuration of the security policies over here. And indeed, the first rule will block traffic from this IP address on the OpenWrt device to that Internet destination, but all the other addresses are permitted. OK, so we've got connectivity from the branch to the Internet.
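The branch side of the WireGuard tunnel described above can be sketched like this; the keys, addresses and hub hostname are placeholders, not the demo's values:

```
# /etc/wireguard/wg0.conf on the branch router (sketch)
[Interface]
PrivateKey = <branch-private-key>
Address = 172.16.0.2/24
MTU = 9000                  # large MTU so encapsulated VXLAN frames fit;
                            # the resulting packets get fragmented for the Internet

[Peer]
PublicKey = <hub-public-key>
Endpoint = hub.example.net:51820   # the hub VM in Amazon
AllowedIPs = 172.16.0.0/24
PersistentKeepalive = 25    # keeps the NAT mapping open from behind NAT
```

The PersistentKeepalive line is what makes the client-to-site, NAT-traversal behavior discussed earlier work: the branch initiates the tunnel and keeps the translation state alive.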
Now, let's see if the connectivity between data center resources and the IP camera is also possible. In order to do that, we will enable tcpdump on the OpenWrt device, because we are going to try ONVIF multicast discovery. So we are interested in packets that are either destined to this multicast IP address or directly to the camera. Let's see. OK, there is actually a stream flowing right now, so we'll only be interested in the multicast packets. All right. And let's try from the comp 6 device. This is a desktop that is connected to the data center; it is comp 6, a bare metal server. It does not have FRR routing installed, it doesn't have any extra routing daemons; it is just, you know, a plain bare metal server without anything extra on it. Using this tool we'll send a multicast packet that will try to detect the camera. And indeed, we can see that this multicast communication was possible, because the camera was successfully detected and this is its IP address. All right. So, this works without a problem: we have the multicast capability, the link-local capability, without configuring multicast routing. The next step would be to also check whether or not we are able to see what the camera is looking at. This is just simple connectivity, nothing special about that, but we will actually need it for the next step of the demo, where we present the IoT device. OK, and it works. The camera is looking at the IoT device, at the Orange Pi, which has an OLED screen attached to it. What we will try to do further on is to display something on the OLED, so we will get back to that shortly. OK, but before that, we will cover one more topic in our presentation, which is legacy access using the MikroTik original software. I've got disconnected from the hub; it happens. OK, so first let's go back to the topology and see what we will be doing.
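As a side note, the tcpdump filter used for the ONVIF check above can be reproduced like this. WS-Discovery, which ONVIF relies on, uses the well-known multicast address 239.255.255.250 on UDP port 3702; the bridge interface name is an assumption:

```
# Watch for WS-Discovery (ONVIF) probes arriving over the stretched L2 network
tcpdump -ni br-lan 'dst host 239.255.255.250 and udp port 3702'
```

Because the probe stays link-local within the stretched segment, it reaches the camera with no multicast routing configured anywhere.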
We have a hub here and we have the MikroTik, which runs its original software. There is GRE plus IPsec plus an MPLS Layer 3 VPN, with the help of BGP as well; all of it is configured here, and the hub will be redistributing those routes towards the rest of the topology. OK, so let's go quickly through the configuration of the MikroTik. We have one routing instance, VRF red; it has the same route target as the EVPN side, and that's why the redistribution is possible. Of course, we have the GRE tunnel; the GRE tunnel has its local side configured and has the address of the hub as the endpoint. There is an IPsec secret, which creates the IPsec policy transparently inside RouterOS; this is the pre-shared key. Since we should have a BGP session, we should be receiving routes as well. And we can see we are receiving all the routes from the hub (this is the IP address of the hub) and all the routes from the rest of the EVPN topology. OK. We can do one more thing on the MikroTik device. It has its own IP address assigned, so let's see on the hub how this IP address is visible. And we can see on the hub that we are receiving it via the BGP protocol, and that the endpoint is in the default VRF; however, we have to push label 16, so we can differentiate between VRFs between the MikroTik and the hub. The configuration of the hub is pretty simple here. First, we added the MikroTik eBGP session inside the BGP configuration, and we added that BGP session to the IPv4 VPN family. Inside the VRF, we added the VPN configuration: import/export, as well as the route target and route distinguisher that this requires, and label vpn export to assign the VPN MPLS labels. That is all that is needed. And with that, the redistribution between EVPN and the Layer 3 MPLS VPN is done directly on the router, so all the other endpoints see just a pure EVPN configuration. So, this sums up the legacy part.
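The hub-side FRR configuration just described can be sketched roughly as follows. The AS numbers, neighbor address and RT/RD values are illustrative assumptions; only the command names reflect what was discussed:

```
! Sketch of the hub: eBGP to the MikroTik in the IPv4 VPN family,
! plus VPN import/export and label assignment inside VRF red.
router bgp 65000
 neighbor 192.168.255.2 remote-as 65010    ! the MikroTik, over the GRE tunnel
 address-family ipv4 vpn
  neighbor 192.168.255.2 activate
 exit-address-family
!
router bgp 65000 vrf red
 address-family ipv4 unicast
  label vpn export auto          ! assign the MPLS VPN label
  rd vpn export 65000:1
  rt vpn both 65000:1            ! same route target as the EVPN side
  import vpn
  export vpn
 exit-address-family
```

Because the route target matches the EVPN VRF, the hub leaks the L3VPN routes into the overlay, and the rest of the topology never has to know MPLS is involved.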
And there is one more connectivity scenario that we wanted to talk about, and this is connectivity with IoT devices, with the Orange Pi that is connected to the Internet through a GSM 4G modem. We've got an IP camera looking at it, at its screen, so we will try to issue some commands that display something on it. Now, in order for the command to reach the IoT device, it will have a few hops to travel. We'll send a command from comp 6, which is in the data center in the red VRF, and we've put the IoT device in the green VRF in order to make it a little more complicated. So the packet will travel to FRR, then it will be forwarded to the Layer 3 gateway at the spine; it will go through the firewall, so we can limit the connectivity on the firewall without any problem. Then we've got the spine in the green VRF, FRR (and this is already the FRR on the Orange Pi), and then an IP address configured on its logical interface. So, this is the connectivity scenario, and we'll see in a second whether or not it works as expected. All right. Again, let's start with verifying the configuration of the Orange Pi. We'll take a look at its CPU architecture, and we'll see that it is indeed not Intel, not AMD; it is ARM. OK, the next thing that we might want to check is ip route; this is the main routing table. What we can see here is that the default route to the Internet, the connectivity with the Internet, is through a USB device: this is the USB 4G modem. OK. The next thing to verify is that we are encrypting the traffic and not using unencrypted VXLAN tunnels; and the hub is, again, the Amazon virtual machine. All right. And the last thing, just to make sure, we can check FRR, whether or not it is being used. So, I've accessed its console, and we can see in the BGP sessions that we've indeed got the EVPN address family, and we are using it to have this Layer 2 and Layer 3 connectivity.
OK, so the IP address with which we are trying to connect is located on this logical interface, green L2; we can see its address here. On this IP address we are running a service, a simple HTTP server written in Python, so we can verify that there is indeed a service listening, and we've got a service listening on all of the interfaces in the green VRF. This means that if a packet arrived in some other VRF or in the main routing table, it would not be able to access this service; it is only for packets arriving in the green VRF. And this is actually quite simple to accomplish. It is quite simple to put a service in a VRF, because all we need to do is issue the command ip vrf exec, then specify the name of the VRF, and finally the command that starts up the service. So, nothing complicated over here. OK, so the service is running, the service is ready, so we can try to communicate with the Orange Pi from a desktop computer that is connected to the data center. First off, we can try to check whether or not we can read its temperature sensor. And indeed, we've got a response, so it is working as expected. Now, we might also want to send it some commands. This command will try to display something on the OLED screen, so I will also need the view from the camera. I will issue the command and then switch to the camera, and we can see that indeed something is happening. We issued a command; in our case, this is an OLED screen, but it could be, for example, manipulating some server, opening something up, starting up an engine or things like that, you know, what IoT devices are used for. So it is working as expected. Now, we can also verify one more thing: whether or not we are actually going through a firewall, that we are forcing the traffic to be pushed through the firewall in the central location. OK, right.
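The VRF-bound service described above boils down to a single command; the port number here is an assumption for illustration:

```
# Start a Python HTTP server whose sockets live in VRF "green";
# it only sees packets that arrive in that VRF.
ip vrf exec green python3 -m http.server 8000
```

Any process launched this way inherits the VRF binding, so the same pattern works for an SSH daemon, a sensor-reading script, or anything else you want isolated per VRF.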
In order to do that, I've logged onto the SRX device, and what I will be interested in are the sessions to the Orange Pi. Right now nothing is flowing, but if I issue this command again, we should see that indeed there is a session. We could limit connectivity only to those devices that should be able to talk to the IoT devices, and in the other direction as well: we could protect the IoT or edge computing servers from accessing some of the resources in the data center, in the cloud or in some other locations. OK. And this actually concludes our demo; just a last command issued to the IoT device, and it works as well. OK, so it is time to summarize what we have said here today, and we'll try to make it quick. I will not go through every procedure here, because we have mentioned them during this presentation, but hopefully you were able to see that we can create various logical Layer 2 and Layer 3 topologies with which you can connect resources located in dispersed locations; they are not in a single location, but can be in various parts of the Internet. Now, it is true that we can do a lot with Layer 3, but we can do even more when we also have the option of Layer 2 connectivity, and it might be important in the kinds of scenarios where we are migrating services between a local DC and the cloud. It can be very useful when we have an HA solution that requires a lot of connectivity, and it is quite useful for various scenarios in enterprise networking, where this Layer 2 connectivity can be quite helpful. Now, if we need only Layer 3, then EVPN Type 5 routes can also help us with that. For the traffic encryption, we can use anything that can encapsulate UDP and that supports IP fragmentation and reassembly, and we also need to remember about NAT traversal. And finally, the last point, probably the most important here, is that this solution, the software components, are based on open standards.
So they are interoperable between devices from different vendors; we can also run it on open source software, and it can be used on low-end devices with various CPU architectures. OK, but as we all know, sometimes there are drawbacks. What we have discovered, what we have shown, is that this configuration is pretty powerful; however, it is complex, so sometimes it requires advanced networking knowledge as well. Sometimes the MTU has to be taken care of. Most of the time, path MTU discovery works fine, so nothing is needed; everything is auto-discovered along the way. However, MTU is something that we need to keep in mind. There is no VPLS support in FRR, so there will be no Layer 2 support for legacy MPLS devices; with FRR on Linux, EVPN itself is working just fine. And this solution works mostly with Cloud-init-enabled images, so the support for Windows in the cloud would be pretty limited here. We also encountered a lot of problems; I will try to sum up some of the most important of them. On OpenWrt, the current stable distribution is based on the 4.14 kernel, and this kernel does not yet support the full set of EVPN features: there is no Layer 2 MAC address learning, so the ARP requests are being flooded toward the rest of the topology. This will be fixed with the next release, hopefully this year, so it's a short-term problem. There are some issues with FRR, so we had to stick with an exact branch, an exact patch set of FRR, because the previous version has some issues and the newest one introduced a different issue with the Ethernet segment; so we had to land in the middle. Of course, the guys at FRR are aware of those issues, and they are already fixed in the master branch. And there is also one thing that is pretty obvious but complicates things a lot: each Linux distribution configures its network differently.
Even when we are talking about Linux distributions with the sysconfig-style configuration (Fedora, SUSE, Amazon Linux or Red Hat), each of them configures the network in a different way. Some of them have network managers, some of them have scripts, some of them are based on udev events. All those things complicate matters and create a lot of exceptions in the zero-touch provisioning that need to be taken care of. OK. And with that, we would like to summarize what we have learned, not only in this webinar but also in the previous ones. And it is this: there are open-standard technologies that are widely available, and we can use them to deliver quite an advanced networking solution. Some of the ones we have used were IPv6 link-local addressing for autoconfiguration, neighbor discovery and BGP unnumbered sessions. We have used BFD for very fast failover, ECMP for load balancing, and Cloud-init for automatically configuring new VMs (it can also be used for servers). We used VXLAN for Layer 2 connectivity and VRFs for advanced architectures with Layer 2 and Layer 3 networks; EVPN, because we didn't want to manually configure all of those tunnels and manually fill out the bridging tables; and GRE, MPLS and IPsec for legacy branches. And, you know, we could continue with this list a little bit more, but hopefully we were able to show that it is indeed possible and quite beneficial. It is also beneficial to run a routing daemon such as FRR on a server, because all of that would not be possible without it, and we are very happy that software such as FRR is out there at people's disposal. Now, in today's webinar, we also showed that EVPN with VXLAN tunneling can do a great job of providing Layer 2 and Layer 3 connectivity between remote locations.
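To give one concrete flavor of that list, a BGP unnumbered session with BFD in FRR takes just a couple of lines; the interface name and AS number here are illustrative:

```
! Sketch: BGP unnumbered (IPv6 link-local) peering with BFD
router bgp 65000
 neighbor swp1 interface remote-as external   ! no interface IP addressing needed
 neighbor swp1 bfd                            ! sub-second failure detection
```

No per-link addressing plan is required, which is a big part of why the fabric configuration stays templatable.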
We also showed that even though public cloud networking is limited in most cases, we can still achieve full connectivity between virtual machines inside the cloud, and also between virtual machines in the cloud and in local data centers, with the use of overlays. So we hope that you have enjoyed this webinar series; we have certainly enjoyed creating it. And now let's see if there are any questions. OK, so far we have two questions; let's go through them one by one. The first one is: have you tried different clouds, such as Azure or GCP? Not in the demo, but we are working on them during R&D. There are issues; as I said before, Azure has a different network model, and GCP uses /32 prefixes. Those issues create a bit more work, but these clouds are working just fine. And VXLAN or GENEVE is typically just UDP traffic for the underlying cloud, so it should work without any issues. However, for lack of time, we didn't manage to add it to the demo. All right, so let's see the second question: would an Arduino board also be able to run EVPN? Well, we used a quite powerful IoT device, an Orange Pi Zero, to run EVPN and WireGuard. In the case of Arduino boards, some of them have really little RAM, like eight kilobytes or so; with that little RAM, we would not be able to do it, and the CPUs are far less powerful as well. So, in these kinds of cases, what we would recommend is to use, for example, an OpenWrt router as the gateway for them. The OpenWrt router would be able to stretch Layer 2 connectivity to an overlay network, just like we have seen it happen with leaf switches, which stretch the connectivity to a bare metal server. The concept is exactly the same: every device that cannot support EVPN directly can be pulled into an overlay using a Layer 2 gateway. And we could also use a USB device to extend this to IoT devices that do not directly talk IP.
So, for example, some IoT technologies such as LoRa, Zigbee, Z-Wave and others that have different kinds of connectivity requirements. We could still, you know, talk with them and read their sensors through a device such as a Raspberry Pi, for example. And the last question so far is about gateway redundancy in the cloud. OK, that's a big topic. What would happen if the gateway failed and the machines in the cloud lost connectivity? Of course, we can spawn a different machine, which will take the IP of the previous one, restore the tunnels and continue working. However, this would require calling the API of the cloud, which is an issue, because the API can be slow. With AWS we are lucky: such a readdressing to the second gateway would take seconds; on a different cloud, like Azure, it can take minutes. So, this is a cloud-native solution, but it won't work well. I think the best solution would be to create a second tunnel on the machine, to keep things simple, and add a bridge connecting those tunnels. So we would still have a simple setup, one IP address with two paths, to both of the gateways on the clouds. And we can keep running VRRP between those gateways: since we have full Layer 2 connectivity inside the overlay, we can run VRRP between the gateways, so if one fails, the other will take over. We can make VRRP more robust with the help of BFD, so the failover will be even faster. Of course, we could do some proprietary solution with active/standby tunnels, but this wouldn't be as open as we would like it to be. OK, and we've got one more question: this setup is a pretty complicated project; did you have it all planned from day zero, or did you have to adapt as you were going along? Well, we have worked with EVPN quite a lot, so we knew the capabilities that it has. However, there were some issues, there were some problems, there were, you know, some dead-end alleys that we went into and had to back out from.
So no, it wasn't as if we had everything planned out from day zero. We actually needed to change some of the approaches along the way, because the solution does consist of many different parts that need to work together correctly, and making them work as we wanted was probably the most important and most difficult part.

The next question is: does this solution support EVPN over MPLS? Well, EVPN is a protocol that can run over MPLS as well as over VXLAN tunnels, and there are drafts for other tunneling protocols as well. However, in our solution we rely heavily on the FRR software routing daemon, and in this scenario VXLAN is supported and works very well. We didn't even try using MPLS; I'm actually not sure whether FRR supports it directly or not. MPLS is a bit more complicated, so our recommendation would be: if you do not have advanced connectivity requirements that would make MPLS worthwhile, stick to VXLAN. If you are talking about data centers, VXLAN is usually enough. If, on the other hand, you wanted to use this connectivity across an ISP network and you are the ISP with full control over MPLS, then you might play with it as well. It should work without many problems.

The next question is: does multicast support require EVPN's native multicast route types? Within a single stretched Layer 2 network there is no problem; as we have seen, multicast is forwarded correctly without any issues. Between networks, we can use multicast protocols that are agnostic to EVPN, so multicast will be terminated on a server and propagated further. So I think it would work without native EVPN support. There are some drafts for EVPN that do provide connectivity for multicast traffic; however, in this demo we did not explore them, so to be honest, we are not sure whether FRR could handle them.
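For reference, enabling EVPN over VXLAN in FRR comes down to activating the L2VPN EVPN address family in BGP. This is a rough sketch; the AS number and neighbor address are placeholders, not values from the demo.

```
router bgp 65001
 neighbor 192.0.2.1 remote-as 65001
 !
 address-family l2vpn evpn
  neighbor 192.0.2.1 activate
  advertise-all-vni
 exit-address-family
```

With `advertise-all-vni`, FRR announces every locally configured VXLAN VNI as EVPN routes, and the kernel VXLAN interfaces do the forwarding.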
If it does, I don't think it supports Type 6 or Type 7 or other such routes; I believe it currently stops at Type 5. So we have prefix advertisement, but there is no support, at least as far as I know, for the drafts that cover multicast routing. But as Adam said, in the case of stretched Layer 2 networks, we can support multicast directly through Layer 2 connectivity.

And I think that's it when it comes to the questions. If anyone thinks of a question that is important to them after the webinar, you can contact us as well and we'll try to help.

Oh, there's one more question: what about a controller for all of that? Not yet. It would be nice to have, but one important thing is that FRR uses plain-text configuration. We are trying to keep the solution as open as possible, so we can use plain and simple templating to do a lot of the automation here. As we showed in the first webinar, with a simple template we can populate endpoints, or the routers in the middle, with FRR configuration, and the rest happens automatically as FRR and EVPN take over. We can limit route advertisement with extra configuration as well; again, it's a simple text file, so it can be easily generated and controlled from one point. So we didn't even try to build a controller; it was not part of this webinar series. However, it would be possible if someone wanted to do it, and it shouldn't even be that difficult.

OK, so let's wait a few more seconds for any more questions. If there are none, then in the following days we'll post the configuration, the commands, the demo and the presentation to GitHub, in our CodiLime repository. It just needs a few extra descriptions to provide clean code without any internal comments. And with that, thank you very much for taking part.
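Because FRR configuration is plain text, the templating mentioned above needs nothing beyond a language's standard library. Here is a minimal sketch in Python; the config skeleton, function name and parameters are illustrative, not our actual templates.

```python
from string import Template

# Skeleton of a per-endpoint FRR BGP/EVPN configuration.
# $asn and $neighbor are filled in per device.
FRR_TEMPLATE = Template("""\
router bgp $asn
 neighbor $neighbor remote-as $asn
 address-family l2vpn evpn
  neighbor $neighbor activate
  advertise-all-vni
 exit-address-family
""")

def render_frr_config(asn: int, neighbor: str) -> str:
    """Render the FRR configuration for one endpoint."""
    return FRR_TEMPLATE.substitute(asn=asn, neighbor=neighbor)

if __name__ == "__main__":
    # Generate the config for one hypothetical endpoint
    print(render_frr_config(65001, "192.0.2.1"))
```

The rendered text can then be pushed to each endpoint from a single control point, and FRR plus EVPN take care of the rest.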
Oh, stop, one more question: what about scaling cloud instance bandwidth limits, as opposed to the AWS/GCP provided and integrated VPC devices (inet-gw, NAT-GW)? Yes, the native devices, the internet gateway or NAT gateway, scale pretty well, and we can do a local breakout; that can be done easily, without any issues, and it will work. However, if you want to direct all the traffic back to the DC, it's not so great, because on AWS there is a documented limitation: a VPN gateway can do only 1.25 gigabits per second, so it's not that fast. We can go faster using a Transit Gateway with ECMP, and of course we can also spread out our own gateways with WireGuard and use ECMP across them as well. On Azure there is a similar limitation: different gateway tiers have different throughput and pricing limits, and you can go faster, but beyond, I think, two gigabits it gets very, very expensive, and it will be cheaper to deploy your own instance there. On GCP, I don't think there is such a limitation on the VPN gateway, but again, we can spread the load over more instances and achieve the same result at a pretty similar cost in the end.

OK, so again, thank you very much. If you do have further questions, this is our contact email, so you can send them there. And if you are interested in learning more about these solutions, we are also open to having a conversation about that. So thank you very much, and have a pleasant evening. We hope you enjoyed this presentation. Thank you. Thank you.
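Spreading the load over several self-managed gateways, as suggested above, can be as simple as running one WireGuard tunnel per gateway instance. A minimal peer configuration sketch follows; the keys, addresses and allowed prefix are placeholders.

```
[Interface]
PrivateKey = <cloud-gateway-private-key>
ListenPort = 51820

[Peer]
PublicKey = <dc-gateway-public-key>
AllowedIPs = 10.100.0.0/16        # prefixes reachable via the DC
Endpoint = 198.51.100.20:51820    # DC gateway's public address
PersistentKeepalive = 25          # keeps NAT bindings open
```

Each additional gateway instance runs its own tunnel, and equal-cost routes across them spread the traffic without any per-tunnel cloud bandwidth cap.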


Jerzy Kaczmarski

Senior Network Engineer

Adam Kułagowski

Principal Network Engineer