Mistception: Fun with Apstra, Mist, EVE-NG, and more!

You'd think that since most of my work articles are about Juniper Mist, I spend ALL of my work time on wireless. But that's not the case. I do get to focus mostly on a single "genre" of technology, but that genre is "network infrastructure," so it's a bit of a vast field. So, for something a little different from what I've written about in the past, this article is going to be about datacenter networking! But don't fret, my Marvis-loving readers. Mist is still in here! But wait, you say. Datacenter networks and access networks mingling?! Surely it can't be so! But verily I say unto you, my dear readers: this article contains not only Mist for access networks and datacenter networks but also WAN ROUTING! I know, you gasp in disbelief. But it's true!

Ok, I might have gotten a little out of control in that paragraph. So, the TLDR version is that this article is about using EVE-NG to build a lab that simulates a two-datacenter network with a datacenter interconnect (DCI) and external connectivity. I did NOT develop this lab by myself (or this would be a whole series I might get published sometime in 2030). In fact, the Apstra piece of this lab was based entirely on Colin Doyle's 5-Minute Junos video series on building an Apstra lab in EVE-NG. You can find those on YouTube (like and subscribe to the channel, as they say!) as well as a discussion thread for each video on the Juniper Elevate Community (which is a fantastic community to join if you do anything with Juniper). But my goal was not only to get an Apstra lab set up, but also to see how the new Marvis for Datacenter and Juniper Routing Assurance worked. And I wanted to use the latest virtual Junos images to replace some of the "legacy" nodes. Overall, I was able to accomplish that pretty handily. This will be a very long article, but hopefully, you can stick with it (or skim to the parts you really want). Let's get started!

Components

There are several pieces I used to get this all working together. This is the short list version. But when you get to the detail section below, I'll go into a bit more detail.

  1. EVE-NG Professional - I used Release 6.2.0-3 and Release 6.2.0-4 (more on that later)
    I've had an EVE-NG server for quite a few years. I personally think the professional license is well worth the money; I know it's less attractive for some, so there is also a free community version. I use EVE for lab learning (like this project) and client environment simulation. As far as setup goes, I run a bare metal server on an old Dell R710 with dual 6-core/12-thread processors and 148GB of RAM. You'll also see that I used an EVE satellite node (a pro feature) that I ran as a VM on a similarly configured ESXi node. In total, I had 44 vCPUs and 260GB of RAM for EVE to use. I could probably get away with a single bare metal host if I upgraded to something like an R730, but that's money I haven't gotten approved by accounting, so we make do for now!
  2. vJunos-switch images - I used version 23.2R1.14
    Previously, Juniper made a vQFX image available, but lots of folks were asking for a virtual EX. For those not familiar with Juniper switch lines, the QFX is more datacenter-focused, and the EX is more enterprise-focused. The vJunos release was meant to "replace" the vQFX and provide EX functionality. It's been pretty good so far, though it's still quirky sometimes.
  3. vJunos-router images - I used version 23.2R1.15
    Similar to the vJunos switch, this is the "successor" to the vMX. The vMX was split into two separate nodes (the control-plane VCP and the forwarding/data-plane VFP), while the vJunos router is a single combined node.
  4. vSRX images (registration required) - I used version 23.2R2.21
    Where vJunos images are meant for lab and non-production use, the vSRX is a production-ready virtual appliance. Like other firewall manufacturers, this can be deployed in place of a physical appliance (with caveats). They work well for labbing, too! The downloads do require registration with Juniper (but it's free).
  5. Apstra server - I used version 4.2.2-2
    Apstra is the new hotness in the Juniper Datacenter (DC) solution. It is a multivendor, intent-based datacenter automation solution. There’s a lot of discussion that goes into what Apstra is that I’m not going to deep-dive on here. But suffice it to say this is a big part of the “secret sauce” that Juniper is bringing to bear in the datacenter environments. If you're reading this, and you're not familiar with what Apstra is, keep going, and you will be! This was the primary solution I wanted to become more familiar with in the whole exercise.
  6. Apstra Cloud Services Container
    On the same download page as the Apstra server, under "Application Tools," you'll find the "Apstra Cloud Services Edge Image." This container ties the Apstra server above together with "Juniper Apstra Cloud Services" (AKA Mist for the Datacenter).
  7. A Docker server
    The Apstra cloud services container needs to run on a docker instance. I did a little work trying to get it running directly in EVE-NG using a container in a container node, but after a couple of hours of trying to get an easily reusable node (without it being tied to a specific cloud services organization), I ended up just running it on my lab docker instance. I'm sure it's possible to get it to work directly in EVE; I just didn't get it going. I will be returning to that at some point.
  8. Juniper Mist Organization - registration required
    This is the "main" Juniper Mist platform. There are multiple geographic segments here (global, EMEA, APAC), and behind each geographic segment, there are multiple instances. This allows the cloud instances to cater to specific regulatory locals. This documentation page discusses the instances and is updated with guidance for deciding on the specific instance to build your organization in. Migration between instances is possible but requires support assistance. Anyone can register for an account, create an organization, and get a free trial license (90 days at the time I'm writing this).
  9. Juniper Apstra Cloud Services Organization - registration required
    Currently, Apstra Cloud Services is a separate cloud service instance. It uses a separate account from the Production Juniper Mist instances for login. To set up this lab, you will need to create an account on the DC instance. I recommend linking it to the same email address as your "main" Mist organization.
  10. Juniper Routing Assurance Organization - registration required
    Similar to the Apstra instance, routing assurance is currently a separate cloud instance. I did find that it seems to *somewhat* be linked to the DC instance (it seemed like if I changed my password in one, it updated the other). But I didn't spend any time confirming this. So, for now, I recommend using the same credentials as your DC instance. You might try simply logging into the routing instance if you're setting things up in the order I have listed here. I will update this if I get more information about it.

High-Level Process

In the next section, I go into detail, but if you want a bit of a challenge, here's the high-level process I used to get things up and running. If you're a pro, you can speed-run through this and be ready to go. If you run into issues, you can use the further sections (and linked resources) to see what might be going on. I like doing this for labs in training classes as a challenge, so here's your chance!

  1. Get your EVE server running and the node images added
  2. Build your EVE lab file. I used something similar to the following
    • 2 datacenters - each with at least two spines, two leaves, two border leaves, two to three servers (I used the built-in EVE Linux Ubuntu mate images), and one router
    • 1 WAN sim vSRX connected to an internet-accessible EVE cloud network (I used the Management network or Cloud0 for this)
    • An Apstra server connected to an Out Of Band (OOB) network cloud that all of the switch management interfaces are connected to
  3. Get all of your nodes booted and ready for Apstra to connect
  4. Get your lab Apstra server configured with 2 DC blueprints that are connected with an over-the-top (OTT) interconnect (DCI)
  5. Get your lab external routers configured for Internet access via the lab WAN sim SRX
  6. Get your cloud services container instantiated and connected to your Apstra instance and the Apstra cloud services
  7. Get your Apstra cloud services connected to the Mist cloud
  8. Get your lab external routers onboarded into the Routing Assurance cloud
  9. Break various things in the lab and see what the Mist cloud instances show you and how you can resolve them
  10. Have a beverage of your choice; you deserve it!

The Nitty Gritty

If you've been reading straight through, now is a good time for a bio-break, beverage refill, and a stretch. This process builds on the short version I gave above and goes into some more depth. The first 6 steps are essentially a follow-along version of Colin’s 5-Minute Junos series. His videos are fantastic. He not only gives you the steps, but some gotchas, why he does things, etc. You can download his documentation from the Elevate community threads. So rather than me reinventing the wheel, those steps will link to the elevate thread and just give some of my commentary.

Step 1 - EVE-NG Server Setup

There are lots of people who are MUCH better at EVE-NG and Juniper than I am. My favorite resource for this is Christian Scholz. He is a Juniper Ambassador, a fellow MistFit, and a Smarthome master. He has also literally "taught the class" multiple times on working with Juniper nodes in EVE-NG. You can also go straight to the EVE Documentation. It’s solid and well-written, with screenshots to get you up and running.

If you don't have EVE-NG running already, here are my recommendations. First, go bare metal if you can. It's a great performance boost. But if you don't have a spare server lying around, don't worry. I ran EVE as a VM for a long time. I even built out a 3-node EVE cluster with all VMs for some of my bigger labs. When I got my newest server (a fancy Dell R720), I decided to convert one of my R710 VMware nodes, which was basically dedicated to EVE anyway, into a bare metal EVE node. Second, get the pro license if you can. I find it's well worth it to be able to connect/disconnect links while nodes are booted, but there are a ton of other features I like using when I'm labbing. Being able to inject jitter, latency, and loss on a link when you're labbing is awesome. Especially if you're doing probes on an SRX or something. Third, give it all the resources you can. You'll find ways to use them, don't worry!

Beware: this lab EATS resources. RAM especially, but also CPU. I ended up splitting it across my physical box and a VM satellite to spread the resources a bit. I noticed a little bit of weirdness when I had the RAM and CPU maxed out on just my bare metal instance.

After your EVE server is up and running, it’s time to get the images. Again, the EVE documentation is golden here. Follow the instructions carefully; I have yet to have an issue getting an image loaded when I do. I’ve had nodes not boot, but that’s a completely different issue.

Step 2 – Build your EVE lab file

5-Minute Junos video reference: Video 2

Now that your different nodes are able to be added, the fun comes with building out your lab file. You can download my lab file if you want a shortcut. It uses the nodes I linked to in the components section, which, if you loaded them too, should make it pretty simple to just use mine. Don't trust me? No problem, you can download the EVE lab file Colin used from the elevate thread above. There are some differences between his and mine, though, so keep an eye out for those. Or if you want to do it all yourself, go for it!

My goal was to keep it as “simple” as I could while maintaining a similar environment to what Colin had. You have some freedom here to build something you want to play with more. Ultimately, you’ll want these major pieces:

    • 2 Datacenters with at least:
      • two spines – vJunos switch
      • two leaves – vJunos switch
      • two border leaves – vJunos switch
      • two to three servers – EVE built-in Linux Ubuntu mate
      • one WAN edge router – vJunos router
    • 1 WAN sim vSRX connected to an internet-accessible EVE cloud network (I used the Management network or Cloud0 for this)
    • Apstra server node connected to an Out Of Band (OOB) network cloud that all of the switch management interfaces are connected to.

You’ll notice that in both Colin's and my setups, a pair of SRX firewalls are connected to each border leaf. He didn’t end up using them in the video series, but they can be used for data center segmentation. You can skip those if you’d like. I have them because I’ll be playing with them some more in some future labs I want to do.

I do recommend following Colin's instructions and setting a base MAC address on your nodes when you add them. Especially if you have the OOB network piped to a “real” network, this means you don’t need to configure static IPs on the OOB interfaces; you can simply use DHCP with reservations to make things easier. It’s a nice tweak of Colin’s that I really liked, and it made things simple when I had to rebuild some nodes (thanks to long power outages!).
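Since the lab ends up with a couple of dozen switches and routers, I found it handy to script out the base MACs rather than typing them by hand. Here's a minimal sketch in Python; the prefix, starting offset, and node names are my own illustrative choices, not EVE's defaults or anything from Colin's lab:

```python
# Generate sequential base MACs for EVE nodes so the matching DHCP
# reservations on the OOB network stay predictable.
# NOTE: the "50:00:00:00:00" prefix is an arbitrary example value.

def base_macs(count, prefix="50:00:00:00:00", start=1):
    """Return `count` MAC strings, incrementing the last octet from `start`."""
    return [f"{prefix}:{start + i:02x}" for i in range(count)]

# Pair each node with its MAC (node names are hypothetical):
for node, mac in zip(["dc1-spine1", "dc1-spine2", "dc1-leaf1"], base_macs(3)):
    print(node, mac)
```

From there, it's a quick copy/paste into the EVE node settings and into your DHCP server's reservation list.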

Once you have all of the nodes laid out, then you get to begin connecting everything. First, I did the mgmt interfaces for the switches. As Colin suggested in the videos, I had this cloud piped through to a separate OOB network in my lab so I could connect directly to my Apstra host in this lab. It’s not strictly necessary, but it did end up being even more useful when I ran into some fun on the cloud integration. You could use the EVE management cloud (Cloud0), but I liked the thought of having that separate and not “looping” things around. You also can just use a cloud without piping it out into your lab. But then you will want/need to add another desktop node in the lab so you have a box that you can use to interact with Apstra.

Next, I started cabling the switches. You can see that mine is similar to Colin’s but has some slight variations (I like to be different). With the switches connected, you can move to the hosts. In Colin’s labs, he put an SRX in front of the hosts so he didn’t have to configure LACP on them. I decided to go for it (fewer resources!) on the built-in hosts, so I went ahead and connected them directly to the leaf nodes. It was even mostly stable!

With all of your nodes connected, you’re almost ready to go! But one note first. I noticed that the EVE templates for the vJunos switches used 16GB of RAM per node. This was way more than the Juniper-recommended amount I had seen. I asked Christian about it, and he said they had run into some issues with the new images when they used less memory, so they found the increased memory helped things run stably. Since I was running tight on resources on my EVE server with everything booted, I went ahead and dropped the RAM down to 5GB per switch node. I didn’t run into any real issues during my labs, but keep in mind that this *could* be problematic. This is a strong YMMV item! If you’ve got the resources on your EVE box, go with more, but if you’re on ancient hardware like me, you may need to drop things down. And it’s easier to do it before you start.

Step 3 – Boot your nodes and set up a base configuration

5-Minute Junos video reference: Video 2

While this may seem like a simple thing, EVE *can* have issues if you just tell everything to start all at once. So I recommend booting 3-4 nodes, letting them get most of the way through booting up, then booting the next few. This is one reason I like to run EVE locally versus in the cloud: I can simply leave everything up all the time, so I don’t have to go through this process frequently. You can set boot delays and such to help with this, but I’ve had mixed results with that in the past. And some of these nodes can take a while to boot. So this is a good time to take a rest break and grab a sandwich between booting sets of nodes.

Once the nodes are booted, there is a little bit of “pre” work you’ll want to do before you jump into things. Colin has his steps in his notes. I ended up doing the first node, then grabbing the set-command config lines with the encrypted passwords so I could do a single copy/paste of all the config on the remaining nodes after deleting their existing config. *In theory* you could probably do this in a more automated fashion through the use of EVE startup configs and the like. But I just did the copy/pasta move for this setup. Maybe I’ll do an update post later for a more automated method.
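For reference, my clean base config boiled down to a handful of set commands along these lines. This is an illustrative sketch, not Colin's exact config: substitute your own hostname and password hash, and note that the management interface name can vary between vJunos images, so adjust that line to match what your node shows:

```
set system host-name dc1-spine1
set system root-authentication encrypted-password "$6$<your-hash>"
set system services ssh root-login allow
set system services netconf ssh
set interfaces fxp0 unit 0 family inet dhcp
```

Apstra wants SSH and NETCONF reachable over the OOB network, and with the base-MAC/DHCP-reservation trick from step 2, that DHCP line is all the addressing you need.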

For the servers, since I was tight on resources, I decided to try to get LACP working. Since I was just using the built-in EVE nodes, they had a GUI (I know, a GUI on Linux? WTF?). I went to the networking pane, deleted the existing interfaces, added a bond interface, set the mode to 802.3ad (for LACP), and then added the two interfaces to the bond. It seemed to work pretty well and stayed stable, so give it a go, or use Colin's method of fronting them with a vSRX; both work.
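If you'd rather swap the GUI nodes for a plain Ubuntu server image that uses netplan, the same bond looks roughly like this. The interface names and addressing here are placeholders for my lab, not anything from Colin's setup:

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    ens4: {}
    ens5: {}
  bonds:
    bond0:
      interfaces: [ens4, ens5]
      parameters:
        mode: 802.3ad          # LACP, matching the leaf-side ESI-LAG
        transmit-hash-policy: layer3+4
      addresses: [10.0.1.11/24]
```

Drop that into a file under /etc/netplan/ and run netplan apply; the bond should come up once the leaf side is configured for LACP.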

With your nodes booted and a clean base configuration applied, you’re ready to actually start working in the lab!

Step 4 – Build and deploy your blueprints

5-Minute Junos video reference: Videos 3, 4, 5, 6, 7, 8, 9, 11

Ok, so this is one of those steps that will take you more time than everything else; it covers the majority of the 5-Minute Junos video series. You’ll run through almost all of the rest of the videos. In the series, Colin does a direct-connect DCI in addition to the OTT and then reverts. It’s a great exercise, so definitely run through that too. I also recommend running through the Day 2 operations video. If you’re already familiar with things, you can always skip it, but I’m a big fan of reinforcing what I’ve learned, so I went through it before moving on to the “extra” pieces.

As I mentioned before, I didn’t want to use the older vMX nodes. So, where the 5-minute Junos series has vMX VCP and VFP nodes, you can simply drop in a vJunos router node to replace them. Everything Colin does in the videos is directly transferable.

Step 5 – Connect your lab to “the internet” via the WAN SIM

5-Minute Junos video reference: Video 10

I feel like "the penultimate video" deserves its own step. And it makes this a 10-step process! We’re going to need this connectivity for some of the cloud integrations. Not all of them, depending on some of the choices you made in step 2, but it is needed with my setup. You could bypass this somewhat by making other changes, but if I listed out all the potential options, you’d never finish reading…

Insider Note: I was able to talk to someone from the Apstra team. One of the topics I discussed with him was that it would be nice if Apstra had the ability to configure the routing side of things. With the Mist-ification of the various Juniper product lines, I’m wondering how much of this might land in Apstra and how much you might see from, say, Routing Assurance or some integration between the two. There are definitely discussions to be had around both approaches, and probably lots of arguments between various camps about what should configure a router. Some will say that the router should be configured by a routing-specific management system designed for it, but I can see use cases in simpler deployments where being able to deploy configuration from within Apstra would be really helpful. This also touches on a point I have made with the Mist side of the house for the existing Wireless, Wired, and WAN modules. I’m *hoping* that we start to see some larger cross-cutting integrations between the different modules. So rather than having to configure a network (VLAN) for a wireless network, then on switches, then on the WAN devices (and now datacenter and routers), we would be able to create it once, make it a variable, and then use it in various places. This has been getting better, so I'm hoping more things continue down this path.

Step 6 – Setup your Apstra Cloud Services Container

We've made it through the 5-Minute Junos series. Now we get into why this is a long blog post, not just me pointing you at the series with a couple of notes. We’re halfway through the steps but have the majority of the work done!

This is also where I started spending time trying to make things more easily reproducible and ran into some fun. You’ll notice in my lab diagram that I had a second server icon for this connector that isn’t booted. That’s because I couldn’t get the container to run cleanly inside of the lab itself. I tried the different container options inside of EVE and spent more time than I wanted trying to get it running. So, I ended up just dropping it on one of the Docker instances I already have running in my lab. That worked since my OOB network was reachable from within my main lab network. If you didn’t choose to do that in step 2, you may run into some of the same difficulties. A way around this would be to set up a full Ubuntu node running Docker inside the lab, so you have more control over it. While it’s more of a hassle (and takes more resources), it’s doable.

The other part of the fun was that I was hoping to create a custom node I could easily reuse later if I wanted to just drop a connector node into a lab. Since I ran into challenges with the built-in nodes, I tossed this out for now. Like other items, I may follow up on this in the future. This is partially because the container instantiates and onboards itself into the Apstra Cloud on first boot, so it’s a little more challenging to create a drop-in prebuilt node type where you can easily change the Apstra Cloud org. And since I’m not a fan of embedding an API key in the lab, we’re back to just dropping it on my lab Docker instance as a one-off setup for now.

With all of that said, the setup here is pretty simple. If you didn’t set up your Apstra Cloud Services Organization when I listed it above, you’ll want to do that now. Then, you can follow this documentation to get your container instantiated and tied to your Apstra Server.
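To give you a feel for the setup, the shape of what I ran on my Docker host looked something like the command below. To be clear, this is illustrative only: the real command, including the actual image name, environment variables, and your org-specific API key, comes from the Apstra Cloud Services onboarding page, so every name and value here is a placeholder.

```shell
# Illustrative only -- copy the exact command from the onboarding page.
# Image name, variable names, and key below are all placeholders.
docker run -d --restart unless-stopped --name apstra-cloud-edge \
  -e APSTRA_SERVER_URL="https://<your-apstra-server>" \
  -e CLOUD_API_KEY="<key-from-onboarding-page>" \
  <edge-image-from-the-juniper-download>
```

Because the key is baked into the run command, this is also why a generic, reusable EVE node is awkward: the container ties itself to one cloud org on first boot.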

Insider Note: In my talks with Apstra, I asked about this container and why it’s separate. The short version is that it was the shortest pathway to getting the cloud integration done. They are looking at various ways to make this part "easier." I’m definitely looking forward to that!

Step 7- Connect your Apstra Cloud Services Org to your Production Mist Org

Now we get to maybe the main reason I even wanted to build this lab. As I said, your Apstra Cloud org and your production Mist org (for your wireless and wired networks) are separate entities. So, to actually tie things together, if you’re an existing Mist user, you’ll need to link them up. Luckily, it’s very easy to do. Just follow these instructions, and you should be able to easily move from production Mist to Apstra Mist to the Apstra server with a few clicks! This ties into the connector container above; depending on the link/URL you use in that connector, you may need to be on a VPN to get to the Apstra server interface, since it currently just opens a new browser tab with that URL. Because we all know you don’t expose management interfaces to the internet, right? RIGHT?!

MarvisActionsBut if you’re sitting on your lab network like I was you can now see in Mist that you have a Datacenter Marvis Actions button. When you click on those you’re cross-launched into a new tab that is tied into the Apstra Cloud Services view. This gives a grouped view of any anomalies you may have in Apstra. Similar to Marvis Actions in Mist, This view gives you a grouped layout similar to what you can see in Apstra. I like this breakdown a little better because it is a bit easier to see the different groups of anomalies without a lot of extra clicks back and forth in the Apstra interface. This makes it easier to find a root cause of the anomalies similar to the root cause section in Apstra itself. From the detailed view of the action, you can click the little Apstra icon, and you’ll be cross-launched to your Apstra server interface (assuming you have network access to it from your local machine). So it becomes easy to drill into issues and then take action in Apstra.

Insider Note: In my talk with Apstra, I had a few questions about this. The first was why root cause analysis isn’t enabled in Apstra by default. It’s pretty simple: it’s very new. Since it’s just come into Apstra, it makes sense to make admins enable it manually, which helps the admin understand what its current use case is. Which, in 4.2.2, is just detecting when a cabling/connectivity anomaly is causing extra noise. The second question was where the development efforts for Root Cause Analysis (RCA) will go: cloud or Apstra local? This one is a bit of both. They hope to bring more RCA use cases into Apstra itself, since not everyone will connect Apstra to Mist (Allyn says, “But why?!” even though I know why). However, there are also thoughts about using the power of Mist AI to really accelerate some of the analysis and potentially expand its scope. So, WAY more to come here! And the last question (well, the last one I’m noting here) was when the clouds will be integrated. I really don’t want multiple Mist cloud instances based on role. The semi-answer is that they likely won’t be merged, but you’ll start to see a lot more domain-specific functionality brought into the individual instances.

One thing I did pass along that I think would be cool is the ability to tunnel the Apstra server interface through the Mist cloud to allow remote access without requiring a VPN. And I know, you don’t have to say it, this is certainly one of those “but security!” discussion moments. While I understand the security concerns, I still think it makes some sense. Similar to tunneling CLI access for switches, not *having* to be “on the net” to access Apstra may improve security while allowing for greater agility. It also fits more in line with the modern-day cloud approach: you maintain direct, local, out-of-band control of your datacenter (as you should!) while enabling secure, remote, authorized access. This really plays towards a zero-trust model and, I think, just fits. I’m sure there will be LOTS more discussion around this feature in the future, but I look forward to it.

Step 8 – Routing Assurance

Routing Assurance is even newer than Apstra Cloud Services, so it is still a bit limited in scope. But the potential here for troubleshooting is probably almost as big as what I see on the wireless side of things. And while it’s sometimes hard to get the “CLI till I die” folks to see value in “GUIs,” I know some who definitely have their interest piqued here. There is not currently a link between your Routing Assurance org and your production Mist org, but I’m sure it’s coming. Or maybe they’ll skip that step and just bring the routing side of things into the production side. I don’t have any insider scoops here (yet), but I’ll be sure to pass them along when/if I get them (and am allowed to share, darned NDAs)!

So how do we set things up? Well, if you’ve used production Mist with gear that needs to be manually adopted, you’re ahead of the game. Log in to your Routing Assurance organization and head to Organization, then Inventory. In the top right, you’ll see “Adopt Routers.” This will give you some Junos config lines you’ll need to paste onto your vJunos routers (in config mode). Once you’ve done that, they should show up in just a few minutes, though it may take 15-20 minutes for everything to populate. If they don’t show up, make sure your vJunos routers have internet access with working DNS resolution. I had forgotten to add a name server to my routers since one wasn’t needed within the lab.
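The adoption lines themselves come from the "Adopt Routers" dialog and include org-specific outbound-ssh credentials, so paste them exactly as given. The piece I'd forgotten was name servers; if your routers never phone home, check for something like this (the resolver addresses are just examples, use whatever your lab can reach):

```
set system name-server 8.8.8.8
set system name-server 1.1.1.1
commit
```

Within a few minutes of the commit, the routers should appear in the Routing Assurance inventory.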

With your routers in Mist, you can now start to see information on them, such as interfaces, Insights, etc. I found that it was helpful to move some data through the routers and cause a general stir to get some “pretty” or fun graphs, which really leads us to step 9.

Step 9 – Break Stuff!

As I said in Step 8 for Routing Assurance, to really start to see some of the good stuff, you need to make some noise. From a baseline standpoint, having some scripts that move data between the server hosts is good. Then do things like shut down or reboot a switch, break a link, restart a router, or bounce a routing instance, and see what happens across the various platforms. This is where you can start to bring in those cases of “remember when this happened, and we were trying to troubleshoot it” and see if these new tools are helpful. If not, could they be with added features, or is it something that’s just not going to be helpful? If it’s the former, all of the teams I’ve talked to would LOVE to have your feedback. I’m happy to help foster that conversation or just give you a contact! If you’re a Juniper customer, bubble it up through your account team and/or VAR!
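For the "move some data" piece, even a simple iperf3 loop between two server hosts is enough to make the graphs interesting. A sketch, with the host names and address being placeholders from my lab (yours will differ), assuming iperf3 is installed on the Ubuntu nodes:

```shell
# On one server host (e.g., dc2-srv1), run a receiver:
iperf3 -s

# On a server host in the other DC, push traffic in repeated
# five-minute runs with a pause between each:
for i in $(seq 1 10); do
  iperf3 -c 10.0.1.12 -t 300
  sleep 60
done
```

Cross-DC traffic like this exercises the DCI as well as the fabric, so breaking links mid-run gives you more interesting anomalies to chase.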

Step 10 – Have a beverage!

While hopefully you’ve been drinking at least some water (or my favorite - magical bean water) as you went through this, now it’s time to have a little celebration. You’ve now built a lab with all of the cutting-edge pieces of the Juniper Mist portfolio. I hope it’s given you some ideas for further lab time stuff you can do. I know I have a laundry list of things like that. But since it’s already taken me a month to get this article put together, and it’s LONG, those will have to wait for future articles.

A Couple of Closing Notes

As I mentioned above, I had a couple of extra notes for little things I found while working my way through the lab. These aren't tied to specific steps in the process but more the fun things you find as you work through stuff like this. This is in addition to the notes Colin gives in his second video of things you need to keep an eye on.

First, just before I started, I was running EVE 6.2.0-3. This release had a little annoying bug, so about halfway through the Apstra steps, I decided to do the quick version bump to get rid of it. I shut down my nodes, did the upgrade, and went through the fun of booting my nodes again. And as sometimes happens, a couple of my nodes got borked. I was annoyed but figured it was a good way to see how Apstra handled things. This is where I found how nice having the MAC address set on the node and using DHCP was. I wiped the node, booted it, applied my clean basic config, then went into Apstra and just pushed the full node config. There are a couple of ways to do it: either from the blueprint or from the managed device page. From the device page, head to telemetry and then "apply full config." From the blueprint, go into the blueprint, then the active tab, then physical; click on the device, click on the device tab to the right, then at the bottom, apply the full config. Colin talks about the steps you should take to replace a device with another (like in the case of simulating an RMA). But for this little glitch, it was the same device, so I could just push the same config again, and recovery took just a couple of minutes. Which was great, because I had to do it again when I lost power for 12 hours one day. Fun, right?!

My second note is about how I ended up needing to split my lab across two EVE nodes because I was running out of resources. Deploying cluster satellite nodes is a pro feature, but it allows you to spin nodes up and down across hosts. As long as the latency isn't bad, it works great. You can even host an on-prem main node and a satellite in Google Cloud. The recommendation, though, is to run similar types of nodes on the same hosts; this helps cut down on the disk space needed for the base images.

Final note: the two new cloud integrations are still very new, but as I said above, I really see some potential here. And based on some things I've seen and heard (but can't talk specifically about yet), it's only going to get better. So if you look and say, "Yeah, so what? There's not much there," I understand. But just as Mist started with wireless and has since moved to a full stack, you're going to see more and more features come into these services.

 

With that, I think we're done with this article. And I have a feeling this is going to become a bit of a series; this is a great lab for building out different DC options, especially as more and more features are released. I really appreciate you if you’ve made it this far! I’d love to hear your comments, feedback, critiques, etc. You can reach me by email or hit me up on one of my socials! If you’re looking for a bigger discussion around this, I’d be happy to chat. Reach out, and I’m sure we can figure something out. And as always, I’d love to see you in person. I might even have a member of the #MarvisMiniArmy for you if you want one!

Go forth, Be Kind, Have fun!