Infrastructure as Code (IaC) is a hot topic these days, and the IaC tool of choice is Terraform by HashiCorp. Terraform is a cloud provisioning product that can provide infrastructure for any application. You can choose from a long list of providers for almost any target platform.
Terraform's list of providers now includes Cisco Modeling Labs (CML) 2, so we can use Terraform to control virtual network infrastructure running on CML2. Keep reading to learn how to get started with Terraform and CML, from the initial configuration through its more advanced features.
How does Terraform work?
Terraform uses code to describe the desired state of the required infrastructure and to track this state over the infrastructure's lifetime. This code is written in HashiCorp Configuration Language (HCL). When the code changes, Terraform works out all the differences (state changes) needed to update the infrastructure and reach the new state. Eventually, when the infrastructure isn't needed anymore, Terraform can destroy it.
A Terraform provider offers resources (things that have state) and data sources (read-only data without state).
In CML2 terms, examples include:
- Resources: Labs, nodes, links
- Data sources: Labs, nodes, and links, as well as available node and image definitions, available bridges for external connectors, user lists and groups, and so on
NOTE: Currently, only a few data sources are implemented.
Getting started with Terraform and CML
To get started with Terraform and CML, you'll need the following:
- A running CML2 instance and credentials to access it
- The Terraform CLI installed on your machine (see the install guide linked in the references)
Define and initialize a workspace
First, we'll create a new directory and change into it as follows:
$ mkdir tftest
$ cd tftest
All the configuration and state required by Terraform stays in this directory.
The code snippets provided need to go into a Terraform configuration file, typically a file called main.tf. However, configuration blocks can also be spread across multiple files, as Terraform combines all files with the .tf extension in the current working directory.
The following code block tells Terraform that we want to use the CML2 provider. It will download and install the latest available version from the registry at initialization. We add this to a new file called main.tf:
terraform {
  required_providers {
    cml2 = {
      source = "registry.terraform.io/ciscodevnet/cml2"
    }
  }
}
With the provider defined, we can now initialize the environment. This downloads the provider binary from the HashiCorp registry and installs it on the local computer. It also creates various files and a directory that holds additional Terraform configuration and state.
$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of ciscodevnet/cml2...
- Installing ciscodevnet/cml2 v0.4.1...
- Installed ciscodevnet/cml2 v0.4.1 (self-signed, key ID A97E6292972408AB)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
$
Configure the provider
The CML2 Terraform provider needs credentials to access CML2. These credentials are configured as shown in the following example. Of course, address, username, and password need to match the actual environment:
supplier "cml2" { handle = "https://cml-controller.cml.lab" username = "admin" password = "supersecret" # skip_verify = true }
The skip_verify option is commented out in the example. You may want to uncomment it to work with the default certificate that ships with the product, which is signed by the Cisco CML CA. Consider installing a trusted certificate chain on the controller instead.
While the above works, it's not advisable to put clear-text credentials in files that might end up in source code management (SCM). A better approach is to use environment variables, ideally in combination with some tooling like direnv. As a prerequisite, the variables need to be defined within the configuration:
variable "handle" { description = "CML controller handle" kind = string default = "https://cml-controller.cml.lab" } variable "username" { description = "cml2 username" kind = string default = "admin" } variable "password" { description = "cml2 password" kind = string delicate = true }
NOTE: Adding the "sensitive" attribute ensures that this value is not printed in any output.
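With these variables in place, the provider block shown earlier can reference them instead of hard-coded literals, along these lines:

provider "cml2" {
  # values come from the variables above, which in turn can be set
  # via TF_VAR_* environment variables (see below)
  address  = var.address
  username = var.username
  password = var.password
  # skip_verify = true
}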
We can now create a direnv configuration that injects values from the environment into our provider configuration by creating a .envrc file. You could also achieve the same result by manually "sourcing" this file with source .envrc. The benefit of direnv is that this happens automatically when entering the directory.
TF_VAR_address="https://cml-controller.cml.lab"
TF_VAR_username="admin"
TF_VAR_password="secret"

export TF_VAR_username TF_VAR_password TF_VAR_address
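Note that direnv refuses to load a new or changed .envrc until it has been explicitly approved. A quick sketch of the workflow (the exact messages may differ between direnv versions):

$ cd tftest
direnv: error .envrc is blocked. Run `direnv allow` to approve its content
$ direnv allow
direnv: loading .envrc
direnv: export +TF_VAR_address +TF_VAR_password +TF_VAR_username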
This decouples the Terraform configuration files from the credentials/dynamic values so that they can easily be added to SCM, such as Git, without exposing sensitive values like passwords or addresses.
Define the CML2 lab infrastructure
With the basic configuration done, we can now describe our CML2 lab infrastructure. We have two options:
- Import-mode
- Define-mode
Import-mode
Import mode imports an existing CML2 lab YAML topology file as a Terraform lifecycle resource. This is the "one-stop" solution, defining all nodes, links, and interfaces in a single go. In addition, you can use Terraform templating to replace properties of the imported lab (see below).
Import-mode example
Here's a simple import-mode example:
useful resource "cml2_lifecycle" "this" { topology = file("topology.yaml") }
The file topology.yaml will be imported into CML2 and then started. We now need to "plan" the change:
$ terraform plan

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # cml2_lifecycle.this will be created
  + resource "cml2_lifecycle" "this" {
      + booted   = (known after apply)
      + id       = (known after apply)
      + lab_id   = (known after apply)
      + nodes    = {} -> (known after apply)
      + state    = (known after apply)
      + topology = (sensitive value)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
$
Then apply it (-auto-approve is a shortcut and should be handled with care):
$ terraform apply -auto-approve
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # cml2_lifecycle.this will be created
  + resource "cml2_lifecycle" "this" {
      + booted   = (known after apply)
      + id       = (known after apply)
      + lab_id   = (known after apply)
      + nodes    = {} -> (known after apply)
      + state    = (known after apply)
      + topology = (sensitive value)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

cml2_lifecycle.this: Creating...
cml2_lifecycle.this: Still creating... [10s elapsed]
cml2_lifecycle.this: Still creating... [20s elapsed]
cml2_lifecycle.this: Creation complete after 25s [id=b75992ec-d345-4638-a6fd-2c0b640a3c22]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
$
We can now have a look at the state:
$ terraform show
# cml2_lifecycle.this:
resource "cml2_lifecycle" "this" {
    booted   = true
    id       = "b75992ec-d345-4638-a6fd-2c0b640a3c22"
    nodes    = {
        # (3 unchanged elements hidden)
    }
    state    = "STARTED"
    topology = (sensitive value)
}

$ terraform console
> keys(cml2_lifecycle.this.nodes)
tolist([
  "0504773c-5396-44ff-b545-ccb734e11691",
  "22271a81-1d3a-4403-97de-686ebf0f36bc",
  "2bccca61-d4ee-459a-81bd-96b32bdaeaed",
])
> cml2_lifecycle.this.nodes["0504773c-5396-44ff-b545-ccb734e11691"].interfaces[0].ip4[0]
"192.168.122.227"
> exit
$
Simple import example with a template
This example is similar to the one above, but this time we import the topology using templatefile(), which allows templating of the topology. Assuming that the CML2 topology YAML file starts with
lab:
  description: "description"
  notes: "notes"
  timestamp: 1606137179.2951126
  title: ${toponame}
  version: 0.0.4
nodes:
  - id: n0
[...]
then using this HCL
useful resource "cml2_lifecycle" "this" { topology = templatefile("topology.yaml", { toponame = "yolo lab" }) }
will replace title: ${toponame} in the YAML with the string "yolo lab" at import time. Note that instead of a string literal, it's perfectly fine to use a variable like var.toponame or other HCL features!
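For instance, a sketch that pulls the title from a Terraform variable instead (the variable name and default are illustrative):

variable "toponame" {
  description = "lab title to set at import time"
  type        = string
  default     = "yolo lab"
}

resource "cml2_lifecycle" "this" {
  topology = templatefile("topology.yaml", {
    toponame = var.toponame
  })
}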
Define-mode usage
Define mode starts with the definition of a lab resource and then adds node and link resources. In this mode, resources are only created. If we want to control the runtime state (e.g., start/stop/wipe the lab), then we need to tie these elements to a lifecycle resource.
Here's an example:
useful resource "cml2_lab" "this" { } useful resource "cml2_node" "ext" { lab_id = cml2_lab.this.id nodedefinition = "external_connector" label = "Internet" configuration = "bridge0" } useful resource "cml2_node" "r1" { lab_id = cml2_lab.this.id label = "R1" nodedefinition = "alpine" } useful resource "cml2_link" "l1" { lab_id = cml2_lab.this.id node_a = cml2_node.ext.id node_b = cml2_node.r1.id }
This will create the lab, the nodes, and the link between them. Without further configuration, nothing will be started. If these resources should be started, then you'll need a CML2 lifecycle resource:
useful resource "cml2_lifecycle" "prime" { lab_id = cml2_lab.this.id parts = [ cml2_node.ext.id, cml2_node.r2.id, cml2_link.l1.id, ] }
Here's what this looks like after applying the combined plan.
NOTE: For brevity, some attributes are omitted and have been replaced by [...]:
$ terraform apply -auto-approve

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # cml2_lab.this will be created
  + resource "cml2_lab" "this" {
      + created     = (known after apply)
      + description = (known after apply)
      + groups      = [] -> (known after apply)
      + id          = (known after apply)
        [...]
      + title       = (known after apply)
    }

  # cml2_lifecycle.top will be created
  + resource "cml2_lifecycle" "top" {
      + booted   = (known after apply)
      + elements = [
          + (known after apply),
          + (known after apply),
          + (known after apply),
        ]
      + id       = (known after apply)
      + lab_id   = (known after apply)
      + nodes    = {} -> (known after apply)
      + state    = (known after apply)
    }

  # cml2_link.l1 will be created
  + resource "cml2_link" "l1" {
      + id               = (known after apply)
      + interface_a      = (known after apply)
      + interface_b      = (known after apply)
      + lab_id           = (known after apply)
      + label            = (known after apply)
      + link_capture_key = (known after apply)
      + node_a           = (known after apply)
      + node_a_slot      = (known after apply)
      + node_b           = (known after apply)
      + node_b_slot      = (known after apply)
      + state            = (known after apply)
    }

  # cml2_node.ext will be created
  + resource "cml2_node" "ext" {
      + configuration = (known after apply)
      + cpu_limit     = (known after apply)
      + cpus          = (known after apply)
        [...]
      + x             = (known after apply)
      + y             = (known after apply)
    }

  # cml2_node.r1 will be created
  + resource "cml2_node" "r1" {
      + configuration = (known after apply)
      + cpu_limit     = (known after apply)
      + cpus          = (known after apply)
        [...]
      + x             = (known after apply)
      + y             = (known after apply)
    }

Plan: 5 to add, 0 to change, 0 to destroy.

cml2_lab.this: Creating...
cml2_lab.this: Creation complete after 0s [id=306f3ebf-c819-4b89-a99d-138a58ca7195]
cml2_node.ext: Creating...
cml2_node.r1: Creating...
cml2_node.ext: Creation complete after 1s [id=32f187bf-4f53-462a-8e36-43cd9b6e17a4]
cml2_node.r1: Creation complete after 1s [id=5d59a0d3-70a1-45a1-9b2a-4cecd9a4e696]
cml2_link.l1: Creating...
cml2_link.l1: Creation complete after 0s [id=a083c777-abab-47d2-95c3-09d897e01d2e]
cml2_lifecycle.top: Creating...
cml2_lifecycle.top: Still creating... [10s elapsed]
cml2_lifecycle.top: Still creating... [20s elapsed]
cml2_lifecycle.top: Creation complete after 22s [id=306f3ebf-c819-4b89-a99d-138a58ca7195]

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
$
The elements lifecycle attribute is required to tie the individual nodes and links into the lifecycle resource. This ensures the proper sequence of operations based on the dependencies between the resources.
NOTE: It's not possible to use both a topology import and elements at the same time. In addition, when importing a topology using the topology attribute, a lab_id cannot be set.
Advanced usage
The lifecycle resource has a few additional configuration parameters that control advanced features. Here's a list of these parameters and what they do (a combined example follows the list):
- configs is a map of strings. The keys are node labels, and the values are node configurations. When these are present, the provider checks all node labels to see whether they match and, if they do, replaces the node's configuration with the provided one. This lets you "inject" configurations into a topology file. The base topology file might contain no configurations at all, in which case the actual configurations are provided via, for example, file("node1-config") or a literal configuration string, as shown here:
configs = {
  "node-1": file("node1-config")
  "node-2": "hostname node2"
}
- staging defines the node start sequence when the lab is started. Node tags are used to achieve this. Here's an example:
staging = {
  stages          = ["infra", "core", "site-1"]
  start_remaining = true
}
This example ensures that nodes with the tag "infra" are started first. The provider waits until all nodes with this tag are marked as "booted." Then, all nodes with the tag "core" are started, and so on. If, after the end of the stage list, there are still stopped nodes, the start_remaining flag determines whether they should remain stopped or be started as well (the default is true, i.e., they will all be started).
- state defines the runtime state of the lab. By default this is STARTED, which means the lab will be started. Options are STARTED, STOPPED, and DEFINED_ON_CORE:
– STARTED is the default
– STOPPED can be set if the lab is currently started; otherwise it produces a failure
– DEFINED_ON_CORE wipes the lab if the current state is either STARTED or STOPPED
- timeouts can be used to set different timeouts for operations. This may be necessary for large labs that take a long time to start. The defaults are set to 2h.
- wait is a boolean flag that defines whether the provider should wait for convergence (for example, when the lab starts and this is set to false, the provider will start the lab but will not wait until all nodes within the lab are "ready").
- id is a read-only computed attribute. A UUIDv4 is auto-generated at create time and assigned to this ID.
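Putting several of these parameters together, a lifecycle resource with staged startup and injected configurations might look like the following sketch (the node labels, tags, and file names are made up for illustration):

resource "cml2_lifecycle" "this" {
  topology = templatefile("topology.yaml", { toponame = "staged lab" })

  # inject day-0 configurations by node label (labels must exist in the topology)
  configs = {
    "core-1": file("core-1.cfg")
    "edge-1": "hostname edge-1"
  }

  # start nodes tagged "infra" first, then "core"; start everything else afterwards
  staging = {
    stages          = ["infra", "core"]
    start_remaining = true
  }

  state = "STARTED" # the default; STOPPED and DEFINED_ON_CORE are also valid
  wait  = true      # wait until all nodes in the lab have converged
}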
CRUD operations
Of the four basic operations of resource management (create, read, update, and delete, or CRUD), the previous sections mainly covered the create and read aspects. But Terraform can also deal with update and delete.
Plans can be modified, new resources can be added, and existing resources can be removed or changed. This is always the result of editing your Terraform configuration files and then having Terraform work out the required state changes via terraform plan, followed by a terraform apply once you're satisfied with those changes.
Updating resources
It is possible to update resources, but not every combination is seamless. Here are a few things to consider:
- Only a few node attributes can be changed seamlessly; examples are coordinates (x/y), label, and configuration
- Some plan changes will re-create resources. For example, running nodes will be destroyed and restarted if the node definition is changed (see the sketch below)
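For illustration, changing only the label of R1 from the define-mode example should result in an in-place update rather than a replacement. A shortened sketch of what the plan might look like (the new label is made up):

$ terraform plan
[...]
  # cml2_node.r1 will be updated in-place
  ~ resource "cml2_node" "r1" {
      ~ label = "R1" -> "R1-core"
        [...]
    }

Plan: 0 to add, 1 to change, 0 to destroy.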
Deleting resources
Finally, a terraform destroy will delete all created resources from the controller.
Data Sources
As opposed to resources, data sources don't hold any state. They are used to read data from the controller, which can then be referenced by other data sources or resources. A good example, although not yet implemented, would be a list of available node and image definitions. By reading these into a data source, the HCL defining the infrastructure could take the available definitions into account.
There are, however, a few data sources implemented (a usage sketch follows the list):
- Node: Reads a node by providing a lab and a node ID
- Lab: Reads a lab by providing either a lab ID or a lab title
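As a minimal sketch of how these could be used (attribute names follow the descriptions above; the lab title and node ID are placeholders):

data "cml2_lab" "existing" {
  title = "yolo lab"
}

data "cml2_node" "r1" {
  lab_id = data.cml2_lab.existing.id
  id     = "0504773c-5396-44ff-b545-ccb734e11691"
}

output "r1_label" {
  # reference attributes of the data source elsewhere in the configuration
  value = data.cml2_node.r1.label
}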
Output
All data in resources and data sources can be used to drive output from Terraform. A useful example in the context of CML2 is the retrieval of IP addresses from running nodes. Here's how to do it, assuming that the lifecycle resource is called this and that R1 is able to acquire an IP address via an external connector:
cml2_lifecycle.this.nodes["0504773c-5396-44ff-b545-ccb734e11691"].interfaces[0].ip4[0]
Note, however, that output is also calculated when resources might not exist, so the above will produce an error if the node is not found or the interface list is empty. To guard against this, you can use HCL:
output "r1_ip_address" { worth = ( cml2_lifecycle.prime.nodes[cml2_node.r1.id].interfaces[0].ip4 == null ? "undefined" : ( size(cml2_lifecycle.prime.nodes[cml2_node.r1.id].interfaces[0].ip4) > 0 ? cml2_lifecycle.prime.nodes[cml2_node.r1.id].interfaces[0].ip4[0] : "no ip" ) ) }
Output:
r1_ip_address = "192.168.255.115"
Conclusion
The CML2 provider fits nicely into the overall Terraform ecosystem. With the flexibility HCL offers, and by combining it with other Terraform providers, it's never been easier to automate virtual network infrastructure within CML2. What will you do with these new capabilities? We're curious to hear about it! Let's continue the conversation on the Cisco Learning Network's Cisco Modeling Labs community.
Single users can purchase Cisco Modeling Labs – Personal and Cisco Modeling Labs – Personal Plus licenses from the Cisco Learning Network Store. For teams, explore CML – Enterprise and CML – Higher Education licensing, and contact us to learn how Cisco Modeling Labs can power your NetDevOps transformation.
Join the Cisco Learning Network today for free.
References
- https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli
- https://github.com/CiscoDevNet/terraform-provider-cml2
- https://registry.terraform.io/providers/CiscoDevNet/cml2
- https://developer.hashicorp.com/terraform/language
- https://direnv.net/