Creating a NetAI Playground for Agentic AI Experimentation


Hey there, everyone, and welcome to the latest installment of “Hank shares his AI journey.” 🙂 Artificial Intelligence (AI) continues to be all the rage, and coming back from Cisco Live in San Diego, I was excited to dive into the world of agentic AI.

With announcements like Cisco’s own agentic AI solution, AI Canvas, as well as discussions with partners and other engineers about this next phase of AI possibilities, my curiosity was piqued. What does this all mean for us network engineers? Moreover, how can we start to experiment with and learn about agentic AI?

I started my exploration of agentic AI by reading and watching a wide range of content to gain a deeper understanding of the topic. I won’t delve into a detailed definition in this blog, but here are the basics of how I think about it:

Agentic AI is a vision for a world where AI doesn’t just answer the questions we ask; it starts to work more independently. Driven by the goals we set, and using access to the tools and systems we provide, an agentic AI solution can monitor the current state of the network and take actions to ensure our network operates exactly as intended.

Sounds pretty darn futuristic, right? Let’s dive into the technical aspects of how it works. Roll up your sleeves, get into the lab, and let’s learn some new things.

What are AI “tools”?

The first thing I wanted to explore and better understand was the concept of “tools” within this agentic framework. As you may recall, the LLM (large language model) that powers AI systems is essentially an algorithm trained on vast amounts of data. An LLM can “understand” your questions and instructions. On its own, however, the LLM is limited to the data it was trained on. It can’t even search the web for current movie showtimes without some “tool” allowing it to perform a web search.

From the very early days of the GenAI buzz, developers have been building and adding “tools” to AI applications. Initially, the creation of these tools was ad hoc and varied depending on the developer, LLM, programming language, and the tool’s purpose. But recently, a new framework for building AI tools has generated a lot of excitement and is starting to become a new “standard” for tool development.

This framework is known as the Model Context Protocol (MCP). Originally developed by Anthropic, the company behind Claude, MCP lets any developer build tools, known as “MCP Servers,” and lets any AI platform act as an “MCP Client” to use those tools. It’s important to remember that we’re still in the very early days of AI and agentic AI; however, at present, MCP looks to be the approach for tool building. So I figured I’d dig in and figure out how MCP works by building my own very basic NetAI agent.

I’m far from the first networking engineer to want to dive into this space, so I started by reading a couple of very helpful blog posts by my friend Kareem Iskander, Head of Technical Advocacy in Learn with Cisco.

These gave me a jumpstart on the key topics, and Kareem was kind enough to provide some example code for creating an MCP server. I was ready to explore more on my own.

Creating a local NetAI playground lab

There is no shortage of AI tools and platforms today. There’s ChatGPT, Claude, Mistral, Gemini, and so many more. Indeed, I use many of them regularly for various AI tasks. However, for experimenting with agentic AI and AI tools, I wanted something that was 100% local and didn’t depend on a cloud-connected service.

A major reason for this desire was that I wanted to ensure all of my AI interactions remained completely on my computer and within my network. I knew I’d be experimenting in an entirely new area of development. I was also going to send data about “my network” to the LLM for processing. And while I’ll be using non-production lab systems for all the testing, I still didn’t like the idea of leveraging cloud-based AI systems. I’d feel freer to learn and make mistakes if I knew the risk was low. Yes, low… Nothing is completely risk-free.

Luckily, this wasn’t the first time I’d considered local LLM work, and I had a couple of potential options ready to go. The first is Ollama, a powerful open-source engine for running LLMs locally, or at least on your own server. The second is LMStudio, and while not itself open source, it has an open-source foundation, and it’s free to use for both personal and “at work” experimentation with AI models. When I read a recent blog by LMStudio about MCP support now being included, I decided to give it a try for my experimentation.

Creating Mr Packets with LMStudio

LMStudio is a client for running LLMs, but it isn’t an LLM itself. It provides access to numerous LLMs available for download and running. With so many LLM options available, it can be overwhelming when you get started. The key consideration for this blog post and demonstration is that you need a model that has been trained for “tool use.” Not all models are. And furthermore, not all “tool-using” models actually work with tools. For this demonstration, I’m using the google/gemma-2-9b model. It’s an “open model” built using the same research and tooling behind Gemini.

The next thing I needed for my experimentation was an initial idea for a tool to build. After some thought, I decided a good “hello world” for my new NetAI project would be a way for AI to send and process “show commands” on a network device. I chose pyATS as my NetDevOps library of choice for this project. In addition to being a library that I’m very familiar with, it has the benefit of automatic output processing into JSON via the library of parsers included in pyATS. Within just a couple of minutes, I was able to generate a basic Python function to send a show command to a network device and return the output as a starting point.

Here’s that code:

from typing import Any, Dict, Optional

from genie.testbed import load


def send_show_command(
    command: str,
    device_name: str,
    username: str,
    password: str,
    ip_address: str,
    ssh_port: int = 22,
    network_os: Optional[str] = "ios",
) -> Optional[Dict[str, Any]]:

    # Structure a dictionary for the device configuration that can be loaded by pyATS
    device_dict = {
        "devices": {
            device_name: {
                "os": network_os,
                "credentials": {
                    "default": {"username": username, "password": password}
                },
                "connections": {
                    "ssh": {"protocol": "ssh", "ip": ip_address, "port": ssh_port}
                },
            }
        }
    }
    testbed = load(device_dict)
    device = testbed.devices[device_name]

    # Connect, run the show command through the pyATS parser, and disconnect
    device.connect()
    output = device.parse(command)
    device.disconnect()

    return output
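
Before wiring this function into any AI tooling, it’s worth a quick sanity check by calling it directly. Here’s a minimal example; the device name, credentials, and IP address are placeholders standing in for whatever is reachable in your own lab:

# Hypothetical direct call; swap in details for a device in your lab
output = send_show_command(
    command="show version",
    device_name="router01",
    username="admin",
    password="admin",
    ip_address="192.0.2.11",
)
print(output)  # parsed JSON output, courtesy of the pyATS parser library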

Between Kareem’s blog posts and the getting-started guide for FastMCP 2.0, I found it was frighteningly easy to convert my function into an MCP server/tool. I just needed to add five lines of code.

from fastmcp import FastMCP

mcp = FastMCP("NetAI Hello World")

@mcp.tool()
def send_show_command():
    .
    .


if __name__ == "__main__":
    mcp.run()

Well… it was ALMOST that easy. I did have to make a few adjustments to the above basics to get it to run successfully. You can see the full working copy of the code in my newly created NetAI-Learning project on GitHub.

As for those few adjustments, the changes I made were:

  • A nice, detailed docstring for the function behind the tool. MCP clients use the details from the docstring to understand how and why to use the tool.
  • After some experimentation, I opted to use “http” transport for the MCP server rather than the default and more common “STDIO.” The reason I went this way was to prepare for the next phase of my experimentation, when my pyATS MCP server will likely run within the network lab environment itself, rather than on my laptop. STDIO requires the MCP Client and Server to run on the same host system. (There’s a sketch of what this looks like right after this list.)
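
Putting those two adjustments together, here’s a minimal sketch of the shape of the result. The docstring wording here is mine, and the body is elided; the full working version is in the GitHub repo:

from fastmcp import FastMCP

mcp = FastMCP("NetAI Hello World")


@mcp.tool()
def send_show_command(command: str, device_name: str, username: str,
                      password: str, ip_address: str, ssh_port: int = 22,
                      network_os: str = "ios"):
    """Send a show command to a network device over SSH and return the
    parsed JSON output.

    MCP clients read this docstring to decide how and when to call the
    tool, so describe the arguments and behavior clearly.
    """
    ...  # pyATS logic from the earlier function goes here


if __name__ == "__main__":
    # "http" transport lets the client and server run on different hosts;
    # the default STDIO transport requires them to share one.
    mcp.run(transport="http", host="127.0.0.1", port=8002)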

So I fired up the MCP server, hoping there wouldn’t be any errors. (Okay, to be honest, it took a few iterations in development to get it working without errors… but I’m doing this blog post “cooking show style,” where the boring work along the way is hidden. 😉)

python netai-mcp-hello-world.py 

╭─ FastMCP 2.0 ──────────────────────────────────────────────────────────────╮
│                                                                            │
│        [FastMCP 2.0 ASCII art logo]                                        │
│                                                                            │
│    🖥️  Server name:     FastMCP                                             │
│    📦 Transport:       Streamable-HTTP                                     │
│    🔗 Server URL:      http://127.0.0.1:8002/mcp/                          │
│                                                                            │
│    📚 Docs:            https://gofastmcp.com                               │
│    🚀 Deploy:          https://fastmcp.cloud                               │
│                                                                            │
│    🏎️  FastMCP version: 2.10.5                                              │
│    🤝 MCP version:     1.11.0                                              │
│                                                                            │
╰────────────────────────────────────────────────────────────────────────────╯


[07/18/25 14:03:53] INFO     Starting MCP server 'FastMCP' with transport 'http' on http://127.0.0.1:8002/mcp/server.py:1448
INFO:     Started server process [63417]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8002 (Press CTRL+C to quit)

The next step was to configure LMStudio to act as the MCP Client and connect to the server to gain access to the new “send_show_command” tool. While not “standardized,” most MCP Clients use a very common JSON configuration to define the servers. LMStudio is one of these clients.
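
For reference, here’s roughly what that configuration entry looks like for this server, assuming it’s listening on port 8002 as shown above (the server name “netai-hello-world” is just my own label):

{
  "mcpServers": {
    "netai-hello-world": {
      "url": "http://127.0.0.1:8002/mcp/"
    }
  }
}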

Adding the pyATS MCP server to LMStudio

Wait… if you’re wondering, ‘Where’s the network, Hank? What device are you sending the ‘show commands’ to?’ No worries, my inquisitive friend: I created a very simple Cisco Modeling Labs (CML) topology with a couple of IOL devices configured for direct SSH access using the PATty feature.

NetAI Hello World CML Network

Let’s see it in action!

Okay, I’m sure you’re ready to see it in action. I know I sure was as I was building it. So let’s do it!

To start, I instructed the LLM on how to connect to my network devices in the initial message.

Telling the LLM about my devices

I did this because the pyATS tool needs the address and credential information for the devices. In the future, I’d like to look at MCP servers for different source-of-truth options like NetBox and Vault, so it could “look them up” as needed. But for now, we’ll start simple.

First question: Let’s ask about software version information.

Short video of asking the LLM what version of software is running.

You can see the details of the tool call by diving into the input/output display.

Tool inputs and outputs

This is pretty cool, but what exactly is happening here? Let’s walk through the steps involved.

  1. The LLM client starts and queries the configured MCP servers to discover the available tools.
  2. I send a “prompt” to the LLM to consider.
  3. The LLM processes my prompt. It “considers” the different tools available and whether they might be relevant as part of building a response to the prompt.
  4. The LLM determines that the “send_show_command” tool is relevant to the prompt and builds a proper payload to call the tool.
  5. The LLM invokes the tool with the proper arguments from the prompt.
  6. The MCP server processes the tool call from the LLM and returns the result.
  7. The LLM takes the returned results, along with the original prompt/question, as the new input to use to generate the response.
  8. The LLM generates and returns a response to the query.

This isn’t all that different from what you might do if you were asked the same question.

  1. You would consider the question, “What software version is router01 running?”
  2. You’d think about the different ways you could get the information needed to answer the question. Your “tools,” so to speak.
  3. You’d decide on a tool and use it to gather the information you needed. Probably SSH to the router and run “show version.”
  4. You’d review the returned output from the command.
  5. You’d then respond to whoever asked the question with the correct answer.

Hopefully, this helps demystify a bit of how these “AI agents” work under the hood.
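
One more practical note before moving on: you don’t need an LLM in the loop to exercise steps 1, 5, and 6 yourself. FastMCP includes a client library that’s handy for debugging a tool in isolation. Here’s a small sketch, assuming the hello-world server is running on port 8002 and using the same placeholder device details as before:

import asyncio

from fastmcp import Client


async def main():
    # Step 1: connect to the MCP server and discover the available tools
    async with Client("http://127.0.0.1:8002/mcp/") as client:
        tools = await client.list_tools()
        print([tool.name for tool in tools])

        # Steps 5 and 6: call the tool with the same kind of payload
        # the LLM would build (device details are placeholders)
        result = await client.call_tool(
            "send_show_command",
            {
                "command": "show version",
                "device_name": "router01",
                "username": "admin",
                "password": "admin",
                "ip_address": "192.0.2.11",
            },
        )
        print(result)


if __name__ == "__main__":
    asyncio.run(main())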

How about another example? Perhaps something a bit more complex than simply “show version.” Let’s see if the NetAI agent can help identify which switch port the host is connected to by describing the basic process involved.

Here’s the question (sorry, prompt) that I submit to the LLM:

Prompt asking a multi-step question of the LLM.

What we should notice about this prompt is that it will require the LLM to send and process show commands from two different network devices. Just like with the first example, I do NOT tell the LLM which commands to run. I only ask for the information I need. There isn’t a “tool” that knows the IOS commands. That knowledge is part of the LLM’s training data.

Let’s see how it does with this prompt:

The LLM successfully executes the multi-step plan.

And look at that, it was able to handle the multi-step task to answer my question. The LLM even explained which commands it was going to run and how it would use the output. And if you scroll back up to the CML network diagram, you’ll see that it correctly identified interface Ethernet0/2 as the switch port to which the host was connected.

So what’s next, Hank?

Hopefully, you found this exploration of agentic AI tool creation and experimentation as interesting as I have. And maybe you’re starting to see the possibilities for your own daily use. If you’d like to try some of this out on your own, you can find everything you need in my netai-learning GitHub project.

  1. The mcp-pyats code for the MCP server. You’ll find both the simple “hello world” example and a more developed work-in-progress tool that I’m adding features to. Feel free to use either.
  2. The CML topology I used for this blog post. Though any network that’s SSH reachable will work.
  3. The mcp-server-config.json file that you can reference for configuring LMStudio.
  4. A “System Prompt Library” where I’ve included the system prompts for both a basic “Mr. Packets” network assistant and the agentic AI tool. These aren’t required for experimenting with NetAI use cases, but system prompts can be helpful for ensuring you get the results you’re after from an LLM.

A few “gotchas” I encountered during this learning process that I wanted to share, in the hope they might save you some time:

First, not all LLMs that claim to be “trained for tool use” will work with MCP servers and tools. Or at least not the ones I’ve been building and testing. Specifically, I struggled with Llama 3.1 and Phi 4. Both seemed to indicate they were “tool users,” but they didn’t call my tools. At first, I thought this was due to my code, but once I switched to Gemma 2, things worked immediately. (I also tested with Qwen3 and had good results.)

Second, once you add the MCP server to LMStudio’s “mcp.json” configuration file, LMStudio initiates a connection and maintains an active session. This means that if you stop and restart the MCP server code, the session is broken, giving you an error in LMStudio on your next prompt submission. To fix this issue, you’ll have to either close and restart LMStudio or edit the “mcp.json” file to delete the server, save it, and then re-add it. (There is a bug filed with LMStudio on this problem. Hopefully, they’ll fix it in an upcoming release, but for now, it does make development a bit annoying.)

As for me, I’ll continue exploring the concept of NetAI and how AI agents and tools can make our lives as network engineers more productive. I’ll be back here with my next blog once I have something new and interesting to share.

In the meantime, how are you experimenting with agentic AI? Are you excited about the potential? Any recommendations for an LLM that works well with network engineering knowledge? Let me know in the comments below. Talk to you all soon!

Sign up for Cisco U. | Join the Cisco Learning Network today for free.

Learn with Cisco

X | Threads | Facebook | LinkedIn | Instagram | YouTube

Use #CiscoU and #CiscoCert to join the conversation.
