An AI ‘Nerd Knob’ Every Network Engineer Should Know

Alright, my friends, I'm back with another post based on my learnings and exploration of AI and how it will fit into our work as network engineers. In today's post, I want to share the first (of what will likely be many) "nerd knobs" that I think we all should be aware of, and how they can impact our use of AI and AI tools. I can already sense the excitement in the room. After all, there's not much a network engineer likes more than tweaking a nerd knob in the network to fine-tune performance. And that's exactly what we'll be doing here: fine-tuning our AI tools to help us be more effective.

First up, the requisite disclaimer or two.

  1. There are SO MANY nerd knobs in AI. (Shocker, I know.) So, if you all like this kind of blog post, I'd be happy to return in other posts where we look at other "knobs" and settings in AI and how they work. Well, I'd be happy to return once I understand them, at least. 🙂
  2. Changing any of the settings in your AI tools can have dramatic effects on the results. This includes increasing the resource consumption of the AI model, as well as increasing hallucinations and decreasing the accuracy of the information that comes back from your prompts. Consider yourselves warned. As with all things AI, go forth and explore and experiment. But do so in a safe lab environment.

For today's experiment, I'm once again using LMStudio running locally on my laptop rather than a public or cloud-hosted AI model. For more details on why I like LMStudio, check out my last blog, Creating a NetAI Playground for Agentic AI Experimentation.

Enough of the setup, let's get into it!

The impact of working memory size, a.k.a. "context"

Let me set a scene for you.

You're in the middle of troubleshooting a network issue. Someone reported, or noticed, instability at a point in your network, and you've been assigned the happy task of getting to the bottom of it. You captured some logs and relevant debug information, and the time has come to go through it all to figure out what it means. But you've also been using AI tools to be more productive, 10x your work, impress your boss, you know, all the things that are going on right now.

So, you decide to see if AI can help you work through the data faster and get to the root of the issue.

You fire up your local AI assistant. (Yes, local, because who knows what's in the debug messages? Best to keep it all safe on your laptop.)

You tell it what you're up to, and paste in the log messages.

Asking an AI assistant to help debug a network issue

After getting 120 or so lines of logs into the chat, you hit enter, kick up your feet, reach for your Arnold Palmer for a refreshing drink, and wait for the AI magic to happen. But before you can take a sip of that iced tea and lemonade goodness, you see this has suddenly popped up on the screen:

AI Failure! "The AI has nothing to say"

Oh my.

"The AI has nothing to say"!?! How could that be?

Did you find a question so difficult that AI can't handle it?

No, that's not the problem. Check out the helpful error message that LMStudio has kicked back:

"Trying to keep the first 4994 tokens when context overflows. However, the model is loaded with context length of only 4096 tokens, which is not enough. Try to load the model with a larger context length, or provide shorter input."

And we've gotten to the root of this perfectly scripted storyline and demonstration. Every AI tool out there has a limit to how much "working memory" it has. The technical term for this working memory is "context length." If you try to send more data to an AI tool than can fit into the context length, you'll hit this error, or something like it.

The error message indicates that the model was "loaded with context length of only 4096 tokens." What is a "token," you wonder? Answering that could be the topic of an entirely different blog post, but for now, just know that "tokens" are the unit of size for the context length. And the very first thing that happens when you send a prompt to an AI tool is that the prompt is converted into "tokens".
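To get a feel for why 120 lines of logs can blow past a 4096-token context, you can ballpark the token count yourself. A common rule of thumb for English text is roughly 4 characters per token; that heuristic (and the sample log line below) is an assumption for illustration, not a real tokenizer for any specific model:

```python
def estimate_tokens(text: str) -> int:
    # Very rough heuristic: ~4 characters per token for English text.
    # Real tokenizers are model-specific (BPE, SentencePiece, etc.),
    # so treat this as an estimate, not a guarantee.
    return max(1, len(text) // 4)

# A made-up but realistically sized syslog line, repeated ~120 times
# like the paste in our story:
log_line = (
    "Jan 15 10:23:41.123: %LINEPROTO-5-UPDOWN: Line protocol on Interface "
    "GigabitEthernet0/1, changed state to down (last input 00:00:12, output queue 0/40)"
)
logs = "\n".join([log_line] * 120)

context_length = 4096
needed = estimate_tokens(logs)
print(f"Estimated tokens: {needed}")
print(f"Fits in a {context_length}-token context: {needed <= context_length}")
```

On logs like these, the estimate lands well above 4096, which is exactly the situation the error message is complaining about.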

So what do we do? Well, the message gives us two possible options: we can increase the context length of the model, or we can provide shorter input. Sometimes it isn't a big deal to provide shorter input. But other times, like when we are dealing with large log files, that option isn't practical: all of the data is important.

Time to turn the knob!

It is that first option, loading the model with a larger context length, that's our nerd knob. Let's turn it.

From within LMStudio, head over to "My Models" and click to open up the configuration settings interface for the model.

Accessing Model Settings

You'll get a chance to view all the knobs that AI models have. And as I mentioned, there are a lot of them.

Default configuration settings

But the one we care about right now is the Context Length. We can see that the default length for this model is 4096 tokens. But it supports up to 8192 tokens. Let's max it out!

Maxing out the Context Length

LMStudio offers a helpful warning and a likely reason why the model doesn't default to the max: the context length takes memory and resources, and raising it to "a high value" can impact performance and usage. So if this model had a max length of 40,960 tokens (the Qwen3 model I often use has a max that high), you might not want to just max it out right away. Instead, increase it a little at a time to find the sweet spot: a context length big enough for the job, but not oversized.
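Why does a bigger context cost memory? A large share of it is the model's key/value (KV) cache, which grows linearly with context length. Here is a back-of-the-envelope sketch; the layer, head, and dimension numbers below are hypothetical values typical of a 7B-class model, not taken from any particular LMStudio model, so check your own model's card before trusting the totals:

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   context_len: int, bytes_per_elem: int = 2) -> int:
    # One key vector and one value vector per token, per layer, per KV head:
    # 2 (K and V) * layers * kv_heads * head_dim * context * element size.
    # bytes_per_elem=2 assumes fp16/bf16 cache entries.
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem

# Hypothetical dimensions for a 7B-class model (illustrative only):
N_LAYERS, N_KV_HEADS, HEAD_DIM = 32, 8, 128

for ctx in (4096, 8192, 40960):
    gib = kv_cache_bytes(N_LAYERS, N_KV_HEADS, HEAD_DIM, ctx) / 2**30
    print(f"context {ctx:>6} tokens -> ~{gib:.2f} GiB of KV cache")
```

The exact numbers vary by model and quantization, but the linear growth is the point: ten times the context means roughly ten times the cache, which is why creeping up on the sweet spot beats maxing the knob on day one.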

As network engineers, we're used to fine-tuning knobs for timers, frame sizes, and so many other things. This is right up our alley!

Once you've updated your context length, you'll need to "Eject" and "Reload" the model for the setting to take effect. But once that's done, it's time to take advantage of the change we've made!

The extra context length allows the AI to analyze the data

And look at that: with the larger context window, the AI assistant was able to go through the logs and give us a nice write-up about what they show.

I particularly like the shade it threw my way: "…consider seeking assistance from … a qualified network engineer." Well played, AI. Well played.

But bruised ego aside, we can continue the AI-assisted troubleshooting with something like this.

The AI Assistant puts a timeline together

And we're off to the races. We've been able to leverage our AI assistant to:

  1. Process a large amount of log and debug data to identify possible issues
  2. Develop a timeline of the problem (which will be super helpful in the help desk ticket and root cause analysis documents)
  3. Identify some next steps we can take in our troubleshooting efforts.

All stories must end…

And so there you have it, our first AI Nerd Knob: Context Length. Let's review what we learned:

  1. AI models have a "working memory" that's called "context length."
  2. Context Length is measured in "tokens."
  3. Oftentimes an AI model will support a higher context length than the default setting.
  4. Increasing the context length will require more resources, so make changes slowly; don't just max it out completely.

Now, depending on what AI tool you're using, you may NOT be able to adjust the context length. If you're using a public AI like ChatGPT, Gemini, or Claude, the context length will depend on the subscription and models you have access to. However, there most definitely IS a context length that will factor into how much "working memory" the AI tool has. And being aware of that fact, and its impact on how you can use AI, is important. Even if the knob in question is behind a lock and key. 🙂

If you enjoyed this look under the hood of AI and would like to learn about more options, please let me know in the comments: Do you have a favorite "knob" you'd like to turn? Share it with all of us. Until next time!

PS… If you'd like to learn more about using LMStudio, my friend Jason Belk put together a free tutorial called Run Your Own LLM Locally For Free and with Ease that can get you started very quickly. Check it out!

 
