On November 17, 2025, Austin became the hub for network professionals and automation enthusiasts as the AutoCon 4 conference took place. The event featured two days of workshops and three days of keynotes and sessions, encouraging collaboration and knowledge sharing in network automation.
Representing CodiLime, Katarzyna Kurowska, Maciej Łastawiecki, Adam Kułagowski, Tomasz Janaszka, Grzegorz Rycaj (our CEO), and Mateusz Kozioł actively joined discussions, shared expertise, and explored new opportunities for collaboration.
Adam Kułagowski and Tomasz Janaszka led CodiLime’s workshop “From Chaos to Clarity: AI-Driven Network Troubleshooting”, demonstrating how generative AI can assist with troubleshooting across a 200+ node network. Additionally, Katarzyna Kurowska and Aldrin Isaac (eBay) presented their solution, Spectron, focused on practical network orchestration at scale.
CodiLime’s contribution to the conference
Aldrin Isaac (eBay) & Katarzyna Kurowska (CodiLime) - Spectron: network orchestration in practice at scale; what works, what hurts
Aldrin Isaac and Katarzyna Kurowska presented Spectron, a platform built to deliver large-scale network orchestration in real production environments. They began by outlining where Spectron fits within the fabric lifecycle. In a typical workflow that moves from fabric planning and specification to rack procurement, installation, and ultimately zero-touch provisioning, Spectron acts as the orchestration layer that bridges the physical deployment phase with automated configuration.
This represents a sharp contrast to eBay’s earlier automation approach, which they described as clunky, drift-prone, and heavily configuration-centric. Their vision for Spectron focuses on intent-driven operations that are refined, drift-free, flexible, extensible, lightweight, and free from licensing constraints, with built-in support for digital-twin validation.
The presenters then explored Spectron’s technical capabilities and architectural principles. The platform supports full topology planning and design, including emulation through GNS3, and automates day-0 and day-1 configuration tasks. It also enables transparent hardware replacement workflows and offers robust support for heterogeneous, multi-vendor environments, including SONiC-based platforms. Under the hood, Spectron relies on thoughtful abstraction layers and unified configuration models that hide device-level complexity from operators. These abstractions allow Spectron to render configurations using flavour-specific renderers while composing topologies through layered specifications.
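The talk did not share implementation details, but the general pattern of a unified configuration model rendered through flavour-specific renderers can be sketched as follows. All class and function names here are illustrative assumptions, not Spectron’s actual API:

```python
from dataclasses import dataclass

# Illustrative only -- these names are not Spectron's actual API.
@dataclass
class InterfaceIntent:
    """Vendor-neutral description of what an interface should do."""
    name: str
    ipv4: str          # e.g. "10.0.0.1/31"
    description: str

class Renderer:
    """Base class: turns the unified model into device-flavour config."""
    def render(self, intf: InterfaceIntent) -> str:
        raise NotImplementedError

class JunosRenderer(Renderer):
    def render(self, intf: InterfaceIntent) -> str:
        return (f'set interfaces {intf.name} description "{intf.description}"\n'
                f"set interfaces {intf.name} unit 0 family inet address {intf.ipv4}")

class SonicRenderer(Renderer):
    def render(self, intf: InterfaceIntent) -> str:
        return f"config interface ip add {intf.name} {intf.ipv4}"

RENDERERS = {"junos": JunosRenderer(), "sonic": SonicRenderer()}

def render_config(flavour: str, intf: InterfaceIntent) -> str:
    """Operators work with InterfaceIntent; the flavour is resolved here."""
    return RENDERERS[flavour].render(intf)
```

The operator describes intent once; the device-level dialect is an implementation detail selected at render time, which is what keeps the abstraction layer from leaking vendor specifics upward.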
They also discussed the challenges encountered along the way and the lessons drawn from them. Early versions of Spectron relied on abstractions that were either too generic or too flat, which created constraints and caused confusion.
This experience taught the team to define their domain carefully, resist the temptation to solve every possible problem at once, and focus instead on clear architectural boundaries. They emphasised designing around primitives and reusable building blocks, embedding network engineers directly into the development cycle to ensure a strong UX, and validating functionality early through digital-twin environments rather than extending prototypes without rigorous testing.
The key takeaway from their talk is that Spectron provides a foundation for closed-loop network orchestration at scale. Achieving this requires introducing abstractions gradually and intuitively, solving repetitive problems in a generic manner while treating unique issues individually, and maintaining strong architectural clarity throughout. Their experience shows that scalable network orchestration is not only about tooling but also about careful design, iteration, and close collaboration between engineering and operations teams.
CodiLime’s AutoCon 4 workshop
Adam Kułagowski, Tomasz Janaszka, Monika Antoniak (CodiLime) - From Chaos to Clarity: AI-Driven Network Troubleshooting
At AutoCon 2, our exploration of Large Language Models (LLMs) for network assistance began with a proof of concept: a LangChain ReAct–based Net-Chat Assistant powered by the gpt-4o-mini model. Running on a test topology, it showed that an LLM could interpret natural-language queries and autonomously perform basic networking tasks, including configuration troubleshooting. The concept was well received, though participants also pointed out opportunities for growth: the environment was relatively small, and the troubleshooting scenarios were less complex than those encountered in real-world operations.
One year later, at AutoCon 4, we introduced Net-Inspector, a significant step forward designed for large-scale topologies, richer configurations, and including syslog data. This evolution was made possible by adopting the Model Context Protocol (MCP) and modern agentic frameworks, allowing agents to dynamically discover, orchestrate, and invoke diverse networking tools rather than being hardwired to specific ones. Each participant received their own containerlab environment containing a multi-node (200+) topology, along with a collection of scripts that injected failures and misconfigurations. Their task: identify the root causes using Net-Inspector’s multi-tool ReAct agent, a chat interface capable of querying network devices, interpreting syslog data, and reasoning across network-related data sources. All, with only minimal hints coming from alerts.
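The core of a multi-tool ReAct agent is a thought–action–observation loop in which the model picks the next tool based on what it has seen so far. The sketch below is a minimal, offline illustration of that loop: the model, tool names, device names, and outputs are all stubbed assumptions, not Net-Inspector’s internals, where the decision would come from an LLM invoking tools over MCP:

```python
from typing import Callable

# Stubbed "tools" -- in a real agent these would query devices or logs.
TOOLS: dict[str, Callable[[str], str]] = {
    "show_bgp_summary": lambda device: f"{device}: 3 peers, 1 down (10.0.0.2)",
    "grep_syslog": lambda pattern: f"'{pattern}': BGP neighbor 10.0.0.2 Down",
}

def stub_model(observations: list[str]) -> tuple[str, str]:
    """Stand-in for the LLM: chooses the next action from prior observations."""
    if not observations:
        return ("show_bgp_summary", "leaf1")   # first, check control-plane state
    if "1 down" in observations[-1]:
        return ("grep_syslog", "BGP")          # then, look for log evidence
    return ("done", "Root cause: BGP session to 10.0.0.2 is down")

def react_loop(max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        action, arg = stub_model(observations)       # Thought -> Action
        if action == "done":
            return arg                               # final answer
        observations.append(TOOLS[action](arg))      # Observation fed back
    return "no conclusion"
```

The loop terminates either when the model decides it has enough evidence or when the step budget runs out, which is the usual safeguard against an agent chasing its own tail.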
The workshop journey guided participants from topology exploration to hands-on debugging with AI-augmented workflows. They navigated routing, data-plane behavior, and control-plane anomalies through a unified visual interface, then leveraged the agent for topology analysis, fault investigation, and syslog correlation. Crucially, they could reconfigure the agent on the fly by selecting different LLMs. Four models were available, each with different reasoning capabilities, context-window sizes, and levels of accuracy. This allowed participants to see firsthand how LLM choice affects diagnostic quality, speed, and interpretability, and where human expertise still outperforms autonomous reasoning.
Finally, the workshop focused on foundational concepts: the mechanics of MCP, how agents coordinate tool invocation, and the broader question of whether today’s LLMs and agentic frameworks are mature enough for production-grade network operations. Through real-world troubleshooting in a controlled lab with intentional faults, participants evaluated both the promise and the limitations of AI-augmented network operations at scale. By the end, they walked away not just having used a powerful network-assistant stack, but with a clearer view of the trade-offs, effort, and future potential of building reliable AI agents for the networks of tomorrow.
Key insights we gained during AutoCon 4 2025
We had the opportunity to attend the majority of AutoCon 4 presentations, which spanned a broad spectrum of topics across network automation and, increasingly, network orchestration as the natural next step in operational maturity. Below, we highlight the talks that made the strongest impression on us, either through their depth of insight, originality of approach, or clear practical relevance.
These sessions collectively underscored how orchestration is becoming a central theme that extends traditional automation with intent-driven workflows, verification loops, and autonomous coordination. Each presentation contributed valuable perspectives that deepened our understanding of both the current challenges and the emerging solutions shaping the future of large-scale network operations.
Jeff Gray (Gluware) - Building a Network Automation Business Case that Wins at the Top
Jeff Gray stresses that network automation is not simply a technical exercise but a fundamental enabler of how modern businesses operate. He argues that without automation, organisations fall back on heroic manual effort that may keep the lights on but does not create long-term progress. The real opportunity belongs to teams that can clearly explain why automation matters in business terms. This requires moving beyond the language of scripts and tooling and speaking in the vocabulary used by CXOs. Only by demonstrating measurable business impact can engineering teams secure support from executive leadership.
Jeff then highlights the persistent disconnect between network engineering teams and the leaders who control budgets and strategic direction. He notes that meaningful change requires quantifiable arguments framed in financial terms. Concepts such as net present value, payback period, and internal rate of return become essential tools for communicating value. Automation initiatives gain credibility when they are linked directly to strategic priorities, competitive positioning, and a clear understanding of why action is needed now. This forms the basis of what Gray calls the impact model, a structured approach that turns technical ambition into a compelling business case.
His practical guidance breaks this process down into a repeatable method. Teams must first learn the financial and strategic language used by executives before gathering the key variables that define their environment, such as device counts, downtime estimates, staffing costs, and tool spend. They then define assumptions and create a baseline, model the expected benefits over a multi-year horizon, and calculate projected outcomes using metrics such as NPV and operational efficiency gains. The final step is to validate the model with a phased rollout plan. This approach moves organisations away from automating for its own sake and toward automating because it clearly advances business outcomes.
Jeff concludes by underscoring the urgency behind this shift. He encourages leaders to make stakeholders rationally motivated by showing the opportunity cost of inaction and the risks associated with outdated or inconsistent operations. Motivations such as the desire for gain, fear of loss, security, and protection all shape investment decisions. He also ties automation to defensibility by showing how consistent controls, scalable workflows, improved security posture, and self-operating network capabilities strengthen an organisation’s competitive stance.
The message is clear: understanding and communicating the business value of automation is as important as building the automation itself.
Ryan Shaw (Zscaler), Dinesh Dutt (Stardust Systems) - The NAF Network Automation Framework: A Modular Reference Architecture
Dinesh Dutt (stepping in for Ryan Shaw) begins by explaining what the Network Automation Framework is not. It is not an attempt to invent new protocols or mandate specific tools or vendor ecosystems. Instead, it focuses on identifying the characteristics that make automation systems effective and on understanding why those characteristics matter. He stresses that the framework must remain inclusive and informed by a broad set of perspectives across the industry. Network operators, automation engineers, vendors, management teams, and community contributors all shape the design. The goal is to create an architecture that serves the needs of its users rather than forcing them into a pre-defined model.
At the core of the framework is a collection of functional blocks that define how the system operates. These blocks include Intent, Observability, Collector, Executor, Orchestrator, Presentation, and the devices themselves. Each one plays a specific role and carries explicit expectations. Intent captures desired outcomes and must be versioned, extensible, and declarative. Observability provides operational truth and must be timestamped and queryable. The Executor applies configuration changes in a way that is idempotent, multi-protocol, and safe to test through dry runs. The Collector gathers data across vendors and protocols through both push and pull mechanisms. The Orchestrator coordinates the interactions between these components and manages failure and recovery. Presentation defines the human interface, which may take many forms and does not need to rely on any single consolidated view. By structuring the system around these modules, he offers a blueprint that scales with network complexity while avoiding reinvention.
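The roles of the blocks come from the talk; the code shapes below are our own assumption of how they might be expressed, shown here only to make the division of responsibilities concrete. In particular, note how idempotency lives in the Executor and coordination in the Orchestrator:

```python
# Illustrative shapes only -- NAF defines roles and expectations, not this code.
from dataclasses import dataclass
import time

@dataclass
class Intent:
    """Desired outcomes: declarative and versioned."""
    version: int
    spec: dict

@dataclass
class Observation:
    """Operational truth: timestamped and queryable."""
    timestamp: float
    data: dict

class Collector:
    """Gathers data from devices (push or pull in a real system)."""
    def collect(self, device: dict) -> Observation:
        return Observation(time.time(), dict(device))

class Executor:
    """Applies changes idempotently; dry_run makes it safe to test."""
    def apply(self, device: dict, desired: dict, dry_run: bool = True) -> dict:
        changes = {k: v for k, v in desired.items() if device.get(k) != v}
        if not dry_run:
            device.update(changes)
        return changes  # applying the same intent twice yields {}

class Orchestrator:
    """Coordinates the blocks: observe, then drive the device toward intent."""
    def __init__(self, collector: Collector, executor: Executor):
        self.collector, self.executor = collector, executor

    def reconcile(self, intent: Intent, device: dict) -> dict:
        self.collector.collect(device)  # observe current state first
        return self.executor.apply(device, intent.spec, dry_run=False)
```

Because the Executor only applies the delta between observed and desired state, a second reconcile of the same intent is a no-op, which is the idempotency property the framework demands.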
He further argues that these building blocks must be functional, simple, and composable. This ensures that the architecture can evolve as user needs expand and as networks grow in size and complexity. The framework is intentionally flexible and avoids prescribing a single vendor pathway. Instead, it allows organisations to integrate their own mix of tools, systems, and operational practices. This emphasis on interoperability positions NAF as a reference architecture that encourages iterative evolution rather than imposing a rigid or closed stack.
Dinesh concludes by addressing the urgency behind adopting a structured automation architecture. Modern networks are large, heterogeneous, and constantly changing. Automation in this environment cannot rely on ad hoc scripting or isolated tools. It must be designed with modularity, extensibility, and consistency at its core. By following these principles, organisations can reduce vendor lock-in, improve resilience, maintain clear policy enforcement, and deliver predictable operations at scale. His message is clear: automation succeeds when it is guided by intent, supported by strong observability, and built through modular components that work together seamlessly.
Greg Freeman (Lumen) - The NetDevOps Journey: Manual Firefighting to Agentic Autonomy
Greg starts by outlining the scale and significance of the operational challenge at Lumen Technologies. The company operates one of the largest and most connected networks in the world with hundreds of thousands of fiber route miles, a vast footprint of on-net buildings, and backbone capacity measured in hundreds of terabits per second. In such an environment, manual and reactive operations are no longer sustainable. Greg describes a cultural shift within the organisation away from a traditional tiered support model toward a new structure built around automation engineers, network engineers, and field technicians. The long-term goal is to let machine-to-machine communication handle the majority of operational tasks so human engineers can focus on higher-value work and innovation. This shift requires a deliberate move toward end-to-end workflow orchestration and machine autonomy.
He then turns to the practical aspects of building an automation strategy. His guiding principle is simple: avoid waiting for things to fail, act quickly when they do, and communicate outcomes clearly. Greg revisits the familiar automation cost versus benefit curve, noting that many engineers spend large amounts of time performing the original manual task, writing automation for it, and then debugging and maintaining that automation. In many cases, automation becomes yet another task rather than a solution. Lumen’s approach counters this by starting with small high-impact workflows, measuring return on investment, adopting clear standards such as process description documents and solution design documents, training staff in new skills, and embracing AI technologies, including machine learning, generative models, and agentic systems. The focus is on decisive execution and continuous improvement rather than chasing perfect automation from day one.
A central technical theme in Greg’s talk is the rise of agentic AI in network operations. He explains how automation has evolved from simple task execution to closed-loop orchestration driven by AI-enabled agents capable of autonomous action in response to network events. Key components include the Model Context Protocol, which provides a consistent interface between agents and infrastructure, along with orchestration services, large language models, workflow engines, and various data sources. Greg stresses the importance of high-quality contextual data to ensure reliable decision-making. He also cautions that networks demand deterministic and highly accurate behaviour, which means that the non-deterministic tendencies of AI must be carefully managed through oversight and robust validation.
Greg closes by emphasizing that the biggest obstacle to success in AI-driven operations is not the technology itself but organisational readiness. Studies from MIT and others show that many AI initiatives fail because teams are not prepared to adopt new tools or adjust their workflows. He encourages leaders to invest early in culture, training, and process maturity. Lumen’s five-year NetDevOps journey reflects this approach through the establishment of standards, systematic upskilling, workflow development, performance measurement, and iterative refinement. The end goal is a self-driving network, but the deeper message is that the journey from manual firefighting to agentic autonomy represents a fundamental transformation in mindset and operational design rather than a simple technology upgrade.
Senad Palislamovic (Nvidia) - Building AI with AI
Senad begins by acknowledging the complexity and diversity that define modern network environments. Organisations routinely operate equipment from multiple vendors, each introducing its own abstraction layers and operational nuances. Network automation has evolved far beyond simple scripts and now spans multi-language toolchains, configuration frameworks, and vendor-native execution engines. Senad frames this landscape as a challenge and an opportunity. He encourages the audience to rethink automation not merely as code but as a disciplined, model-driven workflow supported by AI pipelines and vendor-neutral abstractions such as YANG and OpenConfig.
He then draws a sharp distinction between network automation and other AI use cases. While applications like chatbots, recommendation engines, and research tools can tolerate ambiguity or partial accuracy, network operations demand extremely high precision. Even a small error can lead to significant outages. This requirement shapes his argument that AI for networking must prioritise consistency, precision, and auditability. Senad introduces architectural elements such as retrieval-augmented generation, structured LLM pipelines, and rigorous prompt engineering. These elements rely on explicit schemas, deterministic outputs, and validation loops to ensure safe and predictable behaviour. In his view, AI used in networking must be deliberately engineered and tightly constrained.
A substantial part of the presentation focuses on how to build AI-driven pipelines for network automation. Senad outlines the components that make such a system reliable and repeatable. These include prompt construction layers, data ingestion and retrieval mechanisms, metadata and chunking strategies, embedding models, and schema-driven input and output structures. He also highlights the Model Context Protocol as a unifying interface for provisioning, validation, and execution. To illustrate the workflow, he walks through a scenario where a user requests the deployment of an EVPN service in a specific location and rack. The system must classify the request, apply relevant policies, perform contextual retrieval, apply precision checks, generate deterministic configuration, and validate the results before executing any changes. Each step reinforces the need for layered design and predictable behaviour.
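The stages of the EVPN scenario mirror the talk; everything else in the sketch below (function names, the context values, the policy checks) is an invented stand-in, since the real system would call an LLM, a retriever, and a schema validator at each step:

```python
# Pipeline stages from the talk's EVPN walkthrough; all bodies are stand-ins.
def classify(request: str) -> str:
    """Classify the request -- in practice an LLM or intent classifier."""
    return "evpn_deploy" if "EVPN" in request else "unknown"

def retrieve_context(request: str) -> dict:
    """Contextual retrieval stand-in: RAG over inventory/telemetry in reality."""
    return {"site": "dc1", "rack": "r12", "vni_pool": range(10_000, 10_100)}

def precision_check(ctx: dict) -> None:
    """Precision checks: fail loudly rather than let the model guess."""
    assert ctx["site"] and ctx["rack"], "incomplete context"

def generate_config(ctx: dict) -> dict:
    """Deterministic, schema-shaped output -- no free-form text."""
    return {"service": "evpn", "site": ctx["site"], "rack": ctx["rack"],
            "vni": min(ctx["vni_pool"])}

def validate(config: dict) -> bool:
    """Schema-based validation before anything touches the network."""
    return config["service"] == "evpn" and 1 <= config["vni"] <= 16_777_215

def handle(request: str) -> dict:
    assert classify(request) == "evpn_deploy"
    ctx = retrieve_context(request)
    precision_check(ctx)
    cfg = generate_config(ctx)
    assert validate(cfg), "refusing to execute an invalid config"
    return cfg  # only now would execution (e.g. via MCP) proceed
```

The point of the layering is that the non-deterministic parts (classification, retrieval) are fenced in by deterministic generation and validation, so nothing reaches the execution step without passing explicit checks.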
Senad concludes by explaining why this approach is essential today. Modern networks are vast, heterogeneous, and increasingly dynamic. Legacy automation methods that rely on ad hoc scripts or manual oversight cannot scale to match current demands. By adopting AI pipelines that support declarative intent, schema-based validation, rich telemetry retrieval, and model-driven integration, network teams can build automation that is robust, scalable, and auditable. He argues that this layered, AI-assisted approach will define the next generation of network operations, where humans specify intent, and the system executes it reliably with minimal room for error.
John Capobianco (Selector) - From CLI to GPT: How AI Is Rewriting the Rules of Network Automation
John begins by tracing the evolution of network operations across several major shifts. Traditional operations relied on syslog and SNMP and required significant manual effort, often reacting to issues rather than preventing them. The rise of DevOps and later NetDevOps introduced the idea that infrastructure should be treated as code and that automation should be incorporated into day-to-day workflows. John explains that we are now entering a new phase that he refers to as AIOps and VibeOps, where machine learning, generative AI, and autonomous agents play central roles in running and maintaining networks. He argues that this transition represents a true technology inflection point similar in scale to the emergence of the internet, mobile computing, and cloud platforms. Modern networks are vast, complex, and dynamic, and he believes that human operators alone can no longer keep up due to technical debt, tool fragmentation, skills gaps, and fatigue.
A central theme of his talk is the argument that AI is no longer optional for network operations. Traditional automation based on shell scripts, static playbooks, and fixed workflows cannot handle the speed and diversity of modern infrastructure. Networks generate enormous volumes of telemetry, and failures occur at a scale that exceeds human cognitive capacity. John stresses that AI-driven operations must focus on determinism, precision, and auditability. Unlike consumer chatbot applications, where occasional ambiguity is acceptable, network systems demand predictable behaviour. To support this, he introduces architectural constructs such as the Model Context Protocol, agent-to-agent communication patterns, retrieval-augmented generation, and domain-specific prompt engineering. These elements create a structured environment where agents can operate reliably and safely.
John also outlines practical steps that network engineers should take now and in the near future. He encourages practitioners to adopt foundational tools such as Git, Python, or Ansible, containerisation technologies, CI/CD practices, and automated testing frameworks. He then points toward emerging capabilities such as AI-augmented IDE copilots, LLM-powered command-line interfaces, ReAct-based agents, and MCP-driven automation pipelines. He recommends exploring retrieval and graph-based reasoning techniques and preparing for on-premises acceleration hardware if relevant. John shares his personal experience of integrating ChatGPT with pyATS in late 2022 and explains how that moment shifted his career toward AI-augmented networking. He predicts that digital coworkers, which are autonomous agents equipped with domain knowledge, will soon become standard practice.
In closing, John emphasises that the shift from CLI-centric operations to AI-driven autonomous networks is not about replacing engineers but augmenting their capabilities. He cites research showing that many AI projects fail due to organisational challenges rather than technological ones. His message to network leaders is to invest in culture, continuous learning, and a willingness to experiment and share knowledge. Successful organisations will learn how to partner with AI systems and integrate them into operational workflows. John concludes with a clear bottom line: AI-enhanced network automation is no longer a future concept but a present reality, and the pace of adoption is accelerating. Those who embrace it will gain significant advantages, while those who resist risk being left behind.
Final thoughts
AutoCon 4 confirmed that network automation is evolving into something broader and more ambitious. Across talks on business cases, modular architectures, agentic AI, and AI-first pipelines, a common pattern emerged: the industry is moving from isolated scripts and brittle tooling toward intent-driven, orchestrated systems that operate across vendors and layers.
Speakers such as Jeff Gray and Dinesh Dutt showed that success now depends as much on financial reasoning and modular design as on technical skill. Others, including Greg Freeman, Senad Palislamovic, and John Capobianco, demonstrated that AI, MCP, and agent frameworks are shifting from edge experiments to production tools. The message was consistent: automation, orchestration, and AI must be engineered together, with observability, safety, and business alignment built in from the start.
For CodiLime, the conference was a validation of our work. Spectron highlighted how carefully designed abstractions and digital twins can deliver closed-loop orchestration at scale. Net-Inspector demonstrated that AI-augmented troubleshooting across large topologies is already achievable with strong models, MCP, and robust multi-tool pipelines.
At the same time, every session reinforced that technology alone is not enough. Organisations must build clear impact models, adopt modular architectures, embrace disciplined experimentation, and foster close collaboration among engineers, developers, and business leaders. Our main conclusion is simple: the future of networking belongs to teams that treat automation, orchestration, and AI as an integrated system, continually iterating and investing in people and practices as much as in tools.
We encourage you to explore the full set of conference materials and recordings, which the organizers (the Network Automation Forum) will release soon.