Blog
Code execution with MCP: How sandboxed Python replaces tool schema bloat in AI agents
As the number of tools connected to an AI agent grows, JSON Schema definitions become a massive scaling bottleneck. Every tool carries a full schema that gets loaded into the LLM’s context window on every turn. Our tests show that replacing these schemas with a...
PyTorch Call Stack Deep Dive: Tracing Tensor Operations from Python to C++ Kernels
Eliminating the ‘Rego tax’: How AI orchestrators automate Kubernetes compliance
Manually writing OPA Rego policies is a significant bottleneck for many platform teams, creating a ‘Rego tax’ that can slow down development and introduce risk. This article introduces a new approach: a Dynamic Kubernetes Policy Generator that uses a large language...
Zero trust AI agents on Kubernetes: What I learned deploying multi-agent systems on Kagenti
Most AI agent content focuses on prompt engineering and framework selection. But very little addresses what happens when those agents run in production: who they are, what they're allowed to call, and whether anyone can tell what they did. I spent 2 weeks (January 2026)...
Zero Trust for autonomous agentic AI systems: Building more secure foundations
AI systems are no longer just single-purpose models. The rise of agentic AI brings software systems designed to carry out complex tasks and solve problems with limited human supervision. It's a step beyond generative AI, which creates content, to an AI that does...
From hand-tuned to generated: A reproducible Triton GPU kernel benchmark across different vendors
In the world of Large Language Models (LLMs), inference speed is critical. Much of this speed comes from highly specialized functions called GPU kernels: small, focused routines that instruct the GPU how to perform calculations with maximum efficiency....
Protecting Triton kernel deployments with cryptographic signatures
Triton is a domain-specific language and compiler for writing high-performance GPU kernels (snippets of compiled GPU code) using a Python-like syntax. It offers fine-grained control over memory and parallelism, making it ideal for custom, architecture-optimized...
Skip the JITters: Fast, trusted model kernels with OCI caching
Triton is a domain-specific language and compiler for writing high-performance GPU kernels in Python. It offers fine-grained control over memory and parallelism, making it ideal for custom, architecture-optimized compute in machine learning and high-performance...
Architecting Cloud-Native Ambient Agents: Patterns for Scale and Control
Moving AI from interactive chatbots to autonomous "ambient" agents requires a fundamental shift in system architecture. This article examines the technical implementation of agents that operate asynchronously within an enterprise environment. We detail a practical...
Simplifying Edge AI Builds with Verified GitHub Actions Patterns
As the ecosystem and economy around AI continue to grow and the Internet of Things (IoT) grows smarter and more prolific, a new paradigm of computing is emerging: edge AI, the application of AI technologies to advanced IoT systems. This has all sorts of...
