<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Azalio</title>
	<atom:link href="https://www.azalio.io/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.azalio.io</link>
	<description>Your technology partner</description>
	<lastBuildDate>Thu, 16 Apr 2026 18:59:55 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.6.5</generator>

<image>
	<url>https://www.azalio.io/wp-content/uploads/2021/12/cropped-logo@3x-32x32.png</url>
	<title>Azalio</title>
	<link>https://www.azalio.io</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>[In preview] Generally Available: User and group quota reports in Azure NetApp Files</title>
		<link>https://www.azalio.io/in-preview-generally-available-user-and-group-quota-reports-in-azure-netapp-files/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Thu, 16 Apr 2026 18:59:55 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<guid isPermaLink="false">https://www.azalio.io/in-preview-generally-available-user-and-group-quota-reports-in-azure-netapp-files/</guid>

					<description><![CDATA[<p>For organizations leveraging individual user and group quotas in Azure NetApp Files to manage capacity on NFS, SMB, and dual-protocol volumes, the user and group quota reporting feature offers clear visibility into key metrics such as quota limits, used c</p>
<p>The post <a href="https://www.azalio.io/in-preview-generally-available-user-and-group-quota-reports-in-azure-netapp-files/">[In preview] Generally Available: User and group quota reports in Azure NetApp Files</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>For organizations leveraging individual user and group<br />
quotas in Azure NetApp Files to manage capacity on NFS, SMB, and dual-protocol<br />
volumes, the user and group quota reporting feature offers clear<br />
visibility into key metrics such as quota limits, used c</div><p>The post <a href="https://www.azalio.io/in-preview-generally-available-user-and-group-quota-reports-in-azure-netapp-files/">[In preview] Generally Available: User and group quota reports in Azure NetApp Files</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Introducing Anthropic’s Claude Opus 4.7 model in Amazon Bedrock</title>
		<link>https://www.azalio.io/introducing-anthropics-claude-opus-4-7-model-in-amazon-bedrock/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Thu, 16 Apr 2026 14:59:48 +0000</pubDate>
				<category><![CDATA[AWS]]></category>
		<guid isPermaLink="false">https://www.azalio.io/introducing-anthropics-claude-opus-4-7-model-in-amazon-bedrock/</guid>

					<description><![CDATA[<p>Today, we’re announcing Claude Opus 4.7 in Amazon Bedrock, Anthropic’s most intelligent Opus model for advancing performance across coding, long-running agents, and professional work. Claude Opus 4.7 is powered by Amazon Bedrock’s next generation inference engine, delivering enterprise-grade infrastructure for production workloads. Bedrock’s new inference engine has brand-new scheduling and scaling logic which dynamically allocates [&#8230;]</p>
<p>The post <a href="https://www.azalio.io/introducing-anthropics-claude-opus-4-7-model-in-amazon-bedrock/">Introducing Anthropic’s Claude Opus 4.7 model in Amazon Bedrock</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>
<p>Today, we’re announcing <a href="https://aws.amazon.com/bedrock/anthropic/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Claude Opus 4.7 in Amazon Bedrock</a>, Anthropic’s most intelligent Opus model for advancing performance across coding, long-running agents, and professional work.</p>
<p><a href="https://www.anthropic.com/claude/opus">Claude Opus 4.7</a> is powered by Amazon Bedrock’s next generation inference engine, delivering enterprise-grade infrastructure for production workloads. Bedrock’s new inference engine has brand-new scheduling and scaling logic which dynamically allocates capacity to requests, improving availability particularly for steady-state workloads while making room for rapidly scaling services. It provides zero operator access—meaning customer prompts and responses are never visible to Anthropic or AWS operators—keeping sensitive data private.</p>
<p>According to Anthropic, the Claude Opus 4.7 model provides improvements across the workflows that teams run in production, such as agentic coding, knowledge work, visual understanding, and long-running tasks. Opus 4.7 works better through ambiguity, is more thorough in its problem solving, and follows instructions more precisely.</p>
<ul>
<li><strong>Agentic coding</strong>: The model extends Opus 4.6’s lead in agentic coding, with stronger performance on long-horizon autonomy, systems engineering, and complex code reasoning tasks. According to Anthropic, the model records high-performance scores with 64.3% on SWE-bench Pro, 87.6% on SWE-bench Verified, and 69.4% on Terminal-Bench 2.0.</li>
<li><strong>Knowledge work</strong>: The model advances professional knowledge work, with stronger performance on document creation, financial analysis, and multi-step research workflows. The model reasons through underspecified requests, making sensible assumptions and stating them clearly, and self-verifies its output to improve quality on the first step. According to Anthropic, the model reaches 64.4% on Finance Agent v1.1.</li>
<li><strong>Long-running tasks</strong>: The model stays on track over longer horizons, with stronger performance over its full 1M token context window as it reasons through ambiguity and self-verifies its output.</li>
<li><strong>Vision</strong>: The model adds high-resolution image support, improving accuracy on charts, dense documents, and screen UIs where fine detail matters.</li>
</ul>
<p>The model is an upgrade from Opus 4.6 but may require prompting changes and harness tweaks to get the most out of it. To learn more, visit <a href="https://platform.claude.com/docs/en/build-with-claude/prompt-engineering/claude-prompting-best-practices">Anthropic’s prompting guide</a>.</p>
<p><strong>Claude Opus 4.7 model in action</strong><br /> You can get started with the Claude Opus 4.7 model in the <a href="https://console.aws.amazon.com/bedrock/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon Bedrock console</a>. Choose <strong>Playground</strong> under the <strong>Test</strong> menu and select <strong>Claude Opus 4.7</strong> from the model picker. Now you can test your complex coding prompts with the model.</p>
<p><img decoding="async" class="aligncenter wp-image-103731 size-full" src="http://www.azalio.io/wp-content/uploads/2026/04/2026-bedrock-playground-model-selection.jpg" alt="" width="1800" height="1083"></p>
<p>I ran the following example prompt about a technical architecture decision:<br /><code>Design a distributed architecture on AWS in Python that should support 100k requests per second across multiple geographic regions.</code></p>
<p><img decoding="async" loading="lazy" class="aligncenter wp-image-103733 size-full" style="border: solid 1px #ccc" src="http://www.azalio.io/wp-content/uploads/2026/04/2026-bedrock-playground-opus4-7-prompt.jpg" alt="" width="1800" height="960"></p>
<p>You can also access the model programmatically using the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-anthropic-claude-messages.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Anthropic Messages API</a> to call the <code>bedrock-runtime</code> or <code>bedrock-mantle</code> endpoints through the Anthropic SDK, or keep using the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/inference-invoke.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Invoke</a> and <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Converse</a> APIs on <code>bedrock-runtime</code> through the <a href="https://aws.amazon.com/cli/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el" target="_blank" rel="noopener noreferrer">AWS Command Line Interface (AWS CLI)</a> and <a href="https://aws.amazon.com/developer/tools/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el" target="_blank" rel="noopener noreferrer">AWS SDK</a>.</p>
<p>To get started with making your first API call to Amazon Bedrock in minutes, choose <strong>Quickstart</strong> in the left navigation pane in the console. After choosing your use case, you can generate a short-term API key to authenticate your requests for testing purposes.</p>
<p>When you choose an API method, such as the OpenAI-compatible Responses API, you get sample code for making an inference request to the model with your prompt.</p>
<p><img decoding="async" loading="lazy" class="aligncenter wp-image-103729 size-full" style="border: solid 1px #ccc" src="http://www.azalio.io/wp-content/uploads/2026/04/2026-bedrock-quickstart.jpg" alt="" width="1604" height="2560"><br /> To invoke the model through the Anthropic Claude Messages API, you can proceed as follows using the <code>anthropic[bedrock]</code> SDK package for a streamlined experience:</p>
<pre><code class="lang-python">from anthropic import AnthropicBedrockMantle

# Set this to a Region where the model is available, e.g. US East (N. Virginia)
REGION = "us-east-1"

# Initialize the Bedrock Mantle client (uses SigV4 auth automatically)
mantle_client = AnthropicBedrockMantle(aws_region=REGION)

# Create a message using the Messages API
message = mantle_client.messages.create(
    model="anthropic.claude-opus-4-7",
    max_tokens=2048,
    messages=[
        {"role": "user", "content": "Design a distributed architecture on AWS in Python that should support 100k requests per second across multiple geographic regions"}
    ]
)
print(message.content[0].text)</code></pre>
<p>You can also run the following command to invoke the model directly to <code>bedrock-runtime</code> endpoint using the AWS CLI and the Invoke API:</p>
<pre><code class="lang-bash">aws bedrock-runtime invoke-model \
  --model-id anthropic.claude-opus-4-7 \
  --region us-east-1 \
  --body '{"messages": [{"role": "user", "content": "Design a distributed architecture on AWS in Python that should support 100k requests per second across multiple geographic regions."}], "max_tokens": 512, "temperature": 0.5, "top_p": 0.9}' \
  --cli-binary-format raw-in-base64-out \
  invoke-model-output.txt</code></pre>
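<p>The CLI call above can also be sketched with the AWS SDK for Python (boto3). This is my own minimal illustration, not official sample code: the request body mirrors the Messages-format body from the CLI example, the model ID is the one used throughout this post, and the actual network call is shown only as a comment because it requires AWS credentials and model access.</p>

```python
import json

# Request body in the Anthropic Messages format, mirroring the CLI example above.
body = {
    "messages": [
        {
            "role": "user",
            "content": (
                "Design a distributed architecture on AWS in Python that should "
                "support 100k requests per second across multiple geographic regions."
            ),
        }
    ],
    "max_tokens": 512,
    "temperature": 0.5,
    "top_p": 0.9,
}
payload = json.dumps(body)

# With AWS credentials and model access configured, the request would be sent as:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.invoke_model(modelId="anthropic.claude-opus-4-7", body=payload)
#   print(json.loads(response["body"].read()))
print(payload[:30])
```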
<p>For more intelligent reasoning capability, you can use <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/claude-messages-adaptive-thinking.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Adaptive thinking</a> with Claude Opus 4.7, which lets Claude dynamically allocate thinking token budgets based on the complexity of each request.</p>
<p>To learn more, visit the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-anthropic-claude-messages.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Anthropic Claude Messages API</a> and check out <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/api-inference-examples-claude-messages-code-examples.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">code examples</a> for multiple use cases and a variety of programming languages.</p>
<p><strong>Things to know<br /></strong>Let me share some important technical details that I think you’ll find useful.</p>
<ul>
<li><strong>Choosing APIs</strong>: You can choose from a variety of Bedrock APIs for model inference, as well as the Anthropic Messages API. The Bedrock-native Converse API supports multi-turn conversations and Guardrails integration. The Invoke API provides direct model invocation and lowest-level control.</li>
<li><strong>Scaling and capacity</strong>: Bedrock’s new inference engine is designed to rapidly provision and serve capacity across many different models. When accepting requests, we prioritize keeping steady state workloads running, and ramp usage and capacity rapidly in response to changes in demand. During periods of high demand, requests are queued, rather than rejected. Up to 10,000 requests per minute (RPM) per account per Region are available immediately, with more available upon request.</li>
</ul>
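<p>To make the API choice concrete, here is a hedged sketch of the shape of a Converse API request as you would pass it to boto3. The typed content blocks under <code>messages</code> and the <code>inferenceConfig</code> section follow the Converse request format; the model ID is the one assumed throughout this post, and the prompt text is invented for illustration.</p>

```python
# Sketch of a Converse API request; the model ID and prompt are illustrative.
request = {
    "modelId": "anthropic.claude-opus-4-7",
    "messages": [
        # Converse wraps each turn's content in typed blocks such as {"text": ...}.
        {"role": "user", "content": [{"text": "Summarize our architecture review."}]}
    ],
    "inferenceConfig": {"maxTokens": 512, "temperature": 0.5},
}

# With AWS credentials and model access configured:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   reply = client.converse(**request)
#   print(reply["output"]["message"]["content"][0]["text"])
print(request["modelId"])
```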
<p><strong><u>Now available</u></strong><br /> Anthropic’s Claude Opus 4.7 model is available today in the US East (N. Virginia), Asia Pacific (Tokyo), Europe (Ireland), and Europe (Stockholm) Regions; check the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">full list of Regions</a> for future updates. To learn more, visit the <a href="https://aws.amazon.com/bedrock/anthropic/?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Claude by Anthropic in Amazon Bedrock</a> page and the <a href="https://aws.amazon.com/bedrock/pricing/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Bedrock pricing</a> page.</p>
<p>Give Anthropic’s Claude Opus 4.7 a try in the <a href="https://console.aws.amazon.com/bedrock?trk=d8ec3b19-0f37-4f8c-8c12-189f913e205c&amp;sc_channel=el">Amazon Bedrock console</a> today and send feedback to <a href="https://repost.aws/tags/TAQeKlaPaNRQ2tWB6P7KrMag/amazon-bedrock">AWS re:Post for Amazon Bedrock</a> or through your usual AWS Support contacts.</p>
<p>— <a href="https://twitter.com/channyun">Channy</a></p>
</div><p>The post <a href="https://www.azalio.io/introducing-anthropics-claude-opus-4-7-model-in-amazon-bedrock/">Introducing Anthropic’s Claude Opus 4.7 model in Amazon Bedrock</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The two-pass compiler is back – this time, it’s fixing AI code generation</title>
		<link>https://www.azalio.io/the-two-pass-compiler-is-back-this-time-its-fixing-ai-code-generation/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Thu, 16 Apr 2026 09:59:25 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<guid isPermaLink="false">https://www.azalio.io/the-two-pass-compiler-is-back-this-time-its-fixing-ai-code-generation/</guid>

					<description><![CDATA[<p>If you came up building software in the 1990s or early 2000s, you remember the visceral satisfaction of determinism. You wrote code. The compiler analyzed it, optimized it, and emitted precisely the machine instructions you expected. Same input, same output. Every single time. There was an engineering rigor to it that shaped how an entire generation [&#8230;]</p>
<p>The post <a href="https://www.azalio.io/the-two-pass-compiler-is-back-this-time-its-fixing-ai-code-generation/">The two-pass compiler is back – this time, it’s fixing AI code generation</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>
<div id="remove_no_follow">
<div class="grid grid--cols-10@md grid--cols-8@lg article-column">
<div class="col-12 col-10@md col-6@lg col-start-3@lg">
<div class="article-column__content">
<section class="wp-block-bigbite-multi-title">
<div class="container"></div>
</section>
<p>If you came up building software in the 1990s or early 2000s, you remember the visceral satisfaction of determinism. You wrote code. The compiler analyzed it, optimized it, and emitted precisely the machine instructions you expected. Same input, same output. Every single time. There was an engineering rigor to it that shaped how an entire generation of developers thought about building systems.</p>
<p>Then <a href="https://www.infoworld.com/article/2335213/large-language-models-the-foundations-of-generative-ai.html">large language models</a> (LLMs) arrived and, almost overnight, code generation became a stochastic process. Prompt an AI model twice with identical inputs and you’ll get structurally different outputs—sometimes brilliant, sometimes subtly broken, occasionally hallucinated beyond repair. For quick prototyping that’s fine. For enterprise-grade software—the kind where a misplaced <code>null</code> check costs you a production outage at 2am—it’s a non-starter.</p>
<p>We stared at this problem for a while. And then something clicked. It felt familiar, like a pattern we’d encountered before, buried somewhere in our CS fundamentals. Then it hit us: the two-pass compiler.</p>
<h2 class="wp-block-heading" id="a-quick-refresher">A quick refresher</h2>
<p>Early compilers were single-pass: read source, emit machine code, hope for the best. They were fast but brittle—limited optimization, poor error handling, fragile output. The industry’s answer was the multi-pass compiler, and it fundamentally changed how we build languages. The first pass analyzes, parses, and produces an intermediate representation (IR). The second pass optimizes and generates the final target code. This separation of concerns is what gave us C, C++, Java—and frankly, modern software engineering as we know it.</p>
<div class="extendedBlock-wrapper block-coreImage undefined">
<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" src="http://www.azalio.io/wp-content/uploads/2026/04/2pass-architecture.png" alt="2-pass architecture" class="wp-image-4154640" width="1024" height="669" sizes="auto, (max-width: 1024px) 100vw, 1024px"><figcaption class="wp-element-caption">
<p>The structural parallel between classical two-pass compilation and AI-driven code generation.</p>
</figcaption></figure>
<p class="imageCredit">WaveMaker</p>
</div>
<p>The analogy to AI code generation is almost eerily direct. Today’s LLM-based tools are, architecturally, single-pass compilers. You feed in a prompt, the model generates code, and you get whatever comes out the other end. The quality ceiling is the model itself. There’s no intermediate analysis, no optimization pass, no structural validation. It’s 1970s compiler design with 2020s marketing.</p>
<h2 class="wp-block-heading" id="applying-the-two-pass-model-to-ai-code-generation">Applying the two-pass model to AI code generation</h2>
<p>Here’s where it gets interesting. What if, instead of asking an LLM to go from prompt to production code in one shot, you split the process into two architecturally distinct passes—just like the compilers that built our industry?</p>
<p>Pass 1 is where the LLM does what LLMs are genuinely good at: understanding intent, decomposing design, and reasoning about structure. The model analyzes the design spec, identifies components, maps APIs, resolves layout semantics—and emits an intermediate representation, an IR. Not HTML. Not Angular or React. A well-defined meta-language markup that captures what needs to be built without committing to how.</p>
<p>This is critical. By constraining the LLM’s output to a structured meta-language rather than raw framework code, you eliminate entire categories of failure. The model can’t inject malformed <code>&lt;script&gt;&lt;/script&gt;</code> tags if it’s not emitting HTML. It can’t hallucinate nonexistent React hooks if it’s outputting component descriptors. You’ve reduced the stochastic surface area dramatically.</p>
<p>Pass 2 is entirely deterministic. A platform-level code generator—no LLM involved—takes that validated intermediate markup and emits production-grade Angular, React, or React Native code. This is the pass that plugs in battle-tested libraries, enforces security patterns, and applies framework-specific optimizations. Same IR in, same code out. Every time.</p>
<p>First pass gives you speed. Second pass gives you reliability. The separation of concerns is what makes it work.</p>
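<p>To make the pattern concrete, here is a toy sketch of the two passes in Python. The IR schema, field whitelist, and renderer are invented for this illustration and are not WaveMaker’s actual meta-language; the point is only that validation happens at the IR boundary and that Pass 2 is a pure, deterministic function of the IR.</p>

```python
# Toy illustration of the two-pass idea; the IR schema and renderer are invented.
ALLOWED = {"type", "label"}          # IR fields the validator accepts
KNOWN_TYPES = {"button", "input"}    # component types Pass 2 knows how to emit

def validate(ir):
    """IR boundary: strip unknown fields and drop hallucinated component types."""
    clean = []
    for comp in ir:
        comp = {k: v for k, v in comp.items() if k in ALLOWED}
        if comp.get("type") in KNOWN_TYPES:
            clean.append(comp)
    return clean

def render(ir):
    """Pass 2: deterministic code generation. Same IR in, same code out."""
    templates = {
        "button": "<button>{label}</button>",
        "input": '<input placeholder="{label}" />',
    }
    return "\n".join(templates[c["type"]].format(label=c["label"]) for c in ir)

# Pretend Pass 1 (the LLM) produced this IR, including one hallucinated
# component type and one unknown field.
ir_from_llm = [
    {"type": "button", "label": "Submit"},
    {"type": "hologram", "label": "Nope"},               # unknown type: dropped
    {"type": "input", "label": "Email", "onHack": "x"},  # unknown field: stripped
]

print(render(validate(ir_from_llm)))
```

Run it twice with the same IR and you get byte-identical output; the hallucinated component and the unknown field never reach the generated code.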
<h2 class="wp-block-heading" id="why-this-matters-now">Why this matters now</h2>
<p>The advantages of this architecture compound in exactly the ways that matter for enterprise development. The meta-language IR becomes your durable context for iterative development—you’re not re-prompting the LLM from scratch every time you refine a component. Security concerns like script injection and SQL injection are structurally eliminated, not patched after the fact. Hallucinated properties and tokens get caught and stripped at the IR boundary before they ever reach generated code. And because Pass 2 is deterministic, you get reproducible, auditable, deployable output.</p>
<figure class="wp-block-table">
<div class="overflow-table-wrapper">
<table class="has-fixed-layout">
<tbody>
<tr>
<td><strong>Pass 1 — LLM-powered</strong>
<p>• Translates design/spec to structured components and design tokens<br />• Enables iterative dev with meta-markup as persistent context</p>
<p>• Eliminates script/SQL injection by design</p></td>
<td><strong>Pass 2 — Deterministic</strong>
<p>• Generates optimized, secure, performant framework code<br />• Validates and strips hallucinated markup and tokens</p>
<p>• Plugs in battle-tested libraries for reliability</p></td>
</tr>
</tbody>
</table>
</div>
</figure>
<p>If you’ve spent your career building systems where correctness isn’t optional, this should resonate. The industry spent decades learning that single-pass compilation couldn’t produce reliable software at scale. The two-pass architecture wasn’t just an optimization, but an engineering philosophy: separate understanding from generation, validate before you emit, and never let a single phase carry the entire burden of correctness.</p>
<p>We’re at the same inflection point with AI code generation right now. The models are powerful. The architecture around them has been naive. The fix isn’t to wait for a smarter model. It’s to apply the engineering discipline we’ve always known, and build systems where stochastic brilliance and deterministic reliability each do what they do best—in the right pass, at the right time.</p>
<p><a href="https://www.linkedin.com/posts/vikramsrivats_hybridai-deterministicoutcomes-agenticcodegen-activity-7429646735469305857-9GHR/">Deterministic software engineering is cool again</a>. Turns out it never really left.</p>
<p><em>—</em></p>
<p><a href="https://www.infoworld.com/blogs/new-tech-forum"><strong><em>New Tech Forum</em></strong></a><em><strong> provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all </strong></em><em><strong>inquiries to </strong></em><a href="mailto:doug_dineley@foundryco.com"><strong><em>doug_dineley@foundryco.com</em></strong></a><em><strong>.</strong></em></p>
</div>
</div>
</div>
</div>
</div><p>The post <a href="https://www.azalio.io/the-two-pass-compiler-is-back-this-time-its-fixing-ai-code-generation/">The two-pass compiler is back – this time, it’s fixing AI code generation</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Ease into Azure Kubernetes Application Network</title>
		<link>https://www.azalio.io/ease-into-azure-kubernetes-application-network/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Thu, 16 Apr 2026 09:59:23 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<guid isPermaLink="false">https://www.azalio.io/ease-into-azure-kubernetes-application-network/</guid>

					<description><![CDATA[<p>If you’re using Kubernetes, especially a managed version like Azure Kubernetes Service (AKS), you don’t need to think about the underlying hardware. All you need to do is build your application and it should run, its containers managed by the service’s orchestrator. At least that’s the theory. However, implementing a platform that abstracts your code [&#8230;]</p>
<p>The post <a href="https://www.azalio.io/ease-into-azure-kubernetes-application-network/">Ease into Azure Kubernetes Application Network</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>
<div id="remove_no_follow">
<div class="grid grid--cols-10@md grid--cols-8@lg article-column">
<div class="col-12 col-10@md col-6@lg col-start-3@lg">
<div class="article-column__content">
<section class="wp-block-bigbite-multi-title">
<div class="container"></div>
</section>
<p>If you’re using <a href="https://www.infoworld.com/article/2266945/what-is-kubernetes-scalable-cloud-native-applications.html">Kubernetes</a>, especially a managed version like <a href="https://www.infoworld.com/article/4058764/smoother-kubernetes-sailing-with-aks-automatic.html">Azure Kubernetes Service</a> (AKS), you don’t need to think about the underlying hardware. All you need to do is build your application and it should run, its containers managed by the service’s orchestrator.</p>
<p>At least that’s the theory. However, implementing a platform that abstracts your code from the servers and network that support it brings its own problems, and a whole new discipline. <a href="https://www.infoworld.com/article/2338225/what-is-platform-engineering-evolving-devops.html">Platform engineers</a> fill the gap between software and hardware, supporting security and networking, as well as managing storage and other key services.</p>
<p>Kubernetes is part of an ecosystem of <a href="https://www.infoworld.com/article/2255318/what-is-cloud-native-the-modern-way-to-develop-software.html">cloud-native services</a> that provide the supporting framework for running and managing scalable distributed systems, including the tools needed to package and deploy applications, as well as components that extend the functionality of Kubernetes’ own nodes and pods.</p>
<p>Key components of this growing ecosystem are the various <a href="https://www.infoworld.com/article/2261159/what-is-a-service-mesh-easier-container-networking.html">service meshes</a>. These offer a way to manage connectivity between nodes and between your applications and the outside network, with tools for handling basic network security. Often implemented as “sidecar” containers, running alongside Kubernetes pods, these network proxies can consume added resources as your applications scale. That means more configuration and management, ensuring that configurations are kept up-to-date and that secrets are secure.</p>
<h2 class="wp-block-heading" id="istio-goes-ambient">Istio goes ambient</h2>
<p>One of the key service mesh implementations, <a href="https://www.infoworld.com/article/2258313/what-is-istio-the-kubernetes-service-mesh-explained.html">Istio</a>, has developed an alternate way of operating, <a href="https://istio.io/latest/docs/ambient/">what the project calls “ambient mode”</a>. Here, instead of having individual sidecars for each pod, your service mesh is implemented as per-node proxies or as a single proxy that supports an entire Kubernetes namespace. It’s an approach that allows you to start implementing a service mesh without increasing the complexity of your platform, making it easy to go from a basic development Kubernetes implementation to a production environment without having to change your application pods.</p>
<p>It’s called ambient mode because there’s no need to add new service mesh elements as your application scales. Instead, the service mesh is always there, and your pods simply join it and take advantage of the existing configuration. The resulting implementation is both easier to use and easier to understand.</p>
<p><a href="https://www.infoworld.com/article/2260999/introducing-the-service-mesh-interface.html?utm=hybrid_search">Microsoft has used Istio as part of Azure Kubernetes Service for many years</a>. Istio is one of a suite of open-source tools that provide the backbone of Azure’s cloud-native computing platform.</p>
<h2 class="wp-block-heading" id="introducing-azure-kubernetes-application-network">Introducing Azure Kubernetes Application Network</h2>
<p>So, it’s not surprising to learn that Microsoft is <a href="https://opensource.microsoft.com/blog/2026/03/24/whats-new-with-microsoft-in-open-source-and-kubernetes-at-kubecon-cloudnativecon-europe-2026/">using Istio’s ambient mesh as the basis of Azure Kubernetes Application Network</a>. The new service (<a href="https://learn.microsoft.com/en-us/azure/application-network/">available in preview</a>) allows application developers to add managed network services to their applications without needing the support of a platform engineering team to implement a service mesh. It will even help you migrate away from the now-deprecated ingress-nginx by providing access to the recommended Kubernetes Gateway API without needing more sidecars, letting you use your existing ingress-nginx configurations while you complete your migration.</p>
<p><a href="https://learn.microsoft.com/en-us/azure/application-network/overview">Microsoft describes the preview of Azure Kubernetes Application Network</a> as “a fully managed, ambient-based service network solution for Azure Kubernetes Service (AKS).” The underlying data and control planes are managed by AKS, so all you need to do is connect your AKS clusters to an Application Network and AKS will then manage the service mesh for you, without any changes to your applications.</p>
<p>Like other implementations of Istio’s ambient mesh, there are two levels to Application Network: a core set of node-level application proxies that handle connectivity and security for application services, and an optional set of lower-level proxies that support routing and apply network policies, acting as a software-defined network inside your Kubernetes environment.</p>
<p>This approach lets you build and test a Kubernetes application on your local development hardware without using Application Network features, then deploy it to AKS along with the required network configuration — simplifying both development and deployment. It also reduces development overheads, both in compute and developer resources.</p>
<h2 class="wp-block-heading" id="using-azure-kubernetes-application-network">Using Azure Kubernetes Application Network</h2>
<p>Once deployed, Application Network connects the services in your application securely, automatically managing encrypted connections and the required certificates. It can also support unencrypted connections for when you aren’t sending confidential data and don’t need the associated overhead. As the service is managed by AKS, new pods are automatically provisioned as they are deployed, with the ambient mesh supporting both scale-up and scale-down operations.</p>
<p><a href="https://learn.microsoft.com/en-us/azure/application-network/architecture">The architecture of Application Network is much like that of an Istio ambient mesh</a>. The main difference is that the service’s management and control planes are managed by Azure, with application owners limited to working with the service’s data plane, configuring operations and setting policies for their application workloads. Azure’s control of the management plane automates certificate management, ensuring that connections stay secure and there is little risk of certificate expiration, using the tools built into Azure Key Vault.</p>
<p>The Application Network data plane holds the proxies and gateways used by the service mesh, and these are deployed when the service is launched, along with the required Kubernetes configurations. The key to operation is <a href="https://github.com/istio/ztunnel">ztunnel</a>, a proxy that intercepts inter-service requests, secures the connection, and routes requests to another ztunnel running alongside the destination service. A gateway oversees connections between ztunnels running in remote clusters, allowing your service mesh to scale out with demand.</p>
<h2 class="wp-block-heading" id="building-your-first-ambient-service-mesh-in-aks">Building your first ambient service mesh in AKS</h2>
<p><a href="https://learn.microsoft.com/en-us/azure/application-network/get-started">Getting started with Azure Kubernetes Application Network</a> requires the Azure CLI. If you’re working with an existing AKS cluster, then you will need to enable integration with Microsoft Entra and enable OpenID Connect.</p>
<p>As the Application Network service is in preview, start by registering it in your account. This can take some time, but once it’s registered you can install the AppNet CLI extension that’s used to manage and control Application Network for your AKS clusters. You can now start to set up the ambient service mesh, either creating new clusters to use it, or adding the service mesh to existing AKS deployments.</p>
<p>Starting from scratch is the easiest way, as it ensures that you’re running in the same tenant. AKS clusters and Application Network can share a resource group, but it’s not necessary; you’re free to use separate resource groups for management.</p>
<p>The <code>appnet</code> command makes it easy to create an Application Network from the command line; all you need is a name for the network, a resource group, a location, and an identity type. Once you’ve run the command to create your ambient mesh, wait for the mesh to be provisioned before joining a cluster to your network. Joining simply needs the network’s resource group, a name for the member, and the cluster’s own resource group and cluster name. At the same time, you define how the network will be managed, i.e. whether you manage upgrades yourself or leave Azure to manage them for you. Additional clusters can be added to the network the same way.</p>
<p>With an Application Network and member clusters in place, the next step is to use Kubernetes’ own tooling to add support for the ambient mesh to your applications. <a href="https://learn.microsoft.com/en-us/azure/application-network/traffic-management-use-cases">Microsoft provides a useful example</a> that shows how to use Application Network with the Kubernetes Gateway API to manage ingress. You need to use <code>kubectl</code> and <code>istioctl</code> commands to enable gateways and verify their operation, adding services and ensuring that they are visible to each other through their respective ztunnels.</p>
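<p>The ingress resources involved here are standard Kubernetes Gateway API objects. As a rough sketch (the namespace, service name, and port are placeholders, and the <code>istio</code> gateway class assumes Application Network exposes the same class name as upstream ambient Istio):</p>
<pre><code class="language-yaml">apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: demo-gateway        # placeholder name
  namespace: demo           # placeholder namespace
spec:
  gatewayClassName: istio   # assumption: same class as upstream ambient Istio
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: Same
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: demo-route
  namespace: demo
spec:
  parentRefs:
  - name: demo-gateway
  rules:
  - backendRefs:
    - name: demo-service    # placeholder backend service
      port: 8080
</code></pre>
<p>Applying these with <code>kubectl apply</code> and then checking mesh state with <code>istioctl</code> follows the flow Microsoft’s example describes.</p>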
<h2 class="wp-block-heading" id="securing-applications-with-policies">Securing applications with policies</h2>
<p>Policies can be used to control access from the application ingress to specific services as well as between services, reducing the risk of breaches and ensuring that you control how traffic is routed in your application. These policies can be locked down to ensure only specific methods can be used, for example allowing only HTTP GET operations on a read-only service, and POST where data needs to be delivered. Other options can be used to enforce OpenID Connect authorization at a mesh level.</p>
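<p>In upstream Istio, which Application Network builds on, that kind of method restriction is expressed as an <code>AuthorizationPolicy</code>. A sketch (the namespace and labels are placeholders, and Application Network’s own policy surface may differ in detail):</p>
<pre><code class="language-yaml"># Allow only HTTP GET against a read-only service
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: read-only-access
  namespace: demo            # placeholder namespace
spec:
  selector:
    matchLabels:
      app: catalog           # placeholder workload label
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["GET"]
</code></pre>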
<p>Not all Azure Kubernetes clusters are supported in the preview, which is only available in Azure’s largest regions. For now, Application Network won’t work with private clusters or with Windows node pools. Once running, you can’t switch upgrade modes, and, as it’s based on Istio, you can’t enable your own Istio service meshes in the cluster. These requirements aren’t showstoppers, and you should be able to start experimenting with the service while it’s still in preview.</p>
<p>AKS Application Network is a powerful tool that helps simplify and secure the process of building and running inter-cluster networks in an AKS application. As it is an ambient service, it can scale as necessary and provide secure bridges between clusters. By working at a Kubernetes level, it’s possible to use Application Network to provide policy-driven production network rules, allowing developers to build and test code in unrestricted environments before moving to test and production clusters.</p>
<p>As Application Network uses familiar Kubernetes and Istio constructs, it’s possible to build configurations into Helm charts and other deployment tools, ensuring configurations are part of your build artifacts and that network configurations and policies are delivered with your code every time you push a new build – without needing platform engineering support.</p>
</div>
</div>
</div>
</div>
</div><p>The post <a href="https://www.azalio.io/ease-into-azure-kubernetes-application-network/">Ease into Azure Kubernetes Application Network</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The agent tier: Rethinking runtime architecture for context-driven enterprise workflows</title>
		<link>https://www.azalio.io/the-agent-tier-rethinking-runtime-architecture-for-context-driven-enterprise-workflows/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Thu, 16 Apr 2026 09:59:22 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<guid isPermaLink="false">https://www.azalio.io/the-agent-tier-rethinking-runtime-architecture-for-context-driven-enterprise-workflows/</guid>

					<description><![CDATA[<p>Most large enterprises run on deterministic software foundations. Business rules are embedded within workflows, state transitions are modeled explicitly and escalation paths are defined in advance. System behavior is specified in advance, making outcomes predictable. Meaningful scenarios are encoded as conditional branches and validated before release. For decades, this approach has delivered the reliability and [&#8230;]</p>
<p>The post <a href="https://www.azalio.io/the-agent-tier-rethinking-runtime-architecture-for-context-driven-enterprise-workflows/">The agent tier: Rethinking runtime architecture for context-driven enterprise workflows</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>
<div id="remove_no_follow">
<div class="grid grid--cols-10@md grid--cols-8@lg article-column">
<div class="col-12 col-10@md col-6@lg col-start-3@lg">
<div class="article-column__content">
<section class="wp-block-bigbite-multi-title">
<div class="container"></div>
</section>
<p>Most large enterprises run on deterministic software foundations. Business rules are embedded within workflows, state transitions are modeled explicitly and escalation paths are defined in advance. System behavior is specified in advance, making outcomes predictable. Meaningful scenarios are encoded as conditional branches and validated before release. For decades, this approach has delivered the reliability and control required for mission-critical operations.</p>
<p>This model assumes most situations can be anticipated and expressed in logic. It works well when variation is limited and conditions remain manageable. If new requirements can be added as workflow branches, the structure holds. It begins to strain when processes must respond to context — not just thresholds, but the broader circumstances of a case.</p>
<p>In my experience, customer onboarding in banking makes this tension visible. Onboarding sits at the intersection of digital channels, fraud detection, regulatory obligations and revenue goals. It must satisfy Know Your Customer (KYC) and Anti-Money Laundering (AML) requirements while minimizing abandonment and resisting synthetic identity attacks.</p>
<p>During my involvement in digital account opening initiatives at a major North American bank, cross-functional design sessions repeatedly surfaced the same trade-off. Product teams pushed to reduce friction and improve conversion while fraud teams responded to bot-driven account creation and mule schemes with additional safeguards. Compliance insisted regulatory standards be met without exception and engineering absorbed each new requirement into the orchestration framework. Individually, these decisions were rational. Collectively, they made the workflow more complex.</p>
<p>The underlying challenge was not a shortage of rules but expressing contextual judgment within a static branching structure. Differentiation occurred only at predefined checkpoints and information was often collected in bulk rather than adapting to known facts. Collect too little and the institution risks regulatory exposure or fraud; collect too much and abandonment rises. Attempt to encode every variation as additional branches and the workflow becomes increasingly fragile.</p>
<p>Adaptive scoring and contextual models can complement deterministic logic. Rather than enumerating every scenario in advance, they help determine whether additional verification is warranted or whether progression can continue with existing evidence. Deterministic workflows still enforce regulatory requirements and final state transitions; the adaptive layer informs how the system navigates toward those outcomes.</p>
<p>Although onboarding illustrates the issue clearly, the same pattern appears in credit adjudication, claims processing and dispute management. As adaptive signals enter these workflows, the architectural question shifts from adding branches to deciding where contextual judgment should reside. In my view, what is missing is not another conditional path but a different runtime model — one that interprets context and determines the next appropriate action within defined limits. This architectural layer, which I refer to as the <strong>Agent Tier</strong>, separates contextual reasoning from deterministic execution.</p>
<h2 class="wp-block-heading" id="introducing-the-agent-tier-separating-execution-from-contextual-judgment">Introducing the agent tier: Separating execution from contextual judgment</h2>
<p>In many enterprises, orchestration logic does not reside in a formal workflow platform. It is embedded in single-page applications (SPAs), implemented in APIs, supported by rule engines and coordinated through service calls across systems. User journeys are assembled through API calls in predefined sequences, with eligibility or routing conditions evaluated at specific checkpoints.</p>
<p>This approach works well for repeatable, well-understood paths. When inputs are complete, risk signals are low and no exception handling is required, the clean path can be executed deterministically. State transitions are known in advance. Service calls follow predictable patterns. Human tasks are invoked at predefined points.</p>
<p>The difficulty arises when the workflow encounters ambiguity. Inputs may be incomplete. Signals may require interpretation rather than simple threshold comparison. Multiple systems may need to be coordinated in a sequence not explicitly modeled. Attempting to encode every such situation into SPA logic or orchestration APIs leads to increasingly complex condition trees and harder-to-maintain code. Instead of expanding hard-coded branching indefinitely, the runtime separates into two complementary lanes: repeatable execution and contextual reasoning.</p>
<p>Conceptually, the enterprise runtime evolves into a two-lane structure, illustrated below.</p>
<div class="extendedBlock-wrapper block-coreImage undefined">
<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" src="https://b2b-contenthub.com/wp-content/uploads/2026/04/enterprise-runtime-architecture.png?w=1024" alt="Enterprise runtime architecture: Deterministic execution and agentic reasoning." class="wp-image-4158538" width="1024" height="559" sizes="auto, (max-width: 1024px) 100vw, 1024px"></figure>
<p class="imageCredit">Nitesh Varma</p>
</div>
<p>The deterministic lane retains control over authoritative state changes and rule enforcement. It manages eligibility checks, applies regulatory criteria, invokes known service sequences and finalizes cases in core systems. It continues to handle most predictable scenarios.</p>
<p>The runtime invokes the Agent Tier when contextual judgment is required. This may occur when additional evidence must be gathered before a rule can be evaluated, when multiple signals must be interpreted together rather than independently or when coordination across systems cannot be expressed through a fixed sequence. The Agent Tier evaluates available actions and returns a bounded recommendation that allows deterministic execution to resume.</p>
<p>The movement between lanes is explicit. The deterministic workflow hands off when it reaches a point where static branching is insufficient. The Agent Tier performs synthesis or dynamic coordination. Once the Agent Tier produces a structured result, such as a completed evidence bundle, a validated set of inputs or a recommended next step, control returns to the deterministic lane for controlled progression and final state transition.</p>
<p>This separation allows incremental adoption. Existing SPA logic and orchestration APIs remain intact; ambiguity points can be redirected to the Agent Tier without destabilizing deterministic execution.</p>
<h2 class="wp-block-heading" id="what-happens-inside-the-agent-tier">What happens inside the agent tier</h2>
<p>The Agent Tier is not a single “AI decision.” It is a structured reasoning cycle that combines interpretation with controlled action.</p>
<p>When the deterministic workflow hands off a case, the Agent Tier interprets the current situation by assembling available context — user inputs, existing customer relationships, fraud signals, journey state and relevant policy constraints. Based on that composite view, it selects the next action from an approved set of enterprise capabilities. That action might involve retrieving additional information, invoking a verification service, requesting clarification from the user or coordinating multiple systems in sequence. Once the action completes, the result is evaluated and the cycle continues until deterministic execution can resume.</p>
<p>This alternating pattern of reasoning and action is common in agentic system design. In technical literature, it is often referred to as the <a href="https://arxiv.org/abs/2210.03629"><em>ReAct</em></a> (Reason and Act) pattern, which interleaves reasoning steps with structured action selection. Rather than attempting to reach a final answer in a single pass, the system gathers evidence, reassesses its position and proceeds incrementally. In enterprise settings, this pattern becomes a disciplined way to manage contextual interpretation.</p>
<p>Reasoning in the Agent Tier does not involve free-form system access. It proceeds through approved operations exposed via governed interfaces. In practice, these tools are enterprise primitives such as:</p>
<ul class="wp-block-list">
<li>APIs that retrieve or update enterprise data</li>
<li>event triggers that initiate downstream processing</li>
<li>workflow actions that advance a case</li>
<li>controlled service calls into core or third-party systems</li>
</ul>
<p>Each operation is defined by explicit input/output contracts and permission boundaries and carries metadata describing its purpose and constraints. The runtime selects from this governed catalog — a mechanism commonly referred to as <a href="https://platform.openai.com/docs/guides/function-calling">tool calling</a>. Some <a href="https://devblogs.microsoft.com/semantic-kernel/give-your-agents-domain-expertise-with-agent-skills-in-microsoft-agent-framework/">frameworks</a> further group related tools into higher-level capabilities known as skills, reusable functions for objectives such as identity verification or KYC evidence assembly.</p>
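<p>A governed catalog of this kind can be sketched as a registry of operations with explicit contracts and permission checks. This is an illustrative sketch only, not any specific framework’s API; all names below are hypothetical.</p>
<pre><code class="language-python">from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    """A governed operation: explicit contract, permissions and purpose metadata."""
    name: str
    purpose: str
    required_inputs: set[str]
    allowed_roles: set[str]
    run: Callable[[dict], dict]

class ToolCatalog:
    """The agent tier selects actions only from this approved set."""
    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def invoke(self, name: str, caller_role: str, inputs: dict) -> dict:
        tool = self._tools.get(name)
        if tool is None:
            raise KeyError(f"{name} is not an approved operation")
        if caller_role not in tool.allowed_roles:
            raise PermissionError(f"{caller_role} may not call {name}")
        missing = tool.required_inputs - inputs.keys()
        if missing:
            raise ValueError(f"missing inputs: {sorted(missing)}")
        return tool.run(inputs)

# Hypothetical example: a verification API exposed as a governed tool.
catalog = ToolCatalog()
catalog.register(Tool(
    name="verify_identity",
    purpose="Invoke the identity-verification service",
    required_inputs={"customer_id"},
    allowed_roles={"onboarding_agent"},
    run=lambda inputs: {"verified": inputs["customer_id"].startswith("C")},
))

result = catalog.invoke("verify_identity", "onboarding_agent", {"customer_id": "C-1001"})
</code></pre>
<p>The point of the sketch is the shape: every call passes through one choke point that enforces the contract, which is what makes later auditing and traceability tractable.</p>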
<p>Before control returns to the deterministic lane, the agentic runtime can also perform a structured self-check. It can verify that required conditions are satisfied, confirm alignment with policy constraints and ensure that any necessary approvals have been identified. In technical discussions, this is often described as <a href="https://arxiv.org/abs/2303.11366"><em>reflection</em></a>.</p>
<p>Taken together, these patterns do not introduce unchecked autonomy. They provide a structured way to manage contextual synthesis and dynamic coordination without allowing adaptive logic to diffuse across SPA code and orchestration services. Deterministic systems continue to enforce authoritative state transitions. The Agent Tier prepares the conditions under which those transitions occur.</p>
<p>In many implementations, the Agent Tier does not directly control the workflow. Instead, it recommends the next step based on the available context. The deterministic tier remains responsible for execution. After each step is completed — retrieving evidence, invoking a verification service or preparing a review case — the updated context is returned to the Agent Tier, which evaluates the new state and recommends the next action. In this model, contextual reasoning informs progression while deterministic systems continue to enforce authoritative state transitions.</p>
<p>Returning to the onboarding example, the Agent Tier changes how the journey adapts to each applicant. The deterministic tier still executes core steps such as creating the customer profile, enforcing regulatory checks and committing account state in core systems. The Agent Tier evaluates the evolving context — customer relationships, fraud signals, identity verification results and available documentation — and recommends whether the workflow can proceed along the clean path, trigger additional verification or escalate to manual review. The result is not a new onboarding process but a workflow that adapts its progression dynamically while preserving the deterministic controls required for regulated operations.</p>
<p>Conceptually, the interaction between contextual reasoning and deterministic execution can be understood as a simple runtime loop, as illustrated below.</p>
<div class="extendedBlock-wrapper block-coreImage undefined">
<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" src="https://b2b-contenthub.com/wp-content/uploads/2026/04/context-driven-workflow-loop.png?w=1024" alt="Context-driven workflow loop." class="wp-image-4158539" width="1024" height="557" sizes="auto, (max-width: 1024px) 100vw, 1024px"></figure>
<p class="imageCredit">Nitesh Varma</p>
</div>
<p>The workflow progresses through a continuous loop in which contextual reasoning recommends the next step, deterministic systems execute it and the resulting context feeds back into the next recommendation.</p>
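<p>That loop can be sketched in a few lines: the agent tier only recommends, the deterministic tier executes the authoritative state change and the updated context feeds the next recommendation. A minimal illustration with hypothetical step names and a rule-based stand-in for the reasoning component:</p>
<pre><code class="language-python">def recommend_next_step(context: dict) -> str:
    """Agent tier (stand-in): interpret context, return a bounded recommendation."""
    if not context.get("identity_verified"):
        return "verify_identity"
    if context.get("fraud_score", 0) > 0.8:
        return "escalate_to_review"
    return "finalize_account"

def execute(step: str, context: dict) -> dict:
    """Deterministic tier: performs the authoritative state change for a step."""
    handlers = {
        "verify_identity": lambda c: {**c, "identity_verified": True},
        "escalate_to_review": lambda c: {**c, "status": "manual_review"},
        "finalize_account": lambda c: {**c, "status": "open"},
    }
    return handlers[step](context)

def run_workflow(context: dict) -> dict:
    # Loop: recommend -> execute -> re-evaluate, until a terminal state is set.
    while "status" not in context:
        step = recommend_next_step(context)
        context = execute(step, context)
    return context

final = run_workflow({"fraud_score": 0.2})
# final ends with status "open" after identity verification succeeds
</code></pre>
<p>Note that only <code>execute</code> mutates state; the recommendation function is free to be replaced by a model-backed component without changing where authoritative transitions happen.</p>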
<h2 class="wp-block-heading" id="governing-adaptive-systems-without-losing-control">Governing adaptive systems without losing control</h2>
<p>Separating contextual reasoning from deterministic execution clarifies responsibility but does not eliminate risk. In regulated environments, adaptive sequencing must operate within explicit governance boundaries.</p>
<p>The trust and operations overlay represents cross-cutting controls across the runtime: audit logging, approval gates, observability, security enforcement and lifecycle management. Within this structure, authoritative state transitions remain deterministic. Core systems continue to create client profiles, enforce limits, record disclosures and apply regulatory thresholds. The Agent Tier may influence progression, but final state changes occur only through controlled interfaces.</p>
<p>This containment boundary preserves explainability. When progression changes — for example, when additional verification is triggered or escalation occurs — institutions must be able to reconstruct why. Which signals were assembled? Which tools were invoked? What reasoning produced the recommendation? Concentrating contextual evaluation within a defined runtime layer makes that traceability possible.</p>
<p>Operational experience reinforces the need for these guardrails. Engineering discussions of <a href="https://shopify.engineering/building-production-ready-agentic-systems">production agent systems</a> emphasize constrained tool access, explicit action catalogs, bounded iteration and strong observability. In enterprise environments, contextual reasoning must likewise operate through governed tools and visible control points.</p>
<p>Approval gates remain part of this structure. High-risk actions such as credit issuance, account restrictions, large payments or regulatory filings may still require human authorization regardless of how the progression was determined. Reflection inside the Agent Tier can validate readiness, but authorization remains explicit.</p>
<p>Lifecycle discipline is equally important. Changes to models, identity providers, tool contracts or orchestration logic can alter workflow behavior. The Agent Tier should therefore operate as a governed platform capability with versioned reasoning logic, controlled tool catalogs and defined testing and rollback mechanisms.</p>
<p>The objective is not to eliminate probabilistic reasoning but to contain it within observable workflows and governed boundaries. As adaptive capabilities expand, the architectural question is not whether contextual reasoning will exist, but whether it is diffused across the stack or concentrated within a controlled runtime layer.</p>
<h2 class="wp-block-heading" id="architectural-leadership-in-an-adaptive-era">Architectural leadership in an adaptive era</h2>
<p>Introducing an Agent Tier adds a new runtime component, but enterprise complexity is not new; it is already dispersed across channel code, orchestration services, rule engines and proliferating conditional branches. The architectural question is not whether complexity exists, but where it resides. As fraud models evolve, verification technologies improve and regulatory expectations shift, adaptive capabilities will continue to expand.</p>
<p>I believe architecture must evolve from enumerating state transitions to defining containment boundaries. Deterministic systems enforce regulatory and operational requirements and remain responsible for authoritative state changes. Adaptive reasoning operates within explicit policy constraints and informs how workflows progress toward those outcomes. Instead of encoding every possible path in advance, enterprises can move toward context-driven workflows in which deterministic execution handles authoritative actions while the Agent Tier determines the next appropriate step based on evolving context.</p>
<p>This evolution does not require wholesale reinvention. It can begin with a single high-impact workflow where contextual variability is already evident. By introducing a disciplined runtime layer that mediates uncertainty while preserving deterministic control, organizations can modernize incrementally. In that sense, the Agent Tier is not simply a new feature; it is a structural response to a changing runtime reality, one that allows adaptive systems to operate within clear architectural and governance boundaries.</p>
<p><strong>This article is published as part of the Foundry Expert Contributor Network.</strong><br /><strong><a href="https://www.infoworld.com/expert-contributor-network/">Want to join?</a></strong></p>
</div>
</div>
</div>
</div>
</div><p>The post <a href="https://www.azalio.io/the-agent-tier-rethinking-runtime-architecture-for-context-driven-enterprise-workflows/">The agent tier: Rethinking runtime architecture for context-driven enterprise workflows</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>MuleSoft Agent Fabric adds new ways to keep AI agents in line</title>
		<link>https://www.azalio.io/mulesoft-agent-fabric-adds-new-ways-to-keep-ai-agents-in-line/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Wed, 15 Apr 2026 18:59:17 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<guid isPermaLink="false">https://www.13.127.245.118.nip.io/mulesoft-agent-fabric-adds-new-ways-to-keep-ai-agents-in-line/</guid>

					<description><![CDATA[<p>Salesforce first sought to tackle AI agent sprawl last year with Agent Fabric, a suite of capabilities and tools inside its MuleSoft AnyPoint Platform. Now, it’s seeking to further rein in unruly AI agents on its platform and those of other vendors too, with new governance tools and deterministic controls. When enterprises adopt multiple agentic [&#8230;]</p>
<p>The post <a href="https://www.azalio.io/mulesoft-agent-fabric-adds-new-ways-to-keep-ai-agents-in-line/">MuleSoft Agent Fabric adds new ways to keep AI agents in line</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>
<div id="remove_no_follow">
<div class="grid grid--cols-10@md grid--cols-8@lg article-column">
<div class="col-12 col-10@md col-6@lg col-start-3@lg">
<div class="article-column__content">
<section class="wp-block-bigbite-multi-title">
<div class="container"></div>
</section>
<p>Salesforce first sought to tackle AI <a href="https://www.cio.com/article/3987692/new-agentic-ai-tools-bring-new-threat-agent-sprawl.html">agent sprawl</a> last year with Agent Fabric, a suite of capabilities and tools inside its MuleSoft AnyPoint Platform. Now, it’s seeking to further rein in unruly AI agents on its platform and those of other vendors too, with new governance tools and deterministic controls.</p>
<p>When enterprises adopt multiple agentic AI products, they can end up with redundant or siloed workflows scattered across teams and platforms, undermining operational efficiency and complicating governance as they try to scale AI safely and responsibly.</p>
<p>Agent Fabric, <a href="https://www.cio.com/article/4063090/mulesoft-launches-agent-fabric-to-tackle-agent-sprawl-and-unify-enterprise-ai-workflows.html">introduced in September 2025</a>, started out as a place for enterprises to register, view, interconnect and govern agents. In January it added a <a href="https://www.cio.com/article/4113617/salesforces-agentforce-recalibration-raises-costs-and-complexity-for-cios.html">deterministic scripting tool</a> and the ability to <a href="https://www.infoworld.com/article/4119830/mulesoft-debuts-agent-scanners-to-rein-in-enterprise-ai-chaos-2.html">scan for new agents</a> and add them to the registry.</p>
<p>But enterprises still need more help to bring their AI agents under control, so Salesforce is adding more features.</p>
<p>First up is an expansion of the deterministic controls in the form of Agent Script for Agent Broker. Agent Broker is an intelligent routing service inside Agent Fabric, designed to connect agents across domains by dynamically matching user tasks with the best-fit agent. Salesforce said the controls will help developers codify workflows in multi-agent systems in order to ensure consistent and reliable outputs.</p>
<p>Rather than leave probabilistic agents to make all the decisions about how to resolve a problem, introducing an element of unpredictability, Agent Script for Agent Broker enables enterprises to steer some of the decision-making according to predetermined rules that require fewer computing resources than running a large language model.</p>
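<p>The underlying pattern, independent of Salesforce’s implementation, is a router that applies cheap deterministic rules first and consults a (potentially LLM-backed) probabilistic matcher only when no rule fires. A hedged sketch; all names below are hypothetical:</p>
<pre><code class="language-python">from typing import Callable, Optional

Rule = Callable[[str], Optional[str]]  # task text -> agent name, or None

def make_keyword_rule(keyword: str, agent: str) -> Rule:
    # Deterministic rule: fixed keyword match, no model call needed.
    return lambda task: agent if keyword in task.lower() else None

def route(task: str, rules: list[Rule], fallback: Callable[[str], str]) -> str:
    """Predetermined rules decide first; the model is consulted only if none match."""
    for rule in rules:
        agent = rule(task)
        if agent is not None:
            return agent
    return fallback(task)  # e.g. an LLM-based best-fit match

rules = [
    make_keyword_rule("refund", "billing-agent"),
    make_keyword_rule("password", "identity-agent"),
]
chosen = route("Customer wants a refund", rules, lambda t: "general-agent")
</code></pre>
<p>The saving comes from the fast path: every request handled by a rule avoids a model invocation entirely, which is the cost argument made above.</p>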
<p>That’s welcome news for <a href="https://www.linkedin.com/in/robert-kramer-58239b22/" target="_blank" rel="noreferrer noopener">Robert Kramer</a>, managing partner at KramerERP.</p>
<p>“Pure autonomous agents don’t necessarily work in production as enterprises need to ensure predictable outcomes. The deterministic controls should facilitate a secure handoff of control and rules while still allowing the model to engage in reasoning when it’s appropriate,” he said. “It’s a balance between control and flexibility, which is the norm for most real deployments.”</p>
<p>For <a href="https://www.linkedin.com/in/rebecca-wettemann/" target="_blank" rel="noreferrer noopener">Rebecca Wettemann</a>, principal analyst at Valoir, providing both deterministic and probabilistic options within Agent Fabric enables developers and agent builders to take the lower-cost route to more accurate and predictable results from agentic systems.</p>
<p>Enterprises will have to wait to put this deterministic orchestration feature into production, though: Still in beta testing, it won’t be generally available until June 2026.</p>
<h2 class="wp-block-heading" id="centralized-llm-governance-tackles-cost">Centralized LLM governance tackles cost</h2>
<p>Beyond orchestration, Salesforce has added a new LLM Governance capability in AI Gateway, the control layer within Agent Fabric that provides centralized visibility of token usage, costs, and data flows for third-party models.</p>
<p>Enterprises will be able to use LLM Governance, now generally available, to help them keep their AI operations on budget, Salesforce said.</p>
<p>This is becoming increasingly important as CIOs seek to bring disparate AI systems under centralized control and justify spiraling AI costs.</p>
<p>Info-Tech Research Group advisory fellow <a href="https://www.infotech.com/profiles/scott-bickley" target="_blank" rel="noreferrer noopener">Scott Bickley</a> warned that without centralized governance like this, different teams around a company may choose different models, negotiate their own <a href="https://www.infoworld.com/article/2269032/what-is-an-api-application-programming-interfaces-explained.html">API</a> contracts, and manage token budgets locally.</p>
<p>“This results in sprawling costs, inconsistent security postures, and no enterprise-wide policy enforcement,” he said. “By positioning AI Gateway as the choke point through which all LLM traffic flows, enterprises gain visibility into AI usage patterns, the models in use, purpose of the usage, and cost data.”</p>
<h2 class="wp-block-heading" id="mcp-additions-simplify-integration">MCP additions simplify integration</h2>
<p>Salesforce is also adding new Model Context Protocol features, including MCP Bridge, to make it easier to access legacy APIs, and Informatica-hosted MCPs, which it says will simplify how agents interact with enterprise data and APIs.</p>
<p>These could save developers time and simplify the building of cross-environment, multi-agent systems.</p>
<p>Bickley said MCP Bridge will help enterprises with thousands of legacy APIs (<a href="https://www.infoworld.com/article/2334742/what-is-rest-the-de-facto-web-architecture-standard.html">REST</a>, <a href="https://www.infoworld.com/article/2259022/rest-or-soap-in-a-cloud-native-environment.html">SOAP</a>, <a href="https://www.infoworld.com/article/2267992/what-is-graphql-better-apis-by-design.html">GraphQL</a>) built long before <a href="https://www.infoworld.com/article/4029634/what-is-model-context-protocol-how-mcp-bridges-ai-and-external-services.html">MCP</a> existed.</p>
<p>“Agents speaking MCP cannot call those APIs natively so they require wrappers around the API endpoint; this would be a massive engineering lift. MCP Bridge allows these APIs to be exposed as MCP-compatible tools without modifying the underlying code,” he said.</p>
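<p>Conceptually, a bridge of this kind publishes a tool descriptor (name plus input schema) derived from the legacy API and translates each tool call into an HTTP request. A minimal sketch of the idea, not MuleSoft’s actual implementation; names are hypothetical and the HTTP transport is stubbed out so the example stays self-contained:</p>
<pre><code class="language-python">import json
from typing import Callable

def bridge_rest_endpoint(name: str, url: str, params: dict[str, str],
                         http_get: Callable[[str, dict], str]):
    """Expose a legacy REST GET endpoint as an MCP-style tool: descriptor + handler."""
    descriptor = {
        "name": name,
        "inputSchema": {
            "type": "object",
            "properties": {p: {"type": t} for p, t in params.items()},
            "required": list(params),
        },
    }
    def handler(arguments: dict) -> dict:
        # Translate a tool call into the underlying HTTP request.
        body = http_get(url, arguments)
        return {"content": [{"type": "text", "text": body}]}
    return descriptor, handler

# Stubbed transport standing in for the real legacy API.
fake_http = lambda url, args: json.dumps({"order": args["order_id"], "status": "shipped"})
desc, call = bridge_rest_endpoint("get_order", "https://legacy.example/orders",
                                  {"order_id": "string"}, fake_http)
result = call({"order_id": "42"})
</code></pre>
<p>Doing this once per endpoint, automatically, is the “massive engineering lift” the bridge is meant to absorb.</p>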
<p>And Wettemann said Informatica-hosted MCPs will further reduce development overhead by bringing built-in data quality and governance capabilities into agent workflows, particularly critical for enterprises in regulated industries and those with heightened risk concerns.</p>
<p>But Bickley added a note of caution. “APIs can behave oddly and have their own nuanced behavior,” he said. “Enterprises should test how MCP Bridge handles edge cases.”</p>
<p>Informatica-hosted MCPs will not be a miracle solution either, he warned: “Even if the Informatica data quality and governance capabilities are cleanly integrated in the Agent Fabric registry, these are not instantaneous operations. Checking data fields for accuracy, deduplication, and cross-system matching take time and carry latency measured in milliseconds or even multiple seconds, and that is pre-integration.”</p>
<h2 class="wp-block-heading" id="a-pivot-for-mulesoft">A pivot for MuleSoft?</h2>
<p>Bickley sees the updates as part of a broader strategy by Salesforce to reposition MuleSoft, which it acquired in 2018 for $5.7 billion, from a traditional API integration platform into an infrastructure layer for enterprise AI agents.</p>
<p>By layering orchestration, governance, and connectivity into Agent Fabric, Salesforce appears to be trying to position MuleSoft as the system of record for how agents are discovered, routed, and governed across the enterprise, deepening its role beyond API management into core AI infrastructure, he said.</p>
<p>Not all CIOs will welcome that move.</p>
<p>“If your agent control plane runs on Agent Fabric, switching costs rise materially, and the more agents you register, the more orchestration rules and governance policies defined, the more difficult it becomes to move to an alternative solution,” the analyst said.</p>
<p>As with any critical infrastructure dependency, “CIOs need to ask: What is the exit path? What components of Agent Fabric are portable and what is locked in? What’s the pricing model? What is the integration depth with non-Salesforce agents and data sources?” he said.</p>
<p>For now, though, enterprises have <a href="https://www.cio.com/article/4138739/21-agent-orchestration-tools-for-managing-your-ai-fleet.html">plenty of AI agent orchestration options</a> to choose from.</p>
</div>
</div>
</div>
</div>
</div><p>The post <a href="https://www.azalio.io/mulesoft-agent-fabric-adds-new-ways-to-keep-ai-agents-in-line/">MuleSoft Agent Fabric adds new ways to keep AI agents in line</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Retirement: End of life reminder of HBv2/HC-Series/NP-Series Azure Virtual Machine in Azure Batch Pool</title>
		<link>https://www.azalio.io/retirement-end-of-lift-reminder-of-hbv2-hc-series-np-series-azure-virtual-machine-in-azure-batch-pool/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Wed, 15 Apr 2026 18:00:29 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<guid isPermaLink="false">https://www.13.127.245.118.nip.io/retirement-end-of-lift-reminder-of-hbv2-hc-series-np-series-azure-virtual-machine-in-azure-batch-pool/</guid>

					<description><![CDATA[<p>Microsoft Azure will retire support for HBv2-series, HC-series, and NP-series VMs in Azure Batch pools on May 31, 2027, including: HBv2-series: 120 AMD EPYC 7V12 vCPUs, 480 GB RAM, 200 Gb/s HDR InfiniBand HC-series: 44 Intel Xeon Platinum 8168</p>
<p>The post <a href="https://www.azalio.io/retirement-end-of-lift-reminder-of-hbv2-hc-series-np-series-azure-virtual-machine-in-azure-batch-pool/">Retirement: End of life reminder of HBv2/HC-Series/NP-Series Azure Virtual Machine in Azure Batch Pool</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>Microsoft Azure will retire support for HBv2-series, HC-series, and NP-series VMs in Azure Batch pools on May 31, 2027, including:
<ul>
<li>HBv2-series: 120 AMD EPYC 7V12 vCPUs, 480 GB RAM, 200 Gb/s HDR InfiniBand</li>
<li>HC-series: 44 Intel Xeon Platinum 8168</li>
</ul>
</div><p>The post <a href="https://www.azalio.io/retirement-end-of-lift-reminder-of-hbv2-hc-series-np-series-azure-virtual-machine-in-azure-batch-pool/">Retirement: End of life reminder of HBv2/HC-Series/NP-Series Azure Virtual Machine in Azure Batch Pool</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>[Launched] Generally Available: Encrypt Premium SSD v2 and Ultra Disks with Cross Tenant Customer Managed Keys</title>
		<link>https://www.azalio.io/launched-generally-available-encrypt-premium-ssd-v2-and-ultra-disks-with-cross-tenant-customer-managed-keys/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Wed, 15 Apr 2026 18:00:11 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<guid isPermaLink="false">https://www.13.127.245.118.nip.io/launched-generally-available-encrypt-premium-ssd-v2-and-ultra-disks-with-cross-tenant-customer-managed-keys/</guid>

					<description><![CDATA[<p>Cross-tenant customer-managed keys (CMK) for Premium SSD v2 and Ultra Disks are now generally available. This capability allows managed disks to be encrypted using a customer-managed key stored in an Azure Key Vault located in a different Microsoft Entra</p>
<p>The post <a href="https://www.azalio.io/launched-generally-available-encrypt-premium-ssd-v2-and-ultra-disks-with-cross-tenant-customer-managed-keys/">[Launched] Generally Available: Encrypt Premium SSD v2 and Ultra Disks with Cross Tenant Customer Managed Keys</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>Cross-tenant customer-managed keys (CMK) for Premium SSD v2 and Ultra Disks are now generally available. This capability allows managed disks to be encrypted using a customer-managed key stored in an Azure Key Vault located in a different Microsoft Entra</div><p>The post <a href="https://www.azalio.io/launched-generally-available-encrypt-premium-ssd-v2-and-ultra-disks-with-cross-tenant-customer-managed-keys/">[Launched] Generally Available: Encrypt Premium SSD v2 and Ultra Disks with Cross Tenant Customer Managed Keys</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Salesforce launches Headless 360 to support agent‑first enterprise workflows</title>
		<link>https://www.azalio.io/salesforce-launches-headless-360-to-support-agent%e2%80%91first-enterprise-workflows/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Wed, 15 Apr 2026 12:59:49 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<guid isPermaLink="false">https://www.azalio.io/salesforce-launches-headless-360-to-support-agent%e2%80%91first-enterprise-workflows/</guid>

					<description><![CDATA[<p>Salesforce is packaging its developer and AI tooling, including its vibe coding environment Agentforce Vibes, into a new platform named Headless 360, designed to help enterprise teams build agent-first workflows. The CRM software provider defines agent-first workflows as enterprise processes in which software agents, rather than human users, carry out tasks by directly invoking APIs, [&#8230;]</p>
<p>The post <a href="https://www.azalio.io/salesforce-launches-headless-360-to-support-agent%e2%80%91first-enterprise-workflows/">Salesforce launches Headless 360 to support agent‑first enterprise workflows</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>
<div id="remove_no_follow">
<div class="grid grid--cols-10@md grid--cols-8@lg article-column">
<div class="col-12 col-10@md col-6@lg col-start-3@lg">
<div class="article-column__content">
<section class="wp-block-bigbite-multi-title">
<div class="container"></div>
</section>
<p>Salesforce is packaging its developer and AI tooling, including its vibe coding environment Agentforce Vibes, into a new platform named Headless 360, designed to help enterprise teams build agent-first workflows.</p>
<p>The CRM software provider defines agent-first workflows as enterprise processes in which software agents, rather than human users, carry out tasks by directly invoking APIs, tools, and predefined business logic.</p>
<p>To support this approach, Headless 360 exposes Salesforce’s underlying data, workflows, and governance controls as APIs, MCP tools, and CLI commands, via its existing offerings, such as Data 360, Customer 360, and Agentforce, <a href="https://www.linkedin.com/in/joseph-inzerillo-b917791/" target="_blank" rel="noreferrer noopener">Joe Inzerillo</a>, president of AI technology at Salesforce, said during a press briefing.</p>
<p>This allows agents to operate directly on the platform’s existing business logic and datasets, rather than relying on separate integrations or user interfaces, Inzerillo added.</p>
<h2 class="wp-block-heading" id="push-to-become-a-control-layer-for-enterprise-ai-agents">Push to become a control layer for enterprise AI agents</h2>
<p>Analysts, however, see Headless 360 as an effort by Salesforce to position itself as a central layer for managing agent-driven operations across different business functions in enterprises, moving from a system of record to being the system of execution.</p>
<p>“Salesforce knows the center of gravity is moving toward coding agents, conversational interfaces, agent harnesses, and external runtimes, so it is trying to keep Salesforce relevant as the system underneath,” said <a href="https://futurumgroup.com/dion-hinchcliffe/" target="_blank" rel="noreferrer noopener">Dion Hinchcliffe</a>, VP of the CIO practice at The Futurum Group.</p>
<p>With Headless 360, Hinchcliffe added, Salesforce is trying to move its positioning beyond “AI agents inside Salesforce” to framing “Salesforce as a programmable platform for agents operating across external tools, interfaces, and environments.”</p>
<h2 class="wp-block-heading" id="risks-around-lock-in-operational-gaps">Risks around lock-in, operational gaps</h2>
<p>Analysts warn that CIOs should exercise caution before adopting Headless 360.</p>
<p>“Salesforce’s ‘System of’ framework pitch with Headless 360 is the ultimate vendor lock-in architecture,” said <a href="https://www.infotech.com/profiles/scott-bickley" target="_blank" rel="noreferrer noopener">Scott Bickley</a>, advisory fellow at Info-Tech Research Group. </p>
<p>“Context (Data 360), Work (Customer 360), Agency (Agentforce), Engagement (Slack) are all required, according to Salesforce, and only they can provide them in an integrated manner. This is the strategy pitch awaiting CIOs, and frankly, it’s not true,” Bickley noted. </p>
<p>Bickley further pointed out that modern data stacks can replicate much of Headless 360’s functionality with more flexibility and less vendor concentration.</p>
<p>There are other issues that Bickley thinks should worry CIOs: “There is no mention of cost or the underlying licensing model for this ‘headless’ experience.  Are all tools included at no cost?” </p>
<p>“Salesforce’s MO seems to be to announce new capabilities that require SKUs. CIOs should be asking about pricing now, before building in architectural dependencies on features that might land in a premium cost tier,” Bickley cautioned. </p>
<p>Also, the analyst pointed out that Salesforce’s announcement is silent on SLAs for operations such as <a href="https://www.infoworld.com/article/4029634/what-is-model-context-protocol-how-mcp-bridges-ai-and-external-services.html" target="_blank" rel="noopener">MCP</a> tool calls, which matter materially for real-time agent workflows.</p>
<h2 class="wp-block-heading" id="incremental-gains-for-developers-despite-broader-concerns">Incremental gains for developers despite broader concerns</h2>
<p>Despite these concerns, Bickley sees some of the new Headless 360 features, although undifferentiated from the competition, as offering practical benefits for developers in their daily tasks.</p>
<p>The analyst was referring to newer updates, such as new MCP tools that give external coding agents full access to Salesforce’s platform, the DevOps Center MCP, the Agentforce Experience Center, and newer governance features.</p>
<p>Enabling full access for external coding agents, such as <a href="https://www.infoworld.com/article/4154973/enterprise-developers-question-claude-codes-reliability-for-complex-engineering.html">Claude Code</a> and <a href="https://www.infoworld.com/article/3989248/openai-launches-codex-ai-agent-to-tackle-multi-step-coding-tasks.html">Codex</a>, in particular, Bickley said, helps Salesforce meet developers where they are and lets them continue using the tools of their choice.</p>
<p>“Historically, developers were forced into Salesforce’s proprietary toolchain that included clunky VS Code extensions, painful metadata APIs, and quirky development pipelines that required Salesforce-specific expertise. Expanding the dev environment helps alleviate this pain,” Bickley pointed out.</p>
<p>The other updates, according to Hinchcliffe, should help curtail developer friction by helping avoid frequent switching between development tools, expanding real-time awareness of organization data, reducing the need for custom plumbing to expose business logic, and decreasing the effort needed to move from prototype to deployment.</p>
<p>Focusing specifically on the new DevOps Center MCP, which is a set of AI-powered tools that enable the use of natural language across the entire DevOps lifecycle, Bickley said that it will help developers alleviate pains around <a href="https://www.infoworld.com/article/2269266/what-is-cicd-continuous-integration-and-continuous-delivery-explained.html">CI/CD</a> processes.</p>
<p>“Salesforce development pipelines are notoriously fragile with metadata dependencies, org-specific configurations, artificial limits on work items, and UI response issues, among others,” Bickley added.</p>
<p>The governance tools, specifically the updates to the Testing Center, Custom Scoring Evals, Session Tracing, and the A/B Testing API, also address real gaps that enterprise development teams face, according to Hinchcliffe, especially when moving agentic workflows or applications into production.</p>
<p>“Salesforce is correctly identifying that enterprise agent adoption will stall unless buyers can properly measure, govern, debug, and tune agent behavior over time,” the analyst said.</p>
<h2 class="wp-block-heading" id="concerns-around-the-maturity-of-governance-capabilities">Concerns around the maturity of governance capabilities</h2>
<p>However, Bickley cautioned about the efficacy of these tools, as most are in the very early stages of their release. In fact, the analyst suggested that enterprises should expect to supplement them with their own evaluation frameworks for the next 12-18 months.</p>
<p>The analyst also flagged additional concerns around newer components such as the Agentforce Experience Layer, which is a new UI service that allows developers to decouple what an agent does from how it surfaces across various services and applications.</p>
<p>“Ironically, this adds yet another layer to contend with in the development process for what is already considered a painful development experience. Salesforce has a pattern of shipping v1 tools that work great in demos but fall short in real-world scenarios,” Bickley said.</p>
<p>“Development teams intending to avail themselves of these new feature sets should insist that Salesforce provide them an extended pilot and sandbox free of charge to validate the maturity level and ease of use of these new features,” Bickley added.</p>
<p>All the updates to Headless 360, Salesforce said, are expected to be released in phases. Generally available features include Agentforce Vibes 2.0, the DevOps Center MCP, Session Tracing, and the Agentforce Experience Layer. Features that are in early access include Custom Scoring Evals. Other features, such as the Testing Center and the Salesforce Catalog, are scheduled for rollout in May and June, respectively.</p>
</div>
</div>
</div>
</div>
</div><p>The post <a href="https://www.azalio.io/salesforce-launches-headless-360-to-support-agent%e2%80%91first-enterprise-workflows/">Salesforce launches Headless 360 to support agent‑first enterprise workflows</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Tap into the AI APIs of Google Chrome and Microsoft Edge</title>
		<link>https://www.azalio.io/tap-into-the-ai-apis-of-google-chrome-and-microsoft-edge/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Wed, 15 Apr 2026 09:59:39 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<guid isPermaLink="false">https://www.azalio.io/tap-into-the-ai-apis-of-google-chrome-and-microsoft-edge/</guid>

					<description><![CDATA[<p>With every passing year, local AI models get smaller, more efficient, and more comparable in power with their higher-end, cloud-hosted counterparts. You can run many of the same inference jobs on your own hardware, without needing an internet connection or even a particularly powerful GPU. The hard part has been standing up the infrastructure to [&#8230;]</p>
<p>The post <a href="https://www.azalio.io/tap-into-the-ai-apis-of-google-chrome-and-microsoft-edge/">Tap into the AI APIs of Google Chrome and Microsoft Edge</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>
<div id="remove_no_follow">
<div class="grid grid--cols-10@md grid--cols-8@lg article-column">
<div class="col-12 col-10@md col-6@lg col-start-3@lg">
<div class="article-column__content">
<section class="wp-block-bigbite-multi-title">
<div class="container"></div>
</section>
<p>With every passing year, local AI models get smaller, more efficient, and more comparable in power with their higher-end, cloud-hosted counterparts. You can run many of the same inference jobs on your own hardware, without needing an internet connection or even a particularly powerful GPU.</p>
<p>The hard part has been standing up the infrastructure to do it. Applications like <a href="https://www.comfy.org/download">ComfyUI</a> and <a href="https://lmstudio.ai/">LM Studio</a> offer ways to run models locally, but they’re big third-party apps that still require their own setup and maintenance. Wouldn’t it be great to run local AI models right in the browser?</p>
<p><a href="https://developer.chrome.com/docs/ai/built-in">Google Chrome</a> and <a href="https://www.microsoft.com/en-us/edge/features/ai?form=MT0160">Microsoft Edge</a> now offer that as a feature, by way of <a href="https://www.infoworld.com/article/4009190/taking-advantage-of-microsoft-edges-built-in-ai.html">an experimental API set</a>. With Chrome and Edge, you can perform a slew of AI-powered tasks, like summarizing a document, translating text between languages, or generating text from a prompt. All of these are accomplished with models downloaded and run locally on demand.</p>
<p>In this article I’ll show a simple example of Chrome and Edge’s experimental local AI APIs in action. While both browsers are in theory based on the same set of experimental APIs, each supports a different subset of that functionality and uses a different model family: Gemini Nano in Chrome, and the Phi-4-mini models in Edge.</p>
<p>The following demo of the Summarizer API works on both browsers, although the performance may differ between them. In my experience, Summarizer ran significantly slower on Edge.</p>
<h2 class="wp-block-heading" id="the-available-ai-apis-in-chrome-and-edge">The available AI APIs in Chrome and Edge</h2>
<p>Chrome and Edge share a common codebase — the Chromium project — and the AI APIs available to both stem from what that project supports. As of April 2026, the available AI APIs in Chrome are:</p>
<ul class="wp-block-list">
<li><a href="https://developer.chrome.com/docs/ai/built-in-apis#translator_api"><strong>Translator API</strong></a>: Translate text from one language to another, assuming a model is available for that language pair.</li>
<li><a href="https://developer.chrome.com/docs/ai/built-in-apis#language_detector_api"><strong>Language Detector API</strong></a>: Determine the language for a given input text.</li>
<li><a href="https://developer.chrome.com/docs/ai/built-in-apis#summarizer_api"><strong>Summarizer API</strong></a>: Condense text into headlines, summaries, and bullet-point rundowns.</li>
</ul>
<p>All three of these APIs are available immediately to Chrome users. All except the Language Detector API are also available to Edge users; support for language detection in Edge is planned.</p>
<p>Several other APIs, which are in a more experimental state, are available in both browsers on an opt-in basis:</p>
<ul class="wp-block-list">
<li><a href="https://developer.chrome.com/docs/ai/built-in-apis#writer_and_rewriter_apis"><strong>Writer API</strong></a>: Generate text from a given prompt.</li>
<li><a href="https://developer.chrome.com/docs/ai/built-in-apis#writer_and_rewriter_apis"><strong>Rewriter API</strong></a>: Rewrite an existing text based on instructions from a prompt.</li>
<li><a href="https://developer.chrome.com/docs/ai/built-in-apis#prompt_api"><strong>Prompt API</strong></a>: Make natural language requests directly to the model (e.g., “Search the web for up-to-date information about visiting Italy”).</li>
<li><a href="https://developer.chrome.com/docs/ai/built-in-apis#proofreader_api"><strong>Proofreader API</strong></a>: Examine a text for spelling and grammatical errors and suggest corrections.</li>
</ul>
<p>The long-term ambition is to have these APIs accepted as <a href="https://github.com/webmachinelearning/translation-api">general web standards</a>, but for now they’re specific to Chrome and Edge.</p>
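<p>Because support differs between browsers and release channels, it’s worth feature-detecting which of these APIs exist before calling them. The sketch below is a minimal example, assuming the global names used in Chrome’s built-in AI documentation (<code>LanguageModel</code> for the Prompt API is such an assumption); it simply returns an empty list in environments that expose none of them:</p>
<pre class="wp-block-code"><code>// Global names of the built-in AI APIs discussed above (assumed from
// Chrome's built-in AI docs; support varies by browser and channel).
const BUILT_IN_AI_APIS = [
    "Translator", "LanguageDetector", "Summarizer",
    "Writer", "Rewriter", "LanguageModel", "Proofreader"
];

// Return only the API names actually exposed by the given scope.
function detectBuiltInAI(scope = globalThis) {
    return BUILT_IN_AI_APIS.filter((name) => name in scope);
}

console.log(detectBuiltInAI()); // [] outside a supporting browser
</code></pre>
<p>In a page, you would branch on the result and fall back to a cloud service (or disable the feature) when the list comes back empty.</p>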
<h2 class="wp-block-heading" id="using-the-summarizer-api">Using the Summarizer API</h2>
<p>We’ll use the Summarizer API as an example for how to use these APIs generally. The Summarizer API is available on both Chrome and Edge, and the way it’s used serves as a good model for how the other APIs also work.</p>
<p>First, create a web page that you’ll serve through some kind of local web server. If you have Python installed, you can create an <code>index.html</code> file in a directory, open that directory in a terminal, and run <code>python -m http.server</code> to serve its contents on port 8000 (the default). Don’t try to open the page directly as a local file; browsers apply stricter content restrictions to <code>file://</code> URLs, which can break these APIs.</p>
<p>Here’s the source code of the page to create:</p>
<pre class="wp-block-code"><code>&lt;div style="display: flex;"&gt;
    &lt;textarea style="width:50%; height:24em" id="input" placeholder="Type text to be summarized"&gt;&lt;/textarea&gt;&lt;br&gt;
    &lt;textarea style="width:50%; height:24em" id="output" placeholder="Summarization results"&gt;&lt;/textarea&gt;&lt;br&gt;
&lt;/div&gt;
&lt;textarea style="width:100%; height:4em" id="context" placeholder="Additional context"&gt;&lt;/textarea&gt;
&lt;label for="type"&gt;Type of summarization:&lt;/label&gt;
&lt;select id="type" name="type"&gt;
    &lt;option value="teaser"&gt;Teaser&lt;/option&gt;
    &lt;option value="tldr"&gt;tl;dr&lt;/option&gt;
    &lt;option value="headline"&gt;Headline&lt;/option&gt;
    &lt;option value="key-points"&gt;Key points&lt;/option&gt;
&lt;/select&gt;

&lt;label for="length"&gt;Length:&lt;/label&gt;
&lt;select id="length" name="length"&gt;
    &lt;option value="short"&gt;Short&lt;/option&gt;
    &lt;option value="medium"&gt;Medium&lt;/option&gt;
    &lt;option value="long"&gt;Long&lt;/option&gt;
&lt;/select&gt;

&lt;button type="button" onclick="go();"&gt;Start&lt;/button&gt;
&lt;div style="background-color:beige" id="log"&gt;&lt;/div&gt;
&lt;script&gt;
    const $log = document.getElementById("log");
    const $input = document.getElementById("input");
    const $output = document.getElementById("output");
    const $context = document.getElementById("context");
    const $type = document.getElementById("type");
    const $length = document.getElementById("length");

    // Append a line of status text to the on-page log.
    function log(text) {
        $log.innerHTML += text + "&lt;br&gt;";
    }

    async function summarize() {
        $log.innerHTML = "";

        // Bail out if this browser does not expose the Summarizer API.
        if (!('Summarizer' in self)) {
            log("Summarizer not available");
            return false;
        }

        const availability = await Summarizer.availability();
        log(`Summarizer status: ${availability}`);

        const summarizer = await Summarizer.create({
            sharedContext: $context.value,
            type: $type.value,
            length: $length.value,
            format: 'markdown',
            monitor(m) {
                // Report model download progress on first use.
                m.addEventListener('downloadprogress', (e) =&gt; {
                    log(`Downloaded ${e.loaded * 100}%`);
                });
            }
        });

        log("Summarizer created, starting summarization");

        $output.value = "";

        // Stream the summary into the output box a chunk at a time.
        const stream = summarizer.summarizeStreaming($input.value);
        for await (const chunk of stream) {
            $output.value += chunk;
        }

        log("Finished.");
    }

    function go() {
        summarize();
    }
&lt;/script&gt;
</code></pre>
<p>Most of what we want to pay attention to is in the <code>summarize()</code> function. Let’s walk through the steps.</p>
<h3 class="wp-block-heading" id="step-1-verify-the-api-is-available">Step 1: Verify the API is available</h3>
<p>The line <code>if (!('Summarizer' in self))</code> determines whether the Summarizer API is available in the browser at all. (The parentheses matter: without them, <code>!'Summarizer'</code> would be evaluated first.) The follow-up, <code>const availability = await Summarizer.availability();</code>, returns the status of the model required for the API:</p>
<ul class="wp-block-list">
<li><code>downloadable</code>: The model needs to be downloaded, so you’ll want to provide some kind of progress feedback for the download. (The above code has an example of how this could be implemented, via the <code>monitor()</code> function passed to the <code>Summarizer.create()</code> method.)</li>
<li><code>available</code>: The model is on the device and can be used right away.</li>
</ul>
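<p>These states map directly onto what the page should do next. Here’s a minimal sketch of that decision; the <code>downloading</code> and <code>unavailable</code> states are additional values defined by the API, included here as assumptions beyond the two described above:</p>
<pre class="wp-block-code"><code>// Map a Summarizer.availability() result to the page's next action.
// "downloading" and "unavailable" are assumed additional states.
function availabilityAction(state) {
    switch (state) {
        case "available":    return "create";               // model ready now
        case "downloadable": return "create-with-progress"; // create() triggers a download
        case "downloading":  return "wait";                 // a download is already in flight
        default:             return "fallback";             // treat as unavailable
    }
}
</code></pre>
<p>Only the <code>create-with-progress</code> case needs the <code>monitor()</code> download callback; in the other cases you can create the object immediately, wait, or fall back.</p>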
<h3 class="wp-block-heading" id="step-2-create-the-summarizer-object">Step 2: Create the Summarizer object</h3>
<p>The next step is to create the <code>Summarizer</code> object, which can take several parameters:</p>
<ul class="wp-block-list">
<li><code>sharedContext</code>: A string that gives the summarizer additional context for how to do its work (e.g. “Format the output as a bullet list of questions”).</li>
<li><code>type</code>: One of four values that describes the kind of summary to produce. <code>teaser</code> tries to create interest in the text’s contents without revealing full details; <code>tldr</code> provides a quick and concise summary, no more than a sentence or two; <code>headline</code> generates a suitable headline for the text; and <code>key-points</code> (the default) produces a bullet list of takeaways.</li>
<li><code>length</code>: One of <code>short</code>, <code>medium</code>, or <code>long</code>; this parameter controls how long the output should be.</li>
<li><code>format</code>: The format of the summary the API returns. <code>markdown</code> is the default; the other allowed value is <code>plain-text</code>. The input itself should be plain text or Markdown, so if you are using HTML as your source, you may want to use <code>.innerText</code> to derive a text-only version of the input.</li>
</ul>
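<p>Put together, the parameters above form a plain options object passed to <code>Summarizer.create()</code>. A sketch, where the <code>sharedContext</code> string is a made-up example and the other values come from the enumerations just described:</p>

```javascript
// Sketch: the option bag for Summarizer.create().
function summarizerOptions() {
  return {
    sharedContext: "An article from a technology news site", // free-form hint
    type: "key-points",   // or "tldr", "teaser", "headline"
    length: "medium",     // or "short", "long"
    format: "markdown",   // or "plain-text"
  };
}

// In a supporting browser you would then create the summarizer:
//   const summarizer = await Summarizer.create(summarizerOptions());
```
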
<h3 class="wp-block-heading" id="step-3-stream-and-iterate-over-the-output">Step 3: Stream and iterate over the output</h3>
<p>Most of the time, we want to see the output streamed a token at a time, so we have some sense that the model is working. To do this, we use <code>const stream = summarizer.summarizeStreaming($input.value)</code> to create an object we can iterate over (<code>$input.value</code> is the text to summarize). We then use <code>for await (const chunk of stream){}</code> to iterate over each chunk and add it to the <code>$output</code> field.</p>
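<p>The shape of that loop is easy to see in isolation. In the browser the stream would come from <code>summarizer.summarizeStreaming()</code>; in this sketch the consumer is written against any async iterable of string chunks, with a hypothetical mock generator standing in for the real stream:</p>

```javascript
// Sketch of the Step 3 streaming loop: consume chunks from an async
// iterable and hand each one to a callback as it arrives.
async function renderStream(stream, appendChunk) {
  for await (const chunk of stream) {
    appendChunk(chunk); // e.g. $output.value += chunk in the article's page
  }
}

// Hypothetical mock standing in for summarizer.summarizeStreaming(text):
async function* mockChunks() {
  yield "First point. ";
  yield "Second point.";
}
```

<p>Because <code>summarizeStreaming()</code> returns an async-iterable stream, the same <code>for await</code> loop works unchanged against the real API.</p>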
<p>Here’s an example of some input and output:</p>
<div class="extendedBlock-wrapper block-coreImage undefined">
<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" src="https://b2b-contenthub.com/wp-content/uploads/2026/04/image_21.png?w=1024" alt="Example output for built-in text summarizer AI model in Chrome and Edge." class="wp-image-4154523" width="1024" height="707" sizes="auto, (max-width: 1024px) 100vw, 1024px"><figcaption class="wp-element-caption">
<p>Example output for built-in text summarizer AI model in Chrome and Edge. The model runs entirely on the device hosting the browser and does not call out to an external service to deliver its results.</p>
</figcaption></figure>
<p class="imageCredit">Foundry</p>
</div>
<h2 class="wp-block-heading" id="caveats-for-using-summarizer-and-other-local-ai-apis">Caveats for using Summarizer (and other local AI APIs)</h2>
<p>The first thing to keep in mind is that the model takes some time to download on first use. Model sizes vary, but you can expect them to be in the gigabyte range. That’s why it’s a good idea to provide some kind of UI feedback for the download: ideally, you’d kick off the download in the background and then ping the user when the model is ready for use.</p>
<p>Once models are downloaded, there’s no programmatic interface to how they’re managed — at least, not yet. On Google Chrome there’s a local URL, <code>chrome://on-device-internals/</code>, that shows which models have been loaded and provides statistics about them. You can use this page to remove models manually or inspect their stats for the sake of debugging, but the JavaScript APIs don’t expose any such functionality.</p>
<p>When you start the inference process, there may be a noticeable delay between the time the summarization starts and the appearance of the first token. Right now there’s no way for the API to give us feedback about what’s happening during that time, so you’ll want to at least let the user know the process has started.</p>
<p>Finally, while Chrome and Edge support a small number of local AI APIs now, how the future of browser-based local AI will play out is still open-ended. For instance, we might see a more generic standard emerge for how local models work, rather than the task-specific versions shown here. But you can still get going right now.</p>
</div>
</div>
</div>
</div>
</div><p>The post <a href="https://www.azalio.io/tap-into-the-ai-apis-of-google-chrome-and-microsoft-edge/">Tap into the AI APIs of Google Chrome and Microsoft Edge</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
