<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[DEVELOPERS.DE]]></title><description><![CDATA[Software Development Blog with focus on .NET, Windows, Microsoft Azure powered by daenet]]></description><link>https://developers.de/</link><image><url>https://developers.de/favicon.png</url><title>DEVELOPERS.DE</title><link>https://developers.de/</link></image><generator>Ghost 1.21</generator><lastBuildDate>Fri, 03 Apr 2026 22:53:30 GMT</lastBuildDate><atom:link href="https://developers.de/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Office Crash: When Microsoft Word cannot open files on OneDrive, SharePoint & Co.]]></title><description><![CDATA[<div class="kg-card-markdown"><h1 id="whenmicrosoftwordcrasheshowathirdpartyplugincanbringitdown">When Microsoft Word Crashes: How a Third-Party Plugin Can Bring It Down</h1>
<p>Microsoft Word or other Office Application is usually a very stable application. When it starts crashing repeatedly, many users assume the problem lies with Office itself, Windows updates, or corrupted documents. In practice, a very common cause is</p></div>]]></description><link>https://developers.de/2026/02/03/my-office-applications-cannot-open-files-on-onedrive-sharepoint/</link><guid isPermaLink="false">69808948e8c0b11b9c3d615a</guid><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Tue, 03 Feb 2026 12:28:18 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><h1 id="whenmicrosoftwordcrasheshowathirdpartyplugincanbringitdown">When Microsoft Word Crashes: How a Third-Party Plugin Can Bring It Down</h1>
<p>Microsoft Word, like the other Office applications, is usually very stable. When it starts crashing repeatedly, many users assume the problem lies with Office itself, Windows updates, or corrupted documents. In practice, a very common cause is <strong>third-party Office plugins</strong> that integrate deeply into Word.<br>
It is tempting to blame Microsoft or Windows, but in most cases the fault lies elsewhere: Office gets updated, while an installed plugin is no longer compatible with the new version. So, what can you do?</p>
<p>One frequent example is a crash caused by the <em>Seclore FileSecure</em> Office plugin.</p>
<p>This article explains:</p>
<ul>
<li>How to locate this kind of issue</li>
<li>Why Word crashes even though it is not the real cause</li>
<li>How to safely disable the problematic plugin</li>
</ul>
<p>Note that the same fix applies to all other Office applications.</p>
<hr>
<h2 id="thecrashsymptoms">The Crash Symptoms</h2>
<p>Users typically report:</p>
<ul>
<li>The Office application cannot open a file stored at a remote location (OneDrive, SharePoint, etc.)</li>
<li>The application crashes while opening a document</li>
<li>The crashes repeat consistently</li>
</ul>
<p>In <strong>Windows Event Viewer</strong>, the error often looks like this:</p>
<pre><code class="language-text">Faulting application name: WINWORD.EXE
Faulting module name: Office2016x64Plugin.dll
Exception code: 0xc0000409
Faulting module path: ...\Seclore\FileSecure\Desktop Client\...
</code></pre>
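<p>If you want to pull these crash records yourself instead of browsing Event Viewer, the following PowerShell sketch queries the Application log for recent Word crash reports (assuming you run it on the affected Windows machine):</p>
<pre><code class="language-powershell"># Sketch: list recent WINWORD.EXE crash reports from the Application log.
# &quot;Application Error&quot; is the provider that writes the event ID 1000 crash entries.
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; ProviderName = 'Application Error' } -MaxEvents 50 |
    Where-Object { $_.Message -match 'WINWORD.EXE' } |
    Select-Object TimeCreated, Message |
    Format-List
</code></pre>
<p>The <em>Faulting module path</em> in the output points directly at the DLL, and therefore the vendor, responsible for the crash.</p>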
<p>At first glance, it appears that <strong>Microsoft Word</strong> (or another Office application) is at fault — but that is misleading.</p>
<hr>
<h2 id="understandingtherootcause">Understanding the Root Cause</h2>
<p>The most important line in the crash report is:</p>
<pre><code class="language-text">Faulting module name: Office2016x64Plugin.dll
</code></pre>
<p>This DLL belongs to <strong>Seclore</strong>, an enterprise Information Rights Management (IRM) solution that integrates directly into Microsoft Office.</p>
<p>What is happening internally:</p>
<ul>
<li>Word loads the Seclore plugin during startup</li>
<li>The plugin performs low-level operations (file protection, encryption, policy enforcement)</li>
<li>Due to a bug, incompatibility, or outdated version, the plugin triggers a <strong>memory violation</strong></li>
<li>Windows terminates Word immediately to protect system integrity</li>
</ul>
<p>The exception code <code>0xc0000409</code> usually indicates:</p>
<ul>
<li>Stack buffer overrun</li>
<li>Memory corruption</li>
<li>Unsafe or incompatible plugin code</li>
</ul>
<p>In short: <strong>Word crashes because the plugin crashes inside Word’s process</strong>.</p>
<hr>
<h2 id="whythisproblemoftenappearssuddenly">Why This Problem Often Appears Suddenly</h2>
<p>This issue commonly starts after:</p>
<ul>
<li>A <strong>Microsoft Office update</strong></li>
<li>A <strong>Windows update</strong></li>
<li>An update mismatch between Office and the Seclore client</li>
<li>Security software not being updated in sync with Office</li>
</ul>
<p>Reinstalling Office alone usually does <strong>not</strong> fix the issue, because the faulty plugin remains installed.</p>
<hr>
<h2 id="howtodisablethesecloreplugininmicrosoftword">How to Disable the Seclore Plugin in Microsoft Word</h2>
<p>If your organization allows it, disabling the plugin is the fastest way to confirm and resolve the problem.</p>
<h3 id="method1disableviawordoptionsrecommended">Method 1: Disable via Word Options (Recommended)</h3>
<ol>
<li>Open <strong>Microsoft Word</strong></li>
<li>Go to <strong>File → Options</strong></li>
<li>Select <strong>Add-ins</strong></li>
<li>At the bottom, next to <strong>Manage</strong>, choose <strong>COM Add-ins</strong></li>
<li>Click <strong>Go</strong></li>
<li>Locate the <strong>Seclore</strong> add-in (or similar)</li>
<li><strong>Uncheck</strong> the plugin</li>
<li>Click <strong>OK</strong></li>
<li>Restart Word</li>
</ol>
<p>If Word opens normally afterward, the plugin was the cause.</p>
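<p>On managed machines where the COM Add-ins dialog is locked down by policy, the same switch can usually be flipped in the registry. The sketch below shows the relevant value; the exact subkey name depends on how the add-in registered itself, so treat the Seclore entry as something you look up under the <code>Addins</code> key first:</p>
<pre><code class="language-text">Key:   HKEY_CURRENT_USER\Software\Microsoft\Office\Word\Addins\&lt;add-in ProgID&gt;
Value: LoadBehavior (DWORD)
       3 = load at startup (enabled)
       0 = do not load (disabled)
</code></pre>
<p>Setting <code>LoadBehavior</code> to 0 has the same effect as unchecking the add-in in Word Options. Note that some add-ins register under <code>HKEY_LOCAL_MACHINE</code> instead.</p>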
<hr>
<h3 id="method2startwordinsafemodediagnostic">Method 2: Start Word in Safe Mode (Diagnostic)</h3>
<p>This method does not disable the plugin permanently, but it helps confirm the root cause.</p>
<ol>
<li>Press <strong>Win + R</strong></li>
<li>Run:<pre><code class="language-text">winword /safe
</code></pre>
</li>
<li>If Word works correctly in Safe Mode, the issue is <strong>definitely an add-in</strong></li>
</ol>
<hr>
<h3 id="method3updateorreinstallseclorebestlongtermfix">Method 3: Update or Reinstall Seclore (Best Long-Term Fix)</h3>
<p>If the plugin is required by company policy:</p>
<ul>
<li>Update the <strong>Seclore Desktop Client</strong> to the latest version</li>
<li>If already updated, reinstall it</li>
<li>Verify compatibility with the current Office build</li>
</ul>
<p>In managed corporate environments, this step should be handled by IT support.</p>
<hr>
<h2 id="keytakeaway">Key Takeaway</h2>
<p>When an Office application crashes:</p>
<ul>
<li>The <em>faulting application</em> is not always the <em>root cause</em></li>
<li>Third-party Office plugins run inside Word’s process</li>
<li>A single buggy DLL can crash the entire application</li>
</ul>
<p>In this case, <strong>Word is the victim — not the problem</strong>.</p>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2026/2/21127_carshin%20word%20plugin.png" alt="21127_carshin%20word%20plugin"></p>
</div>]]></content:encoded></item><item><title><![CDATA[Migrating All Azure Resources Between Subscriptions with Azure CLI &amp; PowerShell]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Migrating resources between Azure subscriptions is a <strong>common but risky task</strong>. Whether you’re reorganizing tenants, separating billing, or preparing for a handover, doing this manually is slow and error-prone.<br>
The issue here is that Azure Portal does not provide an option to migrate all resources at once.</p>
<p>For this</p></div>]]></description><link>https://developers.de/2026/01/23/migrating-all-azure-resources-between-subscriptions-with-azure-cli/</link><guid isPermaLink="false">69723816c62a6e11f4d8554a</guid><category><![CDATA[Azure]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Fri, 23 Jan 2026 09:57:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Migrating resources between Azure subscriptions is a <strong>common but risky task</strong>. Whether you’re reorganizing tenants, separating billing, or preparing for a handover, doing this manually is slow and error-prone.<br>
The issue here is that Azure Portal does not provide an option to migrate all resources at once.</p>
<p>For this reason, I created a <strong>PowerShell + Azure CLI script</strong> that <strong>automatically migrates all resource groups and their resources</strong> from one subscription to another.</p>
<h2 id="whatthisscriptdoes">What This Script Does</h2>
<p>The script performs the following steps:</p>
<ol>
<li>Connects to a <strong>source Azure subscription</strong></li>
<li>Retrieves <strong>all resource groups</strong></li>
<li>Switches to a <strong>target subscription</strong></li>
<li>Creates missing resource groups</li>
<li>Moves <strong>all supported resources</strong> to the target subscription</li>
<li>Logs success and failure clearly</li>
</ol>
<p>It uses native <strong>Azure Resource Manager (ARM) move operations</strong>, meaning:</p>
<ul>
<li>Resource IDs remain intact</li>
<li>No redeployment is required</li>
<li>Downtime is minimized (but not zero)</li>
</ul>
<hr>
<h2 id="prerequisites">Prerequisites</h2>
<p>Before running the script, ensure:</p>
<ul>
<li>Azure CLI is installed (<code>az version</code>)</li>
<li>You are logged in (<code>az login</code>)</li>
<li>You have <strong>Owner</strong> or <strong>Contributor</strong> permissions on both subscriptions</li>
<li>Resources are in a <strong>movable state</strong></li>
</ul>
<blockquote>
<p>⚠️ Not all Azure resources support cross-subscription moves (e.g., classic resources, some networking dependencies).</p>
</blockquote>
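<p>Before moving anything, you can ask Azure Resource Manager to validate the move without executing it. The sketch below calls the <code>validateMoveResources</code> endpoint via <code>az rest</code>; the variable names match the script later in this post, and the <code>api-version</code> shown is an assumption you should check against the current REST reference:</p>
<pre><code class="language-powershell"># Sketch: dry-run validation of a cross-subscription move.
$body = @{
    resources           = @($ids -split &quot;`n&quot;)
    targetResourceGroup = &quot;/subscriptions/$targetSubId/resourceGroups/$rg&quot;
} | ConvertTo-Json

az rest --method post `
    --uri &quot;https://management.azure.com/subscriptions/$sourceSubId/resourceGroups/$rg/validateMoveResources?api-version=2021-04-01&quot; `
    --body $body
</code></pre>
<p>A successful validation returns no error; dependency violations are reported before any resource is touched.</p>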
<hr>
<h2 id="scriptconfiguration">⚙️ Script Configuration</h2>
<p>Update the following values before running:</p>
<pre><code class="language-powershell">$sourceSubId = &quot;***&quot;     # Source subscription ID
$targetSubId = &quot;***&quot;     # Target subscription ID
$location = &quot;westeurope&quot; # Location for new resource groups
</code></pre>
<p>The <code>$location</code> value is only used when <strong>creating missing resource groups</strong> in the target subscription.</p>
<hr>
<p><img src="https://images.unsplash.com/photo-1515879218367-8466d910aaa4?q=80&amp;w=1600&amp;auto=format&amp;fit=crop" alt="PowerShell automation terminal"></p>
<hr>
<h2 id="scriptwalkthrough">Script Walkthrough</h2>
<h3 id="1switchtosourcesubscription">1️⃣ Switch to Source Subscription</h3>
<pre><code class="language-powershell">az account set --subscription $sourceSubId
</code></pre>
<p>This ensures all resource discovery happens in the <strong>correct source context</strong>.</p>
<hr>
<h3 id="2retrieveallresourcegroups">2️⃣ Retrieve All Resource Groups</h3>
<pre><code class="language-powershell">$resourceGroups = az group list --query &quot;[].name&quot; -o tsv
</code></pre>
<p>This fetches <strong>every resource group name</strong> in the source subscription.</p>
<hr>
<h3 id="3ensureresourcegroupsexistintarget">3️⃣ Ensure Resource Groups Exist in Target</h3>
<pre><code class="language-powershell">az group exists --name $rg
</code></pre>
<p>If the resource group doesn’t exist in the target subscription, it is automatically created:</p>
<pre><code class="language-powershell">az group create --name $rg --location $location
</code></pre>
<p>✔ Prevents failures during resource moves<br>
✔ Keeps naming consistent</p>
<hr>
<h3 id="4collectresourceids">4️⃣ Collect Resource IDs</h3>
<pre><code class="language-powershell">$ids = az resource list --resource-group $rg --query &quot;[].id&quot; -o tsv
</code></pre>
<p>Azure requires <strong>resource IDs</strong> for move operations, not names.</p>
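<p>For reference, a resource ID encodes the subscription, resource group, provider namespace, type, and name, for example (placeholders, not real values):</p>
<pre><code class="language-text">/subscriptions/&lt;subscription-id&gt;/resourceGroups/&lt;rg-name&gt;/providers/Microsoft.Storage/storageAccounts/&lt;account-name&gt;
</code></pre>
<p>This is why the move operation is unambiguous even when resource names collide across resource groups.</p>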
<hr>
<h3 id="5moveresourcesacrosssubscriptions">5️⃣ Move Resources Across Subscriptions</h3>
<pre><code class="language-powershell">az resource move `
  --destination-group $rg `
  --destination-subscription-id $targetSubId `
  --ids $ids
</code></pre>
<p>This is the <strong>core operation</strong>:</p>
<ul>
<li>Moves resources</li>
<li>Keeps them in the same resource group name</li>
<li>Preserves configuration and metadata</li>
</ul>
<hr>
<h3 id="6errorhandlinglogging">6️⃣ Error Handling &amp; Logging</h3>
<pre><code class="language-powershell">if ($LASTEXITCODE -eq 0) {
    Write-Host &quot;SUCCESS&quot;
} else {
    Write-Host &quot;FAILED - see error above&quot;
}
</code></pre>
<p>Failures usually occur due to:</p>
<ul>
<li>Unsupported resource types</li>
<li>Dependency constraints</li>
<li>Resources spanning multiple resource groups</li>
</ul>
<hr>
<p><img src="https://images.unsplash.com/photo-1556155092-490a1ba16284?q=80&amp;w=1600&amp;auto=format&amp;fit=crop" alt="Azure error diagnostics dashboard"></p>
<hr>
<h2 id="importantlimitations">⚠️ Important Limitations</h2>
<p>Be aware of these Azure constraints:</p>
<p>❌ Some resources <strong>cannot be moved</strong></p>
<ul>
<li>Classic resources</li>
<li>Certain App Service plans</li>
<li>Managed identities with dependencies</li>
</ul>
<p>❌ Resources <strong>must move together</strong></p>
<ul>
<li>VNets + subnets</li>
<li>NICs + VMs</li>
<li>Disks + VMs</li>
</ul>
<p>✔ Azure will <strong>block the move</strong> if dependencies are violated</p>
<hr>
<h2 id="bestpracticesbeforerunning">✅ Best Practices Before Running</h2>
<p>✔ Test on a <strong>single resource group first</strong><br>
✔ Export ARM templates as a backup<br>
✔ Run during a <strong>maintenance window</strong><br>
✔ Validate networking dependencies<br>
✔ Monitor activity logs during execution</p>
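<p>The “Export ARM templates” step can be scripted as well. A minimal sketch using <code>az group export</code>, assuming <code>$resourceGroups</code> is populated as in the script below:</p>
<pre><code class="language-powershell"># Sketch: export each resource group's template as a pre-migration backup.
foreach ($rg in $resourceGroups) {
    az group export --name $rg &gt; &quot;backup-$rg.json&quot;
}
</code></pre>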
<hr>
<h2 id="whenshouldyouusethisscript">When Should You Use This Script?</h2>
<p>This approach is ideal for:</p>
<ul>
<li>Subscription consolidation</li>
<li>Tenant separation</li>
<li>Environment restructuring (Dev → Prod)</li>
<li>M&amp;A cloud migrations</li>
<li>Billing realignment</li>
</ul>
<hr>
<h2 id="finalthoughts">Final Thoughts</h2>
<p>This script provides a <strong>clean, repeatable, and safe</strong> way to migrate Azure resources at scale using native tooling from <strong>Microsoft Azure</strong>.</p>
<p>It’s not magic—but with proper preparation, it can save <strong>hours or days of manual work</strong>.</p>
<p>If you found this useful, feel free to:<br>
Like<br>
Repost<br>
Share your migration war stories</p>
<p>Happy migrating ☁️</p>
<h2 id="thefullscript">The Full Script</h2>
<pre><code class="language-powershell">Write-Host &quot;======================================&quot;
$sourceSubId = &quot;***&quot;
$targetSubId = &quot;***&quot;
$location = &quot;westeurope&quot;               

Write-Host $sourceSubId
Write-Host $targetSubId

# Switch to source subscription
Write-Host &quot;Switching to source subscription: $sourceSubId&quot;
az account set --subscription $sourceSubId

# Get list of all resource group names
$resourceGroups = az group list --query &quot;[].name&quot; -o tsv

foreach ($rg in $resourceGroups) {
    Write-Host &quot;======================================&quot; -ForegroundColor Cyan
    Write-Host &quot;Processing source RG: $rg&quot; -ForegroundColor Yellow

    # Check if same-name RG already exists in TARGET subscription
    az account set --subscription $targetSubId

    $exists = az group exists --name $rg --output tsv

    if ($exists -eq &quot;false&quot;) {
        Write-Host &quot;  Creating target RG '$rg' in location $location ...&quot; -ForegroundColor Green
        az group create --name $rg --location $location --subscription $targetSubId
    } else {
        Write-Host &quot;  Target RG '$rg' already exists - skipping creation&quot; -ForegroundColor Green
    }

    # Switch back to source to list resources
    az account set --subscription $sourceSubId

    # Get resource IDs from source RG
    $ids = az resource list --resource-group $rg --query &quot;[].id&quot; -o tsv

    if ($ids) {
        Write-Host &quot;  Moving resources from '$rg' ...&quot; -ForegroundColor Yellow
        az resource move `
            --destination-group $rg `
            --destination-subscription-id $targetSubId `
            --ids $ids

        if ($LASTEXITCODE -eq 0) {
            Write-Host &quot;  SUCCESS&quot; -ForegroundColor Green
        } else {
            Write-Host &quot;  FAILED - see error above. Often due to dependencies or unsupported resource types.&quot; -ForegroundColor Red
        }
    } else {
        Write-Host &quot;  No resources found in '$rg' - skipping move&quot; -ForegroundColor Gray
    }
}

Write-Host &quot;======================================&quot; -ForegroundColor Cyan
Write-Host &quot;All resource groups processed.&quot; -ForegroundColor White
</code></pre>
</div>]]></content:encoded></item><item><title><![CDATA[Building Smarter Agents with Copilot Studio + Model Context Protocol (MCP)]]></title><description><![CDATA[<div class="kg-card-markdown"><h2 id="introduction">Introduction</h2>
<p>Integrating a custom Model Context Protocol (MCP) server with Copilot Studio agents opens up new possibilities for organizations leveraging Microsoft 365 Copilot and Teams. This article guides you through the process of creating, deploying, and extending an AI agent with specialized skills, using MCP and Copilot Studio as the</p></div>]]></description><link>https://developers.de/2025/10/16/building-smarter-agents-with-copilot-studio-model-context-protocol-mcp/</link><guid isPermaLink="false">68e904fa9afb880dd816c1d2</guid><category><![CDATA[AI]]></category><category><![CDATA[Model Context Protocol]]></category><category><![CDATA[MCP]]></category><category><![CDATA[Azure]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[asp.net]]></category><dc:creator><![CDATA[Heiko Luxenhofer]]></dc:creator><pubDate>Thu, 16 Oct 2025 08:07:21 GMT</pubDate><media:content url="https://developersde.blob.core.windows.net/usercontent/2025/10/161014_post_image%201%201.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><h2 id="introduction">Introduction</h2>
<img src="https://developersde.blob.core.windows.net/usercontent/2025/10/161014_post_image%201%201.png" alt="Building Smarter Agents with Copilot Studio + Model Context Protocol (MCP)"><p>Integrating a custom Model Context Protocol (MCP) server with Copilot Studio agents opens up new possibilities for organizations leveraging Microsoft 365 Copilot and Teams. This article guides you through the process of creating, deploying, and extending an AI agent with specialized skills, using MCP and Copilot Studio as the foundation. Whether you are a low-code enthusiast or a professional developer, you’ll find practical steps and insights to help you build robust, action-oriented agents tailored to your enterprise needs.</p>
<h2 id="understandingagenttypesincopilotstudio">Understanding Agent Types in Copilot Studio</h2>
<p>The diagram below illustrates the main types of agents you can build in Copilot Studio. Each type is tailored for specific tasks, ranging from answering questions and providing recommendations to executing automated actions and guiding users through processes. This simplified visual overview helps you quickly identify which agent type best fits your business needs.<br>
<img src="https://developersde.blob.core.windows.net/usercontent/2025/10/101311_agent_types.png" alt="Building Smarter Agents with Copilot Studio + Model Context Protocol (MCP)"><br>
In this post, we focus on the action-oriented agent type, which leverages an existing Azure DevOps MCP server implementation to extend capabilities and automate workflows. You can find the source code and deployment instructions for the MCP server in the following GitHub repository:<br>
<a href="https://github.com/heluxenhofer/mcp-server-azure-devops">https://github.com/heluxenhofer/mcp-server-azure-devops</a></p>
<h2 id="hostingyourmcpserverinazure">Hosting Your MCP Server in Azure</h2>
<p>To make your MCP server accessible to Copilot Studio agents, you first need to prepare your server code for deployment in Azure. Using the Azure Developer CLI, you can deploy your MCP server to an Azure Container App Environment. The process is straightforward: clone the example repository from GitHub, follow the instructions in the README, and configure your deployment. The resulting architecture includes managed identity, a container registry, Azure Key Vault, and Application Insights for monitoring. This setup ensures your MCP server is secure, scalable, and ready for enterprise use.<br>
<img src="https://developersde.blob.core.windows.net/usercontent/2025/10/13825_infrastructure.png" alt="Building Smarter Agents with Copilot Studio + Model Context Protocol (MCP)"></p>
<h2 id="agentwithmcpincopilotstudio">Agent with MCP in Copilot Studio</h2>
<p>Once your MCP server is running in Azure, the next step is to integrate it with Copilot Studio. Begin by manually configuring your agent in Copilot Studio, enabling generative AI orchestration mode to allow dynamic tool selection. For best results, disable general knowledge and web search features so your agent remains focused on its core tasks.<br>
When defining your agent’s instructions, clarity is key. For example, you might specify that your agent is an Azure DevOps assistant, designed to help developers perform specific tasks using Azure DevOps tools. The agent should be able to retrieve available MCP tools, summarize them for users, and guide them through actions such as creating new branches. It’s important to ensure the agent politely declines unrelated requests and maintains a friendly, engaging tone throughout interactions.<br>
Example instructions can be found in GitHub Repo <a href="https://github.com/heluxenhofer/mcp-server-azure-devops/blob/ac12db18f7cf5b9af95fbc77d3c400c65e47e596/docs/copilotstudio_instructions.md">here</a>.<br>
<strong>Tip</strong>: Take care to reference the tools in your instructions using the exact names defined in the configuration.<br>
If you aren't familiar with creating agents in Copilot Studio, just follow the instructions in the Microsoft documentation: <a href="https://learn.microsoft.com/en-us/microsoft-copilot-studio/fundamentals-get-started">Quickstart: Create and deploy an agent - Microsoft Copilot Studio | Microsoft Learn</a>.</p>
<h3 id="monitoringandinsights">Monitoring and Insights</h3>
<p>Monitoring is essential for maintaining high-quality, production-ready agents. Azure Application Insights provides comprehensive visibility into both your MCP server and Copilot Studio agents. By integrating Application Insights, you can track performance metrics, user interactions, and potential issues, enabling continuous improvement and reliable operation. Application Insights can be set up under Settings → Advanced.</p>
<h3 id="integratingthemcptoolincopilotstudio">Integrating the MCP Tool in Copilot Studio</h3>
<h4 id="initiatingthemcptoolintegration">Initiating the MCP Tool Integration</h4>
<p>Begin by opening your agent configuration in Copilot Studio and selecting the option to add a new tool. Choose “Model Context Protocol (MCP)” as the integration type. The onboarding wizard will guide you through the required steps. For a comprehensive walkthrough, refer to the official documentation:<br>
<a href="https://learn.microsoft.com/en-us/microsoft-copilot-studio/mcp-add-existing-server-to-agent">https://learn.microsoft.com/en-us/microsoft-copilot-studio/mcp-add-existing-server-to-agent</a></p>
<h4 id="configuringthemcpserverendpoint">Configuring the MCP Server Endpoint</h4>
<p>During setup, you will be prompted to provide the endpoint of your MCP server. For this post, the integration leverages an existing Azure DevOps MCP server implementation, available at:<br>
<a href="https://github.com/heluxenhofer/mcp-server-azure-devops">https://github.com/heluxenhofer/mcp-server-azure-devops</a><br>
Ensure your MCP server is deployed and accessible from the Copilot Studio environment, see section <a href="#Hosting-Your-MCP-Server-in-Azure">Hosting</a>.</p>
<h4 id="settingupsecureauthentication">Setting Up Secure Authentication</h4>
<p>Security is essential when integrating enterprise tools. Select Microsoft Entra ID (OAuth 2.0) as the authentication method. You will need to enter the Client ID, Client Secret, Authorization URL, Token / Refresh URL and the required scopes from your Azure Entra ID App Registration. You can use existing App Registration which was created for MCP-Server, described in <a href="https://github.com/heluxenhofer/mcp-server-azure-devops">GitHub Repository</a>.<br>
<strong>Tip</strong>: You can find these endpoints in the Azure portal under Azure Active Directory → App registrations → [Your App] → Overview → Endpoints.<br>
<img src="https://developersde.blob.core.windows.net/usercontent/2025/10/1390_Mcp_wizard.png" alt="Building Smarter Agents with Copilot Studio + Model Context Protocol (MCP)"></p>
<ul>
<li>Set the “Authorization URL” to the value labeled OAuth 2.0 authorization endpoint (v2).</li>
<li>Set the “Token URL” and “Refresh URL” to the value labeled OAuth 2.0 token endpoint (v2).</li>
<li>Scopes are defined under Expose an API in your App Registration.</li>
</ul>
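<p>For a single-tenant app registration, those values typically look like the following (with <code>&lt;tenant-id&gt;</code> and <code>&lt;client-id&gt;</code> taken from your App Registration overview; treat the scope name as a placeholder):</p>
<pre><code class="language-text">Authorization URL:   https://login.microsoftonline.com/&lt;tenant-id&gt;/oauth2/v2.0/authorize
Token / Refresh URL: https://login.microsoftonline.com/&lt;tenant-id&gt;/oauth2/v2.0/token
Scope:               api://&lt;client-id&gt;/&lt;scope-name&gt;
</code></pre>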
<p>After completing these fields, Copilot Studio will generate a Redirect URL. Add this URL to your registered app’s Redirect URIs in Azure Entra ID to enable the authorization flow.<br>
<img src="https://developersde.blob.core.windows.net/usercontent/2025/10/1390_mcp_redirecturl.png" alt="Building Smarter Agents with Copilot Studio + Model Context Protocol (MCP)"></p>
<h4 id="finalizingandvalidatingtheintegration">Finalizing and Validating the Integration</h4>
<p>Once configuration is complete, Copilot Studio will prompt you to authenticate using your Entra ID credentials. Open the Connections Manager if necessary, verify your credentials, and reauthenticate as needed. After successful authentication, your agent will be able to invoke MCP tools for Azure DevOps automation.<br>
It is recommended to validate the integration using Copilot Studio’s Test pane. Enter prompts that trigger MCP tool actions and confirm that your agent can successfully connect and perform the intended operations.<br>
<img src="https://developersde.blob.core.windows.net/usercontent/2025/10/131059_agent_testing.png" alt="Building Smarter Agents with Copilot Studio + Model Context Protocol (MCP)"></p>
<h4 id="publishingandsharingyouragent">Publishing and Sharing Your Agent</h4>
<p>With your agent configured and authenticated, you’re ready to publish it in Copilot Studio. Add channels such as Microsoft Teams to make your agent accessible across your organization. Before rolling out the agent company-wide, test it in your chosen channels to ensure everything works as expected. Copilot Studio’s publishing workflow makes it easy to manage availability and distribution, supporting both targeted testing and broad deployment.<br>
<img src="https://developersde.blob.core.windows.net/usercontent/2025/10/13110_agent_publish.png" alt="Building Smarter Agents with Copilot Studio + Model Context Protocol (MCP)"><br>
Read more about publishing and deploying your agent on Microsoft documentation.<br>
<a href="https://learn.microsoft.com/en-us/microsoft-copilot-studio/publication-fundamentals-publish-channels?tabs=web">Key concepts - Publish and deploy your agent - Microsoft Copilot Studio | Microsoft Learn</a></p>
<p>After publishing the agent and adding channels, you can try out your agent in Microsoft Teams by clicking on “Availability options” and copy / paste the link into your browser and try your agent inside Microsoft Teams.<br>
<img src="https://developersde.blob.core.windows.net/usercontent/2025/10/13111_teams_testing.png" alt="Building Smarter Agents with Copilot Studio + Model Context Protocol (MCP)"><br>
<strong>Note</strong>: At the time of writing this post, there might be a bug when clicking on “See agent in Teams”, where the agent doesn't work in the Teams chat as expected. Click on “Availability options” and copy the link to test the agent instead.</p>
<h2 id="summary">Summary</h2>
<p>Integrating a custom MCP server with Copilot Studio enables organizations to build powerful, action-oriented AI agents that streamline workflows and enhance productivity. By leveraging Azure Container Apps for hosting and Copilot Studio’s orchestration and authentication features, teams can deploy scalable, secure, and flexible solutions tailored to their business needs.</p>
<h2 id="bestpracticesrecommendations">Best Practices &amp; Recommendations</h2>
<ul>
<li>Use separate environments for development, testing, and production in Copilot Studio to ensure stability and manage changes effectively.</li>
<li>Design agents with clear objectives and leverage built-in topics for extensibility.</li>
<li>Implement additional security measures for your MCP server in production, such as restricting access and monitoring usage.</li>
<li>Set up CI/CD pipelines for automated deployments and updates.</li>
<li>Regularly monitor agent performance and collect user feedback to continuously refine capabilities and address potential issues.</li>
</ul>
<h2 id="references">References</h2>
<p>GitHub Repository custom MCP Server: <a href="https://github.com/heluxenhofer/mcp-server-azure-devops">https://github.com/heluxenhofer/mcp-server-azure-devops</a><br>
Copilot Studio Documentation: <a href="https://learn.microsoft.com/en-us/microsoft-copilot-studio/">https://learn.microsoft.com/en-us/microsoft-copilot-studio/</a><br>
Model Context Protocol Integration: <a href="https://learn.microsoft.com/en-us/microsoft-copilot-studio/agent-extend-action-mcp">https://learn.microsoft.com/en-us/microsoft-copilot-studio/agent-extend-action-mcp</a><br>
Azure Container App Deployment: <a href="https://learn.microsoft.com/en-us/azure/developer/ai/build-mcp-server-ts">https://learn.microsoft.com/en-us/azure/developer/ai/build-mcp-server-ts</a><br>
Authentication with Entra ID: <a href="https://learn.microsoft.com/en-us/microsoft-copilot-studio/configuration-authentication-azure-ad">https://learn.microsoft.com/en-us/microsoft-copilot-studio/configuration-authentication-azure-ad</a></p>
</div>]]></content:encoded></item><item><title><![CDATA[Building AI-Ready, Discoverable Tools with Model Context Protocol (MCP) in .NET 9 & C#]]></title><description><![CDATA[Learn how to implement Model Context Protocol (MCP) in .NET 9 and C#. Build discoverable, schema-driven tools for AI assistants, automation, and DevOps workflows. This guide covers benefits, architecture, elicitation, and a quick start to future-proof your integrations.]]></description><link>https://developers.de/2025/08/26/model-context-protocol-mcp-with-net-9/</link><guid isPermaLink="false">68ad9c37ca9a670b48eb5c3a</guid><category><![CDATA[AI]]></category><category><![CDATA[.NET Core]]></category><category><![CDATA[MCP]]></category><category><![CDATA[Model Context Protocol]]></category><category><![CDATA[LLM]]></category><dc:creator><![CDATA[Heiko Luxenhofer]]></dc:creator><pubDate>Tue, 26 Aug 2025 12:13:41 GMT</pubDate><media:content url="https://developersde.blob.core.windows.net/usercontent/2025/8/26126_mcp-diagram.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://developersde.blob.core.windows.net/usercontent/2025/8/26126_mcp-diagram.png" alt="Building AI-Ready, Discoverable Tools with Model Context Protocol (MCP) in .NET 9 & C#"><p>This article presents a hands-on showcase of the Model Context Protocol (MCP) implemented with .NET and C#. See how MCP can be used to expose, discover, and orchestrate server-side tools in a standardized way—making integration smarter, more flexible, and ready for modern automation scenarios.</p>
<h1 id="introduction">Introduction</h1>
<p>Ever wondered how to make your tools and services more discoverable, interoperable, and easy to automate? MCP (Model Context Protocol) is designed for exactly that. In this showcase project, we use .NET and C# to demonstrate how MCP can turn ordinary APIs into intelligent, composable tools—ready for integration with clients, AI assistants, and automation platforms.</p>
<h2 id="whatismcp">What is MCP?</h2>
<p>MCP (Model Context Protocol) is an open protocol for exposing server-side operations as standardized, discoverable tools. According to the <a href="https://modelcontextprotocol.io/docs/getting-started/intro">official MCP specification</a>, MCP enables clients—such as developer tools, automation platforms, or AI assistants—to dynamically discover available operations, understand their input/output schemas, and invoke them in a consistent way.<br>
The diagrams below illustrate the difference between integrating APIs with and without MCP. This illustration alone demonstrates the advantages of using MCP.<br>
<img src="https://developersde.blob.core.windows.net/usercontent/2025/8/261146_rest-api-diagram.png" alt="Building AI-Ready, Discoverable Tools with Model Context Protocol (MCP) in .NET 9 & C#"><em>Integration of existing APIs without MCP</em><br>
<img src="https://developersde.blob.core.windows.net/usercontent/2025/8/261148_mcp-diagram.png" alt="Building AI-Ready, Discoverable Tools with Model Context Protocol (MCP) in .NET 9 & C#"><em>Integration of existing APIs with MCP</em></p>
<h3 id="benefitsofusingmcp">Benefits of Using MCP</h3>
<ul>
<li><strong>Discoverability</strong>: Clients can query the server to list all available tools and their capabilities, reducing the need for hardcoded API knowledge.</li>
<li><strong>Interoperability</strong>: MCP provides a common language for tools and clients, making integration across platforms and technologies straightforward.</li>
<li><strong>Extensibility</strong>: New tools and operations can be added to the server without breaking existing clients.</li>
<li><strong>Schema Transparency</strong>: Each tool exposes its input and output schema, enabling robust validation and easier client development.</li>
<li><strong>Security</strong>: MCP supports modern authentication and authorization mechanisms, such as OAuth 2.0 and Microsoft Entra ID.</li>
<li><strong>Automation &amp; AI Readiness</strong>: MCP is designed for orchestration by automation scripts and AI agents, supporting conversational and intelligent workflows.</li>
</ul>
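<p>To make these benefits concrete, the following sketch shows how a tool can be declared with the <a href="https://github.com/modelcontextprotocol/csharp-sdk">MCP C# SDK</a>. It is illustrative only: <code>EchoTools</code> is a made-up example, and attribute or hosting APIs may differ slightly between SDK versions.</p>
<pre><code class="language-csharp">using System.ComponentModel;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using ModelContextProtocol.Server;

// A tool class discovered by the MCP server at startup.
[McpServerToolType]
public static class EchoTools
{
    // The attribute plus the method signature give clients everything they need:
    // the tool name, a description, and the input/output schema.
    [McpServerTool, Description(&quot;Echoes the message back to the client.&quot;)]
    public static string Echo(string message) =&gt; $&quot;Echo: {message}&quot;;
}

public static class Program
{
    public static async Task Main(string[] args)
    {
        var builder = Host.CreateApplicationBuilder(args);

        // Register the MCP server and expose every [McpServerTool] in this assembly.
        builder.Services.AddMcpServer()
            .WithStdioServerTransport()
            .WithToolsFromAssembly();

        await builder.Build().RunAsync();
    }
}
</code></pre>
<p>Once the server is running, any MCP client can list the <code>Echo</code> tool, inspect its schema, and invoke it, without any hardcoded API knowledge.</p>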
<h3 id="elicitationwithmcpinteractiveuserworkflows">Elicitation with MCP: Interactive User Workflows</h3>
<p>A unique feature of MCP is its support for elicitation—interactive user input collection. In this project, elicitation is used to prompt users for required information (e.g., selecting a parent branch when creating a new branch). The MCP server sends a schema describing the expected input, and the client (or LLM) collects the user’s response, ensuring a guided and error-resistant workflow.<br>
Example JSON request/response using an enum schema:</p>
<pre><code class="language-json">{
    &quot;type&quot;: &quot;object&quot;,
    &quot;properties&quot;: {
        &quot;ParentBranch&quot;: {
            &quot;type&quot;: &quot;string&quot;,
            &quot;enum&quot;: [
                &quot;main&quot;
            ]
        }
    },
    &quot;required&quot;: [
        &quot;ParentBranch&quot;
    ]
}
</code></pre>
<p><em>Request schema</em></p>
<pre><code class="language-json">{
    &quot;action&quot;: &quot;accept&quot;,
    &quot;content&quot;: {
        &quot;ParentBranch&quot;: &quot;main&quot;
    },
    &quot;_meta&quot;: null
}
</code></pre>
<p><em>Response message</em></p>
<p>This approach makes user interaction smarter and more flexible, allowing for dynamic, context-aware prompts and responses.</p>
<h2 id="architectureoverview">Architecture Overview</h2>
<p>This showcase project on <a href="https://github.com/heluxenhofer/mcp-server-azure-devops">GitHub</a> is built with .NET 9 and C#, using MCP to abstract a set of Azure DevOps operations as MCP tools. The diagram below illustrates the key components and their interactions:</p>
<ul>
<li><strong>MCP Server</strong>: Hosts the MCP tools and exposes a standardized discovery endpoint. Implements authentication via Microsoft Entra ID.</li>
<li><strong>MCP Tools</strong>: Each tool represents a server-side operation (e.g., create branch, list repositories). Tools expose input/output schemas for validation and interoperability and issue requests to the Azure DevOps REST API.</li>
<li><strong>Client/LLM</strong>: Discovers available tools dynamically and invokes them using the MCP protocol. Supports elicitation for interactive workflows.</li>
<li><strong>Authentication Layer</strong>: Uses OAuth 2.0 and Entra ID for secure access control, with the on-behalf-of flow for accessing the Azure DevOps REST API.</li>
</ul>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2025/8/261154_architecture.png" alt="Building AI-Ready, Discoverable Tools with Model Context Protocol (MCP) in .NET 9 & C#"><em>Components of current architecture</em></p>
<h2 id="technicalimplementation">Technical Implementation</h2>
<p>This project is a practical demonstration of MCP’s power when combined with .NET and C#:</p>
<ul>
<li><strong>Modern .NET</strong>: Built with .NET 9 for reliability and performance.</li>
<li><strong>C# Best Practices</strong>: Clean, maintainable code that’s easy to extend.</li>
<li><strong>MCP Tooling</strong>: Azure DevOps operations are exposed as MCP tools, with clear schemas and discoverable endpoints.</li>
<li><strong>Secure by Design</strong>: Authentication via Entra ID and secure configuration management, using the on-behalf-of authentication flow to access the Azure DevOps REST API securely.</li>
</ul>
<p>More technical details, instructions, and examples can be found in the <a href="https://github.com/heluxenhofer/mcp-server-azure-devops/blob/main/README.md">readme</a> file of the <a href="https://github.com/heluxenhofer/mcp-server-azure-devops">GitHub project</a>.</p>
<h2 id="whytrymcpwithnetandc">Why Try MCP with .NET and C#?</h2>
<p>If you’re a developer, architect, or automation enthusiast, this showcase is for you. MCP makes it easy to build smarter, more flexible integrations—whether you’re connecting tools, enabling AI, or future-proofing your APIs. With .NET and C#, you get a robust, enterprise-ready foundation for your MCP server.</p>
<h2 id="references">References</h2>
<p><a href="https://github.com/heluxenhofer/mcp-server-azure-devops">Project GitHub Repository</a><br>
<a href="https://github.com/modelcontextprotocol/csharp-sdk">MCP C# SDK</a><br>
<a href="https://github.com/modelcontextprotocol/modelcontextprotocol">MCP GitHub (Official)</a><br>
<a href="https://modelcontextprotocol.io/docs/getting-started/intro">MCP Specification</a></p>
</div>]]></content:encoded></item><item><title><![CDATA[Cache Strategy Considerations and the Role of Redis]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Before deploying Redis, it's important to evaluate whether it is truly needed for the application in question.</p>
<p>Redis is typically used in scenarios where an application must handle a very high volume of concurrent users often in the range of hundreds of thousands. In our case, this level of demand</p></div>]]></description><link>https://developers.de/2025/06/29/cache-strategy-considerations-and-the-role-of-redis/</link><guid isPermaLink="false">6861332a34452217444e6ec1</guid><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Sun, 29 Jun 2025 12:43:36 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Before deploying Redis, it's important to evaluate whether it is truly needed for the application in question.</p>
<p>Redis is typically used in scenarios where an application must handle a very high volume of concurrent users, often in the range of hundreds of thousands. In our case, this level of demand does not apply.</p>
<p>If caching is required, we generally have two options:</p>
<h4 id="inmemorycaching">In-Memory Caching</h4>
<p>This is the fastest option but has limitations in clustered environments (e.g., state synchronization, failover).</p>
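<p>To illustrate the first option, an in-process cache can be as small as a dictionary with an expiry timestamp. The sketch below is illustrative only (the class and member names are made up); in real .NET applications you would typically use <code>IMemoryCache</code> from the Microsoft.Extensions.Caching.Memory package.</p>
<pre><code class="language-csharp">using System;
using System.Collections.Concurrent;

public class SimpleCache&lt;TKey, TValue&gt; where TKey : notnull
{
    private readonly ConcurrentDictionary&lt;TKey, (TValue Value, DateTime ExpiresAt)&gt; _entries = new();

    public TValue GetOrAdd(TKey key, Func&lt;TValue&gt; factory, TimeSpan timeToLive)
    {
        // Fresh hit: return the cached value without touching the data source.
        if (_entries.TryGetValue(key, out var entry) &amp;&amp; entry.ExpiresAt &gt; DateTime.UtcNow)
        {
            return entry.Value;
        }

        // Miss or expired entry: compute the value and store it with a new expiry.
        TValue value = factory();
        _entries[key] = (value, DateTime.UtcNow + timeToLive);
        return value;
    }
}
</code></pre>
<p>Because this lives inside a single process, it is very fast, but it cannot be shared or synchronized across cluster nodes, which is exactly the limitation mentioned above.</p>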
<h4 id="dedicatedcacheservices">Dedicated Cache Services</h4>
<p>These include Redis, Memcached, and others, which are external systems accessed over the network.</p>
<p>When considering a cache service, the next logical question is: Which one should we use?</p>
<p>All services, whether e-mail servers, SQL databases, MongoDB, or Redis, are accessed over specific protocols (e.g., TDS for SQL Server, TCP for MongoDB or Redis). Performance comparisons often assume Redis is the fastest option, but that assumption can be misleading.</p>
<p><strong>SQL Server</strong>: Often provides the fastest response times, especially for indexed lookups on structured data. Believe it or not, Jet DB (MS Access) is the fastest database as long as only a single user is connected. :)</p>
<p><strong>MongoDB</strong>: Offers strong performance, particularly in distributed cloud-native environments like Azure Cosmos DB.</p>
<p><strong>Redis</strong>: While not inherently the fastest, it excels at horizontal scalability thanks to its built-in partitioning and protocol-level load balancing. This makes Redis suitable for very high-scale scenarios, where clients need to be routed directly to the node holding the relevant data.</p>
<p>So why is Redis frequently used?</p>
<p>Mostly, it is a lack of architectural insight: many teams adopt Redis without properly researching whether it is the best fit.</p>
<p>Scalability Needs: Redis shines in systems that require linear scaling across many users and nodes, which is often not the case in smaller or mid-sized applications.</p>
<h3 id="conclusion">Conclusion</h3>
<p>For systems with fewer than ~1,000 (this needs to be measured for every application!!) concurrent users, it is often more efficient and maintainable to leverage SQL tables directly, avoiding the added complexity and operational overhead of Redis.<br>
I'm not saying you should use SQL or MS Access for caching in general. I'm saying it is smart to understand the problem and do the required performance measurements before making decisions. Believe me, you will be surprised.</p>
<p>Redis is often used as a synonym for caching, just as Docker and Kubernetes are commonly associated with microservices. However, none of these associations are entirely accurate in a generalized context.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Model Performance]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Evaluating large language models (LLMs) is becoming increasingly difficult. One major challenge is test set contamination, where benchmark questions unintentionally end up in a model’s training data—skewing results and making once-reliable benchmarks quickly outdated. While newer benchmarks try to avoid this by using crowdsourced questions or LLM-based evaluations,</p></div>]]></description><link>https://developers.de/2025/04/18/model-performance/</link><guid isPermaLink="false">6802768a2531c70d884c2f62</guid><category><![CDATA[LLM]]></category><category><![CDATA[AI]]></category><category><![CDATA[GPT]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Fri, 18 Apr 2025 16:18:12 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Evaluating large language models (LLMs) is becoming increasingly difficult. One major challenge is test set contamination, where benchmark questions unintentionally end up in a model’s training data—skewing results and making once-reliable benchmarks quickly outdated. While newer benchmarks try to avoid this by using crowdsourced questions or LLM-based evaluations, these methods come with their own problems, like bias and difficulty in judging complex tasks.</p>
<p>That’s where <a href="https://openreview.net/forum?id=sKYHBTAxVa">LiveBench</a> comes in.</p>
<p>LiveBench is a benchmark designed to address these issues head-on. It features regularly updated questions sourced from fresh content—like math competitions, academic papers, and news articles—and scores answers automatically using objective ground-truth values. It covers a wide range of tough tasks, including math, coding, reasoning, and instruction following, pushing LLMs to their limits.</p>
<p>With questions refreshed monthly and difficulty scaling over time, LiveBench is built not just for today’s models but for the next wave of AI breakthroughs. Top models currently score below 70%, showing just how challenging—and necessary—this benchmark is.</p>
<p>I put together a few benchmarks.</p>
<h3 id="modelperformancebyaveragescore">Model Performance by Average Score</h3>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2025/4/181558_output.png" alt="181558_output"></p>
<h3 id="modelreasoningperformance">Model Reasoning Performance</h3>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2025/4/181616_output%20(4).png" alt="181616_output%20(4)"></p>
<h3 id="modelcodingperformance">Model Coding Performance</h3>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2025/4/18166_output%20(2).png" alt="18166_output%20(2)"></p>
<h3 id="modellanguageperformance">Model Language Performance</h3>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2025/4/18169_output%20(3).png" alt="18169_output%20(3)"></p>
<h3 id="recap">Recap</h3>
<p>I have created all diagrams by using GPT-4o, based on data obtained from <a href="https://livebench.ai/#/?Coding=a">https://livebench.ai/#/?Coding=a</a>.<br>
If some models are missing from the diagrams, please forgive me (they were omitted by GPT diagram generation :)).</p>
</div>]]></content:encoded></item><item><title><![CDATA[Recommended AI Sessions]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Dear all, here is the list of recommended resources related to AI Sessions.<br>
It is a great foundation to start learning about AI.</p>
<ol>
<li>
<p>BRK440: Getting started with Generative AI in Azure<br>
<a href="https://github.com/microsoft/aitour-generative-ai-in-azure">https://github.com/microsoft/aitour-generative-ai-in-azure</a></p>
</li>
<li>
<p>BRK441: Build AI Solutions with Azure AI Foundry<br>
<a href="https://github.com/microsoft/aitour-concept-to-creation-ai-studio">https://github.com/microsoft/aitour-concept-to-creation-ai-studio</a></p>
</li>
<li>
<p>BRK443:</p></li></ol></div>]]></description><link>https://developers.de/2025/03/25/recommended-ai-sessions/</link><guid isPermaLink="false">67e01ff4e62fcc1d54ff428c</guid><category><![CDATA[LLM]]></category><category><![CDATA[AI]]></category><category><![CDATA[GPT]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Tue, 25 Mar 2025 07:07:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Dear all, here is the list of recommended resources related to AI Sessions.<br>
It is a great foundation to start learning about AI.</p>
<ol>
<li>
<p>BRK440: Getting started with Generative AI in Azure<br>
<a href="https://github.com/microsoft/aitour-generative-ai-in-azure">https://github.com/microsoft/aitour-generative-ai-in-azure</a></p>
</li>
<li>
<p>BRK441: Build AI Solutions with Azure AI Foundry<br>
<a href="https://github.com/microsoft/aitour-concept-to-creation-ai-studio">https://github.com/microsoft/aitour-concept-to-creation-ai-studio</a></p>
</li>
<li>
<p>BRK443: Build your code-first app with Azure AI Agent Service<br>
<a href="https://github.com/microsoft/aitour-azure-openai-assistants">https://github.com/microsoft/aitour-azure-openai-assistants</a></p>
</li>
<li>
<p>BRK444: Getting started with AI Agents in Azure<br>
<a href="https://github.com/microsoft/aitour-getting-started-with-ai-agents">https://github.com/microsoft/aitour-getting-started-with-ai-agents</a></p>
</li>
<li>
<p>BRK450: Prompty, AI Studio and practical E2E development<br>
<a href="https://github.com/microsoft/aitour-llmops-with-gen-ai-tools">https://github.com/microsoft/aitour-llmops-with-gen-ai-tools</a></p>
</li>
<li>
<p>BRK451: Code-first GenAIOps from prototype to production<br>
<a href="https://github.com/microsoft/aitour-llmops-with-gen-ai-tools">https://github.com/microsoft/aitour-llmops-with-gen-ai-tools</a></p>
</li>
<li>
<p>BRK452: Operationalize AI responsibly with Azure AI Studio<br>
<a href="https://github.com/microsoft/aitour-operate-ai-responsibly-with-ai-studio">https://github.com/microsoft/aitour-operate-ai-responsibly-with-ai-studio</a></p>
</li>
<li>
<p>BRK453: Explore cutting-edge models: LLMs, SLMs and more<br>
<a href="https://github.com/microsoft/aitour-exploring-cutting-edge-models">https://github.com/microsoft/aitour-exploring-cutting-edge-models</a></p>
</li>
</ol>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2025/4/2411_4c313b96-61cf-4ec7-8dfd-ed97a87c7d06.png" alt="2411_4c313b96-61cf-4ec7-8dfd-ed97a87c7d06"></p>
</div>]]></content:encoded></item><item><title><![CDATA[(Iterative) Retrieval-Augmented Generation]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Right now, it seems that most of the community is fixated on RAG (excluding Prompt Engineering). However, there is a technique called <strong>Iterative RAG</strong> (Ma et al., 2023; Li et al., 2024; Chan et al., 2024; Shi et al., 2024).</p>
<p>This is a more advanced approach in natural language</p></div>]]></description><link>https://developers.de/2025/01/14/iterative-retrieval-augmented-generation/</link><guid isPermaLink="false">67854f8104230a1e58966b2e</guid><category><![CDATA[LLM]]></category><category><![CDATA[GPT]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Tue, 14 Jan 2025 09:01:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Right now, it seems that most of the community is fixated on RAG (excluding Prompt Engineering). However, there is a technique called <strong>Iterative RAG</strong> (Ma et al., 2023; Li et al., 2024; Chan et al., 2024; Shi et al., 2024).</p>
<p>This is a more advanced approach in natural language processing and generative AI that enhances the interaction between information retrieval and generation by refining outputs through multiple iterations.</p>
<h2 id="1whatisrag">1. What is RAG?</h2>
<p>RAG integrates two main components:</p>
<ul>
<li><strong>Retriever</strong>: Finds relevant documents or data from an external knowledge base. This is typically the task of some connector.</li>
<li><strong>Generator</strong>: Generates a response or output based on the retrieved information. This is covered by the model itself.</li>
</ul>
<p>The aim is to use external knowledge for factually grounded and contextually relevant outputs. The data can be stored in a database, retrieved from an external service, read from documents, etc.</p>
<h2 id="2whatmakesiterativeragdifferent">2. What Makes Iterative RAG Different?</h2>
<p>Iterative RAG improves upon standard RAG by performing multiple cycles of refinement. It iteratively improves the quality of output by revisiting retrieval and generation steps.</p>
<h3 id="iterationprocessandfeedbackloop">Iteration Process and Feedback Loop:</h3>
<ol>
<li><strong>Initial Retrieval</strong>: Retrieve a set of documents or data points (same as RAG).</li>
<li><strong>Generation</strong>: Produce an output based on the retrieved information (same as RAG).</li>
<li><strong>Feedback Loop</strong>: Analyze the output to identify gaps or areas for improvement.</li>
<li><strong>Refinement Retrieval</strong>: Use the feedback to refine the search for better data (same as RAG).</li>
<li><strong>Regeneration</strong>: Generate a new output based on the refined retrieval (same as RAG).</li>
</ol>
<p>This loop continues until:</p>
<ul>
<li>The output meets a predefined quality threshold, or</li>
<li>A maximum number of iterations is reached.</li>
</ul>
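<p>The iteration process above can be sketched in C# as a simple loop. The code below is only an illustration of the control flow; the retriever, generator, and evaluator are injected as placeholder delegates that you would back with your own search index, LLM call, and quality check.</p>
<pre><code class="language-csharp">using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public sealed class IterativeRag
{
    private readonly Func&lt;string, string, Task&lt;IReadOnlyList&lt;string&gt;&gt;&gt; _retrieve;
    private readonly Func&lt;string, IReadOnlyList&lt;string&gt;, Task&lt;string&gt;&gt; _generate;
    private readonly Func&lt;string, string, Task&lt;(double Score, string Feedback)&gt;&gt; _evaluate;

    public IterativeRag(
        Func&lt;string, string, Task&lt;IReadOnlyList&lt;string&gt;&gt;&gt; retrieve,
        Func&lt;string, IReadOnlyList&lt;string&gt;, Task&lt;string&gt;&gt; generate,
        Func&lt;string, string, Task&lt;(double Score, string Feedback)&gt;&gt; evaluate)
    {
        _retrieve = retrieve;
        _generate = generate;
        _evaluate = evaluate;
    }

    public async Task&lt;string&gt; AnswerAsync(string query, double qualityThreshold = 0.8, int maxIterations = 3)
    {
        string answer = string.Empty;
        string feedback = string.Empty;

        for (int i = 0; i &lt; maxIterations; i++)
        {
            // Steps 1 and 4: retrieval, refined by feedback on later iterations.
            IReadOnlyList&lt;string&gt; documents = await _retrieve(query, feedback);

            // Steps 2 and 5: generate (or regenerate) an answer from the retrieved context.
            answer = await _generate(query, documents);

            // Step 3: feedback loop - score the answer and collect hints for refinement.
            (double score, feedback) = await _evaluate(query, answer);

            // Stop when the output meets the quality threshold.
            if (score &gt;= qualityThreshold)
                break;
        }

        // Either the threshold was met or the maximum number of iterations was reached.
        return answer;
    }
}
</code></pre>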
<h2 id="3advantagesofiterativerag">3. Advantages of Iterative RAG</h2>
<ul>
<li><strong>Improved Accuracy</strong>: Addresses errors or missing information through iterations.</li>
<li><strong>Contextual Relevance</strong>: Refines context to better align the final response with the query.</li>
<li><strong>Dynamic Adaptation</strong>: Adjusts retrieval and generation strategies dynamically.</li>
</ul>
<p>This process seems to have evolved over time. I suspect that reasoning might be partly inspired by the iterative feedback process introduced by <em>Iterative RAG</em>.</p>
<h2 id="4applications">4. Applications</h2>
<ul>
<li><strong>Question Answering</strong>: Produces detailed, factually accurate answers by refining retrieved knowledge.</li>
<li><strong>Document Summarization</strong>: Ensures summaries include all relevant information.</li>
<li><strong>Conversational AI</strong>: Enhances dialogue coherence by refining context and revisiting prior responses.</li>
</ul>
<h2 id="5challenges">5. Challenges</h2>
<ul>
<li><strong>Computational Cost</strong>: Iterations increase latency and resource usage.</li>
<li><strong>Optimization Complexity</strong>: Balancing retrieval and generation across iterations can be a very tricky task.</li>
<li><strong>Risk of Overfitting</strong>: Excessive iterations might lead to overly specific or biased outputs.</li>
</ul>
<h3 id="recap">Recap</h3>
<p>Iterative RAG is a significant advancement in combining retrieval and generation systems, offering a robust way to handle complex queries and generate high-quality, accurate responses. Although RAG methods achieve strong performance on multi-hop tasks like HotpotQA, there are still significant limitations.<br>
For example, RAG is chunk-based, and it struggles with knowledge-intensive tasks (Wang et al., 2024a), because chunks contain excessive text noise and do not capture the relations between pieces of information. With this limitation, LLMs cannot effectively use the augmented knowledge.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Building Intelligent Workflows with Semantic Kernel Pipelines]]></title><description><![CDATA[<div class="kg-card-markdown"><p>When it comes to automating workflows, breaking down complex tasks into smaller, modular steps can make the process more efficient and maintainable. Semantic Kernel (SK) provides a powerful way to achieve this through pipelines. In this post, we’ll explore how to create and execute a pipeline that processes a</p></div>]]></description><link>https://developers.de/2024/12/30/building-intelligent-workflows-with-semantic-kernel-pipelines/</link><guid isPermaLink="false">6772915fba29d61118ffacab</guid><category><![CDATA[LLM]]></category><category><![CDATA[.NET]]></category><category><![CDATA[GPT]]></category><category><![CDATA[C#]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Mon, 30 Dec 2024 12:33:27 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>When it comes to automating workflows, breaking down complex tasks into smaller, modular steps can make the process more efficient and maintainable. Semantic Kernel (SK) provides a powerful way to achieve this through pipelines. In this post, we’ll explore how to create and execute a pipeline that processes a number through parsing, arithmetic, truncation, and humanization. Please note that the SK pipeline described in this example is not a stateful machine designed for long-running processes. If you need such scenarios, I recommend using Azure Durable Functions.</p>
<h3 id="whyusesemantickernelpipelines">Why Use Semantic Kernel Pipelines?</h3>
<p>Semantic Kernel pipelines allow you to:</p>
<ul>
<li>Modularize functionality into reusable components.</li>
<li>Chain together functions to handle complex workflows.</li>
<li>Integrate with AI capabilities such as prompt-based generation.</li>
</ul>
<p>Let’s dive into a practical example where we build a pipeline to take a string, process it numerically, and then convert it into a spelled-out English phrase.</p>
<h3 id="theworkflow">The Workflow</h3>
<p>In this example, I use the pipeline to solve the following problem:</p>
<ol>
<li>Parse a string representation of a number (e.g., &quot;123.456&quot;) into a double.</li>
<li>Multiply the double by another double (e.g., 78.90).</li>
<li>Truncate the resulting value to an integer.</li>
<li>Convert the integer into its English word representation (e.g., &quot;nine thousand seven hundred forty&quot;).</li>
</ol>
<pre><code class="language-csharp">    public async Task DemoPipelineAsync()
    {
        IKernelBuilder builder = Kernel.CreateBuilder();
        builder.AddOpenAIChatCompletion(
            TestConfiguration.OpenAI.ChatModelId,
            TestConfiguration.OpenAI.ApiKey);
        builder.Services.AddLogging(c =&gt; c.AddConsole().SetMinimumLevel(LogLevel.Trace));
        Kernel kernel = builder.Build();

            KernelFunction parseDouble = KernelFunctionFactory.CreateFromMethod((string s) =&gt; double.Parse(s, CultureInfo.InvariantCulture), &quot;parseDouble&quot;);
            KernelFunction multiplyByN = KernelFunctionFactory.CreateFromMethod((double i, double n) =&gt; i * n, &quot;multiplyByN&quot;);
            KernelFunction truncate = KernelFunctionFactory.CreateFromMethod((double d) =&gt; (int)d, &quot;truncate&quot;);
            KernelFunction humanize = KernelFunctionFactory.CreateFromPrompt(new PromptTemplateConfig()
            {
                Template = &quot;Spell out this number in English: {{$number}}&quot;,
                InputVariables = [new() { Name = &quot;number&quot; }],
            });
            KernelFunction pipeline = KernelFunctionCombinators.Pipe([parseDouble, multiplyByN, truncate, humanize], &quot;pipeline&quot;);

            KernelArguments args = new()
            {
                [&quot;s&quot;] = &quot;123.456&quot;,
                [&quot;n&quot;] = (double)78.90,
            };

            // - The parseDouble function will be invoked, read &quot;123.456&quot; from the arguments, and parse it into (double)123.456.
            // - The multiplyByN function will be invoked, with i=123.456 and n=78.90, and return (double)9740.6784.
            // - The truncate function will be invoked, with d=9740.6784, and return (int)9740.
            // - The humanize function will be invoked with number=9740 and return its spelled-out English form, which will be the final result.
            Console.WriteLine(await pipeline.InvokeAsync(kernel, args));
}
</code></pre>
<h4 id="creatingthepipeline">Creating the Pipeline</h4>
<p>Step 1: Define the Functions<br>
We start by defining individual functions for each step in the workflow. Using Semantic Kernel’s KernelFunctionFactory, we create these modular functions:</p>
<p>Parsing a String into a Double: Converts the string &quot;123.456&quot; into the numeric value 123.456.<br>
Multiplication: Multiplies the parsed number by a given multiplier.<br>
Truncation: Truncates the result to an integer.<br>
Humanization: Converts the integer into a spelled-out English string using a prompt-based function.</p>
<p>Step 2: Combine Functions into a Pipeline<br>
With the functions ready, we use KernelFunctionCombinators.Pipe to chain them together into a pipeline. The output of one function feeds directly into the next, ensuring a seamless data flow.</p>
<p>Step 3: Provide Input Arguments<br>
The pipeline takes input in the form of KernelArguments. For our example, we provide:</p>
<p>&quot;123.456&quot; as the string to parse.<br>
78.90 as the multiplier.</p>
<p>Step 4: Execute the Pipeline<br>
Finally, the pipeline is invoked with the input arguments. Each function is executed sequentially, producing the final human-readable result.</p>
<h3 id="wrapup">Wrap-up</h3>
<p>Semantic Kernel pipelines make it easy to build intelligent workflows that combine traditional logic with AI capabilities. Whether you’re processing numbers, analyzing text, or orchestrating complex tasks, pipelines offer a structured and efficient approach to solving problems.</p>
<p>If you’re looking to build smarter applications for the new software era, try Semantic Kernel! With a little creativity, the possibilities are endless.</p>
</div>]]></content:encoded></item><item><title><![CDATA[How to calculate the Cosine Similarity in C#?]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Cosine similarity measures the cosine of the angle between two non-zero vectors in an n-dimensional space. Its value ranges from -1 to 1:</p>
<ul>
<li><strong>A cosine similarity of 1</strong> implies that the vectors are identical.</li>
<li><strong>A cosine similarity of 0</strong> implies that the vectors are orthogonal (no similarity).</li>
<li><strong>A cosine similarity</strong></li></ul></div>]]></description><link>https://developers.de/2024/12/09/how-to-calculate-the-cosine-similarity-in-c/</link><guid isPermaLink="false">6755a9f8e06b310bdc3dc9ba</guid><category><![CDATA[LLM]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Mon, 09 Dec 2024 10:32:00 GMT</pubDate><media:content url="https://developersde.blob.core.windows.net/usercontent/2024/12/81426_Designer.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://developersde.blob.core.windows.net/usercontent/2024/12/81426_Designer.png" alt="How to calculate the Cosine Similarity in C#?"><p>Cosine similarity measures the cosine of the angle between two non-zero vectors in an n-dimensional space. Its value ranges from -1 to 1:</p>
<ul>
<li><strong>A cosine similarity of 1</strong> implies that the vectors are identical.</li>
<li><strong>A cosine similarity of 0</strong> implies that the vectors are orthogonal (no similarity).</li>
<li><strong>A cosine similarity of -1</strong> implies that the vectors are diametrically opposed.</li>
</ul>
<p>In the context of this post, the calculation has the following parts:</p>
<ul>
<li><strong>Dot Product</strong>: This is calculated by multiplying corresponding components of the two vectors and summing these products.</li>
<li><strong>Magnitude</strong>: The magnitude (or length) of each vector is computed as the square root of the sum of the squares of its components.</li>
<li><strong>Dividing the Dot Product by the Product of the Magnitudes</strong>: This gives the cosine of the angle between the two vectors, which serves as the similarity measure. The more this value approaches 1, the closer the vectors are aligned.</li>
</ul>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2024/12/81423_cosine.png" alt="How to calculate the Cosine Similarity in C#?"></p>
<p>The following method compares two vectors of the same dimension and calculates the cosine similarity as used inside Large Language Models.</p>
<pre><code class="language-csharp">  /// &lt;summary&gt;
  /// Calculates the cosine similarity.
  /// &lt;/summary&gt;
  /// &lt;param name=&quot;embedding1&quot;&gt;&lt;/param&gt;
  /// &lt;param name=&quot;embedding2&quot;&gt;&lt;/param&gt;
  /// &lt;returns&gt;&lt;/returns&gt;
  /// &lt;exception cref=&quot;ArgumentException&quot;&gt;&lt;/exception&gt;
  public double CalculateSimilarity(float[] embedding1, float[] embedding2)
  {
      if (embedding1.Length != embedding2.Length)
      {
          return 0;
          //throw new ArgumentException(&quot;embedding must have the same length.&quot;);
      }

      double dotProduct = 0.0;
      double magnitude1 = 0.0;
      double magnitude2 = 0.0;

      for (int i = 0; i &lt; embedding1.Length; i++)
      {
          dotProduct += embedding1[i] * embedding2[i];
          magnitude1 += Math.Pow(embedding1[i], 2);
          magnitude2 += Math.Pow(embedding2[i], 2);
      }

      magnitude1 = Math.Sqrt(magnitude1);
      magnitude2 = Math.Sqrt(magnitude2);

      if (magnitude1 == 0.0 || magnitude2 == 0.0)
      {
          throw new ArgumentException(&quot;embedding must not have zero magnitude.&quot;);
      }

      double cosineSimilarity = dotProduct / (magnitude1 * magnitude2);

      return cosineSimilarity;
  }
</code></pre>
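<p>A quick sanity check of the method, assuming it is placed in a hypothetical class called <code>VectorMath</code>:</p>
<pre><code class="language-csharp">var math = new VectorMath();

// Parallel vectors point in the same direction: similarity is ~1.
Console.WriteLine(math.CalculateSimilarity(new float[] { 1, 2, 3 }, new float[] { 2, 4, 6 }));

// Orthogonal vectors share no direction: similarity is 0.
Console.WriteLine(math.CalculateSimilarity(new float[] { 1, 0 }, new float[] { 0, 1 }));

// Diametrically opposed vectors: similarity is ~-1.
Console.WriteLine(math.CalculateSimilarity(new float[] { 1, 1 }, new float[] { -1, -1 }));
</code></pre>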
<p><img src="https://developersde.blob.core.windows.net/usercontent/2024/12/81421_Designer.png" alt="How to calculate the Cosine Similarity in C#?"></p>
<p>Visit: <a href="https://daenet.com">https://daenet.com</a></p>
</div>]]></content:encoded></item><item><title><![CDATA[DevOps issue when building NUGET package with .NET application]]></title><description><![CDATA[<div class="kg-card-markdown"><p>When working with .NET and Azure DevOps, we encountered an interesting issue. The pipeline failed, and the log does not show any meaningful information. The only issue in the log was this one:</p>
<pre><code>&quot;D:\a\1\s\src\YOURPROJECT.Api.csproj&quot; (pack target) (1:7) -&gt;
       (GenerateNuspec</code></pre></div>]]></description><link>https://developers.de/2024/05/24/devops-issue-when-building/</link><guid isPermaLink="false">66503a4c9a1d2d16acce5376</guid><category><![CDATA[.NET]]></category><category><![CDATA[C#]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Fri, 24 May 2024 07:28:45 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>When working with .NET and Azure DevOps, we encountered an interesting issue. The pipeline failed, and the log does not show any meaningful information. The only issue in the log was this one:</p>
<pre><code>&quot;D:\a\1\s\src\YOURPROJECT.Api.csproj&quot; (pack target) (1:7) -&gt;
       (GenerateNuspec target) -&gt; 
         C:\hostedtoolcache\windows\dotnet\sdk\8.0.300\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5026: The file 'D:\a\1\s\src\YOURPROJECT\bin\release\net8.0\YOURPROJECT.dll' to be packed was not found on disk. 
</code></pre>
<p>The reason is that we had activated automatic package generation inside the .csproj file:</p>
<pre><code>&lt;GeneratePackageOnBuild&gt;True&lt;/GeneratePackageOnBuild&gt;
</code></pre>
<p>This setting is not supported within the Azure DevOps pipeline. While we are on the subject of unsupported features, be aware of one more: if your project file performs any file copy operation, such as the target below, that can also break the pipeline.</p>
<pre><code>&lt;Target Name=&quot;CopyPackage&quot; AfterTargets=&quot;Pack&quot;&gt;
	&lt;Copy SourceFiles=&quot;$(OutputPath)..\$(PackageId).$(PackageVersion).nupkg&quot; DestinationFolder=&quot;$(SolutionDir)..\nuget&quot; /&gt;
&lt;/Target&gt;
</code></pre>
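<p>One possible workaround (a sketch, relying on the standard <em>TF_BUILD</em> variable that Azure DevOps agents set) is to condition both the property and the copy target so they only apply to local builds, while the pipeline invokes <em>dotnet pack</em> itself:</p>
<pre><code class="language-xml">&lt;PropertyGroup&gt;
  &lt;!-- Only generate the package on local builds. --&gt;
  &lt;GeneratePackageOnBuild Condition=&quot;'$(TF_BUILD)' != 'True'&quot;&gt;True&lt;/GeneratePackageOnBuild&gt;
&lt;/PropertyGroup&gt;

&lt;!-- Skip the copy step on the build agent as well. --&gt;
&lt;Target Name=&quot;CopyPackage&quot; AfterTargets=&quot;Pack&quot; Condition=&quot;'$(TF_BUILD)' != 'True'&quot;&gt;
  &lt;Copy SourceFiles=&quot;$(OutputPath)..\$(PackageId).$(PackageVersion).nupkg&quot; DestinationFolder=&quot;$(SolutionDir)..\nuget&quot; /&gt;
&lt;/Target&gt;
</code></pre>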
<p>Hope this helps.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Modular Layered Architecture of Backend Applications]]></title><description><![CDATA[<div class="kg-card-markdown"><p>In the world of backend software development, the architecture you choose can greatly impact the flexibility, scalability, and usability of your applications. One such practical and efficient architecture is the modular layered architecture.</p>
<p>The modular layered architecture breaks down an application into separate modules - each with specific functions. The</p></div>]]></description><link>https://developers.de/2024/03/27/modular-layered-architecture-of-backend-applications/</link><guid isPermaLink="false">66046f6e7e14e8140cc91f0e</guid><category><![CDATA[Azure]]></category><category><![CDATA[.NET]]></category><category><![CDATA[C#]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Wed, 27 Mar 2024 19:15:37 GMT</pubDate><media:content url="https://developersde.blob.core.windows.net/usercontent/2024/3/272155_blog%20modules.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://developersde.blob.core.windows.net/usercontent/2024/3/272155_blog%20modules.png" alt="Modular Layered Architecture of Backend Applications"><p>In the world of backend software development, the architecture you choose can greatly impact the flexibility, scalability, and usability of your applications. One such practical and efficient architecture is the modular layered architecture.</p>
<p>The modular layered architecture breaks down an application into separate modules - each with specific functions. The higher level of organization facilitates a cleaner, more maintainable codebase. In this architecture, the Application Domain or API layer generally holds the reins - it knows the entirety or at least the crux of what the application is supposed to do.</p>
<p>Consider an example: a command for controlling some hardware,</p>
<p><code>api.SwitchOnLight(green);</code></p>
<p>or dealing with a vector database</p>
<p><code>api.UpsertDataSourceAsync(string context, string url);</code>.</p>
<p>Here, the API knows which functions it's supposed to perform: switch the light to green, or insert or update a vector in the vector database.</p>
<p>When the API is requested to perform these operations, it communicates with underlying modules to carry out the task. For anything related to database operations, the Data Access Layer (DAL) steps in. This concept is consistent with the Repository Pattern as many developers interpret it.</p>
<p>However, a question arises: what about when the API has to deal with external systems such as lighting or other hardware? Here, we need an architecture that seamlessly integrates all kinds of external services - a more specialized layer such as a Hardware Access Layer (HAL) for hardware interaction alongside the DAL for database interaction. So we speak of Service Layers rather than the Repository Pattern, which is a special case found in simpler applications.</p>
<p>Peek into the realms of Windows, and you may encounter HAL - an age-old concept that continues to deliver. From the perspective of the API, to switch on the light, the code might look like this:</p>
<pre><code class="language-csharp">public void SwitchOnLight(Color color)
{
   // The API only forwards the request; the HAL knows how to reach the device.
   _hal.SendMessage(new Message { clr = color, intensity = default });
}
</code></pre>
<p>The HAL implementations include HttpAccessLayer, ZigbeeAccessLayer, etc., each designed to communicate effectively with a particular set of hardware. The only thing they need to know is how to speak the hardware's language, not anything specific about the application.</p>
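<p>As a sketch (the names here are illustrative, not taken from a concrete project), the HAL can be modeled as a small interface that each transport-specific layer implements:</p>
<pre><code class="language-csharp">// Hypothetical contract for a Hardware Access Layer.
public interface IHardwareAccessLayer
{
    void SendMessage(Message message);
}

// One possible implementation that reaches the hardware over HTTP.
public class HttpAccessLayer : IHardwareAccessLayer
{
    public void SendMessage(Message message)
    {
        // Serialize the message and POST it to the device endpoint.
    }
}

// Another implementation could speak Zigbee, Bluetooth, etc.
public class ZigbeeAccessLayer : IHardwareAccessLayer
{
    public void SendMessage(Message message)
    {
        // Encode the message as a Zigbee frame and transmit it.
    }
}
</code></pre>
<p>The API then depends only on <em>IHardwareAccessLayer</em>, so the concrete transport can be swapped without touching application logic.</p>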
<p><img src="https://developersde.blob.core.windows.net/usercontent/2024/3/272156_blog%20modules.png" alt="Modular Layered Architecture of Backend Applications"></p>
<p>However, note that a design flaw often seen is letting the DAL or HAL know too much about our application. Continuing with our earlier example, the following design would not be ideal:</p>
<pre><code class="language-csharp">public Task UpsertDataSourceAsync(string context, string url)
{
    // Flawed: the DAL mirrors the application's vocabulary (context, url).
    return _dal.UpsertDataSourceAsync(context, url);
}
</code></pre>
<p>In this case, the DAL knows about urls and contexts. These are application artefacts. It's essential that layers are as ignorant of the application as possible - ideally completely ignorant. The main idea is to make these layers as reusable as possible. Independently of reusability, it is also good to follow the single responsibility principle in each component. Consider transporting this layer to another application that works with car and house data - having to handle context, url, or any other application-specific detail would not be the best approach.</p>
<p>A better approach for DAL design would look like this:</p>
<pre><code class="language-csharp">public Task UpsertDataSourceAsync(string context, string url)
{
    // The DAL only sees a collection name and a generic payload.
    return _dal.UpsertVectorAsync(dataSourceCollectionName,
        new Payload { url = url });
}
</code></pre>
<p>Here, the DAL takes the responsibility of creating the payload in the given collection of the vector database. The API that implements <em>UpsertDataSourceAsync</em> is the only player here that needs to understand the bigger picture (context and url), allowing the DAL and HAL to remain efficient, simple, and reusable.</p>
<p>To conclude, the modular layered architecture truly shines when it comes to separating the concrete from the abstract, enabling the creation of a versatile, reusable, and maintainable backend.</p>
</div>]]></content:encoded></item><item><title><![CDATA[What is the proper way to read configuration and settings?]]></title><description><![CDATA[<div class="kg-card-markdown"><p>.Net applications have a standard process for handling application configuration and settings. In my code reviews, I've observed that developers often approach configuration in a variety of unconventional or &quot;creative&quot; ways, which is largely incorrect. It's crucial to ensure your code can consistently load the configuration from specific</p></div>]]></description><link>https://developers.de/2024/01/13/what-is-the-proper-way-to-read-configuration-and-settings/</link><guid isPermaLink="false">62693ddae03af60a94dd1b55</guid><category><![CDATA[C#]]></category><category><![CDATA[.NET]]></category><category><![CDATA[Azure]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Sat, 13 Jan 2024 11:31:39 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>.Net applications have a standard process for handling application configuration and settings. In my code reviews, I've observed that developers often approach configuration in a variety of unconventional or &quot;creative&quot; ways, which is largely incorrect. It's crucial to ensure your code can consistently load the configuration from specific locations, such as Environment Variables, Command-Line Arguments, and the application settings file, known as appsettings.json.<br>
To achieve this, adhere to the following initialization process:</p>
<pre><code>private static IConfigurationRoot InitializeConfiguration(string[] args)
{
    var builder = new ConfigurationBuilder()
         .SetBasePath(Directory.GetCurrentDirectory())
         .AddJsonFile(&quot;appsettings.json&quot;, optional: false, 
          reloadOnChange: true)
         .AddCommandLine(args)
         .AddEnvironmentVariables();

    return builder.Build();
}
</code></pre>
<p>This code creates an instance of a builder that allows you to manage configuration values independently of their origins. Why is this significant? Generally, we aren't certain how your library (code) will be packaged or executed. For instance, your code could run within ASP.NET, as a console application, in a Docker container, and so on. Each of these application types can provide the configuration in a variety of ways, each with unique advantages and disadvantages depending on where the code is executed. For example, if your code runs as a console application, it's beneficial to have the settings in the appsettings.json file. However, if the same code is deployed in a Docker container, supplying the configuration as environment variables could be more effective. Therefore, it's ideal to design your code to handle all possibilities, allowing the DevOps team to make the final decision on how to provide the configuration.</p>
<p>The following code samples demonstrate how to read simple and complex configuration values (settings).</p>
<pre><code class="language-csharp">    //
    // The following values come from the command line.
    var color = configuration[&quot;color&quot;];
    Console.WriteLine(&quot;{0}&quot;, color);

    var fontSize = configuration[&quot;fontSize&quot;];
    var state = configuration[&quot;state&quot;];

    //
    // From root of appsettings.json
    var setting1 = configuration[&quot;Setting1&quot;];
    var setting2 = configuration[&quot;Setting2&quot;];
    var setting3 = configuration[&quot;Setting3&quot;];
    var sleepyState = configuration[&quot;SleepyState&quot;];
    var aaa = configuration[&quot;AAAA&quot;];
    var speed = configuration[&quot;Speed&quot;];

    float i = float.Parse(setting3, CultureInfo.InvariantCulture);
    //
    // Demonstrates how to read settings from sub section.
    var section = configuration.GetSection(&quot;MySubSettings&quot;);
    var subSetting1 = section[&quot;Setting1&quot;];
    var subSetting2 = section[&quot;Setting2&quot;];
    var subSetting3 = section[&quot;Setting3&quot;];
</code></pre>
<p>The following code shows how to read environment variables from the configuration. Please note that the code has no direct dependency on the Environment class.</p>
<pre><code class="language-csharp">
    var machineName = configuration[&quot;COMPUTERNAME&quot;];
    var processor = configuration[&quot;PROCESSOR_IDENTIFIER&quot;];
    
</code></pre>
<p>The variable COMPUTERNAME might also be specified inside the appsettings.json file or provided as a command-line argument.</p>
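<p>Keep in mind that when the same key is supplied by several providers, the provider registered last wins. With the builder shown above, environment variables override command-line arguments, which in turn override appsettings.json:</p>
<pre><code class="language-csharp">// Assuming appsettings.json contained &quot;COMPUTERNAME&quot;: &quot;from-json&quot; and the
// COMPUTERNAME environment variable were also set, the environment value
// would be returned here, because AddEnvironmentVariables() was added last.
var machineName = configuration[&quot;COMPUTERNAME&quot;];
</code></pre>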
<p>More complex configuration is read as shown in the following example:</p>
<pre><code class="language-csharp">
   MySettings mySettings = new MySettings();
   configuration.GetSection(&quot;MySetting&quot;).Bind(mySettings);

</code></pre>
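<p>For <em>Bind</em> to work, the settings class must mirror the layout of the section. A hypothetical <em>MySettings</em> (the property names below are illustrative, not from the original post) could look like this:</p>
<pre><code class="language-csharp">// Matches a section such as:
// &quot;MySetting&quot;: {
//   &quot;Name&quot;: &quot;demo&quot;,
//   &quot;RetryCount&quot;: 3
// }
public class MySettings
{
    public string Name { get; set; }
    public int RetryCount { get; set; }
}
</code></pre>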
</div>]]></content:encoded></item><item><title><![CDATA[What if "Semantic search is not enabled for this service."?]]></title><description><![CDATA[<div class="kg-card-markdown"><p>When consuming the Azure OpenAI service, the following error might occur:</p>
<blockquote>
<p>{&quot;error&quot;: {&quot;requestid&quot;: &quot;194182cc-cdc0-400a-8914-87c3e6fd7fe2&quot;, &quot;code&quot;: 400, &quot;message&quot;: &quot;An error occurred when calling Azure Cognitive Search: Azure Search Error: 400, message='Server responded with status 400. Error message: {&quot;error&quot;</p></blockquote></div>]]></description><link>https://developers.de/2023/12/11/what-if-semantic-search-is-not-enabled-for-this-service/</link><guid isPermaLink="false">6576f2e11dee311b782de7e4</guid><category><![CDATA[GPT]]></category><category><![CDATA[LLM]]></category><category><![CDATA[AI]]></category><category><![CDATA[Azure]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Mon, 11 Dec 2023 11:36:29 GMT</pubDate><media:content url="https://developersde.blob.core.windows.net/usercontent/2023/12/111136_SemanticRanger.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://developersde.blob.core.windows.net/usercontent/2023/12/111136_SemanticRanger.png" alt="What if "Semantic search is not enabled for this service."?"><p>When consuming the Azure OpenAI service, the following error might occur:</p>
<blockquote>
<p>{&quot;error&quot;: {&quot;requestid&quot;: &quot;194182cc-cdc0-400a-8914-87c3e6fd7fe2&quot;, &quot;code&quot;: 400, &quot;message&quot;: &quot;An error occurred when calling Azure Cognitive Search: Azure Search Error: 400, message='Server responded with status 400. Error message: {&quot;error&quot;:{&quot;code&quot;:&quot;FeatureNotSupportedInService&quot;,&quot;message&quot;:&quot;Semantic search is not enabled for this service.\\r\\nParameter name: queryType&quot;,&quot;details&quot;:[{&quot;code&quot;:&quot;SemanticQueriesNotAvailable&quot;,&quot;message&quot;:&quot;Semantic search is not enabled for this service.&quot;}]}}', url=URL('<a href="https://host.search.windows.net/indexes/semantic-index-with-embeddings/docs/search?api-version=2023-07-01-Preview">https://host.search.windows.net/indexes/semantic-index-with-embeddings/docs/search?api-version=2023-07-01-Preview</a>')\nCall to Azure Search instance failed.\nAPI Users: Please ensure you are using the right instance, index_name and provide admin_key as the api_key.\n&quot;}}</p>
</blockquote>
<p>The error happens if the semantic plan is NOT activated in the Cognitive Search service. To enable it, please select <em>Semantic Ranker</em> and then activate the plan.</p>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2023/12/111133_SemanticRanger.png" alt="What if "Semantic search is not enabled for this service."?"></p>
</div>]]></content:encoded></item><item><title><![CDATA[How to group files together in Visual Studio]]></title><description><![CDATA[<div class="kg-card-markdown"><p>When working on large projects, we usually design the API(s) to implement most of the requirements. Sometimes, the API might contain a lot of methods. In such cases, it is recommended to split the methods of the API into multiple classes. However, there is no rule that defines the exact</p></div>]]></description><link>https://developers.de/2023/10/25/how-to-group-files-together-in-visual-studio/</link><guid isPermaLink="false">6536083583cd651a6c1ea277</guid><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Wed, 25 Oct 2023 07:30:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>When working on large projects, we usually design the API(s) to implement most of the requirements. Sometimes, the API might contain a lot of methods. In such cases, it is recommended to split the methods of the API into multiple classes. However, there is no rule that defines the exact threshold for the number of methods inside the API to start splitting the API class into multiple classes.</p>
<p>For example, a complex <strong>MyApi</strong> might be split into classes <strong>MyApi1</strong>, <strong>MyApi2</strong>, and so on. This sounds simple, but splitting into multiple APIs also has disadvantages. Alternatively, you can keep the whole implementation in <strong>MyApi</strong>, but such a large class becomes difficult for a team to manage.</p>
<p>One interesting solution that we use in projects is to allow <strong>MyApi</strong> to grow, but to split the implementation into multiple files:</p>
<pre><code class="language-csharp">MyApi.cs
MyApiPart1.cs
MyApiPart2.cs
</code></pre>
<p>In Visual Studio, these files look like this:</p>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2023/10/23554_vsfilesgrouping1.png" alt="23554_vsfilesgrouping1"></p>
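<p>Splitting one class across several files relies on the C# <em>partial</em> modifier; a minimal sketch (the member names are illustrative):</p>
<pre><code class="language-csharp">// MyApi.cs
public partial class MyApi
{
    public void CoreMethod() { /* ... */ }
}

// MyApiPart1.cs
public partial class MyApi
{
    public void ExtendedMethod1() { /* ... */ }
}
</code></pre>
<p>The compiler merges all partial declarations into a single <em>MyApi</em> class.</p>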
<p>To give the files a better structure in Solution Explorer, we group them together.</p>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2023/10/23618_vsfilesgrouping2.png" alt="23618_vsfilesgrouping2"></p>
<p>To achieve this, add the following to the <em>.csproj</em> file:</p>
<pre><code class="language-xml">  &lt;ItemGroup&gt;

    &lt;Content Include=&quot;MyApi.cs&quot; /&gt;
    &lt;Content Include=&quot;MyApiPart1.cs&quot;&gt;
      &lt;DependentUpon&gt;MyApi.cs&lt;/DependentUpon&gt;
    &lt;/Content&gt;
    &lt;Content Include=&quot;MyApiPart2.cs&quot;&gt;
      &lt;DependentUpon&gt;MyApi.cs&lt;/DependentUpon&gt;
    &lt;/Content&gt;
  &lt;/ItemGroup&gt;
</code></pre>
</div>]]></content:encoded></item></channel></rss>