From b8b423daaa10f949016019377ad0ae10d182b349 Mon Sep 17 00:00:00 2001 From: petar-i-todorov Date: Sun, 14 Sep 2025 23:17:18 +0300 Subject: [PATCH 01/13] Update interactivity/using-ai-powered-insights-rest-service.md docs: make the ai-related snippets use tabs and add introductions for each of them docs: add custom aiclient article docs: add demo links Update interactivity/configuring-ai-powered-insights.md Update interactivity/custom-iclient.md Co-Authored-By: Copilot <175728472+Copilot@users.noreply.github.com> --- interactivity/AI-powered-insights.md | 410 ---------------- interactivity/ai-powered-insights-overview.md | 46 ++ .../configuring-ai-powered-insights.md | 446 ++++++++++++++++++ interactivity/custom-iclient.md | 178 +++++++ .../using-ai-powered-insights-rest-service.md | 79 ++++ 5 files changed, 749 insertions(+), 410 deletions(-) delete mode 100644 interactivity/AI-powered-insights.md create mode 100644 interactivity/ai-powered-insights-overview.md create mode 100644 interactivity/configuring-ai-powered-insights.md create mode 100644 interactivity/custom-iclient.md create mode 100644 interactivity/using-ai-powered-insights-rest-service.md diff --git a/interactivity/AI-powered-insights.md b/interactivity/AI-powered-insights.md deleted file mode 100644 index 34c2d6bf2..000000000 --- a/interactivity/AI-powered-insights.md +++ /dev/null @@ -1,410 +0,0 @@ ---- -title: AI-Powered Insights -page_title: AI-Powered Insights in Report Preview -description: "Learn how to implement an AI-powered prompt UI as part of any web-based report viewer." -slug: telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights -tags: telerik, reporting, ai -published: True -position: 1 ---- - -# AI-Powered Insights Overview - -**AI Insights** is an AI-powered feature available during the report preview. It enables users to execute predefined or custom prompts on the core data of the previewed report, uncovering valuable insights, generating summaries, or answering specific questions. The feature also supports fine-tuning of the embedded Retrieval-Augmented Generation (RAG) algorithms, optimizing them to deliver accurate responses while minimizing token consumption. - ->tip For a working example of this functionality, check the [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights). - -![The UI of the AI system after configuration.](images/angular-report-viewer-with-ai-insights.png) - -## Feature Concept - -To bring the power of Generative AI (GenAI) into reporting workflows, we are introducing an **AI Prompt** dialog that integrates seamlessly in the report viewers. The dialog provides a convenient UI for sending predefined or custom prompts to an AI model, configured in the Reporting REST Service. The prompts and responses returned from the AI model are displayed in the Output panel of the dialog, allowing for easier tracking of the conversation. - -The AI conversation maintains context throughout user's interaction with a specific report. All previous questions and responses are preserved and sent to the AI model as context, enabling more coherent and contextually relevant conversations. However, this context is automatically cleared when report parameters are changed or when navigating to a different report, ensuring that each report session starts with a fresh conversation thread. 
- -The feature is supported by all [web report viewers]({%slug telerikreporting/using-reports-in-applications/display-reports-in-applications/web-application/html5-report-viewer/overview%}) and by the [WPF Report Viewer]({%slug telerikreporting/using-reports-in-applications/display-reports-in-applications/wpf-application/overview%}) connected to a remote Reporting REST Service. - -### Key Features: - -- **Retrieval-Augmented Generation (RAG)** - When enabled, the option activates an algorithm that filters out the irrelevant report data, producing accurate responses with reduced token usage. By default, the feature is enabled. - - When enabled, you may configure the RAG through the [AIClient ragSettings element]({%slug telerikreporting/aiclient-element%}##attributes-and-elements). - - You can disable the feature by setting the _AIClient allowRAG_ attribute to _false_. - -- **Predefined Summary Prompts** - Users can choose from a set of predefined prompts tailored for common tasks like summarization, explanation, and data insights—boosting efficiency with minimal effort. - -- **Custom AI Prompts** - Besides the predefined prompts, users can create and use custom prompts through the UI. - -- **End-User Consent for Data Sharing** - To ensure transparency and compliance, the AI Prompt requests explicit consent from users before sharing any data with GenAI services. - -Image of the Prompt UI - -## User Consent - -Before using the AI Prompt dialog, users must give consent for the AI to process their provided text. This ensures transparency and user control over their data. - -User Consent for AI Summaries - -## Configuration - -To enable the AI-powered insights functionality, you must provide a valid configuration that defines the AI client, model, and other essential details such as authentication credentials. This configuration also allows you to customize various aspects of the AI functionality, including user consent requirements, custom prompt permissions, and Retrieval-Augmented Generation (RAG) settings. The AI configuration is managed through the [report engine configuration]({%slug telerikreporting/using-reports-in-applications/export-and-configure/configure-the-report-engine/overview%}). For a complete list of available settings, check the table below. For an example configuration, check the [Example](#example) section. - -| Setting | Description | -| ------ | ------ | -|friendlyName|This setting specifies the name corresponding to the type of AI client you wish to use. For example, setting friendlyName to "MicrosoftExtensionsAzureOpenAI" indicates that the Azure OpenAI client is being utilized.| -|model|This setting specifies the AI model to be used for generating responses. For example, setting the model to "gpt-4o-mini" indicates that the GPT-4o mini model variant is being utilized.| -|endpoint|This setting specifies the URL of the AI service endpoint.| -|credential|This setting specifies the authentication credentials required to access the AI service. It ensures that the AI client can securely connect to the specified endpoint.| -|requireConsent|A boolean configuration option that determines whether users must explicitly consent to the use of AI models before the AI report insights features can be utilized within the application.| -|allowCustomPrompts|This setting is set to true by default. 
If you set it to `false`, users will only be able to use the predefined prompts and will not be allowed to ask custom prompts.| -|predefinedPrompts|This setting specifies a list of predefined prompts that the AI client can use. Each prompt is defined by a text attribute, which contains the prompt's content.| -|allowRAG|This setting specifies whether the [Retrieval-Augmented Generation (RAG)](https://en.wikipedia.org/wiki/Retrieval-augmented_generation) is allowed. The default value is _true_. Available only on projects targeting .NET8 or higher.| -|ragSettings|These settings specify the configuration of the [Retrieval-Augmented Generation (RAG)](https://en.wikipedia.org/wiki/Retrieval-augmented_generation) when allowed by the _allowRAG_ setting. Available only on projects targeting .NET8 or higher.| - -__AI clients__ - -There are four available options for the `friendlyName` setting: - -| Client Type | Friendly Name | -| ------ | ------ | -|Microsoft.Extensions.AI.AzureAIInference|"MicrosoftExtensionsAzureAIInference"| -|Microsoft.Extensions.AI.OpenAI + Azure.AI.OpenAI|"MicrosoftExtensionsAzureOpenAI"| -|Microsoft.Extensions.AI.Ollama|"MicrosoftExtensionsOllama"| -|Microsoft.Extensions.AI.OpenAI|"MicrosoftExtensionsOpenAI"| - -Depending on which option will be used, a corresponding `Telerik.Reporting.Telerik.Reporting.AI.Microsoft.Extensions.{name}` NuGet package must be installed in the project. In other words, please install one of the following packages before continuing with the configuration: - -- `Telerik.Reporting.AI.Microsoft.Extensions.AzureAIInference` -- `Telerik.Reporting.AI.Microsoft.Extensions.AzureOpenAI` -- `Telerik.Reporting.AI.Microsoft.Extensions.Ollama` -- `Telerik.Reporting.AI.Microsoft.Extensions.OpenAI` - -### Example - -Below is an example of how to configure the project for the `AzureOpenAI` option. - -````JSON -{ - "telerikReporting": { - "AIClient": { - "friendlyName": "MicrosoftExtensionsAzureOpenAI", - "model": "gpt-4o-mini", - "endpoint": "https://ai-explorations.openai.azure.com/", - "credential": "...", - "requireConsent": false, - "allowCustomPrompts": false, - "allowRAG": true, - "predefinedPrompts": [ - { "text": "Generate a summary of the report." }, - { "text": "Translate the report into German." } - ], - "ragSettings": { - "tokenizationEncoding": "Set Encoding Name Here", - "modelMaxInputTokenLimit": 15000, - "maxNumberOfEmbeddingsSent": 15, - "maxTokenSizeOfSingleEmbedding": 0, - "splitTables": true - } - } - } -} -```` -````XML - - - - - - - - -```` - -## Customization - -The workflow of instantiating the AI client and passing a request to it can be customized by overriding the following methods of the [ReportsController](/api/telerik.reporting.services.webapi.reportscontrollerbase) class: -* [CreateAIThread(string, string, ClientReportSource)](/api/telerik.reporting.services.webapi.reportscontrollerbase#Telerik_Reporting_Services_WebApi_ReportsControllerBase_CreateAIThread_System_String_System_String_Telerik_Reporting_Services_WebApi_ClientReportSource_) - called when the AI Prompt dialog is to be displayed. In this method, the AI client is instantiated either using the settings provided in the application configuration file, or by using the `AIClientFactory` instance provided with the Reporting REST Service Configuration (see [Extensibility]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights%}#extensibility) below). 
Providing custom logic in the method allows to control the UI properties of the AI Prompt dialog: changing or disabling the consent message, enabling/disabling custom prompts, etc. This logic can be based on the currently previewed report, represented by the property `ClientReportSource`. - - * .NET - - ````C# -/// - /// Overrides the default , adding verification depending on the passed parameter. - /// - /// - public override IActionResult CreateAIThread(string clientID, string instanceID, ClientReportSource reportSource) - { - if (reportSource.Report == "report-with-disabled-ai-insights.trdp") - { - return StatusCode( - StatusCodes.Status403Forbidden, - new - { - message = "An error has occurred.", - exceptionMessage = "AI Insights functionality is not allowed for this report.", - exceptionType = "Exception", - stackTrace = (string?)null - } - ); - } - - return base.CreateAIThread(clientID, instanceID, reportSource); - } -```` - - - * .NET Framework - - ````C# -/// - /// Overrides the default , adding verification depending on the passed parameter. - /// - /// - public override HttpResponseMessage CreateAIThread(string clientID, string instanceID, ClientReportSource reportSource) - { - if (reportSource.Report == "SampleReport.trdp") - { - var errorResponse = new - { - message = "An error has occurred.", - exceptionMessage = "AI Insights functionality is not allowed for this report.", - exceptionType = "Exception", - stackTrace = (string)null - }; - - return this.Request.CreateResponse(HttpStatusCode.Forbidden, errorResponse); - } - - return base.CreateAIThread(clientID, instanceID, reportSource); -} -```` - - -* [UpdateAIPrompts(ClientReportSource, AIThreadInfo)](/api/telerik.reporting.services.webapi.reportscontrollerbase#collapsible-Telerik_Reporting_Services_WebApi_ReportsControllerBase_UpdateAIPrompts_Telerik_Reporting_Services_WebApi_ClientReportSource_Telerik_Reporting_Services_Engine_AIThreadInfo_) - called internally during the execution of the `CreateAIThread()` method. Provides easier access to the predefined prompts, allowing to alter or disable them based on custom logic like the role of the currently logged user, or on the currently previewed report, represented by the property `ClientReportSource`. - - * .NET - - ````C# -/// - /// Modifies the collection of predefined prompts before displaying it in the AI Insights dialog. - /// - /// - /// - protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThreadInfo aiThreadInfo) - { - if (reportSource.Report == "report-suitable-for-markdown-output.trdp") - { - aiThreadInfo.PredefinedPrompts.Add("Create a summary of the report in Markdown (.md) format."); - } - - base.UpdateAIPrompts(reportSource, aiThreadInfo); - } -```` - - - * .NET Framework - - ````C# -/// - /// Modifies the collection of predefined prompts before displaying it in the AI Insights dialog. 
- /// - /// - /// - protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThreadInfo aiThreadInfo) - { - if (reportSource.Report == "report-suitable-for-markdown-output.trdp") - { - aiThreadInfo.PredefinedPrompts.Add("Create a summary of the report in Markdown (.md) format."); - } - - base.UpdateAIPrompts(reportSource, aiThreadInfo); -} -```` - - -* [GetAIResponse(string, string, string, string, AIQueryArgs)](/api/telerik.reporting.services.webapi.reportscontrollerbase#Telerik_Reporting_Services_WebApi_ReportsControllerBase_GetAIResponse_System_String_System_String_System_String_System_String_Telerik_Reporting_Services_Engine_AIQueryArgs_) - called every time when a prompt is sent to the AI model. Allows for examining or altering the prompt sent from the client, inspecting the state of the RAG optimization, or checking the estimated amount of tokens that the prompt will consume, by implementing a callback function assigned to the [ConfirmationCallback](/api/telerik.reporting.services.engine.aiqueryargs#collapsible-Telerik_Reporting_Services_Engine_AIQueryArgs_ConfirmationCallBack) property. Below, you will find several examples of how to override the `GetAIResponse` method to handle different scenarios. - - * .NET - - ````C# -/// - /// Modifies the prompt sent from the client before passing it to the LLM. - /// - /// - public override async Task GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args) - { - args.Query += $"{Environment.NewLine}Keep your response concise."; - - return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args); - } -```` - - - ````C# -/// - /// Examines the approximate tokens count and determines whether the prompt should be sent to the LLM. - /// - /// - public override async Task GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args) - { - const int MAX_TOKEN_COUNT = 500; - args.ConfirmationCallBack = (AIRequestInfo info) => - { - if (info.EstimatedTokensCount > MAX_TOKEN_COUNT) - { - return ConfirmationResult.CancelResult($"The estimated token count exceeds the allowed limit of {MAX_TOKEN_COUNT} tokens."); - } - - return ConfirmationResult.ContinueResult(); - }; - - return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args); - } -```` - - - ````C# -/// - /// Examines whether the RAG optimization is applied for the current prompt. - /// - /// - public override async Task GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args) - { - args.ConfirmationCallBack = (AIRequestInfo info) => - { - if (info.Origin == AIRequestInfo.AIRequestOrigin.Client) - { - System.Diagnostics.Trace.TraceInformation($"RAG optimization is {info.RAGOptimization} for this prompt."); - } - - return ConfirmationResult.ContinueResult(); - }; - - return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args); - } -```` - - - * .NET Framework - - ````C# -/// - /// Modifies the prompt sent from the client before passing it to the LLM. - /// - /// - public override async Task GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args) - { - args.Query += $"{Environment.NewLine}Keep your response concise."; - - return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args); - } -```` - - - ````C# -/// - /// Examines the approximate tokens count and determines whether the prompt should be sent to the LLM. 
- /// - /// - public override async Task GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args) - { - const int MAX_TOKEN_COUNT = 500; - args.ConfirmationCallBack = (AIRequestInfo info) => - { - if (info.EstimatedTokensCount > MAX_TOKEN_COUNT) - { - return ConfirmationResult.CancelResult($"The estimated token count exceeds the allowed limit of {MAX_TOKEN_COUNT} tokens."); - } - - return ConfirmationResult.ContinueResult(); - }; - - return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args); - } -```` - - - ````C# -/// - /// Examines whether the RAG optimization is applied for the current prompt. - /// - /// - public override async Task GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args) - { - args.ConfirmationCallBack = (AIRequestInfo info) => - { - if (info.Origin == AIRequestInfo.AIRequestOrigin.Client) - { - System.Diagnostics.Trace.TraceInformation($"RAG optimization is {info.RAGOptimization} for this prompt."); - } - - return ConfirmationResult.ContinueResult(); - }; - - return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args); - } -```` - - -## Extensibility - -If necessary, the Reporting engine can use a custom `Telerik.Reporting.AI.IClient` implementation, which can be registered in the Reporting REST Service configuration: - -* .NET - - ````C# -builder.Services.TryAddSingleton(sp => new ReportServiceConfiguration - { - HostAppId = "MyApp", - AIClientFactory = GetCustomAIClient, - // ... - }); - - static Telerik.Reporting.AI.IClient GetCustomAIClient() - { - return new MyCustomAIClient(...); - } -```` - - -* .NET Framework - - ````C# -public class CustomResolverReportsController : ReportsControllerBase - { - static ReportServiceConfiguration configurationInstance; - - static CustomResolverReportsController() - { - configurationInstance = new ReportServiceConfiguration - { - HostAppId = "MyApp", - AIClientFactory = GetCustomAIClient, - // ... - }; - } - } - - static Telerik.Reporting.AI.IClient GetCustomAIClient() - { - return new MyCustomAIClient(...); - } -```` - - -## See Also - -* [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights) -* [AIClient Element Overview]({%slug telerikreporting/aiclient-element%}) -* [Interface IClient](https://docs.telerik.com/reporting/api/telerik.reporting.ai.iclient) diff --git a/interactivity/ai-powered-insights-overview.md b/interactivity/ai-powered-insights-overview.md new file mode 100644 index 000000000..98fff1de5 --- /dev/null +++ b/interactivity/ai-powered-insights-overview.md @@ -0,0 +1,46 @@ +--- +title: AI-Powered Insights +page_title: AI-Powered Insights in Report Preview +description: "Learn about the AI insights feature of Reporting, which allow users to execute predefined or custom prompts on the core data of the previewed report, receiving responses from an AI model." +slug: telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights +tags: telerik, reporting, ai +published: True +position: 1 +--- + +# AI-Powered Insights Overview + +**AI Insights** is an AI-powered feature available during the report preview. It enables users to execute predefined or custom prompts on the core data of the previewed report, uncovering valuable insights, generating summaries, or answering specific questions through an AI model. 
The feature also supports fine-tuning of the embedded Retrieval-Augmented Generation (RAG) algorithms, optimizing them to deliver accurate responses while minimizing token consumption.

>tip For a working example of this functionality, check the [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights).

![The UI of the AI system after configuration.](images/angular-report-viewer-with-ai-insights.png)

## How Does It Work?

To bring the power of Generative AI (GenAI) into reporting workflows, we are introducing an **AI Prompt** dialog that integrates seamlessly into the report viewers. The dialog provides a convenient UI for sending predefined or custom prompts to an external AI model (for example, GPT-5), configured in the Reporting REST Service. The prompts and the responses returned from the AI model are displayed in the **Output** panel of the dialog, making the conversation easy to follow.

The AI conversation maintains context throughout the user's interaction with a specific report. All previous questions and responses are preserved and sent to the AI model as context, enabling more coherent and contextually relevant conversations. However, this context is automatically cleared when report parameters are changed or when navigating to a different report, ensuring that each report session starts with a fresh conversation thread.

The feature is supported by all [web report viewers]({%slug telerikreporting/using-reports-in-applications/display-reports-in-applications/web-application/html5-report-viewer/overview%}) and by the [WPF Report Viewer]({%slug telerikreporting/using-reports-in-applications/display-reports-in-applications/wpf-application/overview%}) connected to a remote Reporting REST Service.

### Key Features

- Retrieval-Augmented Generation (RAG)—When enabled, the AI insights feature uses an algorithm that filters out the irrelevant report data, producing more accurate responses with reduced token usage.

- Predefined Summary Prompts—Users can choose from a set of predefined prompts tailored for common tasks like summarization, explanation, and data insights—boosting efficiency with minimal effort.

- Custom AI Prompts—Besides the predefined prompts, users can create custom prompts to ask more specific queries.

  Image of the Prompt UI

- End-User Consent for Data Sharing—To ensure transparency and compliance, the AI Prompt requests explicit consent from users before sending their prompts to the AI models.

  User Consent for AI Summaries

## See Also

* [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights)
* [Using AI-Powered Insights with a REST service]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-rest-service%})
* [Configuring the AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%})
* [Creating Custom AI Client Implementation]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/custom-iclient-implementation%})
diff --git a/interactivity/configuring-ai-powered-insights.md b/interactivity/configuring-ai-powered-insights.md
new file mode 100644
index 000000000..e8ede2354
--- /dev/null
+++ b/interactivity/configuring-ai-powered-insights.md
@@ -0,0 +1,446 @@
---
title: Customizing the AI-Powered Insights
page_title: How to Customize the AI-Powered Insights
description: "Learn how to configure the AI-powered insights functionality to handle both common and advanced use cases."
slug: telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights
tags: telerik, reporting, ai, configuration
published: True
position: 3
---

This article outlines the different ways to customize the AI-powered insights functionality to handle different use cases. They are listed as follows:
- [Configuring the Report Engine](#configuring-the-report-engine) - Declarative configuration through application settings.
- [Overriding ReportsControllerBase Methods](#overriding-reportscontrollerbase-methods) - Programmatic customization with custom logic.

## Configuring the Report Engine

As the [Using AI-Powered Insights with a REST service]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-rest-service%}) article explains, to enable the AI-powered insights functionality, you need to configure the [AIClient element]({%slug telerikreporting/aiclient-element%}) within the report engine configuration in your application's config file. This step is essential for the report engine to connect to the LLM provider. For instance, here is a sample configuration for Azure OpenAI:

````JSON
{
  "telerikReporting": {
    "AIClient": {
      "friendlyName": "MicrosoftExtensionsAzureOpenAI",
      "model": "gpt-4o-mini",
      "endpoint": "https://ai-explorations.openai.azure.com/",
      "credential": "YOUR_API_KEY"
    }
  }
}
````
````XML
<telerikReporting>
  <AIClient friendlyName="MicrosoftExtensionsAzureOpenAI" model="gpt-4o-mini"
    endpoint="https://ai-explorations.openai.azure.com/" credential="YOUR_API_KEY">
  </AIClient>
</telerikReporting>
````

This is a base configuration, but it can be further extended to handle specific scenarios, as explained in the upcoming sections.

### User Consent Configuration

By default, the **AI Prompt** dialog requests explicit consent from users before sending prompts to the AI model. This ensures transparency about data being sent to external AI services and gives users control over their data privacy.

 User Consent for AI Summaries

In enterprise environments where AI usage policies are already established or when working with trusted internal models, you may want to streamline the user experience by disabling this consent requirement. In these cases, you can set the `requireConsent` option to `false`:

````JSON
{
  "telerikReporting": {
    "AIClient": {
      // ...base configuration...
      "requireConsent": false
    }
  }
}
````
````XML
<telerikReporting>
  <AIClient friendlyName="MicrosoftExtensionsAzureOpenAI" model="gpt-4o-mini"
    endpoint="https://ai-explorations.openai.azure.com/" credential="YOUR_API_KEY"
    requireConsent="false">
  </AIClient>
</telerikReporting>
````

### Prompts Configuration

By default, users can create their own custom prompts to ask any questions about their reports.
While this provides maximum flexibility, it can lead to unpredictable token usage costs and potentially inconsistent results. In these cases, you might want to provide the users with predefined prompts that are designed to handle specific tasks.

To restrict users to predefined prompts only, you can set `allowCustomPrompts` to `false` and add the predefined prompts through the `predefinedPrompts` option:

````JSON
{
  "telerikReporting": {
    "AIClient": {
      // ...base configuration...
      "requireConsent": false,
      "allowCustomPrompts": false,
      "predefinedPrompts": [
        { "text": "Generate a summary of the report." },
        { "text": "Translate the report into German." }
      ]
    }
  }
}
````
````XML
<telerikReporting>
  <AIClient friendlyName="MicrosoftExtensionsAzureOpenAI" model="gpt-4o-mini"
    endpoint="https://ai-explorations.openai.azure.com/" credential="YOUR_API_KEY"
    requireConsent="false"
    allowCustomPrompts="false">
    <predefinedPrompts>
      <prompt text="Generate a summary of the report." />
      <prompt text="Translate the report into German." />
    </predefinedPrompts>
  </AIClient>
</telerikReporting>
````

You can also add predefined prompts without disabling custom ones, giving users both curated options and the flexibility to create their own queries.

### Retrieval-Augmented Generation (RAG) Configuration

By default, the AI-powered insights functionality uses a [Retrieval-Augmented Generation (RAG)](https://aws.amazon.com/what-is/retrieval-augmented-generation/) algorithm to filter out the irrelevant report data before sending it to the AI model. This approach significantly improves the accuracy and relevance of the AI-generated response while optimizing token usage.

> RAG is available only in .NET and .NET Standard. Therefore, the options that are listed below are not supported in .NET Framework configurations.

If needed, you can disable this algorithm by setting `allowRAG` to `false`.

You can also configure the RAG behavior through the `ragSettings` option:
- `modelMaxInputTokenLimit` - Limits the maximum number of input tokens the AI model can process in a single request. The default value is `15000`.
- `maxNumberOfEmbeddingsSent` - Limits how many embeddings (chunks of retrieved content) are sent to the model in a single request. The default value is `15`.
- `maxTokenSizeOfSingleEmbedding` - Limits the token size of each individual embedding, which prevents large chunks from dominating the prompt. The default value is `0` (no limit).
- `tokenizationEncoding` - Specifies the tokenization scheme used to estimate token usage before sending the request to the LLM. By default, the encoding is determined automatically based on the specified model, which is recommended to ensure accurate token counting. An incorrect encoding may lead to miscalculations in token limits, causing either premature truncation of context or exceeding the model's input capacity.
- `splitTables` - Indicates whether tables should be split during Retrieval-Augmented Generation (RAG) processing. When splitting is allowed, only the relevant table cells are taken into account, significantly reducing the number of tokens. The default value is `true`.

Below is an example that takes advantage of the table splitting and the automatic encoding inference, but reduces the token limits:

````JSON
"telerikReporting": {
  "AIClient": {
    // ...base configuration...
    "requireConsent": false,
    "allowCustomPrompts": false,
    "predefinedPrompts": [
      { "text": "Generate an executive summary of this report." },
      { "text": "Translate the document into German."
} + ], + "ragSettings": { + "modelMaxInputTokenLimit": 12000, + "maxNumberOfEmbeddingsSent": 10, + "maxTokenSizeOfSingleEmbedding": 2000 + } + } +} +```` + +For a complete reference of all available `AIClient` options, check the article [AIClient Element Overview]({%slug telerikreporting/aiclient-element%}). + +## Overriding ReportsControllerBase Methods + +While declarative configuration handles most common scenarios, some advanced use cases require programmatic customization. You can achieve this by overriding specific methods of the [ReportsControllerBase](/api/telerik.reporting.services.webapi.reportscontrollerbase) class in your `ReportsController`. This approach allows you to implement dynamic logic based on user context, report properties, or business rules. + +The following methods can be overridden to customize different aspects of the AI-powered insights workflow: + +### CreateAIThread(string, string, ClientReportSource) + +The [CreateAIThread(string, string, ClientReportSource)](/api/telerik.reporting.services.webapi.reportscontrollerbase#Telerik_Reporting_Services_WebApi_ReportsControllerBase_CreateAIThread_System_String_System_String_Telerik_Reporting_Services_WebApi_ClientReportSource_) method is called when the AI Prompt dialog is about to be displayed. You can override this method to control the UI properties of the dialog, such as configuring the user consent message, as well as setting up custom and predefined prompts. You can also override this method to disable the AI-powered insights functionality entirely. The logic can be tailored based on the currently previewed report, which is represented by the `ClientReportSource` property, which allows for dynamic adjustments outside of the basic configuration. Below, you can find some examples based on common use cases. + +#### .NET + +````Disabling·AI·Insights·Dynamically +/// +/// Disables the AI-powered insights functionality dynamically depending on the passed parameter. +/// +/// +public override IActionResult CreateAIThread(string clientID, string instanceID, ClientReportSource reportSource) +{ + if (reportSource.Report == "report-with-disabled-ai-insights.trdp") + { + return StatusCode( + StatusCodes.Status403Forbidden, + new + { + message = "An error has occurred.", + exceptionMessage = "AI Insights functionality is not allowed for this report.", + exceptionType = "Exception", + stackTrace = (string?)null + } + ); + } + + return base.CreateAIThread(clientID, instanceID, reportSource); +} +```` +````Changing·Consent·Message +/// +/// Overrides the default user consent message. +/// +/// +public override IActionResult CreateAIThread(string clientID, string instanceID, ClientReportSource reportSource) +{ + var result = base.CreateAIThread(clientID, instanceID, reportSource); + + if (result is JsonResult jsonResult && jsonResult.Value is AIThreadInfo aiThreadInfo) + { + aiThreadInfo.ConsentMessage = "By using this AI functionality, you authorize the processing of any data you provide, including your prompt, for the purposes of delivering the service to you. Your use of this functionality is governed by the Progress privacy policy, available at: Privacy Policy - Progress."; + } + + return result; +} +```` +````Setting·Predefined·Prompts·Dynamically +/// +/// Sets predefined prompts dynamically depending on the passed parameter. 
+/// +/// +public override IActionResult CreateAIThread(string clientID, string instanceID, ClientReportSource reportSource) +{ + var result = base.CreateAIThread(clientID, instanceID, reportSource); + + if (reportSource.Report == "report-suitable-for-markdown-output.trdp" && + result is JsonResult jsonResult && + jsonResult.Value is AIThreadInfo aiThreadInfo) + { + aiThreadInfo.PredefinedPrompts.Add("Create a summary of the report in Markdown (.md) format."); + } + + return result; +} +```` + + +#### .NET Framework + +````Disabling·AI·Insights·Dynamically +/// +/// Disables the AI-powered insights functionality dynamically depending on the passed parameter. +/// +/// +public override HttpResponseMessage CreateAIThread(string clientID, string instanceID, ClientReportSource reportSource) +{ + if (reportSource.Report == "SampleReport.trdp") + { + var errorResponse = new + { + message = "An error has occurred.", + exceptionMessage = "AI Insights functionality is not allowed for this report.", + exceptionType = "Exception", + stackTrace = (string)null + }; + + return this.Request.CreateResponse(HttpStatusCode.Forbidden, errorResponse); + } + + return base.CreateAIThread(clientID, instanceID, reportSource); +} +```` +````Changing·Consent·Message +/// +/// Overrides the default user consent message. +/// +/// +public override HttpResponseMessage CreateAIThread(string clientID, string instanceID, ClientReportSource reportSource) +{ + var result = base.CreateAIThread(clientID, instanceID, reportSource); + + if (result.TryGetContentValue(out AIThreadInfo aiThreadInfo)) + { + aiThreadInfo.ConsentMessage = "By using this AI functionality, you authorize the processing of any data you provide, including your prompt, for the purposes of delivering the service to you. Your use of this functionality is governed by the Progress privacy policy, available at: Privacy Policy - Progress."; + } + + return result; +} +```` +````Setting·Predefined·Prompts·Dynamically +/// +/// Modifies the collection of predefined prompts before displaying it in the AI Insights dialog. +/// +/// +public override HttpResponseMessage CreateAIThread(string clientID, string instanceID, ClientReportSource reportSource) +{ + var result = base.CreateAIThread(clientID, instanceID, reportSource); + + if (reportSource.Report == "report-suitable-for-markdown-output.trdp" && + result.TryGetContentValue(out AIThreadInfo aiThreadInfo)) + { + aiThreadInfo.PredefinedPrompts.Add("Create a summary of the report in Markdown (.md) format."); + } + + return result; +} +```` + + +### UpdateAIPrompts(ClientReportSource, AIThreadInfo) + +The [UpdateAIPrompts(ClientReportSource, AIThreadInfo)](/api/telerik.reporting.services.webapi.reportscontrollerbase#collapsible-Telerik_Reporting_Services_WebApi_ReportsControllerBase_UpdateAIPrompts_Telerik_Reporting_Services_WebApi_ClientReportSource_Telerik_Reporting_Services_Engine_AIThreadInfo_) method is called internally during the execution of the `CreateAIThread()`. It provides easier access to the `AIThreadInfo` object, which allows you to change the predefined prompts directly. The example below demonstrate how to add a Markdown-specific predefined prompt only for a particular report. + +#### .NET + +````C# +/// +/// Modifies the collection of predefined prompts before displaying it in the AI Insights dialog. 
+/// +/// +/// +protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThreadInfo aiThreadInfo) +{ + if (reportSource.Report == "report-suitable-for-markdown-output.trdp") + { + aiThreadInfo.PredefinedPrompts.Add("Create a summary of the report in Markdown (.md) format."); + } + + base.UpdateAIPrompts(reportSource, aiThreadInfo); +} +```` + +#### .NET Framework + +````C# +/// +/// Modifies the collection of predefined prompts before displaying it in the AI Insights dialog. +/// +/// +/// +protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThreadInfo aiThreadInfo) +{ + if (reportSource.Report == "report-suitable-for-markdown-output.trdp") + { + aiThreadInfo.PredefinedPrompts.Add("Create a summary of the report in Markdown (.md) format."); + } + + base.UpdateAIPrompts(reportSource, aiThreadInfo); +} +```` + + +### GetAIResponse(string, string, string, string, AIQueryArgs) + +The [GetAIResponse(string, string, string, string, AIQueryArgs)](/api/telerik.reporting.services.webapi.reportscontrollerbase#Telerik_Reporting_Services_WebApi_ReportsControllerBase_GetAIResponse_System_String_System_String_System_String_System_String_Telerik_Reporting_Services_Engine_AIQueryArgs_) method is called every time a prompt is sent to the AI model. This method provides control over the AI request workflow, allowing you to intercept, modify, and validate requests before they reach the LLM. Below are examples of common customization scenarios. + +#### .NET + +````Modifying·Outgoing·Prompts +/// +/// Modifies the prompt sent from the client before passing it to the LLM. +/// +/// +public override async Task GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args) +{ + args.Query += $"{Environment.NewLine}Keep your response concise."; + + return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args); +} +```` +````Token·Usage·Validation +/// +/// Examines the approximate tokens count and determines whether the prompt should be sent to the LLM. +/// +/// +public override async Task GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args) +{ + const int MAX_TOKEN_COUNT = 500; + args.ConfirmationCallBack = (AIRequestInfo info) => + { + if (info.EstimatedTokensCount > MAX_TOKEN_COUNT) + { + return ConfirmationResult.CancelResult($"The estimated token count exceeds the allowed limit of {MAX_TOKEN_COUNT} tokens."); + } + + return ConfirmationResult.ContinueResult(); + }; + + return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args); +} +```` +````RAG·Optimization·Monitoring +/// +/// Examines whether the RAG optimization is applied for the current prompt. +/// +/// +public override async Task GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args) +{ + args.ConfirmationCallBack = (AIRequestInfo info) => + { + if (info.Origin == AIRequestInfo.AIRequestOrigin.Client) + { + System.Diagnostics.Trace.TraceInformation($"RAG optimization is {info.RAGOptimization} for this prompt."); + } + + return ConfirmationResult.ContinueResult(); + }; + + return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args); +} +```` + +#### .NET Framework + +````Modifying·Outgoing·Prompts +/// +/// Modifies the prompt sent from the client before passing it to the LLM. 
+/// +/// +public override async Task GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args) +{ + args.Query += $"{Environment.NewLine}Keep your response concise."; + + return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args); +} +```` +````Token·Usage·Validation +/// +/// Examines the approximate tokens count and determines whether the prompt should be sent to the LLM. +/// +/// +public override async Task GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args) +{ + const int MAX_TOKEN_COUNT = 500; + args.ConfirmationCallBack = (AIRequestInfo info) => + { + if (info.EstimatedTokensCount > MAX_TOKEN_COUNT) + { + return ConfirmationResult.CancelResult($"The estimated token count exceeds the allowed limit of {MAX_TOKEN_COUNT} tokens."); + } + + return ConfirmationResult.ContinueResult(); + }; + + return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args); +} +```` +````RAG·Optimization·Monitoring +/// +/// Examines whether the RAG optimization is applied for the current prompt. +/// +/// +public override async Task GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args) +{ + args.ConfirmationCallBack = (AIRequestInfo info) => + { + if (info.Origin == AIRequestInfo.AIRequestOrigin.Client) + { + System.Diagnostics.Trace.TraceInformation($"RAG optimization is {info.RAGOptimization} for this prompt."); + } + + return ConfirmationResult.ContinueResult(); + }; + + return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args); +} +```` + +## See Also + +* [AI-Powered Insights Overview]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights%}) +* [Using AI-Powered Insights with a REST service]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-rest-service%}) +* [Creating Custom AI Client Implementation]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/custom-iclient-implementation%}) +* [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights) diff --git a/interactivity/custom-iclient.md b/interactivity/custom-iclient.md new file mode 100644 index 000000000..15a858ce0 --- /dev/null +++ b/interactivity/custom-iclient.md @@ -0,0 +1,178 @@ +--- +title: Creating Custom AI Client Implementation +page_title: How to Create a Custom AI Client Implementation +description: "Learn how to create a custom IClient implementation to integrate unsupported LLM providers with Telerik Reporting AI-powered insights." +slug: telerikreporting/designing-reports/adding-interactivity-to-reports/custom-iclient-implementation +tags: telerik, reporting, ai, custom, implementation +published: True +position: 4 +--- + +# Creating Custom AI Client Implementation + +While Telerik Reporting provides built-in support for popular LLM providers like Azure OpenAI, OpenAI, and Ollama, you may need to integrate with other AI services or implement custom logic. This article shows how to create a custom `IClient` implementation to connect any LLM provider to the AI-powered insights functionality. + +## Enabling Custom AI Client + +To enable a custom AI client implementation, follow these steps: + +1. Create a class that implements the `Telerik.Reporting.AI.IClient` interface. 
The following example demonstrates an Azure OpenAI integration for illustration purposes, though you can use any LLM provider:

````C#
using Azure.AI.OpenAI;
using Microsoft.Extensions.AI;
using System.ClientModel;
using Telerik.Reporting.AI;

namespace WebApplication1.AI;

public class CustomAIClient : IClient
{
    public string Model { get; } = "gpt-4o-mini";

    public bool SupportsSystemPrompts => false;

    private readonly IChatClient chatClient;

    public CustomAIClient()
    {
        string endpoint = "https://ai-explorations.openai.azure.com/";
        string credential = "YOUR_API_KEY";
        string model = "gpt-4o-mini";

        chatClient = new AzureOpenAIClient(new Uri(endpoint), new ApiKeyCredential(credential))
            .GetChatClient(model)
            .AsIChatClient();
    }

    public async Task<IReadOnlyCollection<IMessage>> GetResponseAsync(IReadOnlyCollection<IMessage> query, CancellationToken cancellationToken)
    {
        // Convert Telerik.Reporting.AI IMessage to Microsoft.Extensions.AI ChatMessage
        var chatMessages = new List<ChatMessage>();
        foreach (var message in query)
        {
            ChatRole chatRole = message.Role switch
            {
                MessageRole.System => ChatRole.System,
                MessageRole.Assistant => ChatRole.Assistant,
                MessageRole.User => ChatRole.User,
                _ => throw new ArgumentException($"Invalid MessageRole: {message.Role}")
            };

            // Convert text contents from Telerik.Reporting.AI TO Microsoft.Extensions.AI
            var textContents = message.Contents
                .OfType<Telerik.Reporting.AI.TextContent>()
                .Select(textContent => new Microsoft.Extensions.AI.TextContent(textContent.Text))
                .Cast<AIContent>()
                .ToList();

            chatMessages.Add(new ChatMessage(chatRole, textContents));
        }

        // Call Azure OpenAI
        var response = await chatClient.GetResponseAsync(chatMessages, new ChatOptions(), cancellationToken);

        // Convert response back to Telerik.Reporting.AI IMessage
        var resultMessages = new List<IMessage>();
        foreach (var responseMessage in response.Messages)
        {
            MessageRole messageRole = responseMessage.Role.Value switch
            {
                "system" => MessageRole.System,
                "assistant" => MessageRole.Assistant,
                "user" => MessageRole.User,
                _ => throw new ArgumentException($"Invalid ChatRole: {responseMessage.Role}")
            };

            // Convert back to Telerik.Reporting.AI content
            var contents = responseMessage.Contents
                .OfType<Microsoft.Extensions.AI.TextContent>()
                .Select(tc => new Telerik.Reporting.AI.TextContent(tc.Text))
                .Cast<IContent>()
                .ToList();

            resultMessages.Add(new Message(messageRole, contents));
        }

        return resultMessages;
    }

    public static IClient GetCustomAIClient()
    {
        return new CustomAIClient();
    }
}
````

1. Register the custom client in your `ReportServiceConfiguration`:

    * .NET

    ````C#
builder.Services.TryAddSingleton<IReportServiceConfiguration>(sp => new ReportServiceConfiguration
{
    HostAppId = "MyApp",
    AIClientFactory = WebApplication1.AI.CustomAIClient.GetCustomAIClient,
    // ...
});
````

    * .NET Framework

    ````C#
public class CustomResolverReportsController : ReportsControllerBase
{
    static ReportServiceConfiguration configurationInstance;

    static CustomResolverReportsController()
    {
        configurationInstance = new ReportServiceConfiguration
        {
            HostAppId = "MyApp",
            AIClientFactory = WebApplication1.AI.CustomAIClient.GetCustomAIClient,
            // ...
        };
    }
}
````

You can further customize the AI client to enable additional features like RAG optimization, predefined prompts, and user consent settings. For more details, refer to [Configuring the AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}).
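
Before wiring the client into the REST service, it can be useful to exercise the implementation in isolation and confirm that the provider round-trip works. The following is a minimal smoke-test sketch, not part of the Reporting API, assuming the `CustomAIClient` class from the example above, the `Message`/`TextContent` constructor shapes it uses, and a .NET console app with implicit usings; the prompt text is arbitrary:

````C#
using Telerik.Reporting.AI;
using WebApplication1.AI;

// Build a single user message with the same Telerik.Reporting.AI types the client consumes.
IClient client = CustomAIClient.GetCustomAIClient();
var query = new List<IMessage>
{
    new Message(MessageRole.User, new[] { new TextContent("Summarize in one sentence: Q3 sales grew 12%.") })
};

// Send the query and print the text parts of every returned message.
IReadOnlyCollection<IMessage> response = await client.GetResponseAsync(query, CancellationToken.None);
foreach (var message in response)
{
    foreach (var text in message.Contents.OfType<TextContent>())
    {
        Console.WriteLine($"[{message.Role}] {text.Text}");
    }
}
````

Running such a check before registration makes it easier to separate provider connectivity issues from Reporting configuration issues.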
## Understanding the IClient Interface

The `Telerik.Reporting.AI.IClient` interface defines the contract for AI service integration:

```csharp
public interface IClient
{
    string Model { get; }
    bool SupportsSystemPrompts { get; }
    Task<IReadOnlyCollection<IMessage>> GetResponseAsync(IReadOnlyCollection<IMessage> query, CancellationToken cancellationToken);
}
```

### Key Properties and Methods

- **Model**—Specifies the model name used for tokenization encoding. This should match the actual model being used for accurate token counting. For more information on its impact, check the `tokenizationEncoding` option in the [RAG Configuration]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}#retrieval-augmented-generation-rag-configuration) section.
- **SupportsSystemPrompts**—Indicates whether the LLM supports system role messages. When `false`, all messages in the `query` argument from the `GetResponseAsync` method are converted to user role to prevent invalid message types from being unintentionally passed to the LLM client during type conversion.
- **GetResponseAsync**—The core method that processes AI queries and returns responses.

### Implementation Details

The `IChatClient` in the example above is not mandatory—it is used to simplify interaction with the Azure OpenAI service. You can implement the interface using any client that communicates with your chosen LLM provider.

When RAG (Retrieval-Augmented Generation) is enabled via the `allowRAG` configuration option, the `GetResponseAsync` method is called twice per user prompt:

1. **RAG Evaluation Call**—Determines if the prompt is suitable for RAG optimization. The `query` parameter contains instructions for RAG applicability assessment and the user's question.
1. **Main Query Call**—Processes the request with the report data. The `query` parameter includes response instructions, report metadata (may be filtered based on the RAG evaluation), and the user's question.

This dual-call approach optimizes token usage by first determining RAG suitability, then filtering report data only when the evaluation indicates RAG optimization is beneficial.

When RAG is disabled, the method is called only once without the report metadata being pre-filtered.

## See Also

* [AI-Powered Insights Overview]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights%})
* [Using AI-Powered Insights with a REST service]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-rest-service%})
* [Configuring the AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%})
* [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights)
diff --git a/interactivity/using-ai-powered-insights-rest-service.md b/interactivity/using-ai-powered-insights-rest-service.md
new file mode 100644
index 000000000..b55687f02
--- /dev/null
+++ b/interactivity/using-ai-powered-insights-rest-service.md
@@ -0,0 +1,79 @@
---
title: Using AI-Powered Insights with a REST service
page_title: How to Use AI-Powered Insights with a REST service
description: "Learn how to enable the AI-powered insights functionality for reports previewed through a Telerik Reporting REST service."
slug: telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-rest-service
tags: telerik, reporting, ai, rest
published: True
position: 2
---

This tutorial shows how to enable and configure AI-powered insights with a [Telerik Reporting REST service]({%slug telerikreporting/using-reports-in-applications/host-the-report-engine-remotely/telerik-reporting-rest-services/overview%}) so end users can run predefined or custom prompts against the data behind the currently previewed report and receive responses from an LLM.

> If you use a [Telerik Report Server](https://docs.telerik.com/report-server/introduction) instead of a standalone Telerik Reporting REST service, check the article [AI-Powered Features Settings](https://docs.telerik.com/report-server/implementer-guide/configuration/ai-settings) instead.

## Prerequisites

To follow the steps from this tutorial, you must have:

- A running application that hosts a Reporting REST service.
- A report viewer connected to that REST service.
- An active subscription (or local runtime) for an LLM provider with API access. The following providers are supported out of the box:
  - [Azure AI Foundry](https://learn.microsoft.com/en-us/azure/ai-foundry/concepts/deployments-overview)
  - [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/overview#how-do-i-get-access-to-azure-openai)
  - [OpenAI](https://platform.openai.com/docs/models)
  - [Ollama](https://docs.ollama.com/quickstart)

>tip You can also connect to LLM providers that are not supported out of the box. To do this, create a custom `Telerik.Reporting.AI.IClient` implementation to integrate the provider into Reporting and enable the AI-powered insights functionality. For more details, refer to the article [Creating Custom AI Client Implementation]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/custom-iclient-implementation%}).

## Using AI-Powered Insights with a REST service

To enable the AI-powered insights, follow these steps:

1. Install exactly one of the following NuGet packages, depending on the LLM provider you use:

    - `Telerik.Reporting.AI.Microsoft.Extensions.AzureAIInference` - for Azure AI Foundry
    - `Telerik.Reporting.AI.Microsoft.Extensions.AzureOpenAI` - for Azure OpenAI resources
    - `Telerik.Reporting.AI.Microsoft.Extensions.OpenAI` - for OpenAI
    - `Telerik.Reporting.AI.Microsoft.Extensions.Ollama` - for Ollama

1. Add the [AIClient element]({%slug telerikreporting/aiclient-element%}) to the report engine configuration in your application's configuration file. This element allows you to specify the AI model, endpoint, and authentication credentials. The following example demonstrates a basic Azure OpenAI configuration:

````JSON
{
  "telerikReporting": {
    "AIClient": {
      "friendlyName": "MicrosoftExtensionsAzureOpenAI",
      "model": "gpt-4o-mini",
      "endpoint": "https://ai-explorations.openai.azure.com/",
      "credential": "YOUR_API_KEY"
    }
  }
}
````
````XML
<telerikReporting>
  <AIClient friendlyName="MicrosoftExtensionsAzureOpenAI" model="gpt-4o-mini"
    endpoint="https://ai-explorations.openai.azure.com/" credential="YOUR_API_KEY">
  </AIClient>
</telerikReporting>
````

>tip If you haven't configured the report engine previously, make sure to check the article [Report Engine Configuration Overview]({%slug telerikreporting/using-reports-in-applications/export-and-configure/configure-the-report-engine/overview%}) to get familiar with this topic.

Note that the `friendlyName` attribute identifies the LLM provider to the report engine. Each provider has specific configuration requirements:

- Azure OpenAI: Use `MicrosoftExtensionsAzureOpenAI`. Requires `model`, `endpoint`, and `credential`.
+- Azure AI Foundry: Use `MicrosoftExtensionsAzureAIInference`. Requires `model`, `endpoint`, and `credential`. +- OpenAI: Use `MicrosoftExtensionsOpenAI`. Requires only `model` and `credential` (uses the default OpenAI API endpoint). +- Ollama: Use `MicrosoftExtensionsOllama`. Requires only `model` and `endpoint` (no authentication needed for local deployments). + +## See Also + +* [AI-Powered Insights Overview]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights%}) +* [Configuring the AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}) +* [Creating Custom AI Client Implementation]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/custom-iclient-implementation%}) +* [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights) From 8246eb6ed4b458c4d594d4024c33271c496f1280 Mon Sep 17 00:00:00 2001 From: petar-i-todorov Date: Mon, 20 Oct 2025 19:16:09 +0300 Subject: [PATCH 02/13] docs: review the articles --- interactivity/ai-powered-insights-overview.md | 2 +- .../configuring-ai-powered-insights.md | 53 +---- interactivity/custom-iclient.md | 182 +++++++++--------- .../using-ai-powered-insights-rest-service.md | 58 +++--- 4 files changed, 130 insertions(+), 165 deletions(-) diff --git a/interactivity/ai-powered-insights-overview.md b/interactivity/ai-powered-insights-overview.md index 98fff1de5..fb06df3cb 100644 --- a/interactivity/ai-powered-insights-overview.md +++ b/interactivity/ai-powered-insights-overview.md @@ -1,5 +1,5 @@ --- -title: AI-Powered Insights +title: AI-Powered Insights Overview page_title: AI-Powered Insights in Report Preview description: "Learn about the AI insights feature of Reporting, which allow users to execute predefined or custom prompts on the core data of the previewed report, receiving responses from an AI model." slug: telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights diff --git a/interactivity/configuring-ai-powered-insights.md b/interactivity/configuring-ai-powered-insights.md index e8ede2354..4ff48727f 100644 --- a/interactivity/configuring-ai-powered-insights.md +++ b/interactivity/configuring-ai-powered-insights.md @@ -1,5 +1,5 @@ --- -title: Customizing the AI-Powered Insights +title: Customizing AI-Powered Insights page_title: How to Customize the AI-Powered Insights description: "Learn how to configure the AI-powered insights functionality to handle common and not so much use cases." slug: telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights @@ -8,7 +8,9 @@ published: True position: 3 --- -This article outlines the different ways to customize the AI-powered insights functionality to handle different use cases. They are listed as follows: +# Customizing AI-Powered Insights + +This article explains how to customize the AI-powered insights functionality for different use cases. There are two distinct ways to achieve this: - [Configuring the Report Engine](#configuring-the-report-engine) - Declarative configuration through application settings. - [Overriding ReportsControllerBase Methods](#overriding-reportscontrollerbase-methods) - Programmatic customization with custom logic. @@ -45,7 +47,7 @@ This is a base configuration, but it can be further extended to handle specific By default, the **AI Prompt** dialog requests explicit consent from users before sending prompts to the AI model. 
This ensures transparency about data being sent to external AI services and gives users control over their data privacy.

User Consent for AI Summaries

In enterprise environments where AI usage policies are already established or when working with trusted internal models, you may want to streamline the user experience by disabling this consent requirement. In these cases, you can set the `requireConsent` option to `false`:

### Prompts Configuration

By default, users can create their own custom prompts to ask any questions about their reports. While this provides maximum flexibility, it can lead to unpredictable token usage costs and potentially inconsistent results. In these cases, you can provide the users with predefined prompts that are designed to handle specific tasks.

To restrict users to predefined prompts only, set `allowCustomPrompts` to `false` and add the predefined prompts through the `predefinedPrompts` option:

````Modifying·Outgoing·Prompts
/// 
/// Modifies the prompt sent from the client before passing it to the LLM.
/// 
/// 
public override async Task<HttpResponseMessage> GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args)
{
    args.Query += $"{Environment.NewLine}Keep your response concise.";

    return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args);
}
````

## See Also

* [AI-Powered Insights Overview]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights%})
* [Using AI-Powered Insights with a REST service]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-rest-service%})
* [Creating Custom AI Client Implementation]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/custom-iclient-implementation%})
* [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights)
diff --git a/interactivity/custom-iclient.md b/interactivity/custom-iclient.md
index 15a858ce0..8bcdac469 100644
--- a/interactivity/custom-iclient.md
+++ b/interactivity/custom-iclient.md
@@ -18,123 +18,123 @@ To enable a custom AI client implementation, follow these steps:

1. Create a class that implements the `Telerik.Reporting.AI.IClient` interface.
The following example demonstrates an Azure OpenAI integration for illustration purposes, though you can use any LLM provider: -````C# -using Azure.AI.OpenAI; -using Microsoft.Extensions.AI; -using System.ClientModel; -using Telerik.Reporting.AI; - -namespace WebApplication1.AI; - -public class CustomAIClient : IClient -{ - public string Model { get; } = "gpt-4o-mini"; - - public bool SupportsSystemPrompts => false; + ````C# + using Azure.AI.OpenAI; + using Microsoft.Extensions.AI; + using System.ClientModel; + using Telerik.Reporting.AI; - private readonly IChatClient chatClient; + namespace WebApplication1.AI; - public CustomAIClient() + public class CustomAIClient : IClient { - string endpoint = "https://ai-explorations.openai.azure.com/"; - string credential = "YOUR_API_KEY"; - string model = "gpt-4o-mini"; + public string Model { get; } = "gpt-4o-mini"; - chatClient = new AzureOpenAIClient(new Uri(endpoint), new ApiKeyCredential(credential)) - .GetChatClient(model) - .AsIChatClient(); - } + public bool SupportsSystemPrompts => false; - public async Task> GetResponseAsync(IReadOnlyCollection query, CancellationToken cancellationToken) - { - // Convert Telerik.Reporting.AI IMessage to Microsoft.Extensions.AI ChatMessage - var chatMessages = new List(); - foreach (var message in query) - { - ChatRole chatRole = message.Role switch - { - MessageRole.System => ChatRole.System, - MessageRole.Assistant => ChatRole.Assistant, - MessageRole.User => ChatRole.User, - _ => throw new ArgumentException($"Invalid MessageRole: {message.Role}") - }; + private readonly IChatClient chatClient; - // Convert text contents from Telerik.Reporting.AI TO Microsoft.Extensions.AI - var textContents = message.Contents - .OfType() - .Select(textContent => new Microsoft.Extensions.AI.TextContent(textContent.Text)) - .Cast() - .ToList(); + public CustomAIClient() + { + string endpoint = "https://ai-explorations.openai.azure.com/"; + string credential = "YOUR_API_KEY"; + string model = "gpt-4o-mini"; - chatMessages.Add(new ChatMessage(chatRole, textContents)); + chatClient = new AzureOpenAIClient(new Uri(endpoint), new ApiKeyCredential(credential)) + .GetChatClient(model) + .AsIChatClient(); } - // Call Azure OpenAI - var response = await chatClient.GetResponseAsync(chatMessages, new ChatOptions(), cancellationToken); - - // Convert response back to Telerik.Reporting.AI IMessage - var resultMessages = new List(); - foreach (var responseMessage in response.Messages) + public async Task> GetResponseAsync(IReadOnlyCollection query, CancellationToken cancellationToken) { - MessageRole messageRole = responseMessage.Role.Value switch + // Convert Telerik.Reporting.AI IMessage to Microsoft.Extensions.AI ChatMessage + var chatMessages = new List(); + foreach (var message in query) { - "system" => MessageRole.System, - "assistant" => MessageRole.Assistant, - "user" => MessageRole.User, - _ => throw new ArgumentException($"Invalid ChatRole: {responseMessage.Role}") - }; - - // Convert back to Telerik.Reporting.AI content - var contents = responseMessage.Contents - .OfType() - .Select(tc => new Telerik.Reporting.AI.TextContent(tc.Text)) - .Cast() - .ToList(); - - resultMessages.Add(new Message(messageRole, contents)); + ChatRole chatRole = message.Role switch + { + MessageRole.System => ChatRole.System, + MessageRole.Assistant => ChatRole.Assistant, + MessageRole.User => ChatRole.User, + _ => throw new ArgumentException($"Invalid MessageRole: {message.Role}") + }; + + // Convert text contents from Telerik.Reporting.AI TO 
Microsoft.Extensions.AI + var textContents = message.Contents + .OfType() + .Select(textContent => new Microsoft.Extensions.AI.TextContent(textContent.Text)) + .Cast() + .ToList(); + + chatMessages.Add(new ChatMessage(chatRole, textContents)); + } + + // Call Azure OpenAI + var response = await chatClient.GetResponseAsync(chatMessages, new ChatOptions(), cancellationToken); + + // Convert response back to Telerik.Reporting.AI IMessage + var resultMessages = new List(); + foreach (var responseMessage in response.Messages) + { + MessageRole messageRole = responseMessage.Role.Value switch + { + "system" => MessageRole.System, + "assistant" => MessageRole.Assistant, + "user" => MessageRole.User, + _ => throw new ArgumentException($"Invalid ChatRole: {responseMessage.Role}") + }; + + // Convert back to Telerik.Reporting.AI content + var contents = responseMessage.Contents + .OfType() + .Select(tc => new Telerik.Reporting.AI.TextContent(tc.Text)) + .Cast() + .ToList(); + + resultMessages.Add(new Message(messageRole, contents)); + } + + return resultMessages; } - return resultMessages; - } - - public static IClient GetCustomAIClient() - { - return new CustomAIClient(); + public static IClient GetCustomAIClient() + { + return new CustomAIClient(); + } } -} -```` + ```` 1. Register the custom client in your `ReportServiceConfiguration`: * .NET ````C# -builder.Services.TryAddSingleton(sp => new ReportServiceConfiguration -{ - HostAppId = "MyApp", - AIClientFactory = WebApplication1.AI.CustomAIClient.GetCustomAIClient, - // ... -}); -```` + builder.Services.TryAddSingleton(sp => new ReportServiceConfiguration + { + HostAppId = "MyApp", + AIClientFactory = WebApplication1.AI.CustomAIClient.GetCustomAIClient, + // ... + }); + ```` * .NET Framework ````C# -public class CustomResolverReportsController : ReportsControllerBase -{ - static ReportServiceConfiguration configurationInstance; - - static CustomResolverReportsController() + public class CustomResolverReportsController : ReportsControllerBase { - configurationInstance = new ReportServiceConfiguration + static ReportServiceConfiguration configurationInstance; + + static CustomResolverReportsController() { - HostAppId = "MyApp", - AIClientFactory = WebApplication1.AI.CustomAIClient.GetCustomAIClient, - // ... - }; + configurationInstance = new ReportServiceConfiguration + { + HostAppId = "MyApp", + AIClientFactory = WebApplication1.AI.CustomAIClient.GetCustomAIClient, + // ... + }; + } } -} -```` + ```` You can further customize the AI client to enable additional features like RAG optimization, predefined prompts, and user consent settings. For more details, refer to [Configuring the AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}). @@ -170,6 +170,8 @@ This dual-call approach optimizes token usage by first determining RAG suitabili When RAG is disabled, the method is called only once without the report metadata being pre-filtered. +> RAG is available only in .NET and .NET Standard. 
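
If you need to observe this two-step exchange, for example to log latency or the amount of context sent on each call, one lightweight option is to wrap any `IClient` in a tracing decorator. The sketch below is illustrative and relies only on the interface members shown above; the `TracingClient` name is arbitrary, and the generic type arguments are reconstructed from the interface definition:

````C#
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
using Telerik.Reporting.AI;

// A minimal decorator sketch. With RAG enabled, expect two traced calls per user prompt.
public class TracingClient : IClient
{
    private readonly IClient inner;

    public TracingClient(IClient inner)
    {
        this.inner = inner;
    }

    public string Model => this.inner.Model;

    public bool SupportsSystemPrompts => this.inner.SupportsSystemPrompts;

    public async Task<IReadOnlyCollection<IMessage>> GetResponseAsync(IReadOnlyCollection<IMessage> query, CancellationToken cancellationToken)
    {
        var stopwatch = Stopwatch.StartNew();
        var response = await this.inner.GetResponseAsync(query, cancellationToken);

        // The message count hints at how much report context was included in this particular call.
        Trace.TraceInformation($"AI call with {query.Count} message(s) completed in {stopwatch.ElapsedMilliseconds} ms.");

        return response;
    }
}
````

A decorated client can then be supplied through the same factory shown earlier, for example `AIClientFactory = () => new TracingClient(new CustomAIClient())`.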
+ ## See Also * [AI-Powered Insights Overview]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights%}) diff --git a/interactivity/using-ai-powered-insights-rest-service.md b/interactivity/using-ai-powered-insights-rest-service.md index b55687f02..ac1da1e6c 100644 --- a/interactivity/using-ai-powered-insights-rest-service.md +++ b/interactivity/using-ai-powered-insights-rest-service.md @@ -8,9 +8,11 @@ published: True position: 2 --- -This tutorial shows how to enable and configure AI-powered insights with a [Telerik Reporting REST service]({%slug telerikreporting/using-reports-in-applications/host-the-report-engine-remotely/telerik-reporting-rest-services/overview%}) so end users can run predefined or custom prompts against the data behind the currently previewed report and receive responses from an LLM. +# Using AI-Powered Insights With a REST Service -> If you use a [Telerik Report Server](https://docs.telerik.com/report-server/introduction) instead of a standalone Telerik Reporting REST service, check the article [AI-Powered Features Settings](https://docs.telerik.com/report-server/implementer-guide/configuration/ai-settings) instead. +This tutorial shows how to enable and configure AI-powered insights with a [Telerik Reporting REST service]({%slug telerikreporting/using-reports-in-applications/host-the-report-engine-remotely/telerik-reporting-rest-services/overview%}) so that end users can run predefined or custom prompts against the data behind the currently previewed report and receive responses from an LLM. + +> If you use a [Telerik Report Server](https://docs.telerik.com/report-server/introduction) instead of a standalone Telerik Reporting REST service, check the Report Server article [AI-Powered Features Settings](https://docs.telerik.com/report-server/implementer-guide/configuration/ai-settings) instead. ## Prerequisites @@ -24,7 +26,7 @@ To follow the steps from this tutorial, you must have: - [OpenAI](https://platform.openai.com/docs/models) - [Ollama](https://docs.ollama.com/quickstart) ->tip You can also connect to LLM providers that are not supported out of the box. To do this, create a custom `Telerik.Reporting.AI.IClient` implementation to integrate the provider into Reporting and enable the AI-powered insights functionality. For more details, refer to the article [{%slug telerikreporting/designing-reports/adding-interactivity-to-reports/custom-iclient-implementation%}]. +>tip You can also connect to LLM providers that are not supported out of the box. To do this, create a custom `Telerik.Reporting.AI.IClient` implementation to integrate the provider into Reporting and enable the AI-powered insights functionality. For more details, refer to the article [Creating Custom AI Client Implementation]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/custom-iclient-implementation%}). ## Using AI-Powered Insights with a REST service @@ -32,39 +34,39 @@ To enable the AI-powered insights, follow these steps: 1. 
Install exactly one of the following NuGet packages, depending on the LLM provider you use: -- `Telerik.Reporting.AI.Microsoft.Extensions.AzureAIInference` - for Azure AI Foundry -- `Telerik.Reporting.AI.Microsoft.Extensions.AzureOpenAI` - for Azure OpenAI resources -- `Telerik.Reporting.AI.Microsoft.Extensions.OpenAI` - for OpenAI -- `Telerik.Reporting.AI.Microsoft.Extensions.Ollama` - for Ollama + - `Telerik.Reporting.AI.Microsoft.Extensions.AzureAIInference` - for Azure AI Foundry + - `Telerik.Reporting.AI.Microsoft.Extensions.AzureOpenAI` - for Azure OpenAI resources + - `Telerik.Reporting.AI.Microsoft.Extensions.OpenAI` - for OpenAI + - `Telerik.Reporting.AI.Microsoft.Extensions.Ollama` - for Ollama 1. Add the [AIClient element]({%slug telerikreporting/aiclient-element%}) to the report engine configuration in your application's configuration file. This element allows you to specify the AI model, endpoint, and authentication credentials. The following example demonstrates a basic Azure OpenAI configuration: -````JSON -{ - "telerikReporting": { - "AIClient": { - "friendlyName": "MicrosoftExtensionsAzureOpenAI", - "model": "gpt-4o-mini", - "endpoint": "https://ai-explorations.openai.azure.com/", - "credential": "YOUR_API_KEY" + ````JSON + { + "telerikReporting": { + "AIClient": { + "friendlyName": "MicrosoftExtensionsAzureOpenAI", + "model": "gpt-4o-mini", + "endpoint": "https://ai-explorations.openai.azure.com/", + "credential": "YOUR_API_KEY" + } } } -} -```` -````XML - - - - -```` + ```` + ````XML + + + + + ```` >tip If you haven't configured the report engine previously, make sure to check the article [Report Engine Configuration Overview]({%slug telerikreporting/using-reports-in-applications/export-and-configure/configure-the-report-engine/overview%}) to get familiar with this topic. -Note that the `friendlyName` attribute identifies the LLM provider to the report engine. Each provider has specific configuration requirements: +In this case, the `friendlyName` attribute identifies the LLM provider to the report engine. Each provider has specific configuration requirements: - Azure OpenAI: Use `MicrosoftExtensionsAzureOpenAI`. Requires `model`, `endpoint`, and `credential`. - Azure AI Foundry: Use `MicrosoftExtensionsAzureAIInference`. Requires `model`, `endpoint`, and `credential`. From aad9020b5b634982123d0e88a613dcecf8fb07f7 Mon Sep 17 00:00:00 2001 From: petar-i-todorov Date: Tue, 21 Oct 2025 19:28:18 +0300 Subject: [PATCH 03/13] chore: fix snippet formatting --- interactivity/custom-iclient.md | 186 +++++++++--------- .../using-ai-powered-insights-rest-service.md | 40 ++-- 2 files changed, 113 insertions(+), 113 deletions(-) diff --git a/interactivity/custom-iclient.md b/interactivity/custom-iclient.md index 8bcdac469..6ccf7ef67 100644 --- a/interactivity/custom-iclient.md +++ b/interactivity/custom-iclient.md @@ -19,122 +19,122 @@ To enable a custom AI client implementation, follow these steps: 1. Create a class that implements the `Telerik.Reporting.AI.IClient` interface. 
The following example demonstrates an Azure OpenAI integration for illustration purposes, though you can use any LLM provider: ````C# - using Azure.AI.OpenAI; - using Microsoft.Extensions.AI; - using System.ClientModel; - using Telerik.Reporting.AI; +using Azure.AI.OpenAI; +using Microsoft.Extensions.AI; +using System.ClientModel; +using Telerik.Reporting.AI; - namespace WebApplication1.AI; +namespace WebApplication1.AI; - public class CustomAIClient : IClient - { - public string Model { get; } = "gpt-4o-mini"; +public class CustomAIClient : IClient +{ + public string Model { get; } = "gpt-4o-mini"; - public bool SupportsSystemPrompts => false; + public bool SupportsSystemPrompts => false; - private readonly IChatClient chatClient; + private readonly IChatClient chatClient; - public CustomAIClient() - { - string endpoint = "https://ai-explorations.openai.azure.com/"; - string credential = "YOUR_API_KEY"; - string model = "gpt-4o-mini"; + public CustomAIClient() + { + string endpoint = "https://ai-explorations.openai.azure.com/"; + string credential = "YOUR_API_KEY"; + string model = "gpt-4o-mini"; - chatClient = new AzureOpenAIClient(new Uri(endpoint), new ApiKeyCredential(credential)) - .GetChatClient(model) - .AsIChatClient(); - } + chatClient = new AzureOpenAIClient(new Uri(endpoint), new ApiKeyCredential(credential)) + .GetChatClient(model) + .AsIChatClient(); + } - public async Task> GetResponseAsync(IReadOnlyCollection query, CancellationToken cancellationToken) + public async Task> GetResponseAsync(IReadOnlyCollection query, CancellationToken cancellationToken) + { + // Convert Telerik.Reporting.AI IMessage to Microsoft.Extensions.AI ChatMessage + var chatMessages = new List(); + foreach (var message in query) { - // Convert Telerik.Reporting.AI IMessage to Microsoft.Extensions.AI ChatMessage - var chatMessages = new List(); - foreach (var message in query) - { - ChatRole chatRole = message.Role switch - { - MessageRole.System => ChatRole.System, - MessageRole.Assistant => ChatRole.Assistant, - MessageRole.User => ChatRole.User, - _ => throw new ArgumentException($"Invalid MessageRole: {message.Role}") - }; - - // Convert text contents from Telerik.Reporting.AI TO Microsoft.Extensions.AI - var textContents = message.Contents - .OfType() - .Select(textContent => new Microsoft.Extensions.AI.TextContent(textContent.Text)) - .Cast() - .ToList(); - - chatMessages.Add(new ChatMessage(chatRole, textContents)); - } - - // Call Azure OpenAI - var response = await chatClient.GetResponseAsync(chatMessages, new ChatOptions(), cancellationToken); - - // Convert response back to Telerik.Reporting.AI IMessage - var resultMessages = new List(); - foreach (var responseMessage in response.Messages) + ChatRole chatRole = message.Role switch { - MessageRole messageRole = responseMessage.Role.Value switch - { - "system" => MessageRole.System, - "assistant" => MessageRole.Assistant, - "user" => MessageRole.User, - _ => throw new ArgumentException($"Invalid ChatRole: {responseMessage.Role}") - }; - - // Convert back to Telerik.Reporting.AI content - var contents = responseMessage.Contents - .OfType() - .Select(tc => new Telerik.Reporting.AI.TextContent(tc.Text)) - .Cast() - .ToList(); - - resultMessages.Add(new Message(messageRole, contents)); - } - - return resultMessages; + MessageRole.System => ChatRole.System, + MessageRole.Assistant => ChatRole.Assistant, + MessageRole.User => ChatRole.User, + _ => throw new ArgumentException($"Invalid MessageRole: {message.Role}") + }; + + // Convert text contents 
from Telerik.Reporting.AI TO Microsoft.Extensions.AI + var textContents = message.Contents + .OfType() + .Select(textContent => new Microsoft.Extensions.AI.TextContent(textContent.Text)) + .Cast() + .ToList(); + + chatMessages.Add(new ChatMessage(chatRole, textContents)); } - public static IClient GetCustomAIClient() + // Call Azure OpenAI + var response = await chatClient.GetResponseAsync(chatMessages, new ChatOptions(), cancellationToken); + + // Convert response back to Telerik.Reporting.AI IMessage + var resultMessages = new List(); + foreach (var responseMessage in response.Messages) { - return new CustomAIClient(); + MessageRole messageRole = responseMessage.Role.Value switch + { + "system" => MessageRole.System, + "assistant" => MessageRole.Assistant, + "user" => MessageRole.User, + _ => throw new ArgumentException($"Invalid ChatRole: {responseMessage.Role}") + }; + + // Convert back to Telerik.Reporting.AI content + var contents = responseMessage.Contents + .OfType() + .Select(tc => new Telerik.Reporting.AI.TextContent(tc.Text)) + .Cast() + .ToList(); + + resultMessages.Add(new Message(messageRole, contents)); } + + return resultMessages; + } + + public static IClient GetCustomAIClient() + { + return new CustomAIClient(); } - ```` +} +```` 1. Register the custom client in your `ReportServiceConfiguration`: * .NET ````C# - builder.Services.TryAddSingleton(sp => new ReportServiceConfiguration - { - HostAppId = "MyApp", - AIClientFactory = WebApplication1.AI.CustomAIClient.GetCustomAIClient, - // ... - }); - ```` +builder.Services.TryAddSingleton(sp => new ReportServiceConfiguration +{ + HostAppId = "MyApp", + AIClientFactory = WebApplication1.AI.CustomAIClient.GetCustomAIClient, + // ... +}); +```` - * .NET Framework +* .NET Framework - ````C# - public class CustomResolverReportsController : ReportsControllerBase - { - static ReportServiceConfiguration configurationInstance; +````C# +public class CustomResolverReportsController : ReportsControllerBase +{ + static ReportServiceConfiguration configurationInstance; - static CustomResolverReportsController() + static CustomResolverReportsController() + { + configurationInstance = new ReportServiceConfiguration { - configurationInstance = new ReportServiceConfiguration - { - HostAppId = "MyApp", - AIClientFactory = WebApplication1.AI.CustomAIClient.GetCustomAIClient, - // ... - }; - } + HostAppId = "MyApp", + AIClientFactory = WebApplication1.AI.CustomAIClient.GetCustomAIClient, + // ... + }; } - ```` +} +```` You can further customize the AI client to enable additional features like RAG optimization, predefined prompts, and user consent settings. For more details, refer to [Configuring the AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}). 
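
If the client needs constructor dependencies, such as an `HttpClient`, strongly typed options, or a logger, you can also let the dependency injection container build it and have the factory resolve it. The following is only a sketch for ASP.NET Core: it assumes that `AIClientFactory` accepts any parameterless delegate returning `IClient`, which the method group assignment above suggests, and the registration lifetimes are illustrative:

````C#
// Sketch: resolve the custom client from the DI container instead of constructing it inline.
builder.Services.AddSingleton<Telerik.Reporting.AI.IClient, WebApplication1.AI.CustomAIClient>();

builder.Services.TryAddSingleton<IReportServiceConfiguration>(sp => new ReportServiceConfiguration
{
    HostAppId = "MyApp",
    AIClientFactory = () => sp.GetRequiredService<Telerik.Reporting.AI.IClient>(),
    // ...
});
````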
@@ -142,14 +142,14 @@ You can further customize the AI client to enable additional features like RAG o The `Telerik.Reporting.AI.IClient` interface defines the contract for AI service integration: -```csharp +````C# public interface IClient { string Model { get; } bool SupportsSystemPrompts { get; } Task> GetResponseAsync(IReadOnlyCollection query, CancellationToken cancellationToken); } -``` +```` ### Key Properties and Methods diff --git a/interactivity/using-ai-powered-insights-rest-service.md b/interactivity/using-ai-powered-insights-rest-service.md index ac1da1e6c..a52f48d84 100644 --- a/interactivity/using-ai-powered-insights-rest-service.md +++ b/interactivity/using-ai-powered-insights-rest-service.md @@ -41,28 +41,28 @@ To enable the AI-powered insights, follow these steps: 1. Add the [AIClient element]({%slug telerikreporting/aiclient-element%}) to the report engine configuration in your application's configuration file. This element allows you to specify the AI model, endpoint, and authentication credentials. The following example demonstrates a basic Azure OpenAI configuration: - ````JSON - { - "telerikReporting": { - "AIClient": { - "friendlyName": "MicrosoftExtensionsAzureOpenAI", - "model": "gpt-4o-mini", - "endpoint": "https://ai-explorations.openai.azure.com/", - "credential": "YOUR_API_KEY" - } +````JSON +{ + "telerikReporting": { + "AIClient": { + "friendlyName": "MicrosoftExtensionsAzureOpenAI", + "model": "gpt-4o-mini", + "endpoint": "https://ai-explorations.openai.azure.com/", + "credential": "YOUR_API_KEY" } } - ```` - ````XML - - - - - ```` +} +```` +````XML + + + + +```` >tip If you haven't configured the report engine previously, make sure to check the article [Report Engine Configuration Overview]({%slug telerikreporting/using-reports-in-applications/export-and-configure/configure-the-report-engine/overview%}) to get familiar with this topic. From 9ef425fa0e3480e6876f60e74861ea99d373abfc Mon Sep 17 00:00:00 2001 From: Petar Todorov <109748926+petar-i-todorov@users.noreply.github.com> Date: Thu, 23 Oct 2025 16:21:25 +0300 Subject: [PATCH 04/13] Update interactivity/configuring-ai-powered-insights.md Co-authored-by: Yordan <60105689+yordan-mitev@users.noreply.github.com> --- interactivity/configuring-ai-powered-insights.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/interactivity/configuring-ai-powered-insights.md b/interactivity/configuring-ai-powered-insights.md index 4ff48727f..7773c1108 100644 --- a/interactivity/configuring-ai-powered-insights.md +++ b/interactivity/configuring-ai-powered-insights.md @@ -109,7 +109,7 @@ You can also add predefined prompts without disabling custom ones, giving users ### Retrieval-Augmented Generation (RAG) Configuration -By default, the AI-powered insights functionality uses [Retrieval-Augmented Generation (RAG)](https://aws.amazon.com/what-is/retrieval-augmented-generation/) algorithm to filter out the irrelevant report data before sending it to the AI model. This approach significantly improves the accuracy and relevance of the AI-generated response while optimizing token usage. +By default, the AI-powered insights functionality uses a [Retrieval-Augmented Generation (RAG)](https://aws.amazon.com/what-is/retrieval-augmented-generation/) algorithm to filter out the irrelevant report data before sending it to the AI model. This approach significantly improves the accuracy and relevance of the AI-generated response while optimizing token usage. > RAG is available only in .NET and .NET Standard. 
Therefore, the options that are listed below are not supported in .NET Framework configurations. From 5f22de80941239ef9bc6df96121cd8c49ddf598e Mon Sep 17 00:00:00 2001 From: Petar Todorov <109748926+petar-i-todorov@users.noreply.github.com> Date: Thu, 23 Oct 2025 16:22:03 +0300 Subject: [PATCH 05/13] Update interactivity/configuring-ai-powered-insights.md Co-authored-by: Yordan <60105689+yordan-mitev@users.noreply.github.com> --- interactivity/configuring-ai-powered-insights.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/interactivity/configuring-ai-powered-insights.md b/interactivity/configuring-ai-powered-insights.md index 7773c1108..5e07e1164 100644 --- a/interactivity/configuring-ai-powered-insights.md +++ b/interactivity/configuring-ai-powered-insights.md @@ -147,7 +147,7 @@ For a complete reference of all available `AIClient` options, check the article ## Overriding ReportsControllerBase Methods -While declarative configuration handles most common scenarios, some advanced use cases require programmatic customization. You can achieve this by overriding specific methods of the [ReportsControllerBase](/api/telerik.reporting.services.webapi.reportscontrollerbase) class in your `ReportsController`. This approach allows you to implement dynamic logic based on user context, report properties, or business rules. +While the [declarative configuration](#configuring-the-report-engine) handles most common scenarios, some advanced use cases require programmatic customization. You can achieve this by overriding specific methods of the [ReportsControllerBase](/api/telerik.reporting.services.webapi.reportscontrollerbase) class in your `ReportsController`. This approach allows you to implement dynamic logic based on user context, report properties, or business rules. The following methods can be overridden to customize different aspects of the AI-powered insights workflow: From 9bee7ede070a0fa5ca9a6ade7dac117765ddbf2b Mon Sep 17 00:00:00 2001 From: Petar Todorov <109748926+petar-i-todorov@users.noreply.github.com> Date: Thu, 23 Oct 2025 16:22:41 +0300 Subject: [PATCH 06/13] Update interactivity/configuring-ai-powered-insights.md Co-authored-by: Yordan <60105689+yordan-mitev@users.noreply.github.com> --- interactivity/configuring-ai-powered-insights.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/interactivity/configuring-ai-powered-insights.md b/interactivity/configuring-ai-powered-insights.md index 5e07e1164..063fe4b56 100644 --- a/interactivity/configuring-ai-powered-insights.md +++ b/interactivity/configuring-ai-powered-insights.md @@ -149,7 +149,7 @@ For a complete reference of all available `AIClient` options, check the article While the [declarative configuration](#configuring-the-report-engine) handles most common scenarios, some advanced use cases require programmatic customization. You can achieve this by overriding specific methods of the [ReportsControllerBase](/api/telerik.reporting.services.webapi.reportscontrollerbase) class in your `ReportsController`. This approach allows you to implement dynamic logic based on user context, report properties, or business rules. -The following methods can be overridden to customize different aspects of the AI-powered insights workflow: +You can override the methods described in the following sections and customize different aspects of the AI-powered insights workflow. 
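
All of these overrides follow the same basic shape: run your custom logic first, then delegate to the base implementation so that the default pipeline still executes. A minimal skeleton of that pattern is shown below; the signature mirrors the .NET `GetAIResponse` examples later in this article, so adjust the return type if you target .NET Framework:

````C#
// Sketch: the common shape of a ReportsControllerBase override.
public override async Task<IActionResult> GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args)
{
    // Custom logic goes here: validation, logging, or attaching a confirmation callback.
    return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args);
}
````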
### CreateAIThread(string, string, ClientReportSource) From 76eb9dbf4f0fbe1041d36989e1294970563c9358 Mon Sep 17 00:00:00 2001 From: petar-i-todorov Date: Thu, 23 Oct 2025 17:09:36 +0300 Subject: [PATCH 07/13] docs: address review comments --- interactivity/ai-powered-insights-overview.md | 10 + .../configuring-ai-powered-insights.md | 35 +++- interactivity/custom-iclient.md | 173 +++++++++--------- .../using-ai-powered-insights-rest-service.md | 10 +- 4 files changed, 132 insertions(+), 96 deletions(-) diff --git a/interactivity/ai-powered-insights-overview.md b/interactivity/ai-powered-insights-overview.md index fb06df3cb..6166c69ca 100644 --- a/interactivity/ai-powered-insights-overview.md +++ b/interactivity/ai-powered-insights-overview.md @@ -38,6 +38,16 @@ The feature is supported by all [web report viewers]({%slug telerikreporting/usi User Consent for AI Summaries +## Next Steps + +To enable AI-Powered Insights in your application, explore these configuration options: + +- Set up the REST service configuration—Configure your Telerik Reporting REST service with an AI client for supported providers (Azure OpenAI, OpenAI, Azure AI Foundry, or Ollama) by following the [Using AI-Powered Insights with a REST service]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-rest-service%}) guide. + +- Create a custom AI client implementation (advanced)—If you need to connect to unsupported LLM providers or implement custom logic (like token usage tracking) to supported ones, refer to [Creating Custom AI Client Implementation]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/custom-iclient-implementation%}). + +- Customize the experience (optional)—For fine-tune settings like user consent, predefined prompts, and RAG optimization, check the [Configuring the AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}) article. + ## See Also * [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights) diff --git a/interactivity/configuring-ai-powered-insights.md b/interactivity/configuring-ai-powered-insights.md index 063fe4b56..2bdbd1d7d 100644 --- a/interactivity/configuring-ai-powered-insights.md +++ b/interactivity/configuring-ai-powered-insights.md @@ -11,8 +11,8 @@ position: 3 # Customizing AI-Powered Insights This article explains how to customize the AI-powered insights functionality for different use cases. There are two distinct ways to achieve this: -- [Configuring the Report Engine](#configuring-the-report-engine) - Declarative configuration through application settings. -- [Overriding ReportsControllerBase Methods](#overriding-reportscontrollerbase-methods) - Programmatic customization with custom logic. +- [Configuring the Report Engine](#configuring-the-report-engine)—Declarative configuration through application settings. +- [Overriding ReportsControllerBase Methods](#overriding-reportscontrollerbase-methods)—Programmatic customization with custom logic. ## Configuring the Report Engine @@ -116,11 +116,11 @@ By default, the AI-powered insights functionality uses a [Retrieval-Augmented Ge If needed, you can disable this algorithm by setting `allowRAG` to `false`. You can also configure the RAG behavior through the `ragSettings` option: -- `modelMaxInputTokenLimit` - Limits the maximum input tokens the AI model can process in a single request. The default value is `15000`. 
-- `maxNumberOfEmbeddingsSent` - Limits how many embeddings (chunks of retrieved content) are sent to the model in a single request. The default value is `15`. -- `maxTokenSizeOfSingleEmbedding` - Limits token size of each individual embedding, which prevents large chunks from dominating the prompt. The default value is `0` (no limit). -- `tokenizationEncoding` - Specifies tokenization scheme used to estimate the tokens usage before sending the request to the LLM model. By default, the encoding is determined automatically based on the specified model, which is recommended to ensure accurate token counting. Incorrect encoding may lead to miscalculations in token limits, causing either premature truncation of context or exceeding the model’s input capacity. -- `splitTables` - Indicates whether tables should be split during Retrieval-Augmented Generation (RAG) processing. When the splitting is allowed, only the relevant table cells will be taken into account, significantly reducing the number of tokens. The default value is `true`. +- `modelMaxInputTokenLimit`—Limits the maximum input tokens the AI model can process in a single request. The default value is `15000`. +- `maxNumberOfEmbeddingsSent`—Limits how many embeddings (chunks of retrieved content) are sent to the model in a single request. The default value is `15`. +- `maxTokenSizeOfSingleEmbedding`—Limits token size of each individual embedding, which prevents large chunks from dominating the prompt. The default value is `0` (no limit). +- `tokenizationEncoding`—Specifies tokenization scheme used to estimate the tokens usage before sending the request to the LLM model. By default, the encoding is determined automatically based on the specified model, which is recommended to ensure accurate token counting. Incorrect encoding may lead to miscalculations in token limits, causing either premature truncation of context or exceeding the model’s input capacity. +- `splitTables`—Indicates whether tables should be split during Retrieval-Augmented Generation (RAG) processing. When the splitting is allowed, only the relevant table cells will be taken into account, significantly reducing the number of tokens. The default value is `true`. Below is an example that takes advantage of the table splitting and automatic encoding inference, but reduces the token limits: @@ -398,6 +398,27 @@ public override async Task GetAIResponse(string clientID, s return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args); } ```` +````Token·Usage·Validation +/// +/// Examines the approximate tokens count and determines whether the prompt should be sent to the LLM. 
+/// +/// +public override async Task GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args) +{ + const int MAX_TOKEN_COUNT = 500; + args.ConfirmationCallBack = (AIRequestInfo info) => + { + if (info.EstimatedTokensCount > MAX_TOKEN_COUNT) + { + return ConfirmationResult.CancelResult($"The estimated token count exceeds the allowed limit of {MAX_TOKEN_COUNT} tokens."); + } + + return ConfirmationResult.ContinueResult(); + }; + + return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args); +} +```` ## See Also diff --git a/interactivity/custom-iclient.md b/interactivity/custom-iclient.md index 6ccf7ef67..1118d00d5 100644 --- a/interactivity/custom-iclient.md +++ b/interactivity/custom-iclient.md @@ -20,122 +20,125 @@ To enable a custom AI client implementation, follow these steps: ````C# using Azure.AI.OpenAI; -using Microsoft.Extensions.AI; -using System.ClientModel; -using Telerik.Reporting.AI; + using Microsoft.Extensions.AI; + using System.ClientModel; + using Telerik.Reporting.AI; -namespace WebApplication1.AI; + namespace WebApplication1.AI; -public class CustomAIClient : IClient -{ - public string Model { get; } = "gpt-4o-mini"; - - public bool SupportsSystemPrompts => false; - - private readonly IChatClient chatClient; - - public CustomAIClient() + public class CustomAIClient : IClient { - string endpoint = "https://ai-explorations.openai.azure.com/"; - string credential = "YOUR_API_KEY"; - string model = "gpt-4o-mini"; + public string Model { get; } = "gpt-4o-mini"; - chatClient = new AzureOpenAIClient(new Uri(endpoint), new ApiKeyCredential(credential)) - .GetChatClient(model) - .AsIChatClient(); - } + public bool SupportsSystemPrompts => false; - public async Task> GetResponseAsync(IReadOnlyCollection query, CancellationToken cancellationToken) - { - // Convert Telerik.Reporting.AI IMessage to Microsoft.Extensions.AI ChatMessage - var chatMessages = new List(); - foreach (var message in query) - { - ChatRole chatRole = message.Role switch - { - MessageRole.System => ChatRole.System, - MessageRole.Assistant => ChatRole.Assistant, - MessageRole.User => ChatRole.User, - _ => throw new ArgumentException($"Invalid MessageRole: {message.Role}") - }; + private readonly IChatClient chatClient; - // Convert text contents from Telerik.Reporting.AI TO Microsoft.Extensions.AI - var textContents = message.Contents - .OfType() - .Select(textContent => new Microsoft.Extensions.AI.TextContent(textContent.Text)) - .Cast() - .ToList(); + public CustomAIClient() + { + string endpoint = "https://ai-explorations.openai.azure.com/"; + string credential = "YOUR_API_KEY"; + string model = "gpt-4o-mini"; - chatMessages.Add(new ChatMessage(chatRole, textContents)); + chatClient = new AzureOpenAIClient(new Uri(endpoint), new ApiKeyCredential(credential)) + .GetChatClient(model) + .AsIChatClient(); } - // Call Azure OpenAI - var response = await chatClient.GetResponseAsync(chatMessages, new ChatOptions(), cancellationToken); - - // Convert response back to Telerik.Reporting.AI IMessage - var resultMessages = new List(); - foreach (var responseMessage in response.Messages) + public async Task> GetResponseAsync(IReadOnlyCollection query, CancellationToken cancellationToken) { - MessageRole messageRole = responseMessage.Role.Value switch + // Convert Telerik.Reporting.AI IMessage to Microsoft.Extensions.AI ChatMessage + var chatMessages = new List(); + foreach (var message in query) { - "system" => MessageRole.System, - "assistant" => 
MessageRole.Assistant, - "user" => MessageRole.User, - _ => throw new ArgumentException($"Invalid ChatRole: {responseMessage.Role}") - }; - - // Convert back to Telerik.Reporting.AI content - var contents = responseMessage.Contents - .OfType() - .Select(tc => new Telerik.Reporting.AI.TextContent(tc.Text)) - .Cast() - .ToList(); - - resultMessages.Add(new Message(messageRole, contents)); + ChatRole chatRole = message.Role switch + { + MessageRole.System => ChatRole.System, + MessageRole.Assistant => ChatRole.Assistant, + MessageRole.User => ChatRole.User, + _ => throw new ArgumentException($"Invalid MessageRole: {message.Role}") + }; + + // Convert text contents from Telerik.Reporting.AI TO Microsoft.Extensions.AI + var textContents = message.Contents + .OfType() + .Select(textContent => new Microsoft.Extensions.AI.TextContent(textContent.Text)) + .Cast() + .ToList(); + + chatMessages.Add(new ChatMessage(chatRole, textContents)); + } + + // Call Azure OpenAI + var response = await chatClient.GetResponseAsync(chatMessages, new ChatOptions(), cancellationToken); + + // Convert response back to Telerik.Reporting.AI IMessage + var resultMessages = new List(); + foreach (var responseMessage in response.Messages) + { + MessageRole messageRole = responseMessage.Role.Value switch + { + "system" => MessageRole.System, + "assistant" => MessageRole.Assistant, + "user" => MessageRole.User, + _ => throw new ArgumentException($"Invalid ChatRole: {responseMessage.Role}") + }; + + // Convert back to Telerik.Reporting.AI content + var contents = responseMessage.Contents + .OfType() + .Select(tc => new Telerik.Reporting.AI.TextContent(tc.Text)) + .Cast() + .ToList(); + + resultMessages.Add(new Message(messageRole, contents)); + } + + return resultMessages; } - return resultMessages; - } - - public static IClient GetCustomAIClient() - { - return new CustomAIClient(); + public static IClient GetCustomAIClient() + { + return new CustomAIClient(); + } } -} ```` + 1. Register the custom client in your `ReportServiceConfiguration`: * .NET ````C# builder.Services.TryAddSingleton(sp => new ReportServiceConfiguration -{ - HostAppId = "MyApp", - AIClientFactory = WebApplication1.AI.CustomAIClient.GetCustomAIClient, - // ... -}); + { + HostAppId = "MyApp", + AIClientFactory = WebApplication1.AI.CustomAIClient.GetCustomAIClient, + // ... + }); ```` + * .NET Framework -````C# + ````C# public class CustomResolverReportsController : ReportsControllerBase -{ - static ReportServiceConfiguration configurationInstance; - - static CustomResolverReportsController() { - configurationInstance = new ReportServiceConfiguration + static ReportServiceConfiguration configurationInstance; + + static CustomResolverReportsController() { - HostAppId = "MyApp", - AIClientFactory = WebApplication1.AI.CustomAIClient.GetCustomAIClient, - // ... - }; + configurationInstance = new ReportServiceConfiguration + { + HostAppId = "MyApp", + AIClientFactory = WebApplication1.AI.CustomAIClient.GetCustomAIClient, + // ... + }; + } } -} ```` + You can further customize the AI client to enable additional features like RAG optimization, predefined prompts, and user consent settings. For more details, refer to [Configuring the AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}). 
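
Before plugging the client into the REST service, you can sanity-check it in isolation by calling it directly from any async method. The sketch below builds the request with the same `Message` and `TextContent` types used in the example above; the prompt text is arbitrary, and the `List<IContent>` argument assumes the constructor shape shown there:

````C#
// Sketch: a quick standalone check of the custom client.
var client = WebApplication1.AI.CustomAIClient.GetCustomAIClient();

var prompt = new Telerik.Reporting.AI.Message(
    Telerik.Reporting.AI.MessageRole.User,
    new List<Telerik.Reporting.AI.IContent> { new Telerik.Reporting.AI.TextContent("Reply with a short greeting.") });

var response = await client.GetResponseAsync(new[] { prompt }, CancellationToken.None);

// Print the text parts of the first returned message.
foreach (var content in response.First().Contents.OfType<Telerik.Reporting.AI.TextContent>())
{
    Console.WriteLine(content.Text);
}
````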
## Understanding the IClient Interface @@ -159,7 +162,7 @@ public interface IClient ### Implementation Details -The `IChatClient` in the example above is not mandatory—it is used to simplify interaction with the Azure OpenAI service. You can implement the interface using any client that communicates with your chosen LLM provider. +The `IChatClient` in the [example above](#enabling-custom-ai-client) is not mandatory—it is used to simplify interaction with the Azure OpenAI service. You can implement the interface using any client that communicates with your chosen LLM provider. When RAG (Retrieval-Augmented Generation) is enabled via the `allowRAG` configuration option, the `GetResponseAsync` method is called twice per user prompt: diff --git a/interactivity/using-ai-powered-insights-rest-service.md b/interactivity/using-ai-powered-insights-rest-service.md index a52f48d84..6d03692d4 100644 --- a/interactivity/using-ai-powered-insights-rest-service.md +++ b/interactivity/using-ai-powered-insights-rest-service.md @@ -34,13 +34,14 @@ To enable the AI-powered insights, follow these steps: 1. Install exactly one of the following NuGet packages, depending on the LLM provider you use: - - `Telerik.Reporting.AI.Microsoft.Extensions.AzureAIInference` - for Azure AI Foundry - - `Telerik.Reporting.AI.Microsoft.Extensions.AzureOpenAI` - for Azure OpenAI resources - - `Telerik.Reporting.AI.Microsoft.Extensions.OpenAI` - for OpenAI - - `Telerik.Reporting.AI.Microsoft.Extensions.Ollama` - for Ollama + - `Telerik.Reporting.AI.Microsoft.Extensions.AzureAIInference`—for Azure AI Foundry + - `Telerik.Reporting.AI.Microsoft.Extensions.AzureOpenAI`—for Azure OpenAI resources + - `Telerik.Reporting.AI.Microsoft.Extensions.OpenAI`—for OpenAI + - `Telerik.Reporting.AI.Microsoft.Extensions.Ollama`—for Ollama 1. Add the [AIClient element]({%slug telerikreporting/aiclient-element%}) to the report engine configuration in your application's configuration file. This element allows you to specify the AI model, endpoint, and authentication credentials. The following example demonstrates a basic Azure OpenAI configuration: + ````JSON { "telerikReporting": { @@ -64,6 +65,7 @@ To enable the AI-powered insights, follow these steps: ```` + >tip If you haven't configured the report engine previously, make sure to check the article [Report Engine Configuration Overview]({%slug telerikreporting/using-reports-in-applications/export-and-configure/configure-the-report-engine/overview%}) to get familiar with this topic. In this case, the `friendlyName` attribute identifies the LLM provider to the report engine. Each provider has specific configuration requirements: From 64225ffcfb7f5aa19df5fa7f0bde960fb89618bf Mon Sep 17 00:00:00 2001 From: petar-i-todorov Date: Thu, 23 Oct 2025 18:39:42 +0300 Subject: [PATCH 08/13] docs: add ff example for custom iclient --- interactivity/custom-iclient.md | 115 ++++++++++++++++++++++++++++++++ 1 file changed, 115 insertions(+) diff --git a/interactivity/custom-iclient.md b/interactivity/custom-iclient.md index 1118d00d5..d81439553 100644 --- a/interactivity/custom-iclient.md +++ b/interactivity/custom-iclient.md @@ -18,6 +18,8 @@ To enable a custom AI client implementation, follow these steps: 1. Create a class that implements the `Telerik.Reporting.AI.IClient` interface. 
The following example demonstrates an Azure OpenAI integration for illustration purposes, though you can use any LLM provider: + * .NET + ````C# using Azure.AI.OpenAI; using Microsoft.Extensions.AI; @@ -105,6 +107,119 @@ using Azure.AI.OpenAI; ```` + . NET Framework + + ````C# +using Azure.AI.OpenAI; + using Microsoft.Extensions.AI; + using System; + using System.ClientModel; + using System.Collections.Generic; + using System.Linq; + using System.Threading; + using System.Threading.Tasks; + using System.Web.UI.WebControls; + using Telerik.Reporting.AI; + + namespace WebApplication1.AI + { + public class CustomAIClient : IClient + { + public string Model { get; } = "gpt-4o-mini"; + + public bool SupportsSystemPrompts => false; + + private readonly IChatClient chatClient; + + public CustomAIClient() + { + string endpoint = "https://ai-explorations.openai.azure.com/"; + string credential = "YOUR_API_KEY"; + string model = "gpt-4o-mini"; + + chatClient = new AzureOpenAIClient(new Uri(endpoint), new ApiKeyCredential(credential)) + .GetChatClient(model) + .AsIChatClient(); + } + + public async Task> GetResponseAsync(IReadOnlyCollection query, CancellationToken cancellationToken) + { + // Convert Telerik.Reporting.AI IMessage to Microsoft.Extensions.AI ChatMessage + var chatMessages = new List(); + foreach (var message in query) + { + ChatRole chatRole; + switch (message.Role) + { + case MessageRole.System: + chatRole = ChatRole.System; + break; + case MessageRole.Assistant: + chatRole = ChatRole.Assistant; + break; + case MessageRole.User: + chatRole = ChatRole.User; + break; + default: + throw new ArgumentException($"Invalid MessageRole: {message.Role}"); + } + + // Convert text contents from Telerik.Reporting.AI TO Microsoft.Extensions.AI + var textContents = message.Contents + .OfType() + .Select(textContent => new Microsoft.Extensions.AI.TextContent(textContent.Text)) + .Cast() + .ToList(); + + chatMessages.Add(new ChatMessage(chatRole, textContents)); + } + + // Call Azure OpenAI + var response = await chatClient.GetResponseAsync(chatMessages, new ChatOptions(), cancellationToken); + + // Convert response back to Telerik.Reporting.AI IMessage + var resultMessages = new List(); + foreach (var responseMessage in response.Messages) + { + MessageRole messageRole; + switch (responseMessage.Role.Value) + { + case "system": + messageRole = MessageRole.System; + break; + case "assistant": + messageRole = MessageRole.Assistant; + break; + case "user": + messageRole = MessageRole.User; + break; + default: + throw new ArgumentException($"Invalid ChatRole: {responseMessage.Role}"); + } + + // Convert back to Telerik.Reporting.AI content + var contents = responseMessage.Contents + .OfType() + .Select(tc => new Telerik.Reporting.AI.TextContent(tc.Text)) + .Cast() + .ToList(); + + resultMessages.Add(new Telerik.Reporting.AI.Message(messageRole, contents)); + } + + return resultMessages; + } + + public static IClient GetCustomAIClient() + { + return new CustomAIClient(); + } + } + } +```` + + > This Azure OpenAI example uses `Azure.AI.OpenAI` version `2.2.0-beta.4` and `Microsoft.Extensions.AI.OpenAI` version `9.4.3-preview.1.25230.7` for demonstration purposes. For your implementation, you will typically use different packages specific to your LLM provider. Focus on the implementation structure, which is further detailed in the [Implementation Details](#implementation-details) section. + 1. 
Register the custom client in your `ReportServiceConfiguration`: * .NET From 1295a3acea7e99ab3cb88c72d9352e3c07627f4c Mon Sep 17 00:00:00 2001 From: petar-i-todorov Date: Thu, 23 Oct 2025 18:42:57 +0300 Subject: [PATCH 09/13] chore: change node section --- _config.yml | 2 +- interactivity/custom-iclient.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/_config.yml b/_config.yml index 256902799..508931e71 100644 --- a/_config.yml +++ b/_config.yml @@ -104,7 +104,7 @@ navigation: title: "Configuring the Output Formats" interactivity: position: 80 - title: "Interactivity" + title: "Interactivity & AI" interactivity/bookmarks: position: 10 title: "Bookmarks" diff --git a/interactivity/custom-iclient.md b/interactivity/custom-iclient.md index d81439553..0c3350cdb 100644 --- a/interactivity/custom-iclient.md +++ b/interactivity/custom-iclient.md @@ -234,7 +234,7 @@ builder.Services.TryAddSingleton(sp => new ReportSe ```` -* .NET Framework + * .NET Framework ````C# public class CustomResolverReportsController : ReportsControllerBase From 65ca04bd8fd501bf0a17e801ea4c7f45d103d2a5 Mon Sep 17 00:00:00 2001 From: petar-i-todorov Date: Thu, 23 Oct 2025 18:47:08 +0300 Subject: [PATCH 10/13] docs: make it clearer that friendlyname/custom aiclient are interchangeable --- interactivity/ai-powered-insights-overview.md | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/interactivity/ai-powered-insights-overview.md b/interactivity/ai-powered-insights-overview.md index 6166c69ca..39fc7e309 100644 --- a/interactivity/ai-powered-insights-overview.md +++ b/interactivity/ai-powered-insights-overview.md @@ -40,13 +40,15 @@ The feature is supported by all [web report viewers]({%slug telerikreporting/usi ## Next Steps -To enable AI-Powered Insights in your application, explore these configuration options: +To enable AI-Powered Insights in your application, start by choosing one of these implementation approaches: - Set up the REST service configuration—Configure your Telerik Reporting REST service with an AI client for supported providers (Azure OpenAI, OpenAI, Azure AI Foundry, or Ollama) by following the [Using AI-Powered Insights with a REST service]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-rest-service%}) guide. -- Create a custom AI client implementation (advanced)—If you need to connect to unsupported LLM providers or implement custom logic (like token usage tracking) to supported ones, refer to [Creating Custom AI Client Implementation]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/custom-iclient-implementation%}). +- Create a custom AI client implementation (advanced)—If you need to connect to unsupported LLM providers or implement custom logic (like token usage tracking) for any provider, refer to [Creating Custom AI Client Implementation]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/custom-iclient-implementation%}). -- Customize the experience (optional)—For fine-tune settings like user consent, predefined prompts, and RAG optimization, check the [Configuring the AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}) article. 
+Once you have chosen your implementation approach, you can optionally: + +- Customize the experience—Fine-tune settings like user consent, predefined prompts, and RAG optimization using the [Configuring the AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}) article. ## See Also From 99bfa91d75d1bcd9a1b111b7ee803a48645f779f Mon Sep 17 00:00:00 2001 From: petar-i-todorov Date: Thu, 23 Oct 2025 18:58:05 +0300 Subject: [PATCH 11/13] chore: fix unordered list --- interactivity/custom-iclient.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/interactivity/custom-iclient.md b/interactivity/custom-iclient.md index 0c3350cdb..ef85245a2 100644 --- a/interactivity/custom-iclient.md +++ b/interactivity/custom-iclient.md @@ -107,7 +107,7 @@ using Azure.AI.OpenAI; ```` - . NET Framework + * . NET Framework ````C# using Azure.AI.OpenAI; @@ -218,7 +218,7 @@ using Azure.AI.OpenAI; } ```` - > This Azure OpenAI example uses `Azure.AI.OpenAI` version `2.2.0-beta.4` and `Microsoft.Extensions.AI.OpenAI` version `9.4.3-preview.1.25230.7` for demonstration purposes. For your implementation, you will typically use different packages specific to your LLM provider. Focus on the implementation structure, which is further detailed in the [Implementation Details](#implementation-details) section. + > This Azure OpenAI example uses `Azure.AI.OpenAI` version `2.2.0-beta.4` and `Microsoft.Extensions.AI.OpenAI` version `9.4.3-preview.1.25230.7` for demonstration purposes. For your implementation, you will typically use different packages specific to your LLM provider. Focus on the implementation structure, which is further detailed in the [Understanding the IClient Interface](#understanding-the-iclient-interface) section. 1. Register the custom client in your `ReportServiceConfiguration`: From af9b104f5b4daf56b5bad503b2dbbcf750f2e78f Mon Sep 17 00:00:00 2001 From: petar-i-todorov Date: Fri, 24 Oct 2025 16:14:29 +0300 Subject: [PATCH 12/13] docs: address review comments --- interactivity/ai-powered-insights-overview.md | 16 +- ...uilt-in-client-integration-ai-insights.md} | 16 +- ... 
custom-client-integration-ai-insights.md} | 29 ++-- ....md => customizing-ai-powered-insights.md} | 151 +++++------------- 4 files changed, 71 insertions(+), 141 deletions(-) rename interactivity/{using-ai-powered-insights-rest-service.md => built-in-client-integration-ai-insights.md} (81%) rename interactivity/{custom-iclient.md => custom-client-integration-ai-insights.md} (88%) rename interactivity/{configuring-ai-powered-insights.md => customizing-ai-powered-insights.md} (70%) diff --git a/interactivity/ai-powered-insights-overview.md b/interactivity/ai-powered-insights-overview.md index 39fc7e309..33c10b0c5 100644 --- a/interactivity/ai-powered-insights-overview.md +++ b/interactivity/ai-powered-insights-overview.md @@ -40,19 +40,19 @@ The feature is supported by all [web report viewers]({%slug telerikreporting/usi ## Next Steps -To enable AI-Powered Insights in your application, start by choosing one of these implementation approaches: +To enable AI-Powered Insights in your application, choose one of these two implementation approaches: -- Set up the REST service configuration—Configure your Telerik Reporting REST service with an AI client for supported providers (Azure OpenAI, OpenAI, Azure AI Foundry, or Ollama) by following the [Using AI-Powered Insights with a REST service]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-rest-service%}) guide. +- Use built-in AI client—For supported LLM providers (Azure OpenAI, OpenAI, Azure AI Foundry, or Ollama), follow the [Enable AI-Powered Insights with Built-in AI Client]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-builtin-client%}) guide. -- Create a custom AI client implementation (advanced)—If you need to connect to unsupported LLM providers or implement custom logic (like token usage tracking) for any provider, refer to [Creating Custom AI Client Implementation]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/custom-iclient-implementation%}). +- Create custom AI client—For unsupported LLM providers or when you need custom logic (like token usage tracking), refer to [Enable AI-Powered Insights with Custom AI Client]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-custom-client%}). -Once you have chosen your implementation approach, you can optionally: +Once you have enabled the functionality, you can optionally: -- Customize the experience—Fine-tune settings like user consent, predefined prompts, and RAG optimization using the [Configuring the AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}) article. +- Customize the experience—Fine-tune settings like user consent, predefined prompts, and RAG optimization using the [Customizing AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}) article. 
## See Also * [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights) -* [Using AI-Powered Insights with a REST service]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-rest-service%}) -* [Configuring the AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}) -* [Creating Custom AI Client Implementation]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/custom-iclient-implementation%}) +* [Enable AI-Powered Insights with Built-in AI Client]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-builtin-client%}) +* [Enable AI-Powered Insights with Custom AI Client]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-custom-client%}) +* [Customizing AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}) diff --git a/interactivity/using-ai-powered-insights-rest-service.md b/interactivity/built-in-client-integration-ai-insights.md similarity index 81% rename from interactivity/using-ai-powered-insights-rest-service.md rename to interactivity/built-in-client-integration-ai-insights.md index 6d03692d4..88917c997 100644 --- a/interactivity/using-ai-powered-insights-rest-service.md +++ b/interactivity/built-in-client-integration-ai-insights.md @@ -1,16 +1,16 @@ --- -title: Using AI-Powered Insights with a REST service -page_title: How to Use AI-Powered Insights with a REST service -description: "Learn how to implement an AI-powered prompt UI as part of any web-based report viewer." -slug: telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-rest-service +title: Enable AI-Powered Insights with Built-in AI Client +page_title: How to Enable AI-Powered Insights with Built-in AI Client +description: "Learn how to enable AI-powered insights using built-in support for popular LLM providers like Azure OpenAI, OpenAI, Azure AI Foundry, and Ollama." +slug: telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-builtin-client tags: telerik, reporting, ai, rest published: True position: 2 --- -# Using AI-Powered Insights With a REST Service +# Enable AI-Powered Insights with Built-in AI Client -This tutorial shows how to enable and configure AI-powered insights with a [Telerik Reporting REST service]({%slug telerikreporting/using-reports-in-applications/host-the-report-engine-remotely/telerik-reporting-rest-services/overview%}) so that end users can run predefined or custom prompts against the data behind the currently previewed report and receive responses from an LLM. +This tutorial shows how to enable and configure AI-powered insights using built-in support for popular LLM providers, such as Azure OpenAI, OpenAI, Azure AI Foundry, and Ollama, so that end users can run predefined or custom prompts against the data behind the currently previewed report and receive responses from an LLM. > If you use a [Telerik Report Server](https://docs.telerik.com/report-server/introduction) instead of a standalone Telerik Reporting REST service, check the Report Server article [AI-Powered Features Settings](https://docs.telerik.com/report-server/implementer-guide/configuration/ai-settings) instead. 
@@ -78,6 +78,6 @@ In this case, the `friendlyName` attribute identifies the LLM provider to the re ## See Also * [AI-Powered Insights Overview]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights%}) -* [Configuring the AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}) -* [Creating Custom AI Client Implementation]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/custom-iclient-implementation%}) +* [Enable AI-Powered Insights with Custom AI Client]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-custom-client%}) +* [Customizing AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}) * [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights) diff --git a/interactivity/custom-iclient.md b/interactivity/custom-client-integration-ai-insights.md similarity index 88% rename from interactivity/custom-iclient.md rename to interactivity/custom-client-integration-ai-insights.md index ef85245a2..ddcc4347b 100644 --- a/interactivity/custom-iclient.md +++ b/interactivity/custom-client-integration-ai-insights.md @@ -1,16 +1,25 @@ --- -title: Creating Custom AI Client Implementation -page_title: How to Create a Custom AI Client Implementation -description: "Learn how to create a custom IClient implementation to integrate unsupported LLM providers with Telerik Reporting AI-powered insights." -slug: telerikreporting/designing-reports/adding-interactivity-to-reports/custom-iclient-implementation +title: Enable AI-Powered Insights with Custom AI Client +page_title: How to Enable AI-Powered Insights with Custom AI Client +description: "Learn how to enable AI-powered insights by creating a custom IClient implementation to integrate unsupported LLM providers or implement custom logic." +slug: telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-custom-client tags: telerik, reporting, ai, custom, implementation published: True -position: 4 +position: 3 --- -# Creating Custom AI Client Implementation +# Enable AI-Powered Insights with Custom AI Client -While Telerik Reporting provides built-in support for popular LLM providers like Azure OpenAI, OpenAI, and Ollama, you may need to integrate with other AI services or implement custom logic. This article shows how to create a custom `IClient` implementation to connect any LLM provider to the AI-powered insights functionality. +While Telerik Reporting provides built-in support for popular LLM providers like Azure OpenAI, OpenAI, Azure AI Foundry, and Ollama, you may need to integrate with other AI services or implement custom logic, such as token usage tracking. This article shows how to enable AI-powered insights by creating a custom `IClient` implementation to connect any LLM provider. + +## Prerequisites + +To follow the steps from this tutorial, you must have: + +- A running application that hosts a Reporting REST service. +- A report viewer connected to that REST service. + +>tip If you haven't set up a Telerik Reporting REST service yet, check the article [Telerik Reporting REST Services Overview]({%slug telerikreporting/using-reports-in-applications/host-the-report-engine-remotely/telerik-reporting-rest-services/overview%}) to get started. 
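To make the custom logic scenario concrete before the implementation steps, the sketch below shows a decorator-style client that tracks a rough usage metric while delegating the actual calls to an inner client. Note that the `IClient` interface declared here is a simplified, hypothetical stand-in so that the sketch is self-contained; the real members you must implement are defined by `Telerik.Reporting.AI.IClient` and are covered in the Understanding the IClient Interface section below.

````C#
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical stand-in for Telerik.Reporting.AI.IClient, declared locally
// so the sketch compiles on its own. The real interface members differ.
public interface IClient
{
    Task<string> GetResponseAsync(IReadOnlyList<string> messages, CancellationToken cancellationToken);
}

// Decorator that delegates to an inner client while tracking a rough,
// character-based usage metric. Replace the estimate with your provider's
// real token accounting if it exposes one.
public class UsageTrackingClient : IClient
{
    private readonly IClient innerClient;
    private long totalCharactersSent;

    public UsageTrackingClient(IClient innerClient)
    {
        this.innerClient = innerClient ?? throw new ArgumentNullException(nameof(innerClient));
    }

    public long TotalCharactersSent => Interlocked.Read(ref this.totalCharactersSent);

    public async Task<string> GetResponseAsync(IReadOnlyList<string> messages, CancellationToken cancellationToken)
    {
        // Accumulate the size of the outgoing prompt before delegating.
        foreach (var message in messages)
        {
            Interlocked.Add(ref this.totalCharactersSent, message.Length);
        }

        return await this.innerClient.GetResponseAsync(messages, cancellationToken);
    }
}
````

The same decorator shape also works for logging, caching, or rate limiting, since the wrapper can observe every exchange without reimplementing the provider integration.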
## Enabling Custom AI Client @@ -254,7 +263,7 @@ public class CustomResolverReportsController : ReportsControllerBase ```` -You can further customize the AI client to enable additional features like RAG optimization, predefined prompts, and user consent settings. For more details, refer to [Configuring the AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}). +You can further customize the AI client to enable additional features like RAG optimization, predefined prompts, and user consent settings. For more details, refer to [Customizing AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}). ## Understanding the IClient Interface @@ -293,6 +302,6 @@ When RAG is disabled, the method is called only once without the report metadata ## See Also * [AI-Powered Insights Overview]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights%}) -* [Using AI-Powered Insights with a REST service]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-rest-service%}) -* [Configuring the AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}) +* [Enable AI-Powered Insights with Built-in AI Client]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-builtin-client%}) +* [Customizing AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}) * [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights) diff --git a/interactivity/configuring-ai-powered-insights.md b/interactivity/customizing-ai-powered-insights.md similarity index 70% rename from interactivity/configuring-ai-powered-insights.md rename to interactivity/customizing-ai-powered-insights.md index 2bdbd1d7d..9a01b3e75 100644 --- a/interactivity/configuring-ai-powered-insights.md +++ b/interactivity/customizing-ai-powered-insights.md @@ -5,7 +5,7 @@ description: "Learn how to configure the AI-powered insights functionality to ha slug: telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights tags: telerik, reporting, ai, configuration published: True -position: 3 +position: 4 --- # Customizing AI-Powered Insights @@ -16,36 +16,13 @@ This article explains how to customize the AI-powered insights functionality for ## Configuring the Report Engine -As the [Using AI-Powered Insights with a REST service]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-rest-service%}) article explains, to enable the AI-powered insights functionality, you need to configure the [AIClient element]({%slug telerikreporting/aiclient-element%}) within the report engine configuration in your application's config file. This step is essential for the report engine to connect to the LLM provider. For instance, here is a sample configuration for Azure OpenAI: +The declarative configuration approach handles most common customization scenarios through the [AIClient element]({%slug telerikreporting/aiclient-element%}) in your application's configuration file. It allows you to customize user consent, custom and predefined prompts, and RAG optimization without writing any code. 
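At a glance, the settings covered in the following sections can be combined under a single `AIClient` element, as in this sketch; it assumes the base attributes identifying the provider, model, endpoint, and credential are already configured, and each setting is explained in the sections below.

````JSON
{
  "telerikReporting": {
    "AIClient": {
      "requireConsent": false,
      "allowCustomPrompts": false,
      "predefinedPrompts": [
        { "text": "Generate a summary of the report." }
      ],
      "ragSettings": {
        "modelMaxInputTokenLimit": 12000,
        "maxNumberOfEmbeddingsSent": 10
      }
    }
  }
}
````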
-````JSON -{ - "telerikReporting": { - "AIClient": { - "friendlyName": "MicrosoftExtensionsAzureOpenAI", - "model": "gpt-4o-mini", - "endpoint": "https://ai-explorations.openai.azure.com/", - "credential": "YOUR_API_KEY" - } - } -} -```` -````XML - - - - -```` - -This is a base configuration, but it can be further extended to handle specific scenarios, as explained in the upcoming sections. +>tip If you haven't configured the report engine previously, make sure to check the article [Report Engine Configuration Overview]({%slug telerikreporting/using-reports-in-applications/export-and-configure/configure-the-report-engine/overview%}) to get familiar with this topic. ### User Consent Configuration -By default, the **AI Prompt** dialog requests explicit consent from users before sending prompts to the AI model. This ensures transparency about data being sent to external AI services and gives users control over their data privacy +By default, the **AI Prompt** dialog requests explicit consent from users before sending prompts to the AI model. This ensures transparency about data being sent to external AI services and gives users control over their data privacy. User Consent for AI Summaries @@ -55,7 +32,6 @@ In enterprise environments where AI usage policies are already established or wh { "telerikReporting": { "AIClient": { - // ...base configuration... "requireConsent": false } } @@ -64,7 +40,6 @@ In enterprise environments where AI usage policies are already established or wh ````XML requireConsent="false"> @@ -80,8 +55,6 @@ To restrict users to predefined prompts only, you set `allowCustomPrompts` to `f { "telerikReporting": { "AIClient": { - // ...base configuration... - "requireConsent": false, "allowCustomPrompts": false, "predefinedPrompts": [ { "text": "Generate a summary of the report." }, @@ -94,8 +67,6 @@ To restrict users to predefined prompts only, you set `allowCustomPrompts` to `f ````XML - requireConsent="false" allowCustomPrompts="false"> @@ -127,13 +98,6 @@ Below is an example that takes advantage of the table splitting and automatic en ````JSON "telerikReporting": { "AIClient": { - // ...base configuration... - "requireConsent": false, - "allowCustomPrompts": false, - "predefinedPrompts": [ - { "text": "Generate an executive summary of this report." }, - { "text": "Translate the document into German." } - ], "ragSettings": { "modelMaxInputTokenLimit": 12000, "maxNumberOfEmbeddingsSent": 10, @@ -153,11 +117,11 @@ You can override the methods described in the following sections and customize d ### CreateAIThread(string, string, ClientReportSource) -The [CreateAIThread(string, string, ClientReportSource)](/api/telerik.reporting.services.webapi.reportscontrollerbase#Telerik_Reporting_Services_WebApi_ReportsControllerBase_CreateAIThread_System_String_System_String_Telerik_Reporting_Services_WebApi_ClientReportSource_) method is called when the AI Prompt dialog is about to be displayed. You can override this method to control the UI properties of the dialog, such as configuring the user consent message, as well as setting up custom and predefined prompts. You can also override this method to disable the AI-powered insights functionality entirely. The logic can be tailored based on the currently previewed report, which is represented by the `ClientReportSource` property, which allows for dynamic adjustments outside of the basic configuration. Below, you can find some examples based on common use cases. 
+The [CreateAIThread(string, string, ClientReportSource)](/api/telerik.reporting.services.webapi.reportscontrollerbase#Telerik_Reporting_Services_WebApi_ReportsControllerBase_CreateAIThread_System_String_System_String_Telerik_Reporting_Services_WebApi_ClientReportSource_) method is called when the AI Prompt dialog is about to be displayed. You can override this method to disable the AI-powered insights functionality entirely. The logic can be tailored based on the currently previewed report, which is represented by the `ClientReportSource` argument. For modifying dialog properties like consent messages or predefined prompts, use the [UpdateAIPrompts](#updateaipromptsclientreportsource-aithreadinfo) method instead, which provides direct access to the `AIThreadInfo` object. #### .NET -````Disabling·AI·Insights·Dynamically +````C# /// /// Disables the AI-powered insights functionality dynamically depending on the passed parameter. /// @@ -181,47 +145,11 @@ public override IActionResult CreateAIThread(string clientID, string instanceID, return base.CreateAIThread(clientID, instanceID, reportSource); } ```` -````Changing·Consent·Message -/// -/// Overrides the default user consent message. -/// -/// -public override IActionResult CreateAIThread(string clientID, string instanceID, ClientReportSource reportSource) -{ - var result = base.CreateAIThread(clientID, instanceID, reportSource); - - if (result is JsonResult jsonResult && jsonResult.Value is AIThreadInfo aiThreadInfo) - { - aiThreadInfo.ConsentMessage = "By using this AI functionality, you authorize the processing of any data you provide, including your prompt, for the purposes of delivering the service to you. Your use of this functionality is governed by the Progress privacy policy, available at: Privacy Policy - Progress."; - } - - return result; -} -```` -````Setting·Predefined·Prompts·Dynamically -/// -/// Sets predefined prompts dynamically depending on the passed parameter. -/// -/// -public override IActionResult CreateAIThread(string clientID, string instanceID, ClientReportSource reportSource) -{ - var result = base.CreateAIThread(clientID, instanceID, reportSource); - - if (reportSource.Report == "report-suitable-for-markdown-output.trdp" && - result is JsonResult jsonResult && - jsonResult.Value is AIThreadInfo aiThreadInfo) - { - aiThreadInfo.PredefinedPrompts.Add("Create a summary of the report in Markdown (.md) format."); - } - - return result; -} -```` #### .NET Framework -````Disabling·AI·Insights·Dynamically +````C# /// /// Disables the AI-powered insights functionality dynamically depending on the passed parameter. /// @@ -244,71 +172,62 @@ public override HttpResponseMessage CreateAIThread(string clientID, string insta return base.CreateAIThread(clientID, instanceID, reportSource); } ```` + + +### UpdateAIPrompts(ClientReportSource, AIThreadInfo) + +The [UpdateAIPrompts(ClientReportSource, AIThreadInfo)](/api/telerik.reporting.services.webapi.reportscontrollerbase#collapsible-Telerik_Reporting_Services_WebApi_ReportsControllerBase_UpdateAIPrompts_Telerik_Reporting_Services_WebApi_ClientReportSource_Telerik_Reporting_Services_Engine_AIThreadInfo_) method is called internally during the execution of `CreateAIThread()`. This is the recommended method for modifying dialog properties like consent messages and predefined prompts, as it provides direct access to the `AIThreadInfo` object without requiring type casting or result checking. 
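As a quick illustration of the override shape, which is the same for .NET and .NET Framework, the following sketch combines both customizations shown in the platform-specific examples below: it adjusts the consent message and adds a report-specific predefined prompt in one place. The report name and prompt text used here are hypothetical examples.

````C#
/// <summary>
/// Adjusts the consent message and adds a report-specific predefined prompt.
/// </summary>
protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThreadInfo aiThreadInfo)
{
    // Shorter consent text; adjust the wording to match your organization's policy.
    aiThreadInfo.ConsentMessage = "Your prompt and report data will be processed by an external AI service.";

    // "sales-report.trdp" is a hypothetical report name used for illustration.
    if (reportSource.Report == "sales-report.trdp")
    {
        aiThreadInfo.PredefinedPrompts.Add("Summarize the key sales trends in this report.");
    }

    base.UpdateAIPrompts(reportSource, aiThreadInfo);
}
````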
+ +#### .NET + ````Changing·Consent·Message /// /// Overrides the default user consent message. /// -/// -public override HttpResponseMessage CreateAIThread(string clientID, string instanceID, ClientReportSource reportSource) +/// +/// +protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThreadInfo aiThreadInfo) { - var result = base.CreateAIThread(clientID, instanceID, reportSource); - - if (result.TryGetContentValue(out AIThreadInfo aiThreadInfo)) - { - aiThreadInfo.ConsentMessage = "By using this AI functionality, you authorize the processing of any data you provide, including your prompt, for the purposes of delivering the service to you. Your use of this functionality is governed by the Progress privacy policy, available at: Privacy Policy - Progress."; - } + aiThreadInfo.ConsentMessage = "By using this AI functionality, you authorize the processing of any data you provide, including your prompt, for the purposes of delivering the service to you. Your use of this functionality is governed by the Progress privacy policy, available at: Privacy Policy - Progress."; - return result; + base.UpdateAIPrompts(reportSource, aiThreadInfo); } ```` ````Setting·Predefined·Prompts·Dynamically /// -/// Modifies the collection of predefined prompts before displaying it in the AI Insights dialog. +/// Modifies the collection of predefined prompts. /// -/// -public override HttpResponseMessage CreateAIThread(string clientID, string instanceID, ClientReportSource reportSource) +/// +/// +protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThreadInfo aiThreadInfo) { - var result = base.CreateAIThread(clientID, instanceID, reportSource); - - if (reportSource.Report == "report-suitable-for-markdown-output.trdp" && - result.TryGetContentValue(out AIThreadInfo aiThreadInfo)) + if (reportSource.Report == "report-suitable-for-markdown-output.trdp") { aiThreadInfo.PredefinedPrompts.Add("Create a summary of the report in Markdown (.md) format."); } - return result; + base.UpdateAIPrompts(reportSource, aiThreadInfo); } ```` +#### .NET Framework -### UpdateAIPrompts(ClientReportSource, AIThreadInfo) - -The [UpdateAIPrompts(ClientReportSource, AIThreadInfo)](/api/telerik.reporting.services.webapi.reportscontrollerbase#collapsible-Telerik_Reporting_Services_WebApi_ReportsControllerBase_UpdateAIPrompts_Telerik_Reporting_Services_WebApi_ClientReportSource_Telerik_Reporting_Services_Engine_AIThreadInfo_) method is called internally during the execution of the `CreateAIThread()`. It provides easier access to the `AIThreadInfo` object, which allows you to change the predefined prompts directly. The example below demonstrate how to add a Markdown-specific predefined prompt only for a particular report. - -#### .NET - -````C# +````Changing·Consent·Message /// -/// Modifies the collection of predefined prompts before displaying it in the AI Insights dialog. +/// Overrides the default user consent message. /// /// /// protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThreadInfo aiThreadInfo) { - if (reportSource.Report == "report-suitable-for-markdown-output.trdp") - { - aiThreadInfo.PredefinedPrompts.Add("Create a summary of the report in Markdown (.md) format."); - } + aiThreadInfo.ConsentMessage = "By using this AI functionality, you authorize the processing of any data you provide, including your prompt, for the purposes of delivering the service to you. 
Your use of this functionality is governed by the Progress privacy policy, available at: Privacy Policy - Progress."; base.UpdateAIPrompts(reportSource, aiThreadInfo); } ```` - -#### .NET Framework - -````C# +````Setting·Predefined·Prompts·Dynamically /// -/// Modifies the collection of predefined prompts before displaying it in the AI Insights dialog. +/// Modifies the collection of predefined prompts. /// /// /// @@ -386,6 +305,8 @@ public override async Task GetAIResponse(string clientID, string #### .NET Framework +> The RAG Optimization Monitoring example is not included in this section because RAG functionality is available only in .NET and .NET Standard configurations. + ````Modifying·Outgoing·Prompts /// /// Modifies the prompt sent from the client before passing it to the LLM. @@ -423,6 +344,6 @@ public override async Task GetAIResponse(string clientID, string ## See Also * [AI-Powered Insights Overview]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights%}) -* [Using AI-Powered Insights with a REST service]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-rest-service%}) -* [Creating Custom AI Client Implementation]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/custom-iclient-implementation%}) +* [Enable AI-Powered Insights with Built-in AI Client]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-builtin-client%}) +* [Enable AI-Powered Insights with Custom AI Client]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-custom-client%}) * [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights) From 4932e908f5073521f7ff864e029bccc9678f61cb Mon Sep 17 00:00:00 2001 From: petar-i-todorov Date: Fri, 24 Oct 2025 17:09:18 +0300 Subject: [PATCH 13/13] chore: address docs inconsistencies --- .../built-in-client-integration-ai-insights.md | 4 ++-- interactivity/custom-client-integration-ai-insights.md | 5 ++--- interactivity/customizing-ai-powered-insights.md | 10 +--------- 3 files changed, 5 insertions(+), 14 deletions(-) diff --git a/interactivity/built-in-client-integration-ai-insights.md b/interactivity/built-in-client-integration-ai-insights.md index 88917c997..4a915c235 100644 --- a/interactivity/built-in-client-integration-ai-insights.md +++ b/interactivity/built-in-client-integration-ai-insights.md @@ -18,7 +18,7 @@ This tutorial shows how to enable and configure AI-powered insights using built- To follow the steps from this tutorial, you must have: -- A running application that hosts a Reporting REST service. +- A running application that hosts a [Telerik Reporting REST service]({%slug telerikreporting/using-reports-in-applications/host-the-report-engine-remotely/telerik-reporting-rest-services/overview%}). - A report viewer connected to that REST service. - An active subscription (or local runtime) for an LLM model provider with API access. The supported out of the box ones are: - [Azure AI Foundry](https://learn.microsoft.com/en-us/azure/ai-foundry/concepts/deployments-overview) @@ -26,7 +26,7 @@ To follow the steps from this tutorial, you must have: - [OpenAI](https://platform.openai.com/docs/models) - [Ollama](https://docs.ollama.com/quickstart) ->tip You can also connect to LLM providers that are not supported out of the box. 
To do this, create a custom `Telerik.Reporting.AI.IClient` implementation to integrate the provider into Reporting and enable the AI-powered insights functionality. For more details, refer to the article [Creating Custom AI Client Implementation]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/custom-iclient-implementation%}). +>tip You can also connect to LLM providers that are not supported out of the box. To do this, create a custom `Telerik.Reporting.AI.IClient` implementation to integrate the provider into Reporting and enable the AI-powered insights functionality. For more details, refer to the article [Enable AI-Powered Insights with Custom AI Client]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-custom-client%}). ## Using AI-Powered Insights with a REST service diff --git a/interactivity/custom-client-integration-ai-insights.md b/interactivity/custom-client-integration-ai-insights.md index ddcc4347b..7c649f1eb 100644 --- a/interactivity/custom-client-integration-ai-insights.md +++ b/interactivity/custom-client-integration-ai-insights.md @@ -16,10 +16,9 @@ While Telerik Reporting provides built-in support for popular LLM providers like To follow the steps from this tutorial, you must have: -- A running application that hosts a Reporting REST service. +- A running application that hosts a [Telerik Reporting REST service]({%slug telerikreporting/using-reports-in-applications/host-the-report-engine-remotely/telerik-reporting-rest-services/overview%}). - A report viewer connected to that REST service. - ->tip If you haven't set up a Telerik Reporting REST service yet, check the article [Telerik Reporting REST Services Overview]({%slug telerikreporting/using-reports-in-applications/host-the-report-engine-remotely/telerik-reporting-rest-services/overview%}) to get started. +- An active subscription (or local runtime) for an LLM model provider with API access. ## Enabling Custom AI Client diff --git a/interactivity/customizing-ai-powered-insights.md b/interactivity/customizing-ai-powered-insights.md index 9a01b3e75..117dd8e80 100644 --- a/interactivity/customizing-ai-powered-insights.md +++ b/interactivity/customizing-ai-powered-insights.md @@ -117,7 +117,7 @@ You can override the methods described in the following sections and customize d ### CreateAIThread(string, string, ClientReportSource) -The [CreateAIThread(string, string, ClientReportSource)](/api/telerik.reporting.services.webapi.reportscontrollerbase#Telerik_Reporting_Services_WebApi_ReportsControllerBase_CreateAIThread_System_String_System_String_Telerik_Reporting_Services_WebApi_ClientReportSource_) method is called when the AI Prompt dialog is about to be displayed. You can override this method to disable the AI-powered insights functionality entirely. The logic can be tailored based on the currently previewed report, which is represented by the `ClientReportSource` argument. For modifying dialog properties like consent messages or predefined prompts, use the [UpdateAIPrompts](#updateaipromptsclientreportsource-aithreadinfo) method instead, which provides direct access to the `AIThreadInfo` object. +The [CreateAIThread(string, string, ClientReportSource)](/api/telerik.reporting.services.webapi.reportscontrollerbase#Telerik_Reporting_Services_WebApi_ReportsControllerBase_CreateAIThread_System_String_System_String_Telerik_Reporting_Services_WebApi_ClientReportSource_) method is called when the AI Prompt dialog is about to be displayed. 
You can override this method to disable the AI-powered insights functionality entirely. The logic can be tailored based on the currently previewed report, which is represented by the `ClientReportSource` parameter. For modifying dialog properties like consent messages or predefined prompts, use the [UpdateAIPrompts](#updateaipromptsclientreportsource-aithreadinfo) method instead, which provides direct access to the `AIThreadInfo` object. #### .NET @@ -184,8 +184,6 @@ The [UpdateAIPrompts(ClientReportSource, AIThreadInfo)](/api/telerik.reporting.s /// /// Overrides the default user consent message. /// -/// -/// protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThreadInfo aiThreadInfo) { aiThreadInfo.ConsentMessage = "By using this AI functionality, you authorize the processing of any data you provide, including your prompt, for the purposes of delivering the service to you. Your use of this functionality is governed by the Progress privacy policy, available at: Privacy Policy - Progress."; @@ -197,8 +195,6 @@ protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThrea /// /// Modifies the collection of predefined prompts. /// -/// -/// protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThreadInfo aiThreadInfo) { if (reportSource.Report == "report-suitable-for-markdown-output.trdp") @@ -216,8 +212,6 @@ protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThrea /// /// Overrides the default user consent message. /// -/// -/// protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThreadInfo aiThreadInfo) { aiThreadInfo.ConsentMessage = "By using this AI functionality, you authorize the processing of any data you provide, including your prompt, for the purposes of delivering the service to you. Your use of this functionality is governed by the Progress privacy policy, available at: Privacy Policy - Progress."; @@ -229,8 +223,6 @@ protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThrea /// /// Modifies the collection of predefined prompts. /// -/// -/// protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThreadInfo aiThreadInfo) { if (reportSource.Report == "report-suitable-for-markdown-output.trdp")