Ensure compatibility across a range of platforms, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. Minimize dependencies to avoid version conflicts and the need for binding redirects.

Transcribing Audio Data

One of the core capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

For local files, similar code can be used to perform transcription:

```csharp
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is particularly useful for applications that require immediate processing of audio data.

```csharp
using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}")
);
transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);
```
After wiring up the event handlers, connect the transcriber, stream audio to it, and close the session when done:

```csharp
await transcriber.ConnectAsync();

// Pseudocode for getting audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();
```

Leveraging LeMUR for LLM Apps

The SDK integrates with LeMUR, enabling developers to build large language model (LLM) apps on voice data. Here is an example:

```csharp
var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);
Console.WriteLine(response.Response);
```

Audio Intelligence Models

Additionally, the SDK includes built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}
```

For more information, see the official AssemblyAI blog.

Image source: Shutterstock.