Speed up AI development with Amazon Bedrock API keys

Today, we’re excited to announce a significant improvement to the developer experience of Amazon Bedrock: API keys. API keys provide quick access to the Amazon Bedrock APIs, streamlining the authentication process so that developers can focus on building rather than configuration.

CamelAI is an open-source, modular framework for building intelligent multi-agent systems for data generation, world simulation, and task automation.

“As a startup with limited resources, streamlined customer onboarding is critical to our success. Amazon Bedrock API keys let us onboard enterprise customers in minutes rather than hours. With Bedrock, our customers can quickly provision access to leading AI models and seamlessly integrate them into CamelAI,”

said Miguel Salinas, CTO, CamelAI.

In this post, we explore how API keys work and how you can start using them today.

API key authentication

Amazon Bedrock now provides API key access to streamline integration with tools and frameworks that expect API key-based authentication. The Amazon Bedrock and Amazon Bedrock Runtime SDKs support API key authentication for operations including on-demand inference, provisioned throughput inference, model fine-tuning, distillation, and evaluation.

The diagram compares the default authentication process for Amazon Bedrock (in orange) with the API key approach (in blue). In the default process, you must create an identity in AWS IAM Identity Center or IAM, attach IAM policies to grant permissions to perform API operations, and generate credentials, which you can then use to make API calls. The gray boxes in the diagram highlight the steps that Amazon Bedrock now streamlines when generating an API key. Developers can now authenticate and access Amazon Bedrock APIs with minimal setup overhead.

You can generate API keys in the Amazon Bedrock console, choosing between two types.

With long-term API keys, you can set expiration times ranging from 1 day to no expiration. These keys are associated with an IAM user that Amazon Bedrock automatically creates for you. The system attaches the AmazonBedrockLimitedAccess managed policy to this IAM user, and you can then modify permissions as needed through the IAM service. We recommend using long-term keys primarily for exploring Amazon Bedrock.

Short-term API keys use the IAM permissions from your current IAM principal and expire when your account’s session ends, or they can last up to 12 hours. Short-term API keys use AWS Signature Version 4 for authentication. For continuous application use, you can implement API key refreshing with a script, as shown in the sketch below. We recommend short-term API keys for setups that require a higher level of security.
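
A minimal refresh sketch in Python (not from the AWS post); the generate_short_term_key() helper is hypothetical and stands in for however your environment issues a fresh key from the current IAM session:

import os
import threading

REFRESH_INTERVAL_SECONDS = 10 * 60  # refresh well before the key can expire

def generate_short_term_key() -> str:
    # Hypothetical helper: obtain a fresh short-term API key using the
    # permissions of the current IAM principal (implementation not shown).
    raise NotImplementedError

def refresh_api_key() -> None:
    # Store the key where the SDK and other tools will look for it.
    os.environ["AWS_BEARER_TOKEN_BEDROCK"] = generate_short_term_key()
    # Schedule the next refresh so long-running applications keep a valid key.
    threading.Timer(REFRESH_INTERVAL_SECONDS, refresh_api_key).start()

refresh_api_key()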

Making Your First API Call

If you have access to foundation models, getting started with an Amazon Bedrock API key is simple. Here’s how to make your first API call using the AWS SDK for Python (Boto3) and API keys:

Generate an API key

To generate an API key, follow these steps:

  1. Sign in to the AWS Management Console and open the Amazon Bedrock console
  2. In the left navigation pane, select API keys
  3. Choose either Generate short-term API key or Generate long-term API key
  4. For long-term keys, set your desired expiration time and optionally configure advanced permissions
  5. Choose Generate and copy your API key

Set Your API Key as an Environment Variable

You can set your API key as an environment variable so that it’s automatically recognized when you make API requests:

# To set the API key as an environment variable, open a terminal and run the following command:
export AWS_BEARER_TOKEN_BEDROCK=${api-key}

The Boto3 SDK automatically detects the environment variable when you create an Amazon Bedrock client.
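
If you prefer to set the variable from Python rather than the shell, a minimal sketch (the key value is a placeholder):

import os

# Placeholder: substitute the API key you copied from the Amazon Bedrock console.
os.environ["AWS_BEARER_TOKEN_BEDROCK"] = "<your-api-key>"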

Make Your First API Call

You can now make API calls to Amazon Bedrock in several ways:

  1. Using curl
    curl -X POST "https://bedrock-runtime.us-east-1.amazonaws.com/model/us.anthropic.claude-3-5-haiku-20241022-v1:0/converse" \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $AWS_BEARER_TOKEN_BEDROCK" \
      -d '{
        "messages": [
            {
                "role": "user",
                "content": [{"text": "Hello"}]
            }
        ]
      }'

  2. Using the Amazon Bedrock SDK (Boto3):
    import boto3
    
    # Create an Amazon Bedrock Runtime client
    client = boto3.client(
        service_name="bedrock-runtime",
        region_name="us-east-1"     # If you've configured a default region, you can omit this line
    )
    
    # Define the model and message
    model_id = "us.anthropic.claude-3-5-haiku-20241022-v1:0"
    messages = [{"role": "user", "content": [{"text": "Hello"}]}]
    
    response = client.converse(
        modelId=model_id,
        messages=messages,
    )
    
    # Print the response
    print(response['output']['message']['content'][0]['text'])

  3. It’s also possible to use native libraries like Python Requests:
    import requests
    import os
    
    url = "https://bedrock-runtime.us-east-1.amazonaws.com/model/us.anthropic.claude-3-5-haiku-20241022-v1:0/converse"
    
    payload = {
        "messages": [
            {
                "role": "user",
                "content": [{"text": "Hello"}]
            }
        ]
    }
    
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['AWS_BEARER_TOKEN_BEDROCK']}"
    }
    
    response = requests.post(url, json=payload, headers=headers)
    
    print(response.text)

Bridging developer experience and enterprise security requirements

Enterprise administrators can now streamline user onboarding to Amazon Bedrock foundation models. For setups that require a higher level of security, administrators can enable short-term API keys for their users. Short-term API keys use AWS Signature Version 4 and existing IAM principals, maintaining the established access controls implemented by administrators.

For audit and compliance purposes, all API calls are logged in AWS CloudTrail. API keys are passed as authorization headers in API requests and aren’t logged.

Conclusion

Amazon Bedrock API keys are available in 20 AWS Regions where Amazon Bedrock is available: US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Hyderabad, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Spain, Stockholm, Zurich), and South America (São Paulo). To learn more about API keys in Amazon Bedrock, visit the API keys documentation in the Amazon Bedrock user guide.

Give API keys a try in the Amazon Bedrock console today and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts.


About the Authors

Sofian Hamiti is a technology leader with over 10 years of experience building AI solutions and leading high-performing teams to maximize customer outcomes. He is passionate about empowering diverse talent to drive global impact and achieve their career aspirations.

Ajit Mahareddy is an experienced Product and Go-To-Market (GTM) leader with over 20 years of experience in product management, engineering, and go-to-market. Prior to his current role, Ajit led product management building AI/ML products at leading technology companies, including Uber, Turing, and eHealth. He is passionate about advancing generative AI technologies and driving real-world impact with generative AI.

Nakul Vankadari Ramesh is a Software Development Engineer with over 7 years of experience building large-scale distributed systems. He currently works on the Amazon Bedrock team, helping accelerate the development of generative AI capabilities. Previously, he contributed to Amazon Managed Blockchain, focusing on scalable and reliable infrastructure.

Huong Nguyen is a Principal Product Manager at AWS. She is a product leader at Amazon Bedrock, with 18 years of experience building customer-centric and data-driven products. She is passionate about democratizing responsible machine learning and generative AI to enable customer experience and business innovation. Outside of work, she enjoys spending time with family and friends, listening to audiobooks, traveling, and gardening.

Massimiliano Angelino is Lead Architect for the EMEA Prototyping team. Over the last three and a half years he has been an IoT Specialist Solutions Architect with a particular focus on edge computing, and he contributed to the launch of the AWS IoT Greengrass v2 service and its integration with Amazon SageMaker Edge Manager. Based in Stockholm, he enjoys skating on frozen lakes.

tRPC vs GraphQL vs REST: Choosing the right API design for modern web applications

APIs underpin most modern software systems. Whether you’re building a SaaS dashboard, a mobile app, or coordinating microservices, how you expose your data shapes your velocity, flexibility, and technical debt.

Over several years of building production systems with React and TypeScript, I’ve shipped REST, GraphQL, and tRPC APIs. Each option offers distinct strengths, with real-world tradeoffs developers and engineering leaders should understand. This guide compares these technologies from a practical engineering perspective, focusing on architecture, type safety, toolchains, and developer experience.

API Approaches Explained

REST: The Web Standard

REST (Representational State Transfer) organizes APIs around resources, mapped to URL endpoints (e.g., /users/42). Clients interact using standard HTTP methods (GET, POST, PUT, DELETE). It’s simple, widely supported, and language-agnostic.
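
For illustration, a minimal Python sketch against a hypothetical /users resource (the host and routes are assumptions, not a real service):

import requests

BASE_URL = "https://api.example.com"  # hypothetical REST service

# Fetch a single resource by ID.
user = requests.get(f"{BASE_URL}/users/42").json()

# Create a new resource; the server assigns its ID and URL.
created = requests.post(f"{BASE_URL}/users", json={"name": "Ada"}).json()

# Update and delete reuse the same resource URL with different verbs.
requests.put(f"{BASE_URL}/users/42", json={"name": "Ada Lovelace"})
requests.delete(f"{BASE_URL}/users/42")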

GraphQL: Flexible Queries

GraphQL, developed by Facebook, enables clients to query exactly the data they need via a single endpoint, using a structured query language. This model suits dynamic UIs and data aggregation scenarios, minimizing overfetching and underfetching.
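
A minimal sketch of the same kind of lookup over GraphQL, again against a hypothetical endpoint and schema, where the client names exactly the fields it needs:

import requests

GRAPHQL_URL = "https://api.example.com/graphql"  # hypothetical single endpoint

query = """
query GetUser($id: ID!) {
  user(id: $id) {
    name
    orders { id total }
  }
}
"""

# All operations go through one endpoint; the query shapes the response.
response = requests.post(GRAPHQL_URL, json={"query": query, "variables": {"id": "42"}})
print(response.json()["data"]["user"])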

tRPC: Type Safety for TypeScript

tRPC provides end-to-end type safety by exposing backend procedures directly to TypeScript clients, without code generation or manual typings. If you work in a full-stack TypeScript environment (especially with Next.js or monorepos), the type inference between client and server can accelerate iteration and reduce bugs.

Core Comparison Table

|                  | REST                     | GraphQL                           | tRPC                                 |
|------------------|--------------------------|-----------------------------------|--------------------------------------|
| Endpoints        | Resource URLs            | Single endpoint, multiple queries | Procedure calls                      |
| Type Safety      | Manual                   | Optional (schema/codegen)         | Automatic, end-to-end (TS only)      |
| Overfetch Risk   | Common                   | Minimal                           | Minimal                              |
| Best For         | Public APIs, CRUD        | Dynamic UIs, aggregation          | Full-stack TypeScript, internal APIs |
| Language Support | Broad, language-agnostic | Broad, language-agnostic          | TypeScript only                      |

Adoption Patterns

REST

  • Works well for simple CRUD services, public APIs, or any system where resource semantics map cleanly to endpoints.
  • Typical in e-commerce catalogs, third-party integrations, and services needing broad language support.

GraphQL

  • Best for complex, evolving UIs that need flexible querying and combine multiple backend sources.
  • Common in product dashboards, social applications, and mobile-first projects.

tRPC

  • Suits full-stack TypeScript codebases, especially internal tools, admin panels, or monolithic/monorepo architectures.
  • Ideal for teams optimizing for rapid prototyping, consistent types, and minimal boilerplate.

Practical Pros and Cons

REST

Advantages
  • Simple; nearly every developer is familiar with the approach.
  • Extensive tooling (e.g., Swagger/OpenAPI).
  • Easy debugging, request logging, and use of HTTP standards for caching and cache control.
  • Language-agnostic: any HTTP client can consume a REST API.
Limitations
  • Clients often overfetch or underfetch data; multiple round-trips are needed for complex UIs.
  • No inherent type contracts; requires extra effort to keep docs accurate.
  • Evolving the API shape safely over time can be difficult.

GraphQL

Advantages
  • Clients retrieve exactly the data they request.
  • Introspection and live schema documentation are built in.
  • Enables rapid frontend iteration and backward-compatible evolution.
Limitations
  • More initial setup and complexity: schema, resolvers, types.
  • Caching and monitoring need additional patterns.
  • Overly flexible: potential for performance traps like N+1 queries.

tRPC

Advantages
  • End-to-end type safety between client and server.
  • No code generation or manual type maintenance.
  • Fast feedback loop, minimal boilerplate, and strong DX in shared TypeScript projects.
  • With Zod, runtime input validation is trivial.
Limitations
  • Only works in TypeScript; not suitable for public APIs or polyglot backends.
  • Tightly couples frontend and backend; not well suited for external consumers.

Best Practices

REST

  • Use clear, hierarchical resource URLs (e.g., /users/42/orders).
  • Apply HTTP verbs and status codes consistently.
  • Document endpoints with OpenAPI/Swagger.
  • Plan for versioning (/api/v1/users), as breaking changes will happen.

GraphQL

  • Enforce schemas with linting and validation (e.g., GraphQL Codegen, Apollo Studio).
  • Optimize resolvers to manage performance (N+1 issues, batching).
  • Gate mutations and sensitive queries with auth and access controls.

tRPC

  • Keep procedures focused and explicitly typed.
  • Validate inputs with Zod or similar schema validation.
  • Export router types for client-side type inference.
  • Even with strong internal typing, document procedures for onboarding and maintainability.

Real Examples

See this public GitHub repository for code samples illustrating all three API styles.

Troubleshooting Tips and Common Pitfalls

REST

  • Manage Endpoint Sprawl: Resist the temptation to create many similar endpoints for slight variations of data. Keep your endpoint surface area as small and consistent as possible to ease maintenance.
  • API Versioning: Implement versioning (e.g., /v1/users) early and consistently. This avoids breaking existing clients as your API evolves. Regularly audit API usage to detect version drift and outdated clients.

GraphQL

  • Query Complexity: Monitor query execution and set limits on depth and complexity. Deeply nested or unbounded queries can cause unexpected server load and performance bottlenecks. Use query cost analysis tools or plugins.
  • Restrict Public Queries: Avoid exposing generic “catch-all” queries in public APIs. Limit scope and apply strict access controls to prevent abuse, especially on endpoints that join or aggregate large datasets.

tRPC

  • Infrastructure Abstraction: Don’t expose backend infrastructure, such as database schemas or raw table structures, via procedures. Keep your API surface aligned with domain concepts, not database details.
  • Domain-Focused Procedures: Design your API around business logic rather than CRUD operations at the database level. This keeps the contract stable and abstracts internal changes away from clients.
  • Internal-Only by Design: tRPC is intended for internal APIs within TypeScript monorepos or full-stack apps. Avoid using tRPC for public APIs or cases involving teams working in multiple languages.

How to Choose

  • If you’re building an internal, full-stack TypeScript tool (e.g., with Next.js): tRPC delivers unmatched velocity and type safety for TypeScript-first teams. Fewer bugs, near-zero manual typings, and instant feedback during refactorings.
  • If your frontend is complex, data requirements are fluid, or you aggregate multiple backend sources: GraphQL’s flexibility is worth the up-front learning curve.

  • If you’re exposing a public API, supporting multiple languages, or need long-term backward compatibility: REST is stable, battle-tested, and universally supported.

Gemini Code Assist in Apigee API Management now generally available

Today, we’re excited to announce the general availability of Gemini Code Assist in Apigee API Management. After a successful preview period with valuable customer feedback, this powerful AI-assisted API development capability is now ready for production use as part of the Gemini Code Assist Enterprise edition.


Accelerating API development with Enterprise Context

In today’s digital landscape, APIs serve as the essential connectors between applications, services, and data. However, creating consistent, secure, and well-designed APIs at scale remains challenging for many organizations. Developers must navigate complex specifications, ensure compliance with organizational standards, and avoid creating duplicate or inconsistent APIs.

Gemini Code Assist in Apigee addresses these challenges by combining the power of Google’s Gemini models with Apigee’s unique Enterprise Context capabilities. By leveraging your organization’s existing API ecosystem through API hub, Gemini Code Assist ensures generated APIs consistently align with your established patterns, security schemas, and object structures.

Key features now generally available

Based on customer feedback during the preview period, we have enhanced Gemini Code Assist in Apigee with several powerful capabilities:


Chat interface for API creation

Create API specs using natural language in the Gemini Code Assist chat interface. Simply add @Apigee before your LLM prompt to start designing or updating your API specification, reducing onboarding friction for developers who prefer conversational interfaces over traditional form-based tools.

AI-generated spec summaries

Get plain-language summaries of the generated API specs to understand the API and see at a glance how your enterprise context was used, helping platform teams quickly assess API functionality without diving into technical specifications.


Iterative spec design

Easily refine your generated API specs through the chat interface, a top-requested feature during preview, enabling developers to rapidly iterate and perfect their APIs without starting from scratch.


Enhanced Enterprise Context

Benefit from improved support for nested objects, ensuring consistent formatting for common elements like addresses or currency formats across different parent objects, helping platform teams maintain governance standards and reduce inconsistencies across their API ecosystem.


Duplicate API detection

Proactively identify when a requested API may duplicate existing functionality, so you can reuse existing APIs when appropriate rather than creating duplicate endpoints, preventing developers from wasting time on redundant work while helping platform teams reduce API sprawl.


Enterprise-grade security

Built with VPC Service Controls compliance, this tool meets stringent enterprise security requirements, enabling platform teams to confidently deploy AI-assisted development within their secure and isolated compliance frameworks.


Seamless development workflow

Gemini Code Assist in Apigee provides a streamlined workflow that accelerates API development while maintaining governance:

1. Create: Generate OpenAPI specs through natural language prompts

2. Iterate: Update OpenAPI specs through natural language prompts

3. Test: Deploy mock servers for collaborative testing

4. Publish: Share specs with your team through API hub

5. Implement: Generate proxies or backend implementations

At every step, Enterprise Context ensures your APIs align with organizational standards while reducing duplication and inconsistency.


Getting started with Gemini Code Assist in Apigee

Gemini Code Assist in Apigee is available as part of the Gemini Code Assist Enterprise edition. Existing Gemini Code Assist Enterprise customers can access these capabilities immediately within VS Code through Cloud Code and Gemini Chat.

To get started:

1. Install the Cloud Code and Gemini Code Assist extensions for VS Code

2. Connect to your Apigee and API hub instances

3. Begin creating APIs with natural language prompts

For detailed instructions, visit our documentation or explore interactive tutorials in the Google Cloud console.


Develop APIs with Gemini Code Assist in Apigee

Your feedback drives improvements in Gemini Code Assist for Apigee, including support for more IDEs like IntelliJ, the gRPC protocol, style rule enforcement from API hub, and expanded capabilities for proxy authoring and optimization.

Start building more consistent, secure, and well-designed APIs with Gemini Code Assist in Apigee today.

Dynatrace Live Debugger, Mistral Agents API, and more – SD Times Daily Digest

Dynatrace has launched its Live Debugger, which allows developers to debug services running in production without interrupting running code.

It allows them to access code-level data without adding new code or redeploying, inspect an application’s full state, and debug across thousands of workload instances simultaneously.

Mistral launches Agents API

The Agents API includes built-in connectors for code execution, web search, image generation, and MCP tools.

It offers persistent memory for conversations, allowing for seamless and contextual interactions over time. It also supports conversation branching to create new interaction paths at any point.

“The true power of our Agents API lies in its ability to orchestrate multiple agents to solve complex problems. Through dynamic orchestration, agents can be added to or removed from a conversation as needed, each contributing its unique capabilities to tackle different parts of a problem,” the company wrote in a blog post.

SecurityScorecard MAX is now available in CrowdStrike Marketplace

SecurityScorecard MAX is a managed service for detecting and addressing supply chain security risks. It provides threat hunting and monitoring of a company’s third-party vendors to help reduce supply chain risk.

“Organizations understand that supply chain resilience is essential for secure business operations,” said Aleksandr Yampolskiy, CEO and co-founder of SecurityScorecard. “Bringing MAX to the CrowdStrike Marketplace puts proactive supply chain cybersecurity directly into the hands of security teams. With simplified access to continuous monitoring and threat intelligence, teams can confidently manage risks deeper within their supplier ecosystems, building greater resilience into every layer of their cybersecurity strategy.”

Gemini API I/O updates – Google Developers Blog

The Gemini API gives developers a streamlined way to build innovative applications with cutting-edge generative AI models. Google AI Studio simplifies the process of testing all of the API capabilities, allowing for rapid prototyping and experimentation with text, image, and even video prompts. When developers want to test and build at scale, they can leverage all of the capabilities available through the Gemini API.


New models available through the API

Gemini 2.5 Flash Preview – We’ve added a new 2.5 Flash preview (gemini-2.5-flash-preview-05-20) which improves over the previous preview at reasoning, code, and long context. This version of 2.5 Flash is currently #2 on the LMArena leaderboard, behind only 2.5 Pro. We’ve also improved Flash cost-efficiency with this latest update, reducing the number of tokens needed for the same performance and resulting in 22% efficiency gains on our evals. Our goal is to keep improving based on your feedback and to make both generally available soon.

Gemini 2.5 Pro and Flash text-to-speech (TTS) – We also introduced 2.5 Pro and Flash previews for text-to-speech (TTS) that support native audio output for both single and multiple speakers, across 24 languages. With these models, you can control TTS expression and style, creating rich audio output. With multispeaker support, you can generate conversations with multiple distinct voices for dynamic interactions.

Gemini 2.5 Flash native audio dialog – In preview, this model is accessible via the Live API to generate natural-sounding voices for dialog, in over 30 distinct voices and 24+ languages. We’ve also added proactive audio so the model can distinguish between the speaker and background conversations, so it knows when to respond. In addition, the model responds appropriately to a user’s emotional expression and tone. A separate thinking model enables more complex queries. This makes it possible for you to build conversational AI agents and experiences that feel more intuitive and natural, such as improving call center interactions, developing dynamic personas, crafting unique voice characters, and more.

Lyria RealTime – Live music generation is now available in the Gemini API and Google AI Studio to create a continuous stream of instrumental music using text prompts. With Lyria RealTime, we use WebSockets to establish a persistent, real-time communication channel. The model continuously produces music in small, flowing chunks and adapts based on inputs. Imagine adding a responsive soundtrack to your app or designing a new kind of musical instrument! Try out Lyria RealTime with the PromptDJ-MIDI app in Google AI Studio.

Gemini 2.5 Pro Deep Think – We’re also testing an experimental reasoning mode for 2.5 Pro. We’ve seen incredible performance with these Deep Think capabilities for highly complex math and coding prompts. We look forward to making it broadly available for you to experiment with soon.

Gemma 3n – Gemma 3n is an open generative AI model optimized for use in everyday devices, such as phones, laptops, and tablets. It can handle text, audio, and vision inputs. This model includes innovations in parameter-efficient processing, including Per-Layer Embedding (PLE) parameter caching and a MatFormer model architecture that provides the flexibility to reduce compute and memory requirements.


New functionality in the API

Thought summaries

To help developers understand and debug model responses, we’ve added thought summaries for 2.5 Pro and Flash in the Gemini API. We take the model’s raw thoughts and synthesize them into a useful summary with headers, relevant details, and tool calls. The raw chain-of-thought view in Google AI Studio has also been updated with the new thought summaries.


Thinking budgets

We launched 2.5 Flash with thinking budgets to give developers control over how much models think, balancing performance, latency, and cost for the apps they’re building. We will be extending this capability to 2.5 Pro soon.

from google import genai
from google.genai import types

client = genai.Client(api_key="GOOGLE_API_KEY")
prompt = "What is the sum of the first 50 prime numbers?"
response = client.models.generate_content(
  model="gemini-2.5-flash-preview-05-20",
  contents=prompt,
  config=types.GenerateContentConfig(
    thinking_config=types.ThinkingConfig(
      thinking_budget=1024,
      include_thoughts=True
    )
  )
)

for part in response.candidates[0].content.parts:
  if not part.text:
    continue
  if part.thought:
    print("Thought summary:")
    print(part.text)
    print()
  else:
    print("Answer:")
    print(part.text)
    print()

Python

Sample code to enable and retrieve thought summaries without streaming, returning a final thought summary with the response.

New URL Context tool

We added a new experimental tool, URL context, to retrieve additional context from links that you provide. It can be used on its own or in conjunction with other tools such as Grounding with Google Search. This tool is a key building block for developers looking to build their own version of research agents with the Gemini API.

from google import genai
from google.genai import types
from google.genai.types import Tool, GenerateContentConfig

client = genai.Client()
model_id = "gemini-2.5-flash-preview-05-20"

tools = []
tools.append(Tool(url_context=types.UrlContext))
tools.append(Tool(google_search=types.GoogleSearch))

response = client.models.generate_content(
    model=model_id,
    contents="Give me a three-day events schedule based on YOUR_URL. Also let me know what needs to be taken care of considering weather and commute.",
    config=GenerateContentConfig(
        tools=tools,
        response_modalities=["TEXT"],
    )
)

for each in response.candidates[0].content.parts:
    print(each.text)
# get URLs retrieved for context
print(response.candidates[0].url_context_metadata)

Python

Sample code for Grounding with Google Search and URL Context

Computer use tool

We’re bringing Project Mariner’s browser control capabilities to the Gemini API via a new computer use tool. To make it easier for developers to use this tool, we’re enabling the creation of Cloud Run instances optimally configured for running browser control agents with one click from Google AI Studio. We’ve begun early testing with companies like Automation Anywhere, UiPath, and Browserbase. Their valuable feedback will be instrumental in refining its capabilities for a broader experimental developer launch this summer.


Improvements to structured outputs

The Gemini API now has broader support for JSON Schema, including much-requested keywords such as “$ref” (for references) and those enabling the definition of tuple-like structures (e.g., prefixItems).
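
For illustration, a schema sketch (expressed as a Python dict) that uses both keywords; the field names are made up and not tied to a specific Gemini API parameter:

# A reusable "address" definition referenced via "$ref", plus a tuple-like
# "coordinate" array described with "prefixItems".
schema = {
    "$defs": {
        "address": {
            "type": "object",
            "properties": {"street": {"type": "string"}, "city": {"type": "string"}},
        }
    },
    "type": "object",
    "properties": {
        "home": {"$ref": "#/$defs/address"},
        "coordinate": {
            "type": "array",
            "prefixItems": [{"type": "number"}, {"type": "number"}],  # [lat, lon]
        },
    },
}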


Video understanding improvements

The Gemini API now allows YouTube video URLs or video uploads to be added to a prompt, enabling users to summarize, translate, or analyze the video content. With this latest update, the API supports video clipping, enabling flexibility in analyzing specific parts of a video; this is particularly useful for videos longer than 8 hours. We’ve also added support for dynamic frames per second (FPS), allowing 60 FPS for videos like games or sports where speed is critical, and 0.1 FPS for videos where speed is less of a priority. To help users save tokens, we have also introduced support for three different video resolutions: high (720p), standard (480p), and low (360p).
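
As a minimal sketch (not from the original post), passing a YouTube URL alongside a text prompt with the google-genai SDK might look like the following; the video URL is a placeholder, and clipping, FPS, and resolution options are configured as described in the documentation:

from google import genai
from google.genai import types

client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-05-20",
    contents=types.Content(parts=[
        # Reference the video by URL rather than uploading it.
        types.Part(file_data=types.FileData(file_uri="https://www.youtube.com/watch?v=VIDEO_ID")),
        types.Part(text="Summarize the main points of this video."),
    ]),
)
print(response.text)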


Async function calling

The cascaded architecture in the Live API now supports asynchronous function calling, keeping user conversations smooth and uninterrupted. This means your Live agent can continue generating responses even while it is busy executing functions in the background, simply by adding the behavior field to the function definition and setting it to NON-BLOCKING. Read more about this in the Gemini API developer documentation.
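
A minimal sketch of such a declaration (dict-style, with a hypothetical get_weather function; check the documentation for the exact field names and values):

# Hypothetical tool declaration for the Live API. Setting "behavior" to
# "NON_BLOCKING" tells the model it can keep responding while the function
# runs in the background.
get_weather_declaration = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
    "behavior": "NON_BLOCKING",
}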


Batch API

We’re also testing a new API that lets you easily batch up your requests and get them back within a maximum 24-hour turnaround time. The API will come at half the price of the interactive API and with much higher rate limits. We hope to roll that out more broadly later this summer.


Start building

That’s a wrap on I/O for this year! With the Gemini API and Google AI Studio, you can turn your ideas into reality, whether you are building conversational AI agents with natural-sounding audio or developing tools to analyze and generate code. As always, check out the Gemini API developer docs for all the latest code samples and more.

Find this announcement and all Google I/O 2025 updates on io.google.

AI updates from the past week: Anthropic launches Claude 4 models, OpenAI adds new tools to Responses API, and more — May 23, 2025

Claude Opus 4 and Claude Sonnet 4 are capable of undertaking long-running tasks and can work continuously for several hours. Claude Opus 4 excels at coding and complex problem-solving, while Claude Sonnet 4 improves on Sonnet 3.7 and balances performance and efficiency.

In addition to releasing these new models, the company also revealed a beta for extended thinking with tool use, the ability to use tools in parallel, and general availability of Claude Code.

The Anthropic API also added four new capabilities: the code execution tool, MCP connector, Files API, and the ability to cache prompts for up to one hour.

OpenAI adds new tools and features to the Responses API

New additions include remote MCP server support, support for the latest image generation model, the ability to use the Code Interpreter tool, and the ability to use the file search tool in OpenAI’s reasoning models.

The company has also added background mode, which allows the model to execute complex reasoning tasks asynchronously; reasoning summaries; and the ability to reuse reasoning items across different API requests.

Mistral launches LLM for coding agents

Devstral is a lightweight open source model designed specifically for agentic coding tasks. According to the SWE-Bench Verified benchmark, Devstral outperforms GPT-4.1-mini and Claude 3.5 Haiku. Its small size allows it to run on a single RTX 4090 or a Mac with 32GB RAM, making it suitable for local, on-device use.

“While typical LLMs are excellent at atomic coding tasks such as writing standalone functions or code completion, they currently struggle to solve real-world software engineering problems. Real-world development requires contextualising code within a large codebase, identifying relationships between disparate components, and identifying subtle bugs in intricate functions. Devstral is designed to address this problem. Devstral is trained to solve real GitHub issues,” Mistral wrote in its announcement.

AI updates from Google I/O 

Google I/O was filled with updates on AI, including new models such as the new text model Gemini Diffusion and Gemma 3n, a multimodal model designed for running on phones, laptops, and tablets, capable of handling audio, text, image, and video.

Google also revealed two new Gemma model variants: MedGemma for health applications and SignGemma for translating sign language into spoken language text.

Gemini Code Assist for individuals and Gemini Code Assist for GitHub are both now generally available as well, and are powered by Gemini 2.5. This tool was first launched as a preview back in February, and today’s GA release includes several new updates, including chat history and threads, the ability to specify rules to apply to every AI generation in the chat, custom commands, and the ability to review and accept code suggestions in parts, across files, or all together.

The company also announced a reimagined version of Colab, a new tool called Stitch that generates UI components from wireframes or text prompts, and new features in Firebase Studio, such as the ability to translate Figma designs into applications.

AI updates from Microsoft Build

A new coding agent has been added to GitHub Copilot that gets activated when a developer assigns it a GitHub issue or calls it via a prompt in VS Code. It can assist with a variety of tasks, including adding features, fixing bugs, extending tests, refactoring code, and improving documentation. All of the agent’s pull requests require human approval before they run, GitHub confirmed.

Microsoft also announced Windows AI Foundry, a platform that supports the AI developer life cycle across training and inference. Developers will be able to manage and run open-source LLMs through Foundry Local or bring proprietary models and convert, fine-tune, and deploy them across client and cloud.

Support for the Model Context Protocol (MCP) was also added across Microsoft’s platforms and services, including GitHub, Copilot Studio, Dynamics 365, Azure AI Foundry, Semantic Kernel, and Windows 11.

Microsoft also announced a new open source project called NLWeb to help developers create conversational AI interfaces for their websites using any model or data source they’d like. NLWeb endpoints also act as MCP servers, so developers will be able to easily make their content discoverable to AI agents if they’d like.

Shopify releases new developer tools

It’s launching a new unified developer platform that integrates the Dev Dashboard and CLI and provides AI-powered code generation. Developers can also now create “dev stores” where they can preview apps in test environments, a feature that was previously only available to Plus plans and is now available to all developers.

Other new features announced today include declarative custom data definitions, a unified Polaris UI toolkit, and Storefront MCP, which allows developers to build AI agents that can act as shopping assistants for stores.

HeyMarvin launches AI Moderated Interviewer

The AI Moderated Interviewer conducts moderated user interviews with potentially thousands of participants without a human facilitator. It can also analyze the interview responses to surface insights and trends.

“What makes it so powerful is that it allows free-flowing, qualitative, engaging conversations, but on demand and at scale,” said Prayag Narula, CEO and co-founder of HeyMarvin. “We’re talking hundreds, even thousands of people, something that was previously only seen at massive scale using a small army of volunteers in moments like presidential elections. Now, even a small team can have that same in-depth dialogue with their customers. It’s not just a better survey, and it’s not replacing traditional user interviews. It’s a whole new way of doing research that simply didn’t exist just a few months ago.”

Zencoder announces Autonomous Zen Agents for CI/CD

These agents run directly in CI/CD pipelines and can be triggered by webhooks from issue trackers or code events. They can resolve issues, implement fixes, improve code quality, generate and run tests, and create documentation.

“The next evolution in AI-powered development isn’t just about coding faster – it’s about accelerating the entire software development lifecycle, where coding is only one step,” said Andrew Filev, CEO and founder of Zencoder. “By bringing autonomous agents into CI/CD pipelines, we’re enabling teams to eliminate routine work and accelerate hand-offs, maintaining momentum 24/7, while keeping humans in charge of what ultimately ships.”


Read last week’s AI updates here: OpenAI Codex, AWS Transform for .NET, and more — May 16, 2025

Elevate CRM Strategy with Microsoft CRM Customization & API Integration

The Evolution of CRM: From Static Systems to Strategic Intelligence

The Legacy of Traditional CRM Systems

Historically, Customer Relationship Management (CRM) systems served as digital repositories: storing customer data, tracking interactions, and managing sales pipelines. While functional, these systems often operated in silos, lacking integration with other business processes. This isolation led to fragmented customer views and inefficiencies in service delivery.

The Shift Towards Integrated CRM Solutions

The business landscape has evolved, demanding more agile and interconnected systems. Modern enterprises require CRM solutions that not only store data but also provide actionable insights, automate workflows, and integrate seamlessly with other business systems. This shift has positioned CRM as a strategic tool, central to customer engagement and business growth.

Microsoft CRM: A Platform for Strategic Growth

Microsoft Dynamics CRM has emerged as a leader in this new era of intelligent CRM solutions. Its robust architecture supports extensive customization, allowing businesses to tailor the platform to their unique needs. With capabilities like Microsoft CRM API integration, organizations can connect their CRM with various applications, enhancing data flow and operational efficiency.

Customization That Converts: Tailoring Microsoft CRM for Business Advantage

Custom Modules: Building a CRM That Works the Way Your Business Thinks

One-size-fits-all rarely fits anyone, especially in complex B2B ecosystems. With Microsoft CRM software development, businesses can go beyond out-of-the-box features and build custom modules that reflect real-world workflows.

Need a deal lifecycle tailored to your sales process? Or a case escalation module that aligns with your customer service SLAs? Microsoft CRM lets you define entities, attributes, relationships, and business logic through custom plugins and workflow assemblies. That’s not just customization; it’s process orchestration at scale.

Technical insight: Developers often use XRM development (eXtended Relationship Management) to create entities that aren’t customer-centric, like vendor onboarding or internal asset tracking. This stretches the core CRM platform into a company-wide application layer.

Role-Based Dashboards & Views: From Clutter to Clarity

Your finance team doesn’t need to see marketing lead scoring. And your sales manager doesn’t want to be buried under support tickets.

With Microsoft CRM customization services, we create role-based dashboards that serve relevant KPIs to each team (sales, service, finance, ops) so every user opens CRM and sees only what matters.

You can also define custom views, grids, and forms using the Power Apps platform, driving precision and removing digital friction from daily operations.

Stat to consider: According to Forrester, customized CRM dashboards improve user productivity by up to 22%, thanks to reduced cognitive load and decision latency.

Seamless Integrations: APIs That Eliminate Silos

Integrating CRM with core systems like ERPs, marketing automation platforms, and help desks is no longer optional; it’s foundational.

Through Microsoft CRM API integration, we stitch Microsoft CRM into your digital fabric using RESTful APIs, OData endpoints, and middleware connectors (like Azure Logic Apps or Power Automate). That means real-time data sharing, two-way sync, and zero duplicate entries.

Example: For a logistics client, we integrated Microsoft Dynamics CRM with SAP ERP using custom middleware. This automated delivery status updates within CRM records, reducing manual coordination time by 40%.
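
For illustration, a minimal Python sketch of reading records over the Dynamics 365 Web API (an OData endpoint); the organization URL and the access token are placeholders and depend on your Azure AD app registration:

import requests

ORG_URL = "https://yourorg.crm.dynamics.com"   # placeholder organization URL
ACCESS_TOKEN = "<azure-ad-access-token>"       # obtained via OAuth, not shown here

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "Accept": "application/json",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
}

# Query the accounts entity set via OData: select two columns, return the top 5 rows.
response = requests.get(
    f"{ORG_URL}/api/data/v9.2/accounts?$select=name,telephone1&$top=5",
    headers=headers,
)
for account in response.json().get("value", []):
    print(account["name"], account.get("telephone1"))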

[Image: Microsoft CRM unifying sales and service for seamless operations]

From Workflows to Wow: Automating with Intelligence Inside Microsoft CRM  

Workflow Automation: Doing More Without Doing More

Manual processes are the quiet killers of scale. But with Microsoft CRM software development, we transform routine tasks into self-operating sequences. Using built-in tools like Power Automate and native CRM workflows, businesses can:

Auto-assign leads based on geography or industry

Trigger emails or SMS on deal stage changes

Generate service tickets based on sentiment from forms

This isn’t just time-saving; it’s consistency at scale, and consistency breeds trust.

Pro Tip:
For advanced use cases, custom plugin assemblies in C# can be deployed server-side to trigger automation that standard workflows can’t handle (e.g., real-time currency conversions or CRM-to-ERP data mirroring).

AI Insights and Predictive Scoring: Data That Thinks Ahead

Data is only as useful as your ability to act on it. With Microsoft’s AI Builder and Dynamics 365 Customer Insights, you can infuse predictive intelligence directly into your CRM.

Sales reps can now:

See lead scores based on historical conversion patterns

Get AI-suggested actions to close deals faster

Auto-prioritize tasks based on urgency or risk

Stat Alert: Companies using predictive lead scoring see an average 15% increase in win rates, according to Nucleus Research.

With Microsoft CRM customization services, we tailor AI models based on your dataset, so the insights are as yours as your customers are.

Embedded Generative AI: Smarter Interactions, Faster Responses

Microsoft’s integration of Copilot in Dynamics 365 is changing how teams interact with CRM. Picture this:

Summarized opportunity notes in seconds

Auto-generated email responses for service cases

Insightful recaps after sales calls

This is generative AI meeting operational context. It’s not generic; it’s personalized, contextual, and embedded.

Tech Insight:
Copilot uses Azure OpenAI Service for GPT-based responses and fine-tunes prompts using CRM context like entity metadata, timeline history, and customer records.

Strategic Execution: Turning Microsoft CRM into a Competitive Advantage

From Blueprint to Build: Laying the Groundwork for CRM Success

Too often, Microsoft CRM implementations start with a feature list instead of a business problem statement. That’s a mistake.

Before a single field is customized or a plugin coded, we work with clients to conduct process discovery workshops. This helps map:

Pain points across departments

Redundant or broken workflows

Key integration touchpoints

This discovery is where Microsoft CRM software development becomes a strategic exercise, not just a technical one.

Use Case:
For a professional services firm, aligning CRM workflows with their client onboarding process led to a 30% faster onboarding cycle and improved compliance across jurisdictions.

Phased Implementation: Avoiding the “Big Bang” Trap

Attempting to launch everything at once? That’s a recipe for adoption failure.

We recommend a modular rollout approach, where Phase 1 focuses on high-impact use cases (e.g., lead management or service ticket automation), followed by layered deployments for:

Custom dashboards and KPIs

Marketing automation flows

Financial integrations via Microsoft CRM accounting software solutions

Each phase is data-informed, user-tested, and refined before scaling.

Strategic stat: According to McKinsey, CRM projects with phased rollouts see 42% higher user adoption compared to full-suite launches.

Measuring What Matters: KPIs That Go Beyond Logins

We don’t measure success by how many users logged in last month.

Instead, we track:

Lead-to-close velocity improvements

CSAT uplift tied to CRM insights

Revenue influenced by CRM-tracked interactions

Automation coverage % (tasks removed from manual execution)

These are the metrics that reflect business transformation, not just system usage.

[Image: Improving operational efficiency with Microsoft CRM’s data-driven approach]

Final Take: Your CRM Isn’t a Tool, It’s Your Growth Strategy in Motion

Microsoft Dynamics CRM is no longer just a system of record. It’s a system of intelligence, execution, and differentiation. But only when it is treated as a living, evolving business asset, not a one-off deployment.

With the right mix of Microsoft CRM software development, API integration, customization services, and intelligent automation, your CRM doesn’t just support your business; it accelerates it.

Whether you’re streamlining lead management, integrating accounting workflows, or building AI-powered customer journeys, success lies in aligning technology with strategy, and execution with expertise.

Let’s Build a Smarter CRM, Together

At Flexsin, we don’t just customize Microsoft CRM; we architect business outcomes.
From tailored modules and workflow automation to Microsoft CRM API integration and Microsoft CRM accounting software solutions, our end-to-end Microsoft CRM software development services ensure you’re not just keeping up with digital transformation; you’re leading it.

Ready to evolve your CRM from static to strategic?
Explore Flexsin’s Microsoft CRM Services and discover how we help businesses turn CRM into a competitive edge.

 



AI updates from the past week: IBM watsonx Orchestrate updates, web search in Anthropic API, and more — May 9, 2025

Software companies are constantly trying to add more and more AI features to their platforms, and AI companies are constantly releasing new models and features. It can be hard to keep up with all of it, so we’ve written this roundup to share several notable updates around AI that software developers should know about.

IBM introduces new tools to help with scaling AI agents across the enterprise

At its IBM THINK conference earlier this week, IBM launched new updates that will help alleviate some of the challenges associated with scaling AI agents.

New agent capabilities in watsonx Orchestrate include:

  • New tools for integrating, customizing, and deploying agents
  • Pre-built domain agents for HR, sales, and procurement
  • Integration with over 80 enterprise applications, including ones from Adobe, AWS, Microsoft, Oracle, Salesforce Agentforce, SAP, ServiceNow, and Workday
  • Agent orchestration capabilities for complex projects like workflow planning and task routing that require coordination between multiple agents and tools
  • Agent observability across the entire agent life cycle

The company also announced its Agent Catalog to provide easier access to agents from IBM and its partners.

Anthropic adds web search capabilities to its API

This latest addition will enable developers to build applications and agents that can access and deliver the most up-to-date insights.

“When Claude receives a request that could benefit from up-to-date information or specialized knowledge, it uses its reasoning capabilities to determine whether the web search tool would help provide a more accurate response. If searching the web would be helpful, Claude generates a targeted search query, retrieves relevant results, analyzes them for key information, and provides a comprehensive answer with citations back to the source material,” Anthropic wrote in a blog post.

Amazon Q Developer gets new agentic coding experience in Visual Studio Code

Amazon has announced a new agentic coding experience for Amazon Q Developer in Visual Studio Code.

“This experience brings interactive coding capabilities, building upon existing prompt-based features. You now have a natural, real-time collaborative partner working alongside you while writing code, creating documentation, running tests, and reviewing changes,” Amazon wrote in a blog post announcing the news.

Google releases updated version of Gemini 2.5 Pro Preview

The update delivers better coding capabilities, especially for tasks like transforming code and creating agentic workflows.

According to Google, this release addresses developer feedback such as reducing errors in function calling and improving function calling trigger rates.

OpenAI to buy Windsurf

Bloomberg reported the deal earlier this week, saying that OpenAI would acquire the company for $3 billion. According to Bloomberg, the deal has not yet closed.

Windsurf, previously known as Codeium, is an agentic IDE designed to enable seamless collaboration between developers and AI.

HCL announces new AI agent orchestration platform

HCL Universal Orchestrator (UnO) Agentic is an orchestration platform for coordinating workflows among AI agents, robots, systems, and humans.

It builds upon HCL’s Universal Orchestrator and adds agentic AI capabilities to provide intelligent orchestration and insert AI agents into business-critical processes and workflows.

“By integrating deterministic and probabilistic execution, HCL UnO transforms how humans and intelligent systems collaborate to shape the future of enterprise operations,” said Kalyan Kumar (KK), chief product officer of HCLSoftware.

DigitalOcean announces new NVIDIA-powered GPU Droplets

NVIDIA RTX 4000 Ada Generation, NVIDIA RTX 6000 Ada Generation, and NVIDIA L40S GPUs are now available as GPU Droplets.

According to Bratin Saha, chief product and technology officer at DigitalOcean, the new offerings are intended to provide customers with access to more affordable GPUs for their AI workloads.

“DigitalOcean’s simple and scalable cloud platform makes it easier to deploy advanced AI workloads on NVIDIA technology, so organizations can quickly and more easily build, scale, and deploy AI solutions,” said Dave Salvator, director of accelerated computing products at NVIDIA.

Yellowfin 9.15 now available

The latest version of the business intelligence platform introduces AI-enabled Natural Query Language (AI NLQ), which allows users to ask questions about their data.

Other updates in this release include expanded REST API capabilities, enhanced bar and column chart customization, simpler yearly data comparisons and report styling, stricter default controls for better data security, and support for writable ClickHouse data sources.

“Yellowfin 9.15 debuts the first integration between the Yellowfin product and AI platforms,” said Brad Scarff, CTO of Yellowfin. “These platforms have enormous potential to unlock productivity and cost benefits for all of our customers, and upcoming versions of Yellowfin will build on this initial release to provide further innovative AI-enabled features.”

Apiiro announces partnership with ServiceNow

As a result of the collaboration, Apiiro’s AI-native deep code analysis (DCA) and code-to-runtime matching will be used in ServiceNow’s Configuration Management Database (CMDB), which provides an up-to-date view of IT and software environments.

“This integration is a major milestone for Apiiro and the ASPM market at large, as IT operations, security operations, and application security continue to converge,” said John Leon, VP of partnerships and business development at Apiiro. “It’s a privilege to extend our partnership with ServiceNow by introducing our Agentic Application Security platform as the definitive source of truth for software development and becoming the software development lifecycle (SDLC) Systems of Record within the ServiceNow CMDB, equipping enterprise users with a precise inventory of software assets to ensure operational efficiency in today’s rapidly evolving, AI-driven software development revolution.”

Dremio launches MCP Server

The server will allow AI agents to explore datasets, generate queries, and retrieve governed data.

“Dremio’s implementation of MCP enables Claude to extend its reasoning capabilities directly to an organization’s data assets, unlocking new possibilities for AI-powered insights while maintaining enterprise governance,” said Mahesh Murag, product manager at Anthropic.


View AI updates from last month here.

Get Started in AI and NFTs with the LimeWire API https://techtrendfeed.com/?p=2329 https://techtrendfeed.com/?p=2329#respond Sun, 11 May 2025 14:09:46 +0000 https://techtrendfeed.com/?p=2329
LimeWire

AI media creation has expanded to incredible video art and a number of other important enhancements, and LimeWire is leading the way in creating an excellent interface for the everyday user to become an AI artist. LimeWire has just launched its Developer API, a way for engineers like us to create dynamic AI art on the fly!

Quick Hits

  • Free to sign up!
  • Provides methods to create a variety of quality images from any number of AI services and algorithms
  • Create images based on text and other images
  • Modify existing images to scale them, remove backgrounds, and more
  • Use JavaScript, PHP, Python, or any of your favorite languages
  • Documentation is clean and easy to understand
  • Very easy to get started

A simple API call is as easy as:

curl -i -X POST \
  https://api.limewire.com/api/image/generation \
  -H 'Authorization: Bearer MY_API_KEY' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H 'X-Api-Version: v1' \
  -d '{
    "prompt": "A beautiful princess in front of her kingdom",
    "aspect_ratio": "1:1"
  }'

You can also upscale an existing, uploaded image:

curl -i -X POST \
  https://api.limewire.com/api/image/upscaling \
  -H 'Authorization: Bearer MY_API_KEY' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H 'X-Api-Version: v1' \
  -d '{
    "image_asset_id": "116a972f-666a-44a1-a3df-c9c28a1f56c0",
    "upscale_factor": 4
  }'
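
If you prefer Python over curl, the generation call above translates roughly as follows. This is a minimal sketch using the requests library; the endpoint, headers, and payload mirror the curl example, and since the shape of the response body isn’t shown here, the script simply prints the raw JSON:

import requests

API_KEY = "MY_API_KEY"  # replace with your LimeWire API key

response = requests.post(
    "https://api.limewire.com/api/image/generation",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
        "Accept": "application/json",
        "X-Api-Version": "v1",
    },
    json={
        "prompt": "A beautiful princess in front of her kingdom",
        "aspect_ratio": "1:1",
    },
    timeout=60,
)
response.raise_for_status()

# The exact response schema (asset IDs, image URLs) isn't documented here,
# so inspect the JSON and adapt the handling to what the API actually returns.
print(response.json())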

The value of creating AI art dynamically is hard to overstate for engineers and authors alike. Rather than scouring Google Images for an image to match my blog post, I can use LimeWire’s API to send keywords from the article to create a representative image. Likewise, authors can feed their story to LimeWire to generate illustrations! You can even integrate the developer API into your platform for your users to use!

Give LimeWire’s new developer API a try! LimeWire lets you create AI images wherever you are!


xAI Dev Leaks API Key for Private SpaceX, Tesla LLMs – Krebs on Security https://techtrendfeed.com/?p=2016 https://techtrendfeed.com/?p=2016#respond Fri, 02 May 2025 14:09:25 +0000 https://techtrendfeed.com/?p=2016

An employee at Elon Musk’s artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) which appear to have been custom made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned.

Image: Shutterstock, @sdx15.

Philippe Caturegli, “chief hacking officer” at the security consultancy Seralys, was the first to publicize the leak of credentials for an x.ai application programming interface (API) exposed in the GitHub code repository of a technical staff member at xAI.

Caturegli’s post on LinkedIn caught the attention of researchers at GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems continuously scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.
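
Production scanners like GitGuardian’s combine hundreds of provider-specific signatures with entropy analysis and live validity checks, but the core idea of flagging key-like strings in a repository can be sketched in a few lines of Python. The patterns below are generic stand-ins for illustration, not the actual signatures any vendor (or xAI) uses:

import re
import sys
from pathlib import Path

# Generic stand-in patterns for strings that look like API credentials.
# Real secret scanners pair provider-specific signatures with entropy
# checks and validity probes against the provider's API.
SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"]([A-Za-z0-9_\-]{20,})['\"]"),
    re.compile(r"\b[A-Za-z]{2,6}-[A-Za-z0-9]{24,}\b"),  # assumed "prefix-token" shape
]

SCAN_SUFFIXES = {".py", ".js", ".ts", ".json", ".yml", ".yaml", ".txt"}

def scan_file(path: Path) -> list[str]:
    """Return lines in a single text file that match a secret-like pattern."""
    hits = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return hits
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(pattern.search(line) for pattern in SECRET_PATTERNS):
            hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for file in root.rglob("*"):
        if file.is_file() and (file.suffix in SCAN_SUFFIXES or file.name == ".env"):
            for hit in scan_file(file):
                print(hit)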

GitGuardian’s Eric Fourrier told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 fine-tuned and private LLMs.

“The credentials can be used to access the X.ai API with the identity of the user,” GitGuardian wrote in an email explaining their findings to xAI. “The associated account not only has access to public Grok models (grok-2-1212, etc.) but also to what appears to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04).”

Fourrier found that GitGuardian had alerted the xAI employee about the exposed API key nearly two months earlier, on March 2. But as of April 30, when GitGuardian directly alerted xAI’s security team to the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.

“It looks like some of these internal LLMs were fine-tuned on SpaceX data, and some were fine-tuned with Tesla data,” Fourrier said. “I definitely don’t think a Grok model that’s fine-tuned on SpaceX data is intended to be exposed publicly.”

xAI did not respond to a request for comment. Nor did the 28-year-old xAI technical staff member whose key was exposed.

Carole Winqwist, chief marketing officer at GitGuardian, said giving potentially hostile users free access to private LLMs is a recipe for disaster.

“If you’re an attacker and you have direct access to the model and the back-end interface for things like Grok, it’s definitely something you can use for further attacking,” she said. “An attacker could use it for prompt injection, to tweak the (LLM) model to serve their purposes, or try to implant code into the supply chain.”

The inadvertent exposure of internal LLMs for xAI comes as Musk’s so-called Department of Government Efficiency (DOGE) has been feeding sensitive government records into artificial intelligence tools. In February, The Washington Post reported DOGE officials had been feeding data from across the Education Department into AI tools to probe the agency’s programs and spending.

The Post said DOGE plans to replicate this process across many departments and agencies, accessing the back-end software at different parts of the government and then using AI technology to extract and sift through information about spending on employees and programs.

“Feeding sensitive data into AI software puts it into the possession of a system’s operator, increasing the chances it will be leaked or swept up in cyberattacks,” Post reporters wrote.

Wired reported in March that DOGE has deployed a proprietary chatbot called GSAi to 1,500 federal workers at the General Services Administration, part of an effort to automate tasks previously done by humans as DOGE continues its purge of the federal workforce.

A Reuters report last month said Trump administration officials told some U.S. government employees that DOGE is using AI to surveil at least one federal agency’s communications for hostility to President Trump and his agenda. Reuters wrote that the DOGE team has heavily deployed Musk’s Grok AI chatbot as part of their work slashing the federal government, though Reuters said it could not establish exactly how Grok was being used.

Caturegli said while there is no indication that federal government or user data could be accessed through the exposed x.ai API key, these private models are likely trained on proprietary data and may unintentionally expose details related to internal development efforts at xAI, Twitter, or SpaceX.

“The fact that this key was publicly exposed for two months and granted access to internal models is concerning,” Caturegli said. “This kind of long-lived credential exposure highlights weak key management and insufficient internal monitoring, raising questions about safeguards around developer access and broader operational security.”
