Spring AI: My Multi-Provider AI Chat
Overview
I recently published a small hands-on project to explore how to build AI capabilities in Java using Spring Boot and Spring AI.
Project links:
| Project Description | GitHub |
|---|---|
| Exploring different options with the Spring AI ChatClient | GitHub |
| Implementing tool calling with Spring AI | GitHub |
I’ve kept the repository README detailed so anyone can run and test the APIs quickly.
If you’re learning Spring AI and want a compact reference project, feel free to explore, fork, or share feedback.
Why Spring AI?
Most AI examples today are Python-first, which makes sense from a research perspective. But in many enterprise environments, core systems are still Java-based.
Spring AI brings AI integration into the familiar Spring ecosystem. That means:
- Dependency injection and clean configuration
- Provider abstraction (switch models without rewriting business logic)
- Consistent patterns alongside existing REST APIs
- Easier integration into existing enterprise applications
For teams already building with Spring Boot, this makes AI features feel like an extension of the platform rather than a separate experimental stack.
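As a sketch of what that provider abstraction looks like in practice (assuming Spring AI's fluent `ChatClient` API is on the classpath; the endpoint path and class names here are illustrative, not taken from the repository):

```java
// Sketch only: assumes a Spring AI chat starter is configured.
// The auto-configured ChatClient.Builder is backed by whichever provider
// is active (OpenAI, Ollama, ...), so this business code does not change
// when the underlying model does.
@RestController
class ChatController {

    private final ChatClient chatClient;

    ChatController(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    @GetMapping("/chat")
    String chat(@RequestParam String message) {
        return chatClient.prompt()
                .user(message)
                .call()
                .content();
    }
}
```

Swapping from, say, OpenAI to a locally running Ollama model is then a dependency and configuration change, not a code change.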
What the Project Covers
This project focuses on a practical backend setup with:
- Chat APIs & Conversation APIs
- Streaming responses (SSE)
- Prompt-based analysis endpoints (code + ticket analysis)
- Multi-provider AI support using header/config-based selection
- LLM tool calling (for when you want the LLM to invoke business logic inside your existing application)
- Spring AI Advisors for intercepting and enriching requests/responses, including built-in advisors and custom ones like a PII Redaction advisor
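To give a feel for the PII redaction idea: the core of such an advisor is a text-rewriting step applied before the prompt reaches the model. The snippet below is a minimal, self-contained sketch of that step only (class name and patterns are illustrative; a real advisor would additionally implement Spring AI's advisor interface and plug into the ChatClient chain):

```java
import java.util.regex.Pattern;

// Illustrative core of a PII redaction step: masks email addresses and
// simple phone numbers before the text is sent to an LLM provider.
public class PiiRedactor {

    private static final Pattern EMAIL =
            Pattern.compile("[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}");
    private static final Pattern PHONE =
            Pattern.compile("\\b\\d{3}[- .]?\\d{3}[- .]?\\d{4}\\b");

    public static String redact(String text) {
        String result = EMAIL.matcher(text).replaceAll("[EMAIL]");
        return PHONE.matcher(result).replaceAll("[PHONE]");
    }

    public static void main(String[] args) {
        // Prints: Contact [EMAIL] or [PHONE]
        System.out.println(redact("Contact john.doe@example.com or 555-123-4567"));
    }
}
```

Real deployments would use stronger detection than two regexes, but the advisor shape is the same: intercept, transform, pass through.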
Supported Models
- OpenAI
- Gemini
- Ollama (running locally)
- Groq
- Cohere
- Mistral
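The header-based provider selection mentioned above boils down to resolving a request header against the set of supported providers, with a sensible default. A minimal pure-Java sketch of that resolution logic (class name, header handling, and the choice of OpenAI as default are assumptions for illustration):

```java
import java.util.Set;

// Sketch of header/config-based provider selection: map an incoming
// header value to one of the supported providers, defaulting when the
// header is absent or unknown.
public class ProviderRouter {

    // Mirrors the supported-models list above
    private static final Set<String> SUPPORTED = Set.of(
            "openai", "gemini", "ollama", "groq", "cohere", "mistral");

    private static final String DEFAULT_PROVIDER = "openai";

    public static String resolve(String headerValue) {
        if (headerValue == null) {
            return DEFAULT_PROVIDER;
        }
        String key = headerValue.toLowerCase();
        return SUPPORTED.contains(key) ? key : DEFAULT_PROVIDER;
    }

    public static void main(String[] args) {
        System.out.println(resolve("Groq")); // groq
        System.out.println(resolve(null));   // openai
    }
}
```

In the actual application this lookup would pick the matching `ChatModel` bean rather than return a string, but the routing decision is the same.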
