I just set up a new dedicated AI server that is quite fast by my standards. I have it running OpenWebUI and would like to integrate it with other services. I think it would be cool to have something like Copilot, where I can be writing code in a text editor and have it add a README or document a function, something like that. I have also used some RAG stuff and like it, but I think it would be cool to have a RAG setup that can access live data, like always having the most up-to-date docker compose file and nginx configs for when I ask it about server stuff. So, what are you integrating your AI stuff with, and how can I get started?
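For the "live data" part, one simple approach (short of a full RAG pipeline) is to just re-read the config files at question time and inline them into the prompt, then send that to the local model over Ollama's HTTP API. A minimal sketch, assuming Ollama is listening on its default `http://localhost:11434`, the model is `qwen3`, and the file paths are placeholders, not anything specific:

```python
# Hedged sketch: inline the *current* contents of local config files into a
# prompt, so the model always sees the latest version -- a poor man's
# live-data RAG. Paths and model name are example assumptions.
import json
import urllib.request
from pathlib import Path

def build_prompt(question: str, paths: list[str]) -> str:
    """Read each existing file right now and prepend it as context."""
    sections = []
    for p in paths:
        path = Path(p)
        if path.exists():
            sections.append(f"--- {p} ---\n{path.read_text()}")
    context = "\n\n".join(sections)
    return f"Context files:\n{context}\n\nQuestion: {question}"

def ask_ollama(prompt: str, model: str = "qwen3") -> str:
    # /api/generate is Ollama's non-streaming completion endpoint
    # when "stream": false is set.
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(
            {"model": model, "prompt": prompt, "stream": False}
        ).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama instance):
# prompt = build_prompt(
#     "Which ports does my reverse proxy expose?",
#     ["docker-compose.yml", "/etc/nginx/nginx.conf"],
# )
# print(ask_ollama(prompt))
```

Since the files are re-read on every question, nothing goes stale; the trade-off is that large configs eat context window, which is where a real RAG index starts to pay off.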

  • VagueAnodyneComments@lemmy.blahaj.zone · 4 days ago

    Good question, honestly.

    I didn’t go dedicated, but I have nice hardware for the Ollama / OpenWebUI stack, and I’ve done some experimenting. Certain models have limited uses for me, and I’m at least glad I’m able to mess around with this securely/locally. So far I’ve found Qwen3 and DeepSeek-R1 to be kinda handy sometimes.

    I’d love to find more applications, so I guess this reply is more of a follow.