My First Meanderings With My Own Locally Hosted AI Tools
This is by no means a detailed tutorial, but I did pick up a few things about locally hosted AI while testing over the last few days, so I thought I'd share some of what I learned.
I recently upgraded my GPU to a card with 12 GB of VRAM, mainly to better accommodate a game I sometimes play and to run the AI functions in DaVinci Resolve Studio, which were struggling on my previous 6 GB card. But I realised my GPU sits mostly idle, and since my home server has no decent GPU in it, I could perhaps put mine to a better and more useful purpose. I also realised that many AI models fit in 8 GB of VRAM or less. I had recently upped my Google One subscription to the AI Plus tier and wanted to see how locally hosted AI compares with that. On top of that, I've been testing the Gemini option to see whether it can replace my paid Canva subscription.
The TL;DR for all of this is that locally hosted AI is not quite up to the standard of Gemini AI Plus. A 24 GB VRAM card would probably do a lot better, as you can fit much bigger AI models into it and even run more than one AI app at a time, but those cards are super pricey. As someone said recently, you either pay for the AI subscription service, or you pay for your own hardware!
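As a rough rule of thumb (my own back-of-envelope assumption, not a figure from any specific tool), the VRAM a model needs is roughly its parameter count times the bytes per parameter at the chosen quantization, plus some overhead for context and runtime buffers. A quick sketch in Python, with the overhead value being a guess:

```python
def approx_vram_gb(params_billions, bits_per_param, overhead_gb=1.5):
    """Rough VRAM estimate: weight size plus a fixed overhead
    (the 1.5 GB overhead for KV cache and buffers is a guess)."""
    weight_gb = params_billions * bits_per_param / 8  # GB for the weights
    return weight_gb + overhead_gb

# A 7B-parameter model at 4-bit quantization: ~5 GB, easy fit in 12 GB.
print(round(approx_vram_gb(7, 4), 1))
# A 13B model at 4-bit: ~8 GB, still plausible on a 12 GB card.
print(round(approx_vram_gb(13, 4), 1))
```

This is why a 24 GB card opens things up so much: by the same arithmetic it can hold a 30B-class model at 4-bit, or a couple of smaller models at once.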
The full post is too long to include here, so it's best read at my blog, where various photos are also inserted into the text.
See: My First Meanderings With My Own Locally Hosted AI Tools
#technology #selfhosting #AI