7 Comments
Stub

Happy birthday @technovangelist!

Your videos are gold!

I'd be interested in a deeper dive on how you use Obsidian with Ollama.

Thanks for everything!

emddom

Happy birthday! Wishing you the best! šŸŽšŸŽˆšŸ°

Todor

Happy Birthday. Glad to see you're back. I would like to learn how you did the autocompletion in Obsidian with Llama. I'm also trying to set up something like this in Obsidian. Thanks.

Fernando

Happy birthday, Matt!

I've been searching for a way to use Pydantic with Ollama, but I can't get it to work, even with 70B models. Could you help me out? Is there a way to do it?

Thanks.
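For what Fernando is asking, recent versions of the `ollama` Python package accept a JSON schema in the `format` argument of `chat`, and Pydantic v2 can both generate that schema and validate the reply. A minimal sketch, assuming a local Ollama server and the `ollama` package are available (the `CityInfo` model and the `llama3.1` model name are just illustrative choices):

```python
from pydantic import BaseModel


class CityInfo(BaseModel):
    name: str
    country: str
    population: int


def ask_structured(prompt: str, model: str = "llama3.1") -> CityInfo:
    """Ask a local Ollama model for a reply constrained to CityInfo's schema.

    Requires a running Ollama server; the import is deferred so the
    schema/validation steps below still run without the package installed.
    """
    import ollama  # assumes `pip install ollama` and `ollama serve` locally

    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        # Constrain decoding to the Pydantic model's JSON schema.
        format=CityInfo.model_json_schema(),
    )
    # Validate the raw JSON string back into a typed object.
    return CityInfo.model_validate_json(response.message.content)


# The validation half on its own, with a sample model reply:
sample = '{"name": "Lisbon", "country": "Portugal", "population": 545000}'
city = CityInfo.model_validate_json(sample)
```

If validation fails, Pydantic raises a `ValidationError` naming the offending field, which is usually a clearer signal than a silently malformed JSON string.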

haldor

Your content is very valuable! I would love to find a very simple RAG app: here's my large corpus of data (even local data), here's the local or remote model I want you to use, and that's it. No config files, no LlamaIndex, LangChain, or ChromaDB setup. Getting as close to that as possible would be amazingly useful. You can do this today with a few documents; I want to do it with thousands of documents. Based on how simple and clear your videos are, you would be the right person to figure this out.
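The zero-config retrieval half of what haldor describes fits in a few dozen lines: embed each chunk with a local model, keep the vectors in a plain list, and rank by cosine similarity — no vector database or framework required. A minimal sketch, where the `ollama.embed` call and the `nomic-embed-text` model name are assumptions about a typical local setup:

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def embed(texts, model="nomic-embed-text"):
    """Embed a list of strings with a local Ollama model.

    Deferred import so the pure-Python retrieval below works offline.
    """
    import ollama  # assumes `pip install ollama` and a running server

    return ollama.embed(model=model, input=texts).embeddings


def top_k(query_vec, doc_vecs, k=3):
    """Return the indices of the k chunks most similar to the query."""
    ranked = sorted(
        range(len(doc_vecs)),
        key=lambda i: cosine(query_vec, doc_vecs[i]),
        reverse=True,
    )
    return ranked[:k]


# Retrieval logic demonstrated with toy 2-D vectors (no server needed):
docs = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]]
best = top_k([1.0, 0.0], docs, k=1)
```

Scaling to thousands of documents mostly means chunking them and caching the embeddings to disk; the retrieved chunks are then pasted into the prompt sent to whatever local or remote chat model you prefer.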