Happy Birthday. Glad to see you are back. I would like to learn how you did the autocompletion in Obsidian with Llama. I'm also trying to set up something like this in Obsidian. Thanks.
Your content is very valuable! I would love to find a very simple RAG app. Here's my large corpus of data (even local data), and here's the local or remote model I want you to use. That's it. I don't want config files, llamaindex, langchain, chromadb setup, etc. Getting as close to that as possible would be amazingly useful. You can do this today with a few documents; I want to do it with thousands of documents. Based on how simple and clear your videos are, you would be the right person to find this.
Happy birthday @technovangelist!
Your videos are gold!
I'd be interested in a deeper dive on how you use Obsidian with Ollama.
Thanks for everything!
Happy birthday! Wishing you the best!
Happy birthday!
Happy birthday, Matt!
I've been searching for a way to use pydantic with ollama, but it doesn't work, even with 70b models. Could you help me out? Is there a way to do it?
Thanks.
Happy birthday