Replies: 5 comments 11 replies
-
Hi @a2jc4life I think you'd be surprised how low the actual cost is. Based on your numbers, initially embedding your entire vault should be ~$1, maybe $2 max. After that, the embedding cost is negligible. The chat models are billed per use, and for a single query the most you can be charged is the token limit of a single request. For GPT-4 that tops out at ~$0.25 per query; for GPT-3.5-16K (my personal recommendation) it tops out at $0.05 per query; and GPT-3.5-4K would be only pennies per query. I encourage you to give it a try and report back on your findings for anyone else who might be concerned about the cost. Thanks for your interest in Smart Connections 🌴
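As a rough sanity check, those per-query ceilings can be reproduced in a few lines. The rates and context sizes below are assumptions from around this thread's timeframe (mid-2023 input pricing) and may well be outdated, so treat this as a sketch, not a quote:

```python
# Worst-case per-query cost: fill the entire context window and bill
# every token at the model's per-token rate.
# Rates (USD per 1,000 input tokens) and context sizes are assumptions
# from mid-2023 and may be outdated.
PRICE_PER_1K = {"gpt-4-8k": 0.03, "gpt-3.5-16k": 0.003, "gpt-3.5-4k": 0.0015}
CONTEXT_TOKENS = {"gpt-4-8k": 8_000, "gpt-3.5-16k": 16_000, "gpt-3.5-4k": 4_000}

def max_query_cost(model: str) -> float:
    """Upper bound on one query's cost: context window * per-token price."""
    return CONTEXT_TOKENS[model] / 1000 * PRICE_PER_1K[model]

for m in PRICE_PER_1K:
    print(f"{m}: ${max_query_cost(m):.3f} max per query")
```

With those assumed rates the ceilings come out to roughly $0.24, $0.048, and $0.006, which lines up with the figures above.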
-
Just as a point of reference: my largest and main vault is over 20,000 files and 50,000,000 words. I use this plugin in that vault and have re-embedded the entire thing many times. I also use it on other, smaller vaults of mine, and I use plugins like Text Generator frequently. I do some of my own development against the OpenAI API, at times running embedding jobs I had to leave overnight because they were so large, and I use the API through other tools and apps as well. I almost always use GPT-4. Even with all of that, and considering that most of my heavier development took place when prices were somewhat higher, my total cost has been only $350 since Nov 2022. Most months it is less than $10. I hope that helps.
-
These are my concerns as well, and I'm just starting to use ChatGPT for more than entertainment. Am I right that subscribing to ChatGPT Plus doesn't help here, because the plugin uses the API? And can you set a soft dollar limit, to decide whether you want to continue, and a hard dollar limit that stops the process if it's getting too expensive, even knowing you might lose the money already spent?
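OpenAI's account dashboard does offer usage limits, but nothing stops you from enforcing a budget on the client side too. Here is a hypothetical sketch of that idea (the `BudgetGuard` class is my own illustration, not part of Smart Connections or the OpenAI SDK): warn once past a soft limit, and refuse to continue past a hard limit.

```python
class BudgetGuard:
    """Client-side spending guard: warn at a soft limit, stop at a hard limit.
    A hypothetical sketch -- not part of Smart Connections or any OpenAI SDK."""

    def __init__(self, soft_usd: float, hard_usd: float):
        self.soft = soft_usd
        self.hard = hard_usd
        self.spent = 0.0
        self.warned = False

    def record(self, cost_usd: float) -> None:
        """Add one request's cost; raise once the hard limit is crossed."""
        self.spent += cost_usd
        if self.spent >= self.hard:
            raise RuntimeError(
                f"hard budget ${self.hard:.2f} reached (spent ${self.spent:.2f})"
            )
        if self.spent >= self.soft and not self.warned:
            self.warned = True
            print(f"warning: passed soft budget ${self.soft:.2f}")
```

Usage would be to call `guard.record(estimated_cost)` after each API request and catch the `RuntimeError` to halt a long embedding run.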
-
Update: I can't begin to fathom how the math works, but y'all are right. Unless it still has a lot of the vault left to process -- which it doesn't look like it does -- it cost about $1.42 to initialize. Based on what I've been doing with the chat, it's about 5 cents to process a moderately sized note in the same way I was doing in OpenAI directly. That's a mixed bag, though: doing it directly is free, and doing it within the plugin obviously saves a trip to a separate site. But I also found it noticeably slower than the OpenAI site, so it depends, I guess, on what's most important to the particular user.
-
Is there a reason why this can't be modified to include a local AI? I have Ollama installed and use its API to get things resolved locally, all day. Obviously, even on a maxed-out M2 Pro it's not as fast as OpenAI, but it's cheaper, especially if you have a large vault.
-
I love the looks of this, but honestly I'm kind of terrified to even try it, given that it seems we have to guess how much it will cost and find out later. The chatbot says a token can be anywhere from roughly a character to roughly a word. My vault contains 4,069,804 words, or roughly 28,500,000 characters. And the current GPT-4 rate is $0.06 per thousand tokens, which is a lot more than the previous $0.0015. It looks like it would cost me hundreds of dollars -- potentially thousands, since, again, we're guessing at the number of tokens required -- just for the initial processing.
By the estimate on this page: https://openai.com/pricing of 1,000 tokens being equivalent to roughly 750 words, my vault would be about 5,426 blocks of 1,000 tokens. That's about $325. At a token per character, it would be more like $1,700, if I did my math right.
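Both figures check out arithmetically, but the big spread mostly comes from which price you apply: the $0.06/1K rate is a GPT-4 chat-completion rate, whereas vault indexing uses an embedding model, which was priced far lower (text-embedding-ada-002 was around $0.0001 per 1K tokens at the time; that rate is my assumption and may be outdated). A quick sketch of both calculations:

```python
# Reproduce the two estimates above and contrast chat vs. embedding rates.
# The $0.0001/1K embedding rate (text-embedding-ada-002) is an assumption
# from this thread's timeframe and may be outdated.
WORDS = 4_069_804
TOKENS_LOW = WORDS * 1000 // 750   # ~750 words per 1,000 tokens
TOKENS_HIGH = 28_500_000           # worst case: one token per character

def cost_usd(tokens: int, price_per_1k: float) -> float:
    """Total cost at a given USD price per 1,000 tokens."""
    return tokens / 1000 * price_per_1k

# At the GPT-4 chat rate ($0.06 / 1K): the numbers in the post above.
print(f"chat rate, low:  ${cost_usd(TOKENS_LOW, 0.06):,.0f}")    # ~$325
print(f"chat rate, high: ${cost_usd(TOKENS_HIGH, 0.06):,.0f}")   # ~$1,710
# At the assumed embedding rate ($0.0001 / 1K): pocket change either way.
print(f"embed rate, low:  ${cost_usd(TOKENS_LOW, 0.0001):.2f}")  # ~$0.54
print(f"embed rate, high: ${cost_usd(TOKENS_HIGH, 0.0001):.2f}") # ~$2.85
```

Under that assumption, even the worst-case character-per-token estimate lands in the $1-$3 range for embedding, which would explain the much smaller real-world figures reported in this thread.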
Those are huge numbers, and that's huge variation.
How are people running this?