Performance check between LLM vs No LLM backbone #118
Comments

Was the performance gap between using an LLM backbone vs. no LLM backbone ever tested, to verify that the strong evaluation results actually originate from the LLM? I ran tests on my own, and the runs without the LLM backbone perform as well as, or sometimes better than, the runs with it. I wanted to get the authors' perspective on this.

Thank you very much for your attention to our work and your valuable feedback. I am puzzled by this result, as it seems inconsistent with our experimental findings. However, given the complexity of training a large model and the relatively small size of the current dataset, insufficient training may lead to ineffective outcomes. We are currently looking into this issue. It is important to emphasize that TimeLLM's main value lies in being a model that integrates text and time series data across modalities. We are making every effort in this direction, hoping to achieve semantic integration of time series and text data. Our new work is in progress, and we will keep everyone updated as we make progress.
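For readers who want to reproduce this kind of ablation, the idea is to keep the embedding and projection layers identical and swap only the backbone between the LLM and a pass-through. The sketch below is not the Time-LLM code: the `Forecaster` class is hypothetical, and a small `nn.TransformerEncoder` stands in for the frozen LLM, with `nn.Identity` as the ablated backbone.

```python
# Hypothetical ablation harness (illustrative, not the Time-LLM repo code).
# Same embedding and head in both variants; only the backbone changes.
import torch
import torch.nn as nn

class Forecaster(nn.Module):
    def __init__(self, seq_len=96, pred_len=24, d_model=32, use_backbone=True):
        super().__init__()
        self.embed = nn.Linear(seq_len, d_model)
        if use_backbone:
            # Stand-in for the (frozen) LLM backbone.
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.backbone = nn.TransformerEncoder(layer, num_layers=1)
        else:
            # Ablated variant: features pass straight to the head.
            self.backbone = nn.Identity()
        self.head = nn.Linear(d_model, pred_len)

    def forward(self, x):                 # x: (batch, seq_len)
        h = self.embed(x).unsqueeze(1)    # (batch, 1, d_model)
        h = self.backbone(h)
        return self.head(h.squeeze(1))    # (batch, pred_len)

def eval_mse(model, x, y):
    model.eval()
    with torch.no_grad():
        return nn.functional.mse_loss(model(x), y).item()

torch.manual_seed(0)
x = torch.randn(8, 96)   # toy inputs; in practice, identical train/eval splits
y = torch.randn(8, 24)
mse_llm = eval_mse(Forecaster(use_backbone=True), x, y)
mse_ablated = eval_mse(Forecaster(use_backbone=False), x, y)
print(f"with backbone: {mse_llm:.4f}  without: {mse_ablated:.4f}")
```

For the comparison to be meaningful, both variants should be trained with the same data splits, schedule, and seeds before evaluation; the snippet only shows the structural swap, not a full training loop.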