
Performance check between LLM vs No LLM backbone #118

Closed
SajayR opened this issue Jun 26, 2024 · 1 comment

Comments

@SajayR

SajayR commented Jun 26, 2024

Was the performance gap between using an LLM in the backbone and using no LLM in the backbone ever tested, to verify that the strong eval results actually originate from the LLM?
I ran these tests myself, and the runs without the LLM backbone perform as well as, and sometimes better than, the ones with LLM backbones. I wanted to get the authors' perspective on this.
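For concreteness, an ablation of this kind can be sketched as below. This is a minimal toy harness, not Time-LLM's actual code: the synthetic series, the frozen random nonlinear "backbone" (a stand-in for a frozen LLM encoder), and the least-squares forecasting head are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_series(n=400):
    # Toy univariate series: sine wave plus Gaussian noise.
    t = np.arange(n)
    return np.sin(0.1 * t) + 0.1 * rng.standard_normal(n)

def windows(series, lookback=24):
    # Sliding windows: predict the next point from the previous `lookback`.
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y

def fit_predict(X_tr, y_tr, X_te):
    # Least-squares linear head (stand-in for the forecasting head).
    w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return X_te @ w

series = make_series()
X, y = windows(series)
split = int(0.8 * len(X))
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

# Variant A: forecasting head only, no backbone.
mse_plain = np.mean((fit_predict(X_tr, y_tr, X_te) - y_te) ** 2)

# Variant B: frozen random nonlinear transform in front of the same head
# (a crude proxy for an untrained/frozen backbone in the middle).
W = rng.standard_normal((X.shape[1], 64)) / np.sqrt(X.shape[1])
Z_tr, Z_te = np.tanh(X_tr @ W), np.tanh(X_te @ W)
mse_backbone = np.mean((fit_predict(Z_tr, y_tr, Z_te) - y_te) ** 2)

print(f"MSE without backbone: {mse_plain:.4f}")
print(f"MSE with frozen backbone: {mse_backbone:.4f}")
```

If the two MSEs come out comparable, the ablation suggests the head, not the backbone, is doing the work on that dataset; the real experiments would of course use the project's own datasets and training loop.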

@kwuking
Collaborator

kwuking commented Jul 8, 2024

Thank you very much for your attention to our work and your valuable feedback. I'm quite puzzled by this result, as it is inconsistent with our experimental findings. However, given the complexity of training a large model and the relatively small size of the current dataset, insufficient training may lead to ineffective outcomes. We are currently looking into this issue. It is important to emphasize that TimeLLM's main value lies in being a model that integrates text and time series data across modalities. We are making every effort in this direction, hoping to achieve semantic integration of time series and text data. Our new work is in progress, and we'll keep everyone updated as we make progress.

@kwuking kwuking closed this as completed Jul 10, 2024
2 participants