Introduces MockLLM and provides an example #130
Conversation
Thanks for contributing.
High-level question: what's the use case for the MockLLM? Do we expect the user to set the prompt messages and response message directly? This relates to several comments on the MockLLM chat interface.
Please also run ruff check log10 examples and ruff format log10 examples for lint/format. The current check failed.
mock_function: Callable = None
If mock_function is not provided, it is assigned to an identity function.
'''
super().__init__(hparams, log10_config)
How about marking {"model": "MockLLM"}, maybe when hparams["model"] is None?
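A minimal sketch of that suggestion (how hparams reaches the constructor is an assumption based on the super().__init__ call above):

# Sketch: default the model name when the caller doesn't set one
hparams = dict(hparams or {})
if hparams.get("model") is None:
    hparams["model"] = "MockLLM"
super().__init__(hparams, log10_config)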
log10/llm.py (Outdated)
request = self.chat_request(messages, hparams)

start_time = time.perf_counter()
content = self.mock_function(messages[-1].content)
It would be better to check that messages is not empty before messages[-1].
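For example, a guard along these lines (raising ValueError is just one choice, not from the PR):

if not messages:
    raise ValueError("MockLLM.chat expects at least one message")
content = self.mock_function(messages[-1].content)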
self.mock_function(messages[-1].content)
Does this assume the mock_function is the default lambda x: x? What if self.mock_function needs to take all messages?
@wenzhe-log10 Yes, an extra check won't hurt; will do.
Yes, mock_function does that. However, I will update this in another commit that does what you are asking for through a different interface.
{
    "message": {
        "role": "assistant",
        "content": content,
It seems the completion is the last message of messages, and request also contains the full messages. Depending on how you want to use the logs in the future, you may want to remove the last message (the completion) from request.
Or maybe have a mock_function that does that, taking the full messages.
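A sketch of such a function (the name and behavior are illustrative, not part of the PR; it assumes Message exposes role and content attributes):

# Hypothetical mock that receives the full message list and
# replies based on the last user turn only.
def full_history_mock(messages):
    user_turns = [m.content for m in messages if m.role == "user"]
    return user_turns[-1] if user_turns else ""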
Please check the chat_expected method I will be committing soon.
log10/llm.py (Outdated)
'''
MockLLM is an LLM interface that allows you to mock the behavior of an LLM using any Python function.
It is useful for log10 testing and development without having to set up or call a real LLM.
See examples/feedback/echo for an example.
Could you please add a simple example here (maybe in the format of a doctest)? It might be easier to follow than the above examples, which combine feedback etc.
Format it like this, but make the code runnable.
Example:
>>> from log10.llm import MockLLM, Message, Log10Config
>>> client = MockLLM(mock_function=noisy_predictor, log10_config=config)
>>> response = client.chat([Message(role="user", content=str(x))])
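A self-contained runnable variant could look like this (the constructor defaults and the shape of the returned completion are assumptions about the PR's implementation, not confirmed by it):

>>> from log10.llm import MockLLM, Message
>>> llm = MockLLM(mock_function=lambda text: text.upper())
>>> completion = llm.chat([Message(role="user", content="hello")])
>>> # with the mock above, the assistant content should be "HELLO"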
Sure. I like yours. I will add that.
Cool, looks good.
Do we still need the two example files in examples/feedback/echo? Maybe remove them since you have added the docstring now.
The last thing is to update the ruff format (it failed here).
Stale PR; closing this for now.
Rationale: For testing and other purposes, we need a Log10 LLM instance that mimics an LLM chat call but doesn't rely on existing LLMs, local or otherwise. The NoopLLM class provides an interface that always produces a constant output. However, for testing feedback/logging, you might want to have a mock LLM that produces different but deterministic outputs for different inputs. This PR provides that.
As a choice, the current implementation only provides .chat() to facilitate my current work and leaves .text() for future implementation.
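For instance, a deterministic mock that still varies with the input could be passed as mock_function (a sketch; the function name and behavior are illustrative):

import hashlib

def hashed_reply(prompt):
    # Same prompt always yields the same reply; different prompts differ.
    digest = hashlib.sha256(prompt.encode()).hexdigest()[:8]
    return f"mock-reply-{digest}"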