Log10 is addressing the challenges around reliability and consistency of LLM-powered applications with a platform that provides robust evaluation, fine-tuning, and debugging tools. We are currently a team of 8, having previously worked in AI and infrastructure roles at companies such as Intel, MosaicML, Docker, SiFive, Starburst, and Adobe.
As part of Log10’s hiring process, we use a take-home challenge that is hopefully fun and engaging, and gives us something to go over together during your interview. Try to use only the base components and libraries provided by Next.js, Tailwind UI, and other common packages.
We are looking for:
- That it works - for us when we try your submission, and when you demo it on the follow-up call.
- Code quality - safe, reliable, easy to read.
- Ability to explain your decisions when we chat about your submission.
Try not to spend more than 8 hours on the test, but feel free to add bells and whistles if time allows.
Playgrounds have come to play a big role in LLM application development. They allow developers to discover the limits of the models and learn how different variants of a prompt produce different kinds of output.
You have a web application which displays the system, user and assistant (AI) messages, allows a user to type and submit additional messages and uses the OpenAI API to produce new assistant messages.
The provided skeleton repository calls the OpenAI API directly from the client (without a backend to relay the requests). Your task is to extend it with one or more of the following options:
OpenAI supports streaming of generations. This provides a nice user experience, as long generations can be a bit jarring to wait for.
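A minimal sketch of what streaming could look like with the openai Node SDK (the model name, message shape, and helper name are illustrative, not part of the skeleton):

```ts
import OpenAI from "openai";

// Key handling here mirrors the skeleton's direct call; see the next option
// for moving this behind a Next.js API route.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function streamAssistantReply(
  messages: { role: "system" | "user" | "assistant"; content: string }[],
  onToken: (token: string) => void
) {
  const stream = await openai.chat.completions.create({
    model: "gpt-3.5-turbo", // illustrative; use whichever model the skeleton targets
    messages,
    stream: true,
  });

  // Each chunk carries an incremental delta; append it to the UI as it arrives.
  for await (const chunk of stream) {
    onToken(chunk.choices[0]?.delta?.content ?? "");
  }
}
```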
As you can tell, calling OpenAI from the client and managing the key there isn’t safe for production. Move the calls to a Next.js API route to make this more secure.
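One possible shape for such a route handler, as a sketch (the path src/app/api/chat/route.ts and the request body are assumptions, not part of the skeleton):

```ts
// src/app/api/chat/route.ts (illustrative path)
import OpenAI from "openai";
import { NextResponse } from "next/server";

// The key now lives in a server-side environment variable instead of the page.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(req: Request) {
  const { messages } = await req.json();

  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo", // illustrative
    messages,
  });

  return NextResponse.json({ message: completion.choices[0].message });
}
```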
Now that the OpenAI call has server-side support, make it support streaming, too.
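One way this might look, again as a sketch rather than a prescribed implementation: re-emit the OpenAI deltas from the route handler as a plain text stream the client can read incrementally.

```ts
// src/app/api/chat/route.ts (streaming variant, illustrative)
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(req: Request) {
  const { messages } = await req.json();

  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo", // illustrative
    messages,
    stream: true,
  });

  // Forward each delta as it arrives; the client can consume it via
  // response.body.getReader().
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      for await (const chunk of completion) {
        controller.enqueue(encoder.encode(chunk.choices[0]?.delta?.content ?? ""));
      }
      controller.close();
    },
  });

  return new Response(stream, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```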
With or without streaming support (subscriptions), we prefer to use GraphQL over REST.
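If you go the GraphQL route, the schema might look something like the sketch below (type and field names are only illustrative; serve it with whichever GraphQL server library you prefer, e.g. graphql-yoga or Apollo Server):

```ts
// Illustrative schema: a query for history, a mutation to send a message,
// and a subscription as the streaming counterpart.
const typeDefs = /* GraphQL */ `
  type Message {
    id: ID!
    role: String!
    content: String!
  }

  type Query {
    messages: [Message!]!
  }

  type Mutation {
    sendMessage(content: String!): Message!
  }

  type Subscription {
    assistantToken(messageId: ID!): String!
  }
`;

export default typeDefs;
```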
The skeleton implementation is very limited on purpose. Feel free to add the ability to change the system prompt, edit existing messages, add new threads, etc.
Use a database to persist LLM messages (and/or threads) across reloads.
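For example, with Prisma and a single Message model (both the model and the helper functions below are assumptions, not part of the skeleton):

```ts
// prisma/schema.prisma (illustrative model):
// model Message {
//   id        String   @id @default(cuid())
//   role      String
//   content   String
//   createdAt DateTime @default(now())
// }
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Persist one message.
export async function saveMessage(role: string, content: string) {
  return prisma.message.create({ data: { role, content } });
}

// Read the full history back on page load.
export async function loadMessages() {
  return prisma.message.findMany({ orderBy: { createdAt: "asc" } });
}
```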
Please zip your solution and email it as an attachment to us. We will run your code locally.
Get started by replacing the OpenAI key in src/app/page.tsx and running one of the following:
yarn dev
or
npm run dev