diff --git a/README.md b/README.md index b9aabfb..a116106 100644 --- a/README.md +++ b/README.md @@ -1,59 +1,82 @@ -# πŸš€ QLLM: Simplifying Language Model Interactions +# QLLM: Simplifying Language Model Interactions -Welcome to QLLM, a project designed to streamline your interactions with Large Language Models (LLMs). This monorepo contains two powerful packages: -1. πŸ“š qllm-lib: A versatile TypeScript library for seamless LLM integration -2. πŸ–₯️ qllm-cli: A command-line interface for effortless LLM interactions +![npm version](https://img.shields.io/npm/v/qllm) +![Stars](https://img.shields.io/github/stars/quantalogic/qllm) +![Forks](https://img.shields.io/github/forks/quantalogic/qllm) -## 🌟 Why QLLM and QLLM-LIB? -QLLM bridges the gap between cutting-edge language models and their practical implementation in business processes. Our goal is to make the power of generative AI accessible and actionable for businesses of all sizes. +## Chapter 1: Introduction -QLLM-LIB provides a user-friendly AI toolbox that empowers developers to harness the potential of various LLMs through a single, unified interface. By simplifying interactions with these AI models, we aim to boost productivity and drive innovation across industries. +### 1.1 Welcome to QLLM +Welcome to QLLM, your ultimate command-line tool for interacting with Large Language Models (LLMs). -## πŸ“¦ Packages +> Imagine having a powerful AI assistant at your fingertips, ready to help you tackle complex tasks, generate creative content, and analyze dataβ€”all from your terminal. -### qllm-lib +This README will guide you through everything you need to know to harness the full potential of QLLM and become a master of AI-powered productivity. + +### 1.2 Show Your Support +If you find QLLM helpful and enjoyable to use, please consider giving us a star ✨ on GitHub! Your support not only motivates us to keep improving the project but also helps others discover QLLM. Thank you for being a part of our community! + + +## Chapter 2: Benefits of QLLM + +### 2.1 Why QLLM and QLLM-LIB? +#### Key Benefits: +1. **Unified Access**: QLLM brings together multiple LLM providers under one roof. No more context-switching between different tools and APIs. +2. **Command-Line Power**: As a developer, you live in the terminal. QLLM integrates seamlessly into your existing workflow. +3. **Flexibility and Customization**: Tailor AI interactions to your specific needs with extensive configuration options and support for custom templates. +4. **Time-Saving Features**: From quick queries to ongoing conversations, QLLM helps you get answers fast. +5. **Cross-Platform Compatibility**: Works consistently across Windows, macOS, and Linux. + +### 2.2 Anecdote: A Productivity Boost +Imagine you're a data analyst working on a tight deadline. You need to quickly analyze a large dataset and generate a report for your team. Instead of manually sifting through the data and writing the report, you turn to QLLM. With a few simple commands, you're able to: +1. **Summarize the key insights** from the dataset. +2. **Generate visualizations** to highlight important trends. +3. **Draft a concise, well-written report**. + +All of this without leaving your terminal. The time you save allows you to focus on higher-level analysis and deliver the report ahead of schedule. Your manager is impressed, and you've just demonstrated the power of QLLM to streamline your workflow. 
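+
+To make this workflow concrete, here is a rough sketch of what those commands might look like. The file and report names are illustrative; piped input and the `-o` output option are covered later in this guide.
+
+```bash
+# Summarize the key insights from the dataset
+cat sales_data.csv | qllm ask "Summarize the key insights from this sales data."
+
+# Draft a concise report and save it to a file
+cat sales_data.csv | qllm ask "Draft a short, well-written sales report based on this data." -o sales_report.md
+```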
+ +## Chapter 3: Packages + +```mermaid +graph TD + A[qllm-cli] --> B[qllm-lib] +``` -qllm-lib is a TypeScript library that offers a unified interface for interacting with various LLM providers. It simplifies working with different AI models and provides features like templating, streaming, and conversation management. -#### Practical Example +### 3.1 qllm-lib +A versatile TypeScript library for seamless LLM integration. It simplifies working with different AI models and provides features like templating, streaming, and conversation management. + +#### Practical Example ```typescript import { createLLMProvider } from 'qllm-lib'; async function generateProductDescription() { - const provider = createLLMProvider({ name: 'openai' }); - - const result = await provider.generateChatCompletion({ - messages: [ - { - role: 'user', - content: { - type: 'text', - text: 'Write a compelling product description for a new smartphone with a foldable screen, 5G capability, and 48-hour battery life.' - } - }, - ], - options: { model: 'gpt-4', maxTokens: 200 }, - }); - - console.log('Generated Product Description:', result.text); + const provider = createLLMProvider({ name: 'openai' }); + const result = await provider.generateChatCompletion({ + messages: [ + { + role: 'user', + content: { + type: 'text', + text: 'Write a compelling product description for a new smartphone with a foldable screen, 5G capability, and 48-hour battery life.' + }, + }, + ], + options: { model: 'gpt-4', maxTokens: 200 }, + }); + console.log('Generated Product Description:', result.text); } generateProductDescription(); ``` -This example demonstrates how to use qllm-lib to generate a product description, which could be useful for e-commerce platforms or marketing teams. - -For more detailed information and advanced usage, check out the [qllm-lib README](./packages/qllm-lib/README.md). - -### qllm-cli - -qllm-cli is a command-line interface that leverages qllm-lib to provide easy access to LLM capabilities directly from your terminal. +### 3.2 qllm-cli +A command-line interface that leverages qllm-lib to provide easy access to LLM capabilities directly from your terminal. #### Practical Example - ```bash # Generate a product description qllm ask "Write a 50-word product description for a smart home security camera with night vision and two-way audio." @@ -61,250 +84,310 @@ qllm ask "Write a 50-word product description for a smart home security camera w # Use a specific model for market analysis qllm ask --model gpt-4o-mini --provider openai "Analyze the potential market impact of electric vehicles in the next 5 years. Provide 3 key points." -# Stream a response for real-time content generation -qllm ask --stream --model gemma2:2b --provider ollama "Write a short blog post about the benefits of remote work." +# Write a short blog post about the benefits of remote work +qllm ask --model gemma2:2b --provider ollama "Write a short blog post about the benefits of remote work." -# Describe a picture -qllm ask --stream --model llava:latest --provider ollama "Describe the picture" -i "https://upload.wikimedia.org/wikipedia/commons/thumb/c/ca/Kowloon_Waterfront%2C_Hong_Kong%2C_2013-08-09%2C_DD_05.jpg/640px-Kowloon_Waterfront%2C_Hong_Kong%2C_2013-08-09%2C_DD_05.jpg" +# Analyze CSV data from stdin +cat sales_data.csv | qllm ask "Analyze this CSV data. Provide a summary of total sales, top-selling products, and any notable trends. Format your response as a bulleted list." 
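+
+# Save a response directly to a file (the output file name here is illustrative)
+qllm ask "Explain the theory of relativity" -o relativity_explanation.txt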
-# Chat -qllm chat --provider ollama --model gemma2:2b +## Example using question from stdin +echo "What is the weather in Tokyo?" | qllm --provider ollama --model gemma2:2b ``` +## Chapter 4: Getting Started -These examples show how qllm-cli can be used for various business tasks, from content creation to market analysis. - -For a complete list of commands and options, refer to the [qllm-cli README](./packages/qllm-cli/README.md). - -## πŸš€ Getting Started - -### Installing qllm-lib +### 4.1 System Requirements +Before we dive into the exciting world of QLLM, let's make sure your system is ready: +- Node.js (version 16.5 or higher) +- npm (usually comes with Node.js) +- A terminal or command prompt +- An internet connection (QLLM needs to talk to the AI, after all!) -To use qllm-lib in your project: - -```bash -npm install qllm-lib -``` - -### Installing qllm-cli - -To use qllm-cli globally: - -```bash -npm install -g qllm -``` - -## πŸ› οΈ Development - -Certainly! I'll rewrite the full updated documentation, incorporating the improvements while maintaining the original features and style. Here's the enhanced version: +### 4.2 Step-by-Step Installation Guide +1. Open your terminal or command prompt. +2. Run the following command: + ```bash + npm install -g qllm + ``` + This command tells npm to install QLLM globally on your system, making it available from any directory. +3. Wait for the installation to complete. You might see a progress bar and some text scrolling by. Don't panic, that's normal! +4. Once it's done, verify the installation by running: + ```bash + qllm --version + ``` + You should see a version number (e.g., 1.8.0) displayed. If you do, congratulations! You've successfully installed QLLM. +> πŸ’‘ Pro Tip: If you encounter any permission errors during installation, you might need to use `sudo` on Unix-based systems or run your command prompt as an administrator on Windows. -### Quick Start +### 4.3 Configuration +Now that QLLM is installed, let's get it configured. Think of this as teaching QLLM your preferences and giving it the keys to the AI kingdom. -For experienced users who want to get up and running quickly: +#### Configuring Default Settings +While you're in the configuration mode, you can also set up some default preferences: +1. Choose your default provider and model. +2. Set default values for parameters like temperature and max tokens. +3. Configure other settings like log level and custom prompt directory. +Here's an example of what this might look like: ```bash -git clone https://github.com/quantalogic/qllm.git -cd qllm -pnpm install -pnpm run build -pnpm run test +$ qllm configure +? Default Provider: openai +? Default Model: gpt-4o-mini +? Temperature (0.0 to 1.0): 0.7 +? Max Tokens: 150 +? Log Level: info ``` -### Project Structure - -This monorepo contains the following packages: - -- `qllm-core`: Core functionality of the QLLM library -- `qllm-cli`: Command-line interface for QLLM -- (Add other packages as applicable) - -Each package has its own `package.json`, source code, and tests. - -### How to Use +> πŸ’‘ Pro Tip: You can always change these settings later, either through the `qllm configure` command or directly in the configuration file located at `~/.qllmrc`. +> -This section provides comprehensive instructions on how to install, build, test, version, and publish the project. 
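+
+For reference, here is an illustrative sketch of what `~/.qllmrc` might contain after running `qllm configure`. The exact keys and values shown are assumptions based on the settings listed above, not the authoritative file format.
+
+```json
+{
+    "provider": "openai",
+    "model": "gpt-4o-mini",
+    "logLevel": "info",
+    "temperature": 0.7,
+    "maxTokens": 150
+}
+```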
+**Providers Supported** -#### Installation +- openai +- anthropic +- AWS Bedrock (Anthropic) +- ollama +- groq +- mistral +- claude +- openrouter -To set up the project: +### 4.4 Your First QLLM Command +Enough setup, let's see QLLM in action! We'll start with a simple query to test the waters. -1. Ensure you have Node.js (β‰₯16.5.0) and pnpm (β‰₯6.0.0) installed. -2. Clone the repository: +#### Running a Simple Query +1. In your terminal, type: ```bash - git clone https://github.com/quantalogic/qllm.git - cd qllm + qllm ask "What is the meaning of life, the universe, and everything?" ``` -3. Install dependencies: - ```bash - pnpm install - ``` - -#### Building +2. Press Enter and watch the magic happen! -To build all packages in the monorepo: - -```bash -pnpm run build +#### Understanding the Output +QLLM will display the response from the AI. It might look something like this: +```plaintext +Assistant: The phrase "the meaning of life, the universe, and everything" is a reference to Douglas Adams' science fiction series "The Hitchhiker's Guide to the Galaxy." In the story, a supercomputer named Deep Thought is asked to calculate the answer to the "Ultimate Question of Life, the Universe, and Everything." After 7.5 million years of computation, it provides the answer: 42... ``` -This command executes the build script for each package, compiling TypeScript and bundling with Rollup as configured. +> 🧠 **Pause and Reflect**: What do you think about this response? How does it compare to what you might have gotten from a simple web search? -#### Testing +## Chapter 5: Core Commands -Run tests across all packages: +### 5.1 The 'ask' Command +The `ask` command is your go-to for quick, one-off questions. It's like having a knowledgeable assistant always ready to help. +#### Syntax and Options ```bash -pnpm run test +qllm ask "Your question here" ``` +- `-p, --provider`: Specify the LLM provider (e.g., openai, anthropic) +- `-m, --model`: Choose a specific model +- `-t, --max-tokens`: Set maximum tokens for the response +- `--temperature`: Adjust output randomness (0.0 to 1.0) -This executes test suites in each package, ensuring code quality and functionality. - -#### Versioning and Changesets - -This project uses Semantic Versioning (SemVer) and Changesets for version management. - -##### Understanding Semantic Versioning - -SemVer uses a three-part version number: MAJOR.MINOR.PATCH - -- MAJOR: Incremented for incompatible API changes -- MINOR: Incremented for backwards-compatible new features -- PATCH: Incremented for backwards-compatible bug fixes - -##### Creating a Changeset - -1. Make code changes. -2. Run: +#### Use Cases and Examples +1. Quick fact-checking: ```bash - pnpm changeset + qllm ask "What year was the first Moon landing?" ``` -3. Follow prompts to select modified packages and describe changes. - - Example: +2. Code explanation: + ```bash + qllm ask "Explain this Python code: print([x for x in range(10) if x % 2 == 0])" ``` - $ pnpm changeset - πŸ¦‹ Which packages would you like to include? Β· qllm-core, qllm-cli - πŸ¦‹ Which packages should have a major bump? Β· No items were selected - πŸ¦‹ Which packages should have a minor bump? Β· qllm-cli - πŸ¦‹ Which packages should have a patch bump? Β· qllm-core - πŸ¦‹ Please enter a summary for this change (this will be in the changelogs). - πŸ¦‹ Summary Β· Added new CLI command and fixed core module bug +3. Language translation: + ```bash + qllm ask "Translate 'Hello, world!' to French, Spanish, and Japanese" ``` -4. 
Commit the generated changeset file with your changes. - -##### Updating Versions - -To apply changesets and update versions: +### 5.2 The 'chat' Command +While `ask` is perfect for quick queries, `chat` is where QLLM really shines. It allows you to have multi-turn conversations, maintaining context throughout. +#### Starting and Managing Conversations +To start a chat session: ```bash -pnpm run version +qllm chat ``` +Once in a chat session, you can use various commands: +- `/help`: Display available commands +- `/new`: Start a new conversation +- `/save`: Save the current conversation -This command: -1. Analyzes changesets -2. Updates `package.json` files -3. Updates changelogs (CHANGELOG.md) -4. Removes changeset files +### 5.3 The 'run' Command +The `run` command allows you to execute predefined templates, streamlining complex or repetitive tasks. -Example output: +#### Using Predefined Templates +To run a template: +```bash +qllm ``` -Applying changesets -qllm-core patch -qllm-cli minor -All changesets applied! +For example: +```bash +qllm https://raw.githubusercontent.com/quantalogic/qllm/main/prompts/chain_of_thought_leader.yaml ``` -##### Versioning in Monorepo - -- Each package has its own version -- Inter-package dependencies are automatically updated -- Root `package.json` version represents the overall project version - -#### Publishing +#### Creating Custom Templates +You can create your own templates as YAML files. Here's a simple example: +```yaml +name: "Simple Greeting" +description: "A template that generates a greeting" +input_variables: + name: + type: "string" + description: "The name of the person to greet" +prompt: "Generate a friendly greeting for {{name}}." +``` +Save this as `greeting.yaml` and run it with: +```bash +qllm run greeting.yaml +``` -To publish packages: +> 🧠 **Pause and Reflect**: How could you use custom templates to streamline your workflow? Think about repetitive tasks in your daily work that could benefit from AI assistance. + +## Chapter 6: Practical Examples + +### 6.1 Code Analysis Workflow +Imagine you're a developer facing code reviews. Let's set up a code review template to streamline this process. + +#### Setting up a Code Review Template +Save this as `code_review.yaml`: +```yaml +name: "Code Review" +description: "Analyzes code and provides improvement suggestions" +input_variables: + code: + type: "string" + description: "The code to review" + language: + type: "string" + description: "The programming language" +prompt: > + You are an experienced software developer. Review the following {{language}} code and provide suggestions for improvement: {{language}} {{code}} + Please consider: + 1. Code efficiency + 2. Readability + 3. Best practices + 4. Potential bugs +``` -1. Ensure the project is built: - ```bash - pnpm run build - ``` -2. Run the publish command: - ```bash - pnpm run publish-packages - ``` +### 6.2 Content Creation Pipeline +Let's look at how QLLM can assist in content creation, from ideation to drafting and editing. + +#### Ideation Phase +Create a template for brainstorming ideas. 
Save this as `brainstorm_ideas.yaml`: +```yaml +name: "Content Brainstorming" +description: "Generates content ideas based on a topic and target audience" +input_variables: + topic: + type: "string" + description: "The main topic or theme" + audience: + type: "string" + description: "The target audience" + content_type: + type: "string" + description: "The type of content (e.g., blog post, video script, social media)" +prompt: | + As a creative content strategist, generate 5 unique content ideas for {{content_type}} about {{topic}} targeted at {{audience}}. For each idea, provide: + 1. A catchy title + 2. A brief description (2-3 sentences) + 3. Key points to cover + 4. Potential challenges or considerations +``` -This publishes all packages to npm with public access. +### 6.3 Data Analysis Assistant +Imagine you have a CSV file with sales data. You can use QLLM to help interpret this data: +```bash +cat sales_data.csv | qllm ask "Analyze this CSV data. Provide a summary of total sales, top-selling products, and any notable trends. Format your response as a bulleted list." +``` -### Additional Commands +### 6.4 Image Analysis and Description +QLLM also supports image analysis, allowing you to describe and analyze images directly through the command line. -- Linting: `pnpm run lint` -- Formatting: `pnpm run format` -- Cleaning build artifacts: `pnpm run clean` -- Installing CLI locally: - ```bash - pnpm run install:local - ``` - This builds the project and installs `qllm-cli` globally from `packages/qllm-cli`. +#### Example of Image Analysis +```bash +qllm ask "What do you see in this image?" -i path/to/image.jpg +``` +This command sends the specified image to the AI for analysis and generates a description based on its contents. -### Best Practices +### 6.5 Screenshots Feature +You can capture and analyze screenshots directly from the CLI, making it easier to get insights from visual content. -- Create a changeset for each significant change -- Use clear, concise descriptions in changesets -- Run `pnpm run version` before publishing -- Review changes in `package.json` and changelogs before committing +#### Example of Using Screenshots +```bash +qllm ask "Analyze this screenshot" --screenshot 0 +``` +This command captures the current screen and sends it to the AI for analysis, providing insights based on what is displayed. -By following these practices, you ensure accurate version numbers and help users understand the impact of updates. +## Chapter 7: Troubleshooting Common Issues +Even the most powerful tools can sometimes hiccup. Here are some common issues you might encounter with QLLM and how to resolve them: +1. **Rate Limiting**: Implement a retry mechanism with exponential backoff. +2. **Unexpected Output Format**: Be more specific in your prompts. -### Troubleshooting +## Chapter 8: Best Practices +To get the most out of QLLM, keep these best practices in mind: +1. **Effective Prompt Engineering**: Be specific and clear in your prompts. +2. **Managing Conversation Context**: Use `/new` to start fresh conversations when switching topics. +3. **Leveraging Templates for Consistency**: Create templates for tasks you perform regularly. -Common issues and their solutions: +## Chapter 9: Conclusion and Next Steps +Congratulations! You've now mastered the essentials of QLLM and are well on your way to becoming a CLI AI wizard. -1. **Issue**: `pnpm install` fails - **Solution**: Ensure you're using pnpm 6.0.0 or higher. Try clearing the pnpm cache with `pnpm store prune`. 
+### 9.1 Final Challenge +Within the next 24 hours, use QLLM to solve a real problem you're facing in your work or personal projects. It could be analyzing some data, drafting a document, or even helping debug a tricky piece of code. Share your experience with a colleague or in the QLLM community. -2. **Issue**: Build fails with TypeScript errors - **Solution**: Check that you're using a compatible TypeScript version (5.5.4 or compatible). Run `pnpm update typescript` to update. +Thank you for joining me on this whirlwind tour of QLLM. Now go forth and command your AI assistant with confidence! πŸš€ -3. **Issue**: Changesets not working - **Solution**: Ensure @changesets/cli is installed correctly. Try reinstalling with `pnpm add -D @changesets/cli`. +## Chapter 10: Additional Resources +For detailed documentation on the packages used in QLLM, please refer to the following links: -### FAQ +## 10. Contributing -Q: Can I use npm or yarn instead of pnpm? -A: While it's possible, we strongly recommend using pnpm for consistency and to avoid potential issues. +We warmly welcome contributions to QLLM CLI! This project is licensed under the Apache License, Version 2.0. To contribute, please follow these steps: -Q: How do I contribute to a specific package? -A: Navigate to the package directory in `packages/` and make your changes there. Ensure you create a changeset for your modifications. +1. Fork the repository on GitHub. +2. Clone your forked repository to your local machine. +3. Create a new branch for your feature or bug fix. +4. Make your changes, adhering to the existing code style and conventions. +5. Write tests for your changes if applicable. +6. Run the existing test suite to ensure your changes don't introduce regressions: + ``` + pnpm test + ``` +7. Commit your changes with a clear and descriptive commit message. +8. Push your changes to your fork on GitHub. +9. Create a pull request from your fork to the main QLLM CLI repository. -### Contributing +Please ensure your code adheres to our coding standards: -We welcome contributions! Please follow these steps: +- Use TypeScript for type safety. +- Follow the existing code style (we use Prettier for formatting). +- Write unit tests for new features. +- Update documentation as necessary, including this README if you're adding or changing features. -1. Fork the repository -2. Create a new branch for your feature -3. Make your changes -4. Create a changeset describing your changes -5. Submit a pull request +We use GitHub Actions for CI/CD, so make sure your changes pass all automated checks. -For more details, see our [CONTRIBUTING.md](CONTRIBUTING.md) file. +### License +This project is licensed under the Apache License, Version 2.0. You may obtain a copy of the License at -Remember to check the `scripts` section in `package.json` for any additional or updated commands. +http://www.apache.org/licenses/LICENSE-2.0 -## 🀝 Contributing +Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for more details. 
+## Acknowledgements
-## πŸ“„ License
+We would like to extend our heartfelt thanks to the following individuals and organizations for their invaluable contributions to QLLM:
-This project is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for details.
+1. **OpenAI**: For their groundbreaking work on large language models and the API that powers QLLM.
+2. **Anthropic**: For their innovative approach to AI and the Claude models that enhance QLLM's capabilities.
+3. **AWS Bedrock**: For their support in providing access to advanced AI models through AWS.
+4. **Ollama**: For their cutting-edge LLM platform that powers QLLM locally.
+5. **Groq**: For their powerful and scalable LLM infrastructure.
+6. **Mistral**: For their innovative approach to AI and for representing France πŸ‡«πŸ‡·.
-## 🌟 Final Thoughts
-QLLM and QLLM-LIB are designed to make working with Large Language Models more accessible and efficient. Whether you're a developer integrating AI capabilities into your applications or a data scientist streamlining your workflow, QLLM provides the tools you need to leverage the power of AI effectively.
+A special thanks to the entire QLLM community for their feedback and support. Your insights and contributions are invaluable to us.
-We invite you to explore the detailed documentation for each package and join us in improving how businesses interact with AI. Together, we can create practical solutions that drive real-world impact.
+And of course, thanks to Quantalogic for funding the project.
-Happy coding! πŸš€
+https://www.quantalogic.app
diff --git a/packages/qllm-cli/CHANGELOG.md b/packages/qllm-cli/CHANGELOG.md
index 8909be2..15572ee 100644
--- a/packages/qllm-cli/CHANGELOG.md
+++ b/packages/qllm-cli/CHANGELOG.md
@@ -1,5 +1,16 @@
 # qllm
 
+## 2.9.0
+
+### Minor Changes
+
+- Improve configuration, and several fix
+
+### Patch Changes
+
+- Updated dependencies
+    - qllm-lib@3.6.0
+
 ## 2.8.0
 
 ### Minor Changes
diff --git a/packages/qllm-cli/README.md b/packages/qllm-cli/README.md
index 87335d9..bf365bf 100644
--- a/packages/qllm-cli/README.md
+++ b/packages/qllm-cli/README.md
@@ -1,5 +1,9 @@
 # QLLM: Quantalogic Large Language Model CLI & AI Toolbox πŸš€
 
+![npm version](https://img.shields.io/npm/v/qllm)
+![Stars](https://img.shields.io/github/stars/quantalogic/qllm)
+![Forks](https://img.shields.io/github/forks/quantalogic/qllm)
+
 ## Table of Contents
 
 1. [Introduction](#1-introduction)
@@ -82,7 +90,7 @@ QLLM CLI boasts an impressive array of features designed to elevate your AI inte
 
 To embark on your QLLM CLI journey, ensure you have Node.js (version 14 or higher) installed on your system. Then, execute the following command:
 
-```bash
+- **Install QLLM CLI globally:**
+
+```
 npm install -g qllm
 ```
 
@@ -90,7 +100,7 @@ This global installation makes the `qllm` command readily available in your term
 
 Verify the installation with:
 
-```bash
+```
 qllm --version
 ```
 
@@ -104,7 +114,7 @@ Before diving into the world of AI interactions, configure QLLM CLI with your AP
 
 Initiate the interactive configuration mode:
 
-```bash
+```
 qllm configure
 ```
 
@@ -140,7 +150,7 @@ This command allows you to manage configuration settings for the QLLM CLI.
Examples: -```bash +``` qllm configure --set provider=openai qllm configure --set model=gpt-4 ``` @@ -149,7 +159,7 @@ qllm configure --set model=gpt-4 Display your current settings at any time: -```bash +``` qllm configure --list ``` @@ -167,13 +177,13 @@ QLLM CLI offers a variety of commands for interacting with LLMs. Here's an overv QLLM CLI allows you to run templates directly. This is now the default behavior when no specific command is provided: -```bash +``` qllm ``` For example: -```bash +``` qllm https://raw.githubusercontent.com/quantalogic/qllm/main/prompts/chain_of_tought_leader.yaml ``` @@ -192,31 +202,31 @@ The `run` command supports various options: #### Using with Piped Input -```bash +``` echo "Explain quantum computing" | qllm ask ``` or -```bash +``` cat article.txt | qllm ask "Summarize this text" ``` #### Image Analysis -```bash +``` qllm ask "Describe this image" -i path/to/image.jpg ``` #### Streaming Responses -```bash +``` qllm ask "Write a short story about AI" -s ``` #### Saving Output to File -```bash +``` qllm ask "Explain the theory of relativity" -o relativity_explanation.txt ``` @@ -224,7 +234,7 @@ qllm ask "Explain the theory of relativity" -o relativity_explanation.txt Start an interactive chat session: -```bash +``` qllm chat ``` @@ -252,13 +262,13 @@ The `chat` command also supports options similar to the `ask` command for settin View available providers: -```bash +``` qllm list providers ``` List models for a specific provider: -```bash +``` qllm list models openai ``` @@ -273,7 +283,7 @@ The `list models` command offers several options: Manage your settings at any time: -```bash +``` qllm configure --set model gpt-4 qllm configure --get logLevel qllm configure --list @@ -287,7 +297,7 @@ QLLM CLI offers sophisticated features for power users: Include images in your queries for visual analysis: -```bash +``` qllm ask "Describe this image" -i path/to/image.jpg ``` @@ -300,19 +310,19 @@ QLLM CLI supports multiple image input methods: Use an image from your clipboard: -```bash +``` qllm ask "What's in this image?" --use-clipboard ``` Capture and use a screenshot: -```bash +``` qllm ask "Analyze this screenshot" --screenshot 0 ``` Combine multiple image inputs: -```bash +``` qllm ask "Compare these images" -i image1.jpg -i image2.jpg --use-clipboard ``` @@ -320,7 +330,7 @@ qllm ask "Compare these images" -i image1.jpg -i image2.jpg --use-clipboard For long-form content, stream the output in real-time: -```bash +``` qllm ask "Write a short story about AI" -s ``` @@ -330,7 +340,7 @@ This feature allows you to see the AI's response as it's generated, providing a Save the LLM's response directly to a file: -```bash +``` qllm ask "Explain the theory of relativity" -o relativity_explanation.txt ``` @@ -364,62 +374,62 @@ Each command supports various options. Use `qllm --help` for detailed Explore these example use cases for QLLM CLI: -1. Creative Writing Assistance: +1. **Creative Writing Assistance:** - ```bash + ``` qllm ask "Write a haiku about artificial intelligence" ``` -2. Code Explanation: +2. **Code Explanation:** - ```bash + ``` qllm ask "Explain this Python code: [paste your code here]" ``` -3. Image Analysis: +3. **Image Analysis:** - ```bash + ``` qllm ask "Describe the contents of this image" -i vacation_photo.jpg ``` -4. Interactive Problem-Solving: +4. **Interactive Problem-Solving:** - ```bash + ``` qllm chat -p anthropic -m claude-2 ``` -5. Data Analysis: +5. 
**Data Analysis:** - ```bash + ``` qllm ask "Analyze this CSV data: [paste CSV here]" --max-tokens 500 ``` -6. Language Translation: +6. **Language Translation:** - ```bash + ``` qllm ask "Translate 'Hello, world!' to French, Spanish, and Japanese" ``` -7. Document Summarization: +7. **Document Summarization:** - ```bash + ``` qllm ask "Summarize this article: [paste article text]" -o summary.txt ``` -8. Character Creation: +8. **Character Creation:** - ```bash + ``` qllm ask "Create a detailed character profile for a sci-fi novel" ``` -9. Recipe Generation: +9. **Recipe Generation:** - ```bash + ``` qllm ask "Create a recipe using chicken, spinach, and feta cheese" ``` -10. Workout Planning: - ```bash +10. **Workout Planning:** + ``` qllm ask "Design a 30-minute HIIT workout routine" ``` @@ -429,7 +439,7 @@ If you encounter issues while using QLLM CLI, try these troubleshooting steps: 1. Verify your API keys are correctly configured: - ```bash + ``` qllm configure --list ``` @@ -439,7 +449,7 @@ If you encounter issues while using QLLM CLI, try these troubleshooting steps: 3. Update to the latest version of QLLM CLI: - ```bash + ``` npm update -g qllm ``` @@ -455,7 +465,7 @@ If problems persist, please open an issue on our GitHub repository with a detail ## 10. Contributing -We warmly welcome contributions to QLLM CLI! To contribute, please follow these steps: +We warmly welcome contributions to QLLM CLI! This project is licensed under the Apache License, Version 2.0. To contribute, please follow these steps: 1. Fork the repository on GitHub. 2. Clone your forked repository to your local machine. @@ -463,7 +473,7 @@ We warmly welcome contributions to QLLM CLI! To contribute, please follow these 4. Make your changes, adhering to the existing code style and conventions. 5. Write tests for your changes if applicable. 6. Run the existing test suite to ensure your changes don't introduce regressions: - ```bash + ``` npm test ``` 7. Commit your changes with a clear and descriptive commit message. @@ -479,7 +489,7 @@ Please ensure your code adheres to our coding standards: We use GitHub Actions for CI/CD, so make sure your changes pass all automated checks. -## 11. License +### License This project is licensed under the Apache License, Version 2.0. You may obtain a copy of the License at diff --git a/packages/qllm-cli/package.json b/packages/qllm-cli/package.json index e866392..3fb808a 100644 --- a/packages/qllm-cli/package.json +++ b/packages/qllm-cli/package.json @@ -1,6 +1,6 @@ { "name": "qllm", - "version": "2.8.0", + "version": "2.9.0", "description": "QLLM CLI: A versatile CLI tool for interacting with multiple AI/LLM providers. Features include chat sessions, one-time queries, image handling, and conversation management. 
Streamlines AI development with easy provider/model switching and configuration.", "keywords": [ "ai", diff --git a/packages/qllm-cli/src/chat/chat.ts b/packages/qllm-cli/src/chat/chat.ts index b923d77..551dc00 100644 --- a/packages/qllm-cli/src/chat/chat.ts +++ b/packages/qllm-cli/src/chat/chat.ts @@ -67,7 +67,6 @@ export class Chat { private async promptUser(): Promise { try { - const input = await this.ioManager.getUserInput("You: "); // Check if input is undefined (e.g., due to Ctrl+C) diff --git a/packages/qllm-cli/src/chat/command-processor.ts b/packages/qllm-cli/src/chat/command-processor.ts index 3a18fcf..3105905 100644 --- a/packages/qllm-cli/src/chat/command-processor.ts +++ b/packages/qllm-cli/src/chat/command-processor.ts @@ -98,7 +98,7 @@ export class CommandProcessor { provider: config.get("provider"), maxTokens: config.get("maxTokens"), temperature: config.get("temperature"), - stream: true, + noStream: false, }); if (result && conversationId) { diff --git a/packages/qllm-cli/src/commands/ask-command.ts b/packages/qllm-cli/src/commands/ask-command.ts index 86fe489..cffeaa9 100644 --- a/packages/qllm-cli/src/commands/ask-command.ts +++ b/packages/qllm-cli/src/commands/ask-command.ts @@ -57,6 +57,11 @@ export const askCommandAction = async ( } let validOptions: PartialAskCommandOptions = options; + + validOptions = { + ...options, + }; + try { validOptions = await validateOptions( AskCommandOptionsPartialSchema, @@ -78,6 +83,11 @@ export const askCommandAction = async ( const modelName = validOptions.model || cliConfig.get("model") || DEFAULT_MODEL; + const maxTokens = + validOptions.maxTokens || cliConfig.get("maxTokens") || undefined; + const temperature = + validOptions.temperature || cliConfig.get("temperature") || undefined; + const spinner = createSpinner("Processing...").start(); try { @@ -97,6 +107,8 @@ export const askCommandAction = async ( image: imageInputs, provider: providerName, model: modelName, + maxTokens: maxTokens, + temperature: temperature, }; const response = await askQuestion( @@ -106,7 +118,7 @@ export const askCommandAction = async ( usedOptions, ); - if (!usedOptions.stream) { + if (usedOptions.noStream) { spinner.success({ text: ioManager.colorize( "response received successfully!", @@ -120,7 +132,7 @@ export const askCommandAction = async ( ioManager.displaySuccess(`Response saved to ${options.output}`); } - if (!usedOptions.stream) { + if (!usedOptions.noStream) { ioManager.stdout.log(response); } process.exit(0); @@ -253,7 +265,7 @@ async function askQuestion( }, }; - if (options.stream) { + if (!options.noStream) { return streamResponse(spinner, provider, params); } else { const response = await provider.generateChatCompletion(params); diff --git a/packages/qllm-cli/src/commands/chat-command.ts b/packages/qllm-cli/src/commands/chat-command.ts index e3cc6a9..20ab332 100644 --- a/packages/qllm-cli/src/commands/chat-command.ts +++ b/packages/qllm-cli/src/commands/chat-command.ts @@ -1,6 +1,5 @@ // packages/qllm-cli/src/commands/chat-command.ts -import { Command } from "commander"; import { getListProviderNames, getLLMProvider } from "qllm-lib"; import { Chat } from "../chat/chat"; import { chatConfig } from "../chat/chat-config"; @@ -12,12 +11,14 @@ import { } from "../types/chat-command-options"; import { IOManager } from "../utils/io-manager"; import { validateOptions } from "../utils/validate-options"; -import { DEFAULT_MODEL, DEFAULT_PROVIDER } from "../constants"; +import { DEFAULT_PROVIDER } from "../constants"; declare var process: 
NodeJS.Process; //eslint-disable-line export const chatAction = async (options: ChatCommandOptions) => { try { + const cliConfig = CliConfigManager.getInstance(); + await chatConfig.initialize(); let validOptions = options; @@ -40,13 +41,12 @@ export const chatAction = async (options: ChatCommandOptions) => { const providerName = validOptions.provider || - CliConfigManager.getInstance().get("provider") || - DEFAULT_MODEL; - const modelName = - validOptions.model || - CliConfigManager.getInstance().get("model") || + cliConfig.get("provider") || DEFAULT_PROVIDER; + const modelName = + validOptions.model || cliConfig.get("model") || DEFAULT_PROVIDER; + const availableProviders = getListProviderNames(); if (!availableProviders.includes(providerName)) { ioManager.displayWarning( @@ -62,12 +62,27 @@ export const chatAction = async (options: ChatCommandOptions) => { ); } - chatConfig.set("maxTokens", validOptions.maxTokens); - chatConfig.set("temperature", validOptions.temperature); - chatConfig.set("topP", validOptions.topP); - chatConfig.set("frequencyPenalty", validOptions.frequencyPenalty); - chatConfig.set("presencePenalty", validOptions.presencePenalty); - chatConfig.set("stopSequence", validOptions.stopSequence); + chatConfig.set("presencePenalty", cliConfig.get("presencePenalty")); + chatConfig.set("frequencyPenalty", cliConfig.get("frequencyPenalty")); + chatConfig.set("stopSequence", cliConfig.get("stopSequence")); + chatConfig.set("maxTokens", cliConfig.get("maxTokens")); + chatConfig.set("temperature", cliConfig.get("temperature")); + chatConfig.set("topP", cliConfig.get("topP")); + chatConfig.set("frequencyPenalty", cliConfig.get("frequencyPenalty")); + chatConfig.set("presencePenalty", cliConfig.get("presencePenalty")); + chatConfig.set("stopSequence", cliConfig.get("stopSequence")); + + if (validOptions.maxTokens) + chatConfig.set("maxTokens", validOptions.maxTokens); + if (validOptions.temperature) + chatConfig.set("temperature", validOptions.temperature); + if (validOptions.topP) chatConfig.set("topP", validOptions.topP); + if (validOptions.frequencyPenalty) + chatConfig.set("frequencyPenalty", validOptions.frequencyPenalty); + if (validOptions.presencePenalty) + chatConfig.set("presencePenalty", validOptions.presencePenalty); + if (validOptions.stopSequence) + chatConfig.set("stopSequence", validOptions.stopSequence); const provider = await getLLMProvider(providerName); const models = await provider.listModels(); diff --git a/packages/qllm-cli/src/commands/configure-command.ts b/packages/qllm-cli/src/commands/configure-command.ts index 98799b9..57c9290 100644 --- a/packages/qllm-cli/src/commands/configure-command.ts +++ b/packages/qllm-cli/src/commands/configure-command.ts @@ -1,16 +1,16 @@ -// packages/qllm-cli/src/commands/configure-command.ts import { Command } from "commander"; import { CliConfigManager } from "../utils/cli-config-manager"; import { IOManager } from "../utils/io-manager"; -import { Config } from "../types/configure-command-options"; -import { ConfigSchema } from "../types/configure-command-options"; // {{ edit_1 }} import { z } from "zod"; import { CONFIG_OPTIONS } from "../types/configure-command-options"; import { utils } from "../chat/utils"; import { getListProviderNames, getLLMProvider } from "qllm-lib"; +declare var process: NodeJS.Process; //eslint-disable-line + const configManager = CliConfigManager.getInstance(); const ioManager = new IOManager(); + export const configureCommand = new Command("configure") .description("Configure QLLM CLI settings") 
.option("-l, --list", "List all configuration settings") @@ -21,10 +21,8 @@ export const configureCommand = new Command("configure") if (options.list) { listConfig(); } else if (options.set) { - if (options.set) { - const [key, value] = options.set.split("="); // Split the input on '=' - await setConfig(key, value); - } + const [key, value] = options.set.split("="); // Split the input on '=' + await setConfig(key, value); } else if (options.get) { getConfig(options.get); } else { @@ -42,6 +40,11 @@ export const configureCommand = new Command("configure") function listConfig(): void { const config = configManager.getAllSettings(); + const pathConfig = configManager.getConfigPath(); + + ioManager.displaySectionHeader("Path Configuration"); + ioManager.displayInfo(`Path: ${pathConfig}`); + ioManager.displaySectionHeader("Current Configuration"); Object.entries(config).forEach(([key, value]) => { if (key === "apiKeys") { @@ -49,11 +52,11 @@ function listConfig(): void { if (value) { Object.entries(value).forEach(([provider, apiKey]) => { ioManager.displayInfo( - ` ${provider}: ${maskApiKey(apiKey)}`, + ` ${provider}: ${maskApiKey(apiKey)}`, ); }); } else { - ioManager.displayInfo(" No API keys set"); + ioManager.displayInfo(" No API keys set"); } } else { ioManager.displayInfo(`${key}: ${JSON.stringify(value)}`); @@ -73,36 +76,35 @@ function maskApiKey(apiKey: string): string { } async function setConfig(key: string, value: string): Promise { - const config = configManager.configCopy(); // Ensure this is a fresh copy - - // Declare variables outside the switch statement - let models: Array<{ id: string }>; - let modelIds: string[]; + // Ensure this is a fresh copy + const config = configManager.configCopy(); if (key === "model") { // Ensure the provider is set before setting the model if (!config.provider) { throw new Error("Provider must be set before setting the model."); } - const provider = await getLLMProvider(config.provider); // {{ edit_1 }} - models = await provider.listModels(); // {{ edit_2 }} - modelIds = models.map((model) => model.id); + + const provider = await getLLMProvider(config.provider); + const models = await provider.listModels(); + const modelIds = models.map((model) => model.id); if (!modelIds.includes(value)) { throw new Error( `Invalid model: ${value}. 
Available models for provider ${config.provider}: ${modelIds.join(", ")}`, ); } - config.model = value; - } - // Update the configManager with the new values - configManager.set(key as keyof Config, config[key as keyof Config]); + // Update the configuration using the config manager + configManager.set("model", value); + } else { + // Update the configuration using the config manager + configManager.setValue(key, value); + } // Save the updated configuration try { - console.log(`Setting ${key} to ${value}`); - configManager.set(key as keyof Config, value); // Ensure the key-value pair is set correctly + ioManager.displayInfo(`Setting ${key} to ${value}`); await configManager.save(); ioManager.displaySuccess( `Configuration updated and saved successfully`, @@ -116,7 +118,7 @@ async function setConfig(key: string, value: string): Promise { } function getConfig(key: string): void { - const configValue = configManager.get(key as keyof Config); // Ensure this retrieves the correct value + const configValue = configManager.getValue(key); if (configValue) { ioManager.displayInfo(`Configuration for ${key}: ${configValue}`); } else { @@ -125,9 +127,7 @@ function getConfig(key: string): void { } async function interactiveConfig(): Promise { - const config = configManager.configCopy(); // Ensure this is a fresh copy const validProviders = getListProviderNames(); // Fetch valid providers - const configGroups = [ { name: "Provider Settings", @@ -155,14 +155,13 @@ async function interactiveConfig(): Promise { for (const group of configGroups) { ioManager.displayGroupHeader(group.name); - for (const key of group.options) { const configOption = CONFIG_OPTIONS.find( (option) => option.name === key, ); if (!configOption) continue; - const value = config[key as keyof Config]; + const value = configManager.getValue(key); const currentValue = value !== undefined ? 
ioManager.colorize(JSON.stringify(value), "yellow") @@ -172,7 +171,7 @@ async function interactiveConfig(): Promise { if (key === "provider") { newValue = await ioManager.getUserInput( - `${ioManager.colorize(key, "cyan")} (${configOption.description}) (current: ${currentValue}).\nAvailable providers:\n${validProviders.map((provider) => ` - ${provider}`).join("\n")}\nPlease select a provider: `, + `${ioManager.colorize(key, "cyan")} (${configOption.description}) (current: ${currentValue}).\nAvailable providers:\n${validProviders.map((provider) => ` - ${provider}`).join("\n")}\nPlease select a provider: `, ); // Validate the input against the list of valid providers @@ -188,14 +187,14 @@ async function interactiveConfig(): Promise { const models = await provider.listModels(); const modelIds = models.map( (model: { id: string }) => model.id, - ); // Explicitly type the model parameter + ); - // Update the current value to reflect the new provider - config.provider = newValue.trim(); + // Update the current value using the config manager + configManager.set("provider", newValue.trim()); // Prompt for the default model with improved display const modelInput = await ioManager.getUserInput( - `${ioManager.colorize("model", "cyan")} (Available models):\n${modelIds.map((modelId) => ` - ${modelId}`).join("\n")}\nPlease select a model: `, + `${ioManager.colorize("model", "cyan")} (Available models):\n${modelIds.map((modelId) => ` - ${modelId}`).join("\n")}\nPlease select a model: `, ); // Validate the input against the list of models @@ -206,48 +205,35 @@ async function interactiveConfig(): Promise { continue; // Skip to the next option } - // Set the validated model - config.model = modelInput.trim(); // Ensure this line is executed after setting the provider + // Set the validated model using the config manager + configManager.set("model", modelInput.trim()); } else { newValue = await ioManager.getUserInput( `${ioManager.colorize(key, "cyan")} (${configOption.description}) (current: ${currentValue}): `, ); - } - if (newValue && newValue.trim() !== "") { - await utils.retryOperation( - async () => { - try { - const schema = - ConfigSchema.shape[key as keyof Config]; - const validatedValue = schema.parse( - configOption.type === "number" - ? 
parseFloat(newValue) - : newValue, - ); - configManager.set( - key as keyof Config, - validatedValue, - ); - ioManager.displaySuccess( - `${key} updated successfully`, - ); - } catch (error) { - if (error instanceof z.ZodError) { - throw new Error( - `Invalid input: ${error.errors[0].message}`, + if (newValue && newValue.trim() !== "") { + await utils.retryOperation( + async () => { + try { + configManager.setValue(key, newValue); + ioManager.displaySuccess( + `${key} updated successfully`, ); + } catch (error) { + if (error instanceof z.ZodError) { + throw new Error( + `Invalid input: ${error.errors[0].message}`, + ); + } + throw error; } - throw error; - } - }, - 3, - 0, - ); + }, + 3, + 0, + ); + } } - - // Update the configManager with the new values - configManager.set(key as keyof Config, config[key as keyof Config]); // Ensure the manager is updated } ioManager.newLine(); // Add a newline after each group } diff --git a/packages/qllm-cli/src/commands/run-command.ts b/packages/qllm-cli/src/commands/run-command.ts index fa5c8b1..c9b3d9e 100644 --- a/packages/qllm-cli/src/commands/run-command.ts +++ b/packages/qllm-cli/src/commands/run-command.ts @@ -54,6 +54,16 @@ export const runActionCommand = async ( cliConfig.get("model") || DEFAULT_MODEL; + const maxTokens = + validOptions.maxTokens || + cliConfig.get("maxTokens") || + undefined; + + const temperature = + validOptions.temperature || + cliConfig.get("temperature") || + undefined; + const variables = parseVariables(validOptions.variables); const executor = setupExecutor(ioManager, spinner); const provider = await getLLMProvider(providerName); @@ -69,12 +79,8 @@ export const runActionCommand = async ( variables: { ...variables }, providerOptions: { model: modelName, - maxTokens: - template.parameters?.max_tokens || - validOptions.maxTokens, - temperature: - template.parameters?.temperature || - validOptions.temperature, + maxTokens: maxTokens, + temperature: temperature, topKTokens: template.parameters?.top_k, topProbability: template.parameters?.top_p, seed: template.parameters?.seed, @@ -86,7 +92,7 @@ export const runActionCommand = async ( stop: template.parameters?.stop_sequences, }, provider, - stream: validOptions.stream, + stream: !validOptions.noStream, onPromptForMissingVariables: async ( template, initialVariables, diff --git a/packages/qllm-cli/src/qllm.ts b/packages/qllm-cli/src/qllm.ts index af35b84..9df1226 100755 --- a/packages/qllm-cli/src/qllm.ts +++ b/packages/qllm-cli/src/qllm.ts @@ -67,7 +67,7 @@ export async function main() { "-v, --variables ", "Template variables in JSON format", ) - .option("-s, --stream", "Stream the response") + .option("-ns, --no-stream", "Stream the response", true) .option("-o, --output ", "Output file for the response") .option( "-e, --extract ", @@ -118,7 +118,7 @@ export async function main() { program .command("ask") .description("Ask a question to an LLM") - .argument("", "The question to ask") + .argument("[question]", "The question to ask (optional if piped)") .option( "-c, --context ", "Additional context for the question", @@ -135,10 +135,10 @@ export async function main() { "Capture screenshot from specified display number", (value) => parseInt(value, 10), ) - .option("-s, --stream", "Stream the response", false) + .option("-ns, --no-stream", "Stream the response", true) .option("-o, --output ", "Output file for the response") .option( - "--system-message ", + "-s, --system-message ", "System message to prepend to the conversation", ) .action((question, options) => { @@ -146,9 
+146,8 @@ export async function main() { const mergedOptions = { ...globalOptions, ...options, - question, }; - askCommandAction(question, mergedOptions); + askCommandAction(question || "", mergedOptions); }); // Add other commands diff --git a/packages/qllm-cli/src/types/ask-command-options.ts b/packages/qllm-cli/src/types/ask-command-options.ts index 0a524c4..e0a59d1 100644 --- a/packages/qllm-cli/src/types/ask-command-options.ts +++ b/packages/qllm-cli/src/types/ask-command-options.ts @@ -10,7 +10,7 @@ const BaseAskCommandOptionsSchema = z.object({ temperature: z.number().min(0).max(1).optional(), /** Whether to stream the response */ - stream: z.boolean().optional(), + noStream: z.boolean().optional(), /** Output file for the response */ output: z.string().optional(), diff --git a/packages/qllm-cli/src/types/run-command-options.ts b/packages/qllm-cli/src/types/run-command-options.ts index fa7bc1a..b42f12b 100644 --- a/packages/qllm-cli/src/types/run-command-options.ts +++ b/packages/qllm-cli/src/types/run-command-options.ts @@ -7,7 +7,7 @@ export const RunCommandOptionsSchema = z.object({ model: z.string().optional(), maxTokens: z.number().int().positive().optional(), temperature: z.number().min(0).max(1).optional(), - stream: z.boolean().optional(), + noStream: z.boolean().optional(), output: z.string().optional(), extract: z.string().optional(), }); diff --git a/packages/qllm-cli/src/utils/cli-config-manager.ts b/packages/qllm-cli/src/utils/cli-config-manager.ts index 4f33a1b..2c797ea 100644 --- a/packages/qllm-cli/src/utils/cli-config-manager.ts +++ b/packages/qllm-cli/src/utils/cli-config-manager.ts @@ -4,14 +4,15 @@ import fs from "fs/promises"; import path from "path"; import os from "os"; import { z } from "zod"; +import { IOManager } from "./io-manager"; + +declare var process: NodeJS.Process; // eslint-disable-line // Define the schema for the configuration const CliConfigSchema = z.object({ provider: z.string().optional(), model: z.string().optional(), logLevel: z.enum(["error", "warn", "info", "debug"]).default("info"), - apiKeys: z.record(z.string()).optional(), - customPromptDirectory: z.string().optional(), temperature: z.number().min(0).max(1).optional(), maxTokens: z.number().positive().optional(), topP: z.number().min(0).max(1).optional(), @@ -22,12 +23,15 @@ const CliConfigSchema = z.object({ type Config = z.infer; +type PartialConfig = Partial; + const CONFIG_FILE_NAME = ".qllmrc"; export class CliConfigManager { private static instance: CliConfigManager; private config: Config = { logLevel: "info" }; private configPath: string; + private ioManager: IOManager = new IOManager(); private constructor() { this.configPath = path.join(os.homedir(), CONFIG_FILE_NAME); @@ -57,7 +61,7 @@ export class CliConfigManager { this.config = CliConfigSchema.parse(parsedConfig); } catch (error) { if ((error as NodeJS.ErrnoException).code !== "ENOENT") { - console.warn(`Error loading config: ${error}`); + this.ioManager.displayError(`Error loading config: ${error}`); } // If file doesn't exist or is invalid, we'll use default config } @@ -70,7 +74,7 @@ export class CliConfigManager { JSON.stringify(this.config, null, 2), ); } catch (error) { - console.error(`Error saving config: ${error}`); + this.ioManager.displayError(`Error saving config: ${error}`); } } @@ -78,19 +82,56 @@ export class CliConfigManager { return this.config[key]; } + public getValue(key: string): Config[keyof Config] | undefined { + return this.config[key as keyof Config]; + } + public set(key: K, value: Config[K]): 
void { this.config[key] = value; } - public getApiKey(provider: string): string | undefined { - return this.config.apiKeys?.[provider]; + public setValue(key: string, value: string | undefined): void { + switch (key) { + case "provider": + this.config.provider = value as Config["provider"]; + break; + case "model": + this.config.model = value as Config["model"]; + break; + case "logLevel": + this.config.logLevel = value as Config["logLevel"]; + break; + case "temperature": + this.config.temperature = value ? parseFloat(value) : undefined; + break; + case "maxTokens": + this.config.maxTokens = value ? parseInt(value) : undefined; + break; + case "topP": + this.config.topP = value ? parseFloat(value) : undefined; + break; + case "frequencyPenalty": + this.config.frequencyPenalty = value + ? parseFloat(value) + : undefined; + break; + case "presencePenalty": + this.config.presencePenalty = value + ? parseFloat(value) + : undefined; + break; + case "stopSequence": + this.config.stopSequence = value + ? value.split(",").map((s) => s.trim()) + : undefined; + break; + default: + this.ioManager.displayError(`Invalid key: ${key}`); + } } - public setApiKey(provider: string, apiKey: string): void { - if (!this.config.apiKeys) { - this.config.apiKeys = {}; - } - this.config.apiKeys[provider] = apiKey; + public configCopy(): Config { + return { ...this.config }; // Return a shallow copy of the config } public async initialize(): Promise { @@ -98,34 +139,12 @@ export class CliConfigManager { // You can add any initialization logic here } - public configCopy(): Config { - return { - provider: this.config.provider, - model: this.config.model, - logLevel: this.config.logLevel, - apiKeys: this.config.apiKeys - ? { ...this.config.apiKeys } - : undefined, - customPromptDirectory: this.config.customPromptDirectory, - temperature: this.config.temperature, - maxTokens: this.config.maxTokens, - topP: this.config.topP, - frequencyPenalty: this.config.frequencyPenalty, - presencePenalty: this.config.presencePenalty, - stopSequence: this.config.stopSequence, - }; - } - public getAllSettings(): Config { - return this.configCopy(); + return { ...this.config }; // Simplified copy logic } - public async setMultiple(settings: Partial): Promise { - Object.entries(settings).forEach(([key, value]) => { - if (key in this.config) { - (this.config as any)[key] = value; - } - }); + public async setMultiple(settings: PartialConfig): Promise { + Object.assign(this.config, settings); await this.save(); } diff --git a/packages/qllm-lib/CHANGELOG.md b/packages/qllm-lib/CHANGELOG.md index 92ed44c..8ff6a04 100644 --- a/packages/qllm-lib/CHANGELOG.md +++ b/packages/qllm-lib/CHANGELOG.md @@ -1,5 +1,11 @@ # qllm-lib +## 3.6.0 + +### Minor Changes + +- Improve configuration, and several fix + ## 3.5.0 ### Minor Changes diff --git a/packages/qllm-lib/README.md b/packages/qllm-lib/README.md index ab0690d..c3378b2 100644 --- a/packages/qllm-lib/README.md +++ b/packages/qllm-lib/README.md @@ -1,5 +1,9 @@ # πŸš€ qllm-lib +![npm version](https://img.shields.io/npm/v/qllm-lib) +![Stars](https://img.shields.io/github/stars/quantalogic/qllm) +![Forks](https://img.shields.io/github/forks/quantalogic/qllm) + ## πŸ“š Table of Contents - [Introduction](#-introduction) @@ -24,7 +28,9 @@ qllm-lib is a powerful TypeScript library that provides a unified interface for To install qllm-lib, use npm: -```bash +- **Install qllm-lib:** + +``` npm install qllm-lib ``` diff --git a/packages/qllm-lib/package.json b/packages/qllm-lib/package.json index 
index 3ce13f8..3fe6e3f 100644
--- a/packages/qllm-lib/package.json
+++ b/packages/qllm-lib/package.json
@@ -1,6 +1,6 @@
 {
   "name": "qllm-lib",
-  "version": "3.5.0",
+  "version": "3.6.0",
   "description": "Core library providing robust AI engineering functionalities tailored for Large Language Model (LLM) applications, enabling developers to build, deploy, and optimize AI solutions with ease.",
   "keywords": [
     "ai",
diff --git a/packages/qllm-lib/src/providers/anthropic/constants.ts b/packages/qllm-lib/src/providers/anthropic/constants.ts
index a9d6b7d..70bdac0 100644
--- a/packages/qllm-lib/src/providers/anthropic/constants.ts
+++ b/packages/qllm-lib/src/providers/anthropic/constants.ts
@@ -2,4 +2,4 @@ export const DEFAULT_AWS_BEDROCK_REGION = 'us-west-2';
 export const DEFAULT_AWS_BEDROCK_PROFILE = 'bedrock';
 export const DEFAULT_MODEL = 'anthropic.claude-3-haiku-20240307-v1:0';
 
-export const DEFAULT_MAX_TOKENS = 1024 * 256;
+export const DEFAULT_MAX_TOKENS = 128 * 1024; // 131,072 tokens (128K)
diff --git a/packages/qllm-lib/src/providers/anthropic/index.ts b/packages/qllm-lib/src/providers/anthropic/index.ts
index 188082c..fa324ef 100644
--- a/packages/qllm-lib/src/providers/anthropic/index.ts
+++ b/packages/qllm-lib/src/providers/anthropic/index.ts
@@ -105,8 +105,9 @@ export class AnthropicProvider extends BaseLLMProvider {
         : undefined,
       tools: formattedTools,
     };
-    console.log('AnthropicProvider.generateChatCompletion request:');
-    console.dir(request, { depth: null });
+
+    // console.log('AnthropicProvider.generateChatCompletion request:');
+    // console.dir(request, { depth: null });
     const response = await this.client.messages.create(request);
 
     const getTextFromContentBlock = (
@@ -244,8 +245,8 @@ export class AnthropicProvider extends BaseLLMProvider {
   }
 
   private formatToolCalls(toolCalls?: any): ToolCall[] | undefined {
-    console.log('tool calls:');
-    console.dir(toolCalls, { depth: null });
+    //console.log('tool calls:');
+    //console.dir(toolCalls, { depth: null });
     if (!toolCalls) return undefined;
     return toolCalls.map((toolCall: any) => ({
       id: toolCall.id,
diff --git a/packages/qllm-lib/src/providers/ollama/index.ts b/packages/qllm-lib/src/providers/ollama/index.ts
index 7000dd1..aae791f 100644
--- a/packages/qllm-lib/src/providers/ollama/index.ts
+++ b/packages/qllm-lib/src/providers/ollama/index.ts
@@ -1,6 +1,3 @@
-import fs from 'fs/promises';
-import path from 'path';
-import axios from 'axios';
 import {
   ChatCompletionParams,
   ChatCompletionResponse,
@@ -28,7 +25,7 @@ import ollama, {
   ToolCall as OllamaToolCall,
   Options as OllamaOptions,
 } from 'ollama';
-import { createTextMessageContent, imageToBase64 } from '../../utils/images';
+import { imageToBase64 } from '../../utils/images';
 import { listModels } from './list-models';
 
 const DEFAULT_MODEL = 'llama3.1';
@@ -155,13 +152,8 @@ export class OllamaProvider implements LLMProvider, EmbeddingProvider {
       if (isTextContent(messageContent)) {
         content += messageContent.text + '\n';
       } else if (isImageUrlContent(messageContent)) {
-        try {
-          const imageContent = await createOllamaImageContent(messageContent.url);
-          images.push(imageContent.url);
-        } catch (error) {
-          console.error('Error processing image:', error);
-          throw error;
-        }
+        const imageContent = await createOllamaImageContent(messageContent.url);
+        images.push(imageContent.url);
       }
     }
 
@@ -202,19 +194,14 @@ export class OllamaProvider implements LLMProvider, EmbeddingProvider {
 }
 
 export const createOllamaImageContent = async (source: string): Promise<ImageUrlContent> => {
-  try {
-    const content = await imageToBase64(source);
-
-    // Return the raw base64 string without the data URL prefix
-    return {
-      type: 'image_url',
-      url: content.base64,
-    };
-  } catch (error) {
-    console.error(`Error processing image from: ${source}`, error);
-    throw error;
-  }
+  const content = await imageToBase64(source);
+  // Return the raw base64 string without the data URL prefix
+  return {
+    type: 'image_url',
+    url: content.base64,
+  };
 };
+
 function formatTools(tools: Tool[] | undefined): OllamaTool[] | undefined {
   if (!tools) {
     return undefined;
diff --git a/packages/qllm-lib/src/providers/ollama/list-models.ts b/packages/qllm-lib/src/providers/ollama/list-models.ts
index f568839..6cbe2ce 100644
--- a/packages/qllm-lib/src/providers/ollama/list-models.ts
+++ b/packages/qllm-lib/src/providers/ollama/list-models.ts
@@ -32,7 +32,7 @@ export async function listModels(baseUrl: string = 'http://localhost:11434'): Pr
     const response = await axios.get(`${baseUrl}/api/tags`);
 
     if (!response.data || !response.data.models || !Array.isArray(response.data.models)) {
-      console.warn('Unexpected response format from Ollama API');
+      //console.warn('Unexpected response format from Ollama API');
       return [];
     }
 
@@ -42,8 +42,9 @@ export async function listModels(baseUrl: string = 'http://localhost:11434'): Pr
       description: formatModelDescription(model.details),
     }));
   } catch (error) {
-    console.error('Error fetching models from Ollama:', error);
-    throw new LLMProviderError('Failed to fetch models from Ollama', 'Ollama');
+    //console.error('Error fetching models from Ollama:', error);
+    const errorMessage = error instanceof Error ? error.message : 'Unknown error';
+    throw new LLMProviderError(`Failed to fetch models from Ollama ${errorMessage}`, 'Ollama');
   }
 }
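With the change above, the underlying Ollama error is now appended to the `LLMProviderError` message instead of being logged to the console. A hedged sketch of how a caller might surface it; the deep import paths below are assumptions, since the library may re-export `listModels` and `LLMProviderError` from its public entry point:

```typescript
// Illustrative only: import paths are assumptions, not confirmed by this patch.
import { listModels } from 'qllm-lib/src/providers/ollama/list-models';
import { LLMProviderError } from 'qllm-lib/src/types';

async function printOllamaModels(): Promise<void> {
  try {
    const models = await listModels('http://localhost:11434');
    for (const model of models) {
      // Assumes the Model type exposes `id`; `description` is set in the hunk above.
      console.log(`${model.id}: ${model.description}`);
    }
  } catch (error) {
    if (error instanceof LLMProviderError) {
      // The cause is now part of the message,
      // e.g. "Failed to fetch models from Ollama connect ECONNREFUSED 127.0.0.1:11434".
      console.error(error.message);
    } else {
      throw error;
    }
  }
}
```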
diff --git a/packages/qllm-lib/src/providers/openai/index.ts b/packages/qllm-lib/src/providers/openai/index.ts
index 3cea708..a77f6e1 100644
--- a/packages/qllm-lib/src/providers/openai/index.ts
+++ b/packages/qllm-lib/src/providers/openai/index.ts
@@ -17,14 +17,15 @@ import {
   ChatMessageWithSystem,
 } from '../../types';
 import {
-  ChatCompletionMessageParam,
-  ChatCompletionContentPart,
-  ChatCompletionTool,
+  ChatCompletionMessageParam as ChatCompletionMessageParamOpenAI,
+  ChatCompletionContentPart as ChatCompletionContentPartOpenAI,
+  ChatCompletionTool as ChatCompletionToolOpenAI,
+  ChatCompletionCreateParamsStreaming as ChatCompletionCreateParamsStreamingOpenAI,
+  ChatCompletionCreateParamsNonStreaming as ChatCompletionCreateParamsNonStreamingOpenAI,
 } from 'openai/resources/chat/completions';
 import { createBase64Url, imageToBase64 } from '../../utils/images/image-to-base64';
-import { L } from 'ollama/dist/shared/ollama.1164e541';
 
-const DEFAULT_MAX_TOKENS = 1024 * 4;
+const DEFAULT_MAX_TOKENS = 1024 * 8;
 const DEFAULT_MODEL = 'gpt-4o-mini';
 const DEFAULT_EMBEDDING_MODEL = 'text-embedding-3-small';
 
@@ -59,13 +60,17 @@ export class OpenAIProvider implements LLMProvider, EmbeddingProvider {
       presence_penalty: options.presencePenalty,
       stop: options.stop,
       // Remove logprobs from here
-      logit_bias: options.logitBias,
-      top_logprobs: options.topLogprobs,
+      // logprobs: options.logitBias,
+      // top_logprobs: options.topLogprobs,
     };
 
-    return Object.fromEntries(
-      Object.entries(optionsToInclude).filter(([_, value]) => value !== undefined),
+    const filteredOptions = Object.fromEntries(
+      Object.entries(optionsToInclude)
+        .filter(([_, value]) => value !== undefined)
+        .filter(([_, value]) => value !== null),
     ) as unknown as LLMOptions;
+
+    return filteredOptions;
   }
 
   async generateChatCompletion(params: ChatCompletionParams): Promise<ChatCompletionResponse> {
@@ -78,7 +83,7 @@ export class OpenAIProvider implements LLMProvider, EmbeddingProvider {
     const model = options.model || DEFAULT_MODEL;
     const filteredOptions = this.getFilteredOptions(options);
 
-    const response = await this.client.chat.completions.create({
+    const chatRequest: ChatCompletionCreateParamsNonStreamingOpenAI = {
       messages: formattedMessages,
       tools: formattedTools,
       parallel_tool_calls: parallelToolCalls,
@@ -86,10 +91,12 @@ export class OpenAIProvider implements LLMProvider, EmbeddingProvider {
       tool_choice: toolChoice,
      max_tokens: options.maxTokens || DEFAULT_MAX_TOKENS,
       ...filteredOptions,
+      // Ensure logprobs is a boolean
+      logprobs: typeof options.logprobs === 'boolean' ? options.logprobs : undefined,
       model: model,
-      // Add logprobs as a boolean
-      logprobs: options.logprobs !== undefined,
-    });
+    };
+
+    const response = await this.client.chat.completions.create(chatRequest);
 
     const firstResponse = response.choices[0];
     const usage = response.usage;
@@ -122,7 +129,7 @@ export class OpenAIProvider implements LLMProvider, EmbeddingProvider {
     const model = options.model || DEFAULT_MODEL;
     const filteredOptions = this.getFilteredOptions(options);
 
-    const stream = await this.client.chat.completions.create({
+    const chatRequest: ChatCompletionCreateParamsStreamingOpenAI = {
       messages: formattedMessages,
       tools: formattedTools,
       parallel_tool_calls: parallelToolCalls,
@@ -130,15 +137,16 @@ export class OpenAIProvider implements LLMProvider, EmbeddingProvider {
       tool_choice: toolChoice,
       max_tokens: options.maxTokens || DEFAULT_MAX_TOKENS,
       ...filteredOptions,
+      // Ensure logprobs is a boolean
+      logprobs: typeof options.logprobs === 'boolean' ? options.logprobs : undefined,
       model: model,
       stream: true,
-      // Add logprobs as a boolean
-      logprobs: options.logprobs !== undefined,
-    });
+    };
+    const stream = await this.client.chat.completions.create(chatRequest);
 
     for await (const chunk of stream) {
       const content = chunk.choices[0]?.delta?.content;
-      const usage = chunk.usage;
+      const _usage = chunk.usage;
       const finishReason = chunk.choices[0]?.finish_reason;
       const result: ChatStreamCompletionResponse = {
         text: content || null,
@@ -200,17 +208,17 @@ export class OpenAIProvider implements LLMProvider, EmbeddingProvider {
 
   private async formatMessages(
     messages: ChatMessageWithSystem[],
-  ): Promise<ChatCompletionMessageParam[]> {
-    const formattedMessages: ChatCompletionMessageParam[] = [];
+  ): Promise<ChatCompletionMessageParamOpenAI[]> {
+    const formattedMessages: ChatCompletionMessageParamOpenAI[] = [];
 
     for (const message of messages) {
-      const formattedMessage: ChatCompletionMessageParam = {
+      const formattedMessage: ChatCompletionMessageParamOpenAI = {
         role: message.role,
         content: '', // Initialize with an empty string
       };
 
       if (Array.isArray(message.content)) {
-        const contentParts: ChatCompletionContentPart[] = [];
+        const contentParts: ChatCompletionContentPartOpenAI[] = [];
         for (const content of message.content) {
           if (content.type === 'text') {
             contentParts.push({ type: 'text', text: content.text });
@@ -221,7 +229,7 @@ export class OpenAIProvider implements LLMProvider, EmbeddingProvider {
             contentParts.push({
               type: 'image_url',
               image_url: { url: content.url }, // Keep the URL as is for remote
-            } as ChatCompletionContentPart); // Type assertion
+            } as ChatCompletionContentPartOpenAI); // Type assertion
           } else {
             // Convert local image file to base64
             const contentImage = await imageToBase64(content.url);
@@ -229,7 +237,7 @@ export class OpenAIProvider implements LLMProvider, EmbeddingProvider {
             contentParts.push({
               type: 'image_url',
               image_url: { url: urlBase64Image }, // Use the base64 image
-            } as ChatCompletionContentPart); // Type assertion
+            } as ChatCompletionContentPartOpenAI); // Type assertion
           }
         }
       }
@@ -246,7 +254,7 @@ export class OpenAIProvider implements LLMProvider, EmbeddingProvider {
     return formattedMessages;
   }
 
-  private formatTools(tools?: Tool[]): ChatCompletionTool[] | undefined {
+  private formatTools(tools?: Tool[]): ChatCompletionToolOpenAI[] | undefined {
     if (!tools) return undefined;
     return tools.map((tool) => ({
       ...tool,
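The logprobs handling above changes behaviour, not just types: the previous expression `logprobs: options.logprobs !== undefined` sent a boolean derived from the option's presence, so an unset option was sent as `false` and an explicit `false` became `true`. A standalone sketch of the new coercion, with the option type simplified to `boolean | undefined` for illustration:

```typescript
// Simplified illustration of the coercion used in the hunks above.
function coerceLogprobs(logprobs: boolean | undefined): boolean | undefined {
  // Old expression: `logprobs !== undefined`
  //   undefined -> false (field always sent), false -> true (caller's intent inverted)
  // New expression: forward only genuine booleans, otherwise omit the field.
  return typeof logprobs === 'boolean' ? logprobs : undefined;
}

console.log(coerceLogprobs(undefined)); // undefined -> field omitted from the request
console.log(coerceLogprobs(false)); // false
console.log(coerceLogprobs(true)); // true
```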
diff --git a/packages/qllm-lib/src/providers/openrouter/index.ts b/packages/qllm-lib/src/providers/openrouter/index.ts
index e98aca8..9b5fd88 100644
--- a/packages/qllm-lib/src/providers/openrouter/index.ts
+++ b/packages/qllm-lib/src/providers/openrouter/index.ts
@@ -4,7 +4,6 @@ import {
   AuthenticationError,
   RateLimitError,
   InvalidRequestError,
-  ChatMessage,
   LLMOptions,
   Model,
   ChatCompletionResponse,
@@ -14,7 +13,7 @@ import {
 } from '../../types';
 import { OpenAIProvider } from '../openai';
 
-const DEFAULT_MAX_TOKENS = 1024 * 4;
+const DEFAULT_MAX_TOKENS = 1024 * 32;
 const BASE_URL = 'https://openrouter.ai/api/v1';
 const DEFAULT_MODEL = 'qwen/qwen-2-7b-instruct:free';
diff --git a/packages/qllm-lib/src/providers/perplexity/index.ts b/packages/qllm-lib/src/providers/perplexity/index.ts
index ba5d2f6..6077e15 100644
--- a/packages/qllm-lib/src/providers/perplexity/index.ts
+++ b/packages/qllm-lib/src/providers/perplexity/index.ts
@@ -17,7 +17,7 @@ import {
 } from '../../types';
 import { ALL_PERPLEXITY_MODELS, DEFAULT_PERPLEXITY_MODEL } from './models';
 
-const DEFAULT_MAX_TOKENS = 1024 * 4;
+const DEFAULT_MAX_TOKENS = 1024 * 32;
 const DEFAULT_MODEL = 'mixtral-8x7b-instruct';
 const DEFAULT_EMBEDDING_MODEL = 'text-embedding-3-small';
 
@@ -46,21 +46,24 @@ export class PerplexityProvider implements LLMProvider, EmbeddingProvider {
   private getOptions(options: LLMOptions): LLMOptions {
     // Remove undefined and null values
-    const optionsToInclude = {
+    const optionsToInclude: Partial<LLMOptions> = {
       temperature: options.temperature,
       model: options.model,
       maxTokens: options.maxTokens,
-      topP: options.topProbability,
-      topK: options.topKTokens,
-      presencePenalty:
-        options.presencePenalty != null ? Math.max(1, options.presencePenalty) : undefined,
-      frequencyPenalty:
-        options.frequencyPenalty != null ? Math.max(1, options.frequencyPenalty) : undefined,
+      // Explicitly unset logprobs and topLogprobs
+      logprobs: undefined,
+      topLogprobs: undefined,
     };
 
-    return Object.fromEntries(
-      Object.entries(optionsToInclude).filter(([_, v]) => v != null),
+    const filteredOptions = Object.fromEntries(
+      Object.entries(optionsToInclude)
+        .filter(([_, v]) => v != null)
+        .filter(([_, v]) => v !== undefined),
     ) as unknown as LLMOptions;
+
+    // console.log('filteredOptions πŸ”₯ 🍡: ', filteredOptions);
+
+    return filteredOptions;
   }
 
   async generateChatCompletion(params: ChatCompletionParams): Promise<ChatCompletionResponse> {
@@ -69,17 +72,22 @@
     const model = options.model || DEFAULT_MODEL;
     const filteredOptions = this.getOptions(options);
 
-    const response = await this.openAIProvider.generateChatCompletion({
+    const chatRequest: ChatCompletionParams = {
       messages: messages,
       options: {
-        ...filteredOptions,
+        ...filteredOptions, // Include filtered options
         model,
       },
       tools,
       toolChoice,
       parallelToolCalls,
       responseFormat,
-    });
+    };
+
+    // console.log('chatRequest πŸ”₯: ', chatRequest);
+    // console.dir(chatRequest, { depth: null });
+
+    const response = await this.openAIProvider.generateChatCompletion(chatRequest);
 
     return {
       ...response,
@@ -98,17 +106,22 @@
     const model = options.model || DEFAULT_MODEL;
     const filteredOptions = this.getOptions(options);
 
-    const stream = this.openAIProvider.streamChatCompletion({
+    const chatRequest: ChatCompletionParams = {
       messages: messages,
       options: {
-        ...filteredOptions,
+        ...filteredOptions, // Include filtered options
         model,
       },
       tools,
       toolChoice,
       parallelToolCalls,
       responseFormat,
-    });
+    };
+
+    // console.log('chatRequest πŸ”₯ 🍡: ', chatRequest);
+    // console.dir(chatRequest, { depth: null });
+
+    const stream = this.openAIProvider.streamChatCompletion(chatRequest);
 
     for await (const chunk of stream) {
       yield {
diff --git a/packages/qllm-lib/src/providers/qroq/index.ts b/packages/qllm-lib/src/providers/qroq/index.ts
index 194df9b..d0dfa20 100644
--- a/packages/qllm-lib/src/providers/qroq/index.ts
+++ b/packages/qllm-lib/src/providers/qroq/index.ts
@@ -10,18 +10,17 @@ import {
   LLMProviderError,
   Model,
   Tool,
-  ChatMessage,
   EmbeddingProvider,
   EmbeddingRequestParams,
   EmbeddingResponse,
   isTextContent,
   isImageUrlContent,
-  ToolCall,
   ChatMessageWithSystem,
 } from '../../types';
 
 const DEFAULT_MODEL = 'mixtral-8x7b-32768';
 const DEFAULT_EMBEDDING_MODEL = 'text-embedding-ada-002';
+const DEFAULT_MAX_TOKENS = 1024 * 32;
 
 export class GroqProvider extends BaseLLMProvider implements EmbeddingProvider {
   private client: Groq;
@@ -39,7 +38,7 @@ export class GroqProvider extends BaseLLMProvider implements EmbeddingProvider {
   defaultOptions: LLMOptions = {
     model: DEFAULT_MODEL,
-    maxTokens: 1024,
+    maxTokens: DEFAULT_MAX_TOKENS,
   };
 
   async listModels(): Promise<Model[]> {
diff --git a/packages/qllm-samples/CHANGELOG.md b/packages/qllm-samples/CHANGELOG.md
index 276661c..693b325 100644
--- a/packages/qllm-samples/CHANGELOG.md
+++ b/packages/qllm-samples/CHANGELOG.md
@@ -1,5 +1,12 @@
 # qllm-samples
 
+## 1.0.4
+
+### Patch Changes
+
+- Updated dependencies
+    - qllm-lib@3.6.0
+
 ## 1.0.3
 
 ### Patch Changes
diff --git a/packages/qllm-samples/package.json b/packages/qllm-samples/package.json
index 3b00f1e..ab45241 100644
--- a/packages/qllm-samples/package.json
+++ b/packages/qllm-samples/package.json
@@ -1,6 +1,6 @@
 {
   "name": "qllm-samples",
-  "version": "1.0.3",
+  "version": "1.0.4",
"description": "QLLM Samples", "keywords": [ "ai",