Examples, langchain and release (#95)
Signed-off-by: Tomas Pilar <tomas.pilar@ibm.com>
pilartomas authored Mar 21, 2024
1 parent 917ce6b commit d44ec5e
Showing 17 changed files with 92 additions and 107 deletions.
21 changes: 21 additions & 0 deletions .github/workflows/node.js.yml
```diff
@@ -50,6 +50,27 @@ jobs:
       - run: yarn --frozen-lockfile
       - run: yarn test
 
+  examples:
+    runs-on: ubuntu-latest
+
+    env:
+      GENAI_API_KEY: ${{ secrets.TEST_API_KEY }}
+
+    strategy:
+      matrix:
+        node-version: [18.18.x]
+        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
+
+    steps:
+      - uses: actions/checkout@v4
+      - name: Use Node.js ${{ matrix.node-version }}
+        uses: actions/setup-node@v4
+        with:
+          node-version: ${{ matrix.node-version }}
+          cache: 'yarn'
+      - run: yarn --frozen-lockfile
+      - run: yarn examples
+
   build:
     runs-on: ubuntu-latest
 
```
17 changes: 10 additions & 7 deletions README.md
````diff
@@ -28,7 +28,7 @@ The SDK supports both TypeScript and JavaScript as well as ESM and CommonJS.
 
 ## Key features
 
-- ⚡️ Performant - processes 1k of short inputs in about 4 minutes
+- ⚡️ Performant - processes 1k of short inputs in under a minute
 - ☀️ Fault-tolerant - retry strategies and overflood protection
 - 🚦 Handles concurrency limiting - even if you have multiple parallel jobs running
 - 📌 Aligned with the REST API - clear structure that mirrors service endpoints and data
@@ -126,14 +126,15 @@ Standalone API reference is NOT available at the moment, please refer to the [RE
 The following example showcases how you can integrate GenAI into your project.
 
 ```typescript
+import { Client } from '@ibm-generative-ai/node-sdk';
 import { GenAIModel } from '@ibm-generative-ai/node-sdk/langchain';
 
 const model = new GenAIModel({
   modelId: 'google/flan-ul2',
   parameters: {},
-  configuration: {
+  client: new Client({
     apiKey: 'pak-.....',
-  },
+  }),
 });
 ```
 
@@ -171,15 +172,16 @@ console.log(text); // ArcticAegis
 ### Streaming
 
 ```typescript
+import { Client } from '@ibm-generative-ai/node-sdk';
 import { GenAIModel } from '@ibm-generative-ai/node-sdk/langchain';
 
 const model = new GenAIModel({
   modelId: 'google/flan-ul2',
   stream: true,
   parameters: {},
-  configuration: {
+  client: new Client({
     apiKey: 'pak-.....',
-  },
+  }),
 });
 
 await model.invoke('Tell me a joke.', {
@@ -196,15 +198,16 @@ await model.invoke('Tell me a joke.', {
 ### Chat support
 
 ```typescript
+import { Client } from '@ibm-generative-ai/node-sdk';
 import { GenAIChatModel } from '@ibm-generative-ai/node-sdk/langchain';
 import { SystemMessage, HumanMessage } from '@langchain/core/messages';
 
 const client = new GenAIChatModel({
   model_id: 'meta-llama/llama-2-70b-chat',
-  configuration: {
+  client: new Client({
     endpoint: process.env.ENDPOINT,
     apiKey: process.env.API_KEY,
-  },
+  }),
   parameters: {
     decoding_method: 'greedy',
     min_new_tokens: 10,
````
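The switch from an inline `configuration` object to an injected `Client` also means several LangChain wrappers can share one client, and with it one set of credentials, retries, and concurrency limits. A minimal sketch of that pattern under the same package entry points as the README (model ids are illustrative):

```typescript
import { Client } from '@ibm-generative-ai/node-sdk';
import { GenAIModel } from '@ibm-generative-ai/node-sdk/langchain';

// One client holds the credentials and the shared retry/concurrency state
const client = new Client({ apiKey: process.env.GENAI_API_KEY });

// Both models reuse the same client instead of each building their own
const model = new GenAIModel({
  modelId: 'google/flan-ul2',
  parameters: {},
  client,
});
const streamingModel = new GenAIModel({
  modelId: 'google/flan-ul2',
  stream: true,
  parameters: {},
  client,
});
```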
2 changes: 1 addition & 1 deletion examples/chat.ts
```diff
@@ -1,6 +1,6 @@
 import { Client } from '../src/index.js';
 
-import { CHAT_MODEL } from './constants.js';
+import { CHAT_MODEL } from './shared/constants.js';
 
 const client = new Client({
   apiKey: process.env.GENAI_API_KEY,
```
2 changes: 1 addition & 1 deletion examples/generate.ts
```diff
@@ -1,6 +1,6 @@
 import { Client } from '../src/index.js';
 
-import { MODEL } from './constants.js';
+import { MODEL } from './shared/constants.js';
 
 const client = new Client({
   apiKey: process.env.GENAI_API_KEY,
```
4 changes: 2 additions & 2 deletions examples/history.ts
```diff
@@ -1,6 +1,6 @@
 import { Client } from '../src/index.js';
 
-import { CHAT_MODEL } from './constants.js';
+import { CHAT_MODEL } from './shared/constants.js';
 
 const client = new Client({
   apiKey: process.env.GENAI_API_KEY,
@@ -24,7 +24,7 @@ const client = new Client({
   messages: [{ role: 'user', content: 'How are you?' }],
 });
 const { results } = await client.request.chat({
-  conversationId: conversation_id,
+  conversation_id,
 });
 for (const request of results) {
   console.log(request);
```
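The second hunk is more than shorthand cleanup: the request parameter is now the snake_case `conversation_id`, matching the REST API, which is what lets the ES2015 property shorthand work. A hedged sketch of the corrected lookup, with the conversation id stubbed in:

```typescript
import { Client } from '@ibm-generative-ai/node-sdk';

const client = new Client({ apiKey: process.env.GENAI_API_KEY });

// In the real example this id comes from a previous chat response
const conversation_id = 'your-conversation-id';

// Shorthand property, equivalent to { conversation_id: conversation_id }
const { results } = await client.request.chat({ conversation_id });
for (const request of results) {
  console.log(request);
}
```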
5 changes: 3 additions & 2 deletions examples/langchain/llm-chat.ts
```diff
@@ -1,14 +1,15 @@
 import { HumanMessage } from '@langchain/core/messages';
 
 import { GenAIChatModel } from '../../src/langchain/llm-chat.js';
+import { Client } from '../../src/index.js';
 
 const makeClient = () =>
   new GenAIChatModel({
     model_id: 'meta-llama/llama-2-70b-chat',
-    configuration: {
+    client: new Client({
       endpoint: process.env.ENDPOINT,
       apiKey: process.env.API_KEY,
-    },
+    }),
     parameters: {
       decoding_method: 'greedy',
       min_new_tokens: 1,
```
5 changes: 3 additions & 2 deletions examples/langchain/llm.ts
```diff
@@ -1,13 +1,14 @@
+import { Client } from '../../src/index.js';
 import { GenAIModel } from '../../src/langchain/index.js';
 
 const makeClient = (stream?: boolean) =>
   new GenAIModel({
     modelId: 'google/flan-t5-xl',
     stream,
-    configuration: {
+    client: new Client({
       endpoint: process.env.ENDPOINT,
       apiKey: process.env.API_KEY,
-    },
+    }),
     parameters: {
       decoding_method: 'greedy',
       min_new_tokens: 5,
```
2 changes: 1 addition & 1 deletion examples/models.ts
```diff
@@ -1,6 +1,6 @@
 import { Client } from '../src/index.js';
 
-import { MODEL } from './constants.js';
+import { MODEL } from './shared/constants.js';
 
 const client = new Client({
   apiKey: process.env.GENAI_API_KEY,
```
71 changes: 0 additions & 71 deletions examples/prompt-templates.ts

This file was deleted.

File renamed without changes: examples/constants.ts → examples/shared/constants.ts
19 changes: 18 additions & 1 deletion examples/tune.ts
```diff
@@ -1,3 +1,6 @@
+import { blob } from 'node:stream/consumers';
+import { createReadStream } from 'node:fs';
+
 import { Client } from '../src/index.js';
 
 const client = new Client({
@@ -26,13 +29,24 @@ const client = new Client({
   const { results: tuneTypes } = await client.tune.types({});
   console.log(tuneTypes);
 
+  // Upload file for tuning
+  const { result: file } = await client.file.create({
+    purpose: 'tune',
+    file: {
+      name: 'tune_input.jsonl',
+      content: (await blob(
+        createReadStream('examples/assets/tune_input.jsonl'),
+      )) as any,
+    },
+  });
+
   // Create a tune
   const { result: createdTune } = await client.tune.create({
     name: 'Awesome Tune',
     tuning_type: 'prompt_tuning',
     model_id: 'google/flan-t5-xl',
     task_id: 'generation',
-    training_file_ids: ['fileId'],
+    training_file_ids: [file.id],
   });
   console.log(createdTune);
 
@@ -50,4 +64,7 @@
 
   // Delete the tune
   await client.tune.delete({ id: createdTune.id });
+
+  // Delete the file
+  await client.file.delete({ id: file.id });
 }
```
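The new upload step works because `blob()` from `node:stream/consumers` collects a readable stream into a `Blob` that the multipart file upload can consume (the `as any` cast in the diff suggests the schema type is stricter than `Blob`). A minimal sketch of just that conversion:

```typescript
import { blob } from 'node:stream/consumers';
import { createReadStream } from 'node:fs';

// Buffers the entire stream into an in-memory Blob, which is fine for
// small tuning files but worth remembering for large ones
const content = await blob(createReadStream('examples/assets/tune_input.jsonl'));
console.log(`${content.size} bytes ready to upload`);
```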
10 changes: 2 additions & 8 deletions package.json
```diff
@@ -1,6 +1,6 @@
 {
   "name": "@ibm-generative-ai/node-sdk",
-  "version": "2.0.2",
+  "version": "2.0.3",
   "description": "IBM Generative AI Node.js SDK (Tech Preview)",
   "keywords": [
     "ai",
@@ -58,13 +58,7 @@
     "postpack": "pinst --enable",
     "generate": "./scripts/generate.sh",
     "generate:new": "node ./scripts/generate.js",
-    "example:run": "ts-node -r dotenv-flow/config",
-    "example:generate": "yarn run example:run examples/generate.ts",
-    "example:tune": "yarn run example:run examples/tune.ts",
-    "example:history": "yarn run example:run examples/history.ts",
-    "example:file": "yarn run example:run examples/file.ts",
-    "example:chat": "yarn run example:run examples/chat.ts",
-    "example:models": "yarn run example:run examples/models.ts"
+    "examples": "./scripts/examples.sh"
   },
   "peerDependencies": {
     "@langchain/core": ">=0.1.0"
```
11 changes: 11 additions & 0 deletions scripts/examples.sh
```diff
@@ -0,0 +1,11 @@
+#!/bin/bash
+
+set -e
+
+# Run all modules in examples/ directory
+for file in examples/* examples/langchain/*; do
+  if [ -f "$file" ]; then
+    echo "Running example $file"
+    npx ts-node -r dotenv-flow/config "$file" > /dev/null
+  fi
+done
```
11 changes: 7 additions & 4 deletions src/langchain/llm-chat.ts
```diff
@@ -21,10 +21,12 @@ type TextChatInput = TextChatCreateInput & TextChatCreateStreamInput;
 export type GenAIChatModelParams = BaseChatModelParams &
   Omit<TextChatInput, 'messages' | 'prompt_template_id'> & {
     model_id: NonNullable<TextChatInput['model_id']>;
-    configuration?: Configuration;
-  };
+  } & (
+    | { client: Client; configuration?: never }
+    | { client?: never; configuration: Configuration }
+  );
 export type GenAIChatModelOptions = BaseChatModelCallOptions &
-  Partial<Omit<GenAIChatModelParams, 'configuration'>>;
+  Partial<Omit<GenAIChatModelParams, 'client' | 'configuration'>>;
 
 export class GenAIChatModel extends BaseChatModel<GenAIChatModelOptions> {
   protected readonly client: Client;
@@ -47,6 +49,7 @@ export class GenAIChatModel extends BaseChatModel<GenAIChatModelOptions> {
     parent_id,
     use_conversation_parameters,
     trim_method,
+    client,
     configuration,
     ...options
   }: GenAIChatModelParams) {
@@ -60,7 +63,7 @@ export class GenAIChatModel extends BaseChatModel<GenAIChatModelOptions> {
     this.parentId = parent_id;
     this.useConversationParameters = use_conversation_parameters;
     this.trimMethod = trim_method;
-    this.client = new Client(configuration);
+    this.client = client ?? new Client(configuration);
   }
 
   async _generate(
```
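The union added above makes `client` and `configuration` mutually exclusive at the type level: each arm marks the other property `?: never`, so supplying both fails type-checking. A sketch of the call shapes the compiler now sees (constructor arguments trimmed to the essentials):

```typescript
import { Client } from '@ibm-generative-ai/node-sdk';
import { GenAIChatModel } from '@ibm-generative-ai/node-sdk/langchain';

const client = new Client({ apiKey: process.env.GENAI_API_KEY });

// OK: inject an existing client
const viaClient = new GenAIChatModel({
  model_id: 'meta-llama/llama-2-70b-chat',
  client,
});

// OK: let the model construct its own client from a configuration
const viaConfiguration = new GenAIChatModel({
  model_id: 'meta-llama/llama-2-70b-chat',
  configuration: { apiKey: process.env.GENAI_API_KEY },
});

// Type error if uncommented: `client` and `configuration` cannot be mixed,
// because whichever union arm matches marks the other property as `never`
// const invalid = new GenAIChatModel({
//   model_id: 'meta-llama/llama-2-70b-chat',
//   client,
//   configuration: {},
// });
```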
11 changes: 7 additions & 4 deletions src/langchain/llm.ts
```diff
@@ -15,12 +15,14 @@ import {
   TextGenerationCreateOutput,
 } from '../schema.js';
 
-interface BaseGenAIModelOptions {
+type BaseGenAIModelOptions = {
   stream?: boolean;
   parameters?: Record<string, any>;
   timeout?: number;
-  configuration?: Configuration;
-}
+} & (
+  | { client: Client; configuration?: never }
+  | { client?: never; configuration: Configuration }
+);
 
 export type GenAIModelOptions =
   | (BaseGenAIModelOptions & { modelId?: string; promptId?: never })
@@ -41,6 +43,7 @@ export class GenAIModel extends BaseLLM {
     stream = false,
     parameters,
     timeout,
+    client,
     configuration,
     ...baseParams
   }: GenAIModelOptions & BaseLLMParams) {
@@ -51,7 +54,7 @@
     this.timeout = timeout;
     this.isStreaming = Boolean(stream);
     this.parameters = parameters || {};
-    this.#client = new Client(configuration);
+    this.#client = client ?? new Client(configuration);
   }
 
   #createPayload(
```
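Both constructors now resolve the client the same way: an injected instance wins, otherwise one is built from `configuration`. The pattern in isolation, deriving the configuration type from the `Client` constructor itself so no extra export is assumed:

```typescript
import { Client } from '@ibm-generative-ai/node-sdk';

// Derived from the constructor signature to avoid assuming that a
// `Configuration` type is exported from the package root
type Configuration = NonNullable<ConstructorParameters<typeof Client>[0]>;

// Mirrors `client ?? new Client(configuration)` from the diffs above
function resolveClient(client?: Client, configuration?: Configuration): Client {
  return client ?? new Client(configuration);
}
```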
3 changes: 2 additions & 1 deletion tests/e2e/client.test.ts
```diff
@@ -61,7 +61,8 @@ describe('client', () => {
     ]);
   };
 
-  test('should correctly process moderation chunks during streaming', async () => {
+  // TODO remove skip after server bug is fixed or when schema is updated
+  test.skip('should correctly process moderation chunks during streaming', async () => {
     const stream = await makeValidStream({
       min_new_tokens: 1,
       max_new_tokens: 5,
```
