diff --git a/dev-python/aiogram/Manifest b/dev-python/aiogram/Manifest
index 2a65f4c884..a7ffc2d7eb 100644
--- a/dev-python/aiogram/Manifest
+++ b/dev-python/aiogram/Manifest
@@ -1,2 +1,2 @@
-DIST aiogram-3.5.0.tar.gz 1278788 BLAKE2B 1cda3125a693d89c6729181e56fe66b494094a9be55c888bf5941f1061eb051591ae5fef415bb381e579ab433e0512f21fe8ca78933adfd92498f144b3922250 SHA512 0e8631382086c154b3924c420813c61ab23e96f665c3ce89859f68213fe46de66fcfec867e99bfe547852d5437f83924c3b4451658faab5673750620b77beac7
 DIST aiogram-3.6.0.tar.gz 1296416 BLAKE2B 8c2164b3f4c4973ad5d88766d90151ee820ee96c53e5d2ade732685a393d9c1bf4bf88c24e7580c2d4c7ac518d381995609358a831263211a683552bff16f56d SHA512 d19890fe572bd69a3f19dbde0e517281fa14314cc020b612819c8c6f117f480701b8a2b4e7ab1e03ee63f40ae2ba30e821c26510bafec05ded896ffc65cdb9d6
+DIST aiogram-3.7.0.tar.gz 1304350 BLAKE2B 478c8efdf46397bf1d11cddf641874d74ddfa8a4bf80884101342fcde76d138d78ff728e6811d73b9abc489ceefe69edaeeeee15605e11031ce15fc27ee692ac SHA512 ae53bc3fc8c3053771a73c6258cf3d2928e3e9cb3cc76fcb3f2f94a1c9a2c3e7204f0c1f1e03735e2949ea44b050d4f5fb593a11e8d94ba7c9d5dc4cdf3d6539
diff --git a/dev-python/aiogram/aiogram-3.5.0.ebuild b/dev-python/aiogram/aiogram-3.7.0.ebuild
similarity index 100%
rename from dev-python/aiogram/aiogram-3.5.0.ebuild
rename to dev-python/aiogram/aiogram-3.7.0.ebuild
diff --git a/dev-python/lsassy/Manifest b/dev-python/lsassy/Manifest
index 4396656ab6..31c756254f 100644
--- a/dev-python/lsassy/Manifest
+++ b/dev-python/lsassy/Manifest
@@ -1,2 +1,2 @@
 DIST lsassy-3.1.10.tar.gz 1779844 BLAKE2B 94f6bcb7ae6d6d8c01cf40dc3d55342395b8eff70136072070a38f420b0eed95954625270f9fafc9e3ce72f4e93911b863cc2e3ef240514eb7820e8f95d29d7f SHA512 9bfd30f5db93d70daa1242293dfa886628b44f74b809786a1deb100231718d367c6f6d9d54007d0f6765ce84ee00bb571798334be978d03813cb6d7eda2ccf15
-DIST lsassy-3.1.9.tar.gz 1779751 BLAKE2B b4846e365ccd704c69d4944f1a75ee1b01c0dd995ef227bc9188eb64d09ff33ff4491a23fdf89aa2bafb7bf6900789d062154443cf3eb267ae02cd12f499a8eb SHA512 46f03e6445027f3b551f8c7fcdee1c5f2bda1a5b76a2eda8eb45a6a7b73622b988c38ee20855bdc53d4f5444f5f908f7b3df8f2524a685b9dc0545d732e69a3b
+DIST lsassy-3.1.11.tar.gz 1779714 BLAKE2B 771f6969e40a1dffb36f7a04543234b2562c9e82d912b01435eed4f2275352e52f3238e781dde5eb43e72a3dd4ef95dc3fa06c196d304c11ac5fe321f314f276 SHA512 2382fdb3701d49f5d548d005d9949effd8c9df69963f65b3d15b1e5286042d41a126ba26240bd3d4ec3941fabe829d34eb87919159aa4cd776ec21f49e4a2a69
diff --git a/dev-python/lsassy/lsassy-3.1.9.ebuild b/dev-python/lsassy/lsassy-3.1.11.ebuild
similarity index 94%
rename from dev-python/lsassy/lsassy-3.1.9.ebuild
rename to dev-python/lsassy/lsassy-3.1.11.ebuild
index 19129310ca..f358b3dc6d 100644
--- a/dev-python/lsassy/lsassy-3.1.9.ebuild
+++ b/dev-python/lsassy/lsassy-3.1.11.ebuild
@@ -1,4 +1,4 @@
-# Copyright 1999-2023 Gentoo Authors
+# Copyright 1999-2024 Gentoo Authors
 # Distributed under the terms of the GNU General Public License v2
 
 EAPI=8
diff --git a/dev-python/openai/Manifest b/dev-python/openai/Manifest
index f8ee2173f1..d430694e7b 100644
--- a/dev-python/openai/Manifest
+++ b/dev-python/openai/Manifest
@@ -1,2 +1,2 @@
-DIST openai-1.10.0.tar.gz 129205 BLAKE2B 8af9ff3b77a33504e35ad354179d4caa57c1c7c4862d90552ba4c295d25c2e9b267e88f816387e7980f2875566cfc73f6f6fe50105bbd791ede38d1a30f0dd75 SHA512 4555cd2c887a9124e5e22d5f246819e5fb0f8f6fb71f474a62fcea1fa559b1353400e92210a50aaf736131f02078012f98a5e0c68c04eb29edf89c81ff632cd1
 DIST openai-1.16.2.tar.gz 152136 BLAKE2B 901b71b7f8a77679cb782338460b4bbd42334206a1c3cdeb2852bc2b2cb3171578f2e0cfc7705584194f5a806947392184b4371474abeb29469ae44dd6e743c7 SHA512 e05f6011d48c8bef75f31077de2c15018768504c29494955ef7c7031cd650c1434112939811edcc2701604817c995402ec453cec2ba5303f979c194cab393f79
+DIST openai-1.32.0.tar.gz 181341 BLAKE2B ae5ebb5ee57ff10242767d3e1819a9a466ddacd3dca4309b3c18cad45274adace140ba58cbb7047021d839c73d45ac8e3776d3ddcb32efbc53127f126047d67f SHA512 4b01e66b2510df9d5f8d426c76f4f44ee10fc2ca6ec21d07c475cae8bfb379a6f5296fa57455741c423a02ccbc1511ec39cf4fccef9e1912898ffcc6ed31bd96
diff --git a/dev-python/openai/files/1.32.0-README.md b/dev-python/openai/files/1.32.0-README.md
new file mode 100644
index 0000000000..5e351ba03c
--- /dev/null
+++ b/dev-python/openai/files/1.32.0-README.md
@@ -0,0 +1,638 @@
+# OpenAI Python API library
+
+[![PyPI version](https://img.shields.io/pypi/v/openai.svg)](https://pypi.org/project/openai/)
+
+The OpenAI Python library provides convenient access to the OpenAI REST API from any Python 3.7+
+application. The library includes type definitions for all request params and response fields,
+and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
+
+It is generated from our [OpenAPI specification](https://github.com/openai/openai-openapi) with [Stainless](https://stainlessapi.com/).
+
+## Documentation
+
+The REST API documentation can be found [on platform.openai.com](https://platform.openai.com/docs). The full API of this library can be found in [api.md](api.md).
+
+## Installation
+
+> [!IMPORTANT]
+> The SDK was rewritten in v1, which was released November 6th, 2023. See the [v1 migration guide](https://github.com/openai/openai-python/discussions/742), which includes scripts to automatically update your code.
+
+```sh
+# install from PyPI
+pip install openai
+```
+
+## Usage
+
+The full API of this library can be found in [api.md](api.md).
+
+```python
+import os
+from openai import OpenAI
+
+client = OpenAI(
+    # This is the default and can be omitted
+    api_key=os.environ.get("OPENAI_API_KEY"),
+)
+
+chat_completion = client.chat.completions.create(
+    messages=[
+        {
+            "role": "user",
+            "content": "Say this is a test",
+        }
+    ],
+    model="gpt-3.5-turbo",
+)
+```
+
+While you can provide an `api_key` keyword argument,
+we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
+to add `OPENAI_API_KEY="My API Key"` to your `.env` file
+so that your API key is not stored in source control.
+
+### Polling Helpers
+
+When interacting with the API, some actions, such as starting a Run and adding files to vector stores, are asynchronous and take time to complete. The SDK includes
+helper functions which will poll the status until it reaches a terminal state and then return the resulting object.
+If an API method results in an action which could benefit from polling, there will be a corresponding version of the
+method ending in '\_and_poll'.
+
+For instance, to create a Run and poll until it reaches a terminal state you can run:
+
+```python
+run = client.beta.threads.runs.create_and_poll(
+    thread_id=thread.id,
+    assistant_id=assistant.id,
+)
+```
+
+More information on the lifecycle of a Run can be found in the [Run Lifecycle Documentation](https://platform.openai.com/docs/assistants/how-it-works/run-lifecycle).
+
+### Bulk Upload Helpers
+
+When creating and interacting with vector stores, you can use the polling helpers to monitor the status of operations.
+For convenience, we also provide a bulk upload helper that lets you upload several files at once.
+
+```python
+from pathlib import Path
+
+sample_files = [Path("sample-paper.pdf"), ...]
+
+batch = await client.beta.vector_stores.file_batches.upload_and_poll(
+    store.id,
+    files=sample_files,
+)
+```
+
+### Streaming Helpers
+
+The SDK also includes helpers to process streams and handle the incoming events.
+
+```python
+with client.beta.threads.runs.stream(
+    thread_id=thread.id,
+    assistant_id=assistant.id,
+    instructions="Please address the user as Jane Doe. The user has a premium account.",
+) as stream:
+    for event in stream:
+        # Print the text from text delta events
+        if event.type == "thread.message.delta" and event.data.delta.content:
+            print(event.data.delta.content[0].text)
+```
+
+More information on streaming helpers can be found in the dedicated documentation: [helpers.md](helpers.md)
+
+## Async usage
+
+Simply import `AsyncOpenAI` instead of `OpenAI` and use `await` with each API call:
+
+```python
+import os
+import asyncio
+from openai import AsyncOpenAI
+
+client = AsyncOpenAI(
+    # This is the default and can be omitted
+    api_key=os.environ.get("OPENAI_API_KEY"),
+)
+
+
+async def main() -> None:
+    chat_completion = await client.chat.completions.create(
+        messages=[
+            {
+                "role": "user",
+                "content": "Say this is a test",
+            }
+        ],
+        model="gpt-3.5-turbo",
+    )
+
+
+asyncio.run(main())
+```
+
+Functionality between the synchronous and asynchronous clients is otherwise identical.
+
+## Streaming responses
+
+We provide support for streaming responses using Server-Sent Events (SSE).
+
+```python
+from openai import OpenAI
+
+client = OpenAI()
+
+stream = client.chat.completions.create(
+    model="gpt-4",
+    messages=[{"role": "user", "content": "Say this is a test"}],
+    stream=True,
+)
+for chunk in stream:
+    print(chunk.choices[0].delta.content or "", end="")
+```
+
+The async client uses the exact same interface.
+
+```python
+import asyncio
+from openai import AsyncOpenAI
+
+client = AsyncOpenAI()
+
+
+async def main():
+    stream = await client.chat.completions.create(
+        model="gpt-4",
+        messages=[{"role": "user", "content": "Say this is a test"}],
+        stream=True,
+    )
+    async for chunk in stream:
+        print(chunk.choices[0].delta.content or "", end="")
+
+
+asyncio.run(main())
+```
+
+## Module-level client
+
+> [!IMPORTANT]
+> We highly recommend instantiating client instances instead of relying on the global client.
+
+We also expose a global client instance that is accessible in a similar fashion to versions prior to v1.
+
+```py
+import openai
+
+# optional; defaults to `os.environ['OPENAI_API_KEY']`
+openai.api_key = '...'
+
+# all client options can be configured just like the `OpenAI` instantiation counterpart
+openai.base_url = "https://..."
+openai.default_headers = {"x-foo": "true"}
+
+completion = openai.chat.completions.create(
+    model="gpt-4",
+    messages=[
+        {
+            "role": "user",
+            "content": "How do I output all files in a directory using Python?",
+        },
+    ],
+)
+print(completion.choices[0].message.content)
+```
+
+The API is exactly the same as the standard client instance-based API.
+
+This is intended to be used within REPLs or notebooks for faster iteration, **not** in application code.
+
+We recommend that you always instantiate a client (e.g., with `client = OpenAI()`) in application code because:
+
+- It can be difficult to reason about where client options are configured
+- It's not possible to change certain client options without potentially causing race conditions
+- It's harder to mock for testing purposes
+- It's not possible to control cleanup of network connections
+
+## Using types
+
+Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
+
+- Serializing back into JSON, `model.to_json()`
+- Converting to a dictionary, `model.to_dict()`
+
+Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
+
+## Pagination
+
+List methods in the OpenAI API are paginated.
+
+This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:
+
+```python
+from openai import OpenAI
+
+client = OpenAI()
+
+all_jobs = []
+# Automatically fetches more pages as needed.
+for job in client.fine_tuning.jobs.list(
+    limit=20,
+):
+    # Do something with job here
+    all_jobs.append(job)
+print(all_jobs)
+```
+
+Or, asynchronously:
+
+```python
+import asyncio
+from openai import AsyncOpenAI
+
+client = AsyncOpenAI()
+
+
+async def main() -> None:
+    all_jobs = []
+    # Iterate through items across all pages, issuing requests as needed.
+    async for job in client.fine_tuning.jobs.list(
+        limit=20,
+    ):
+        all_jobs.append(job)
+    print(all_jobs)
+
+
+asyncio.run(main())
+```
+
+Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control when working with pages:
+
+```python
+first_page = await client.fine_tuning.jobs.list(
+    limit=20,
+)
+if first_page.has_next_page():
+    print(f"will fetch next page using these details: {first_page.next_page_info()}")
+    next_page = await first_page.get_next_page()
+    print(f"number of items we just fetched: {len(next_page.data)}")
+
+# Remove `await` for non-async usage.
+```
+
+Or just work directly with the returned data:
+
+```python
+first_page = await client.fine_tuning.jobs.list(
+    limit=20,
+)
+
+print(f"next page cursor: {first_page.after}")  # => "next page cursor: ..."
+for job in first_page.data:
+    print(job.id)
+
+# Remove `await` for non-async usage.
+```
+
+## Nested params
+
+Nested parameters are dictionaries, typed using `TypedDict`, for example:
+
+```python
+from openai import OpenAI
+
+client = OpenAI()
+
+completion = client.chat.completions.create(
+    messages=[
+        {
+            "role": "user",
+            "content": "Can you generate an example json object describing a fruit?",
+        }
+    ],
+    model="gpt-3.5-turbo-1106",
+    response_format={"type": "json_object"},
+)
+```
+
+## File uploads
+
+Request parameters that correspond to file uploads can be passed as `bytes`, a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, or a tuple of `(filename, contents, media type)`.
+
+```python
+from pathlib import Path
+from openai import OpenAI
+
+client = OpenAI()
+
+client.files.create(
+    file=Path("input.jsonl"),
+    purpose="fine-tune",
+)
+```
+
+The async client uses the exact same interface. If you pass a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, the file contents will be read asynchronously automatically.
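+
+For illustration, a minimal async upload sketch (reusing the placeholder `input.jsonl` path from the example above):
+
+```python
+import asyncio
+from pathlib import Path
+
+from openai import AsyncOpenAI
+
+client = AsyncOpenAI()
+
+
+async def main() -> None:
+    # The file contents are read asynchronously before being uploaded.
+    await client.files.create(
+        file=Path("input.jsonl"),
+        purpose="fine-tune",
+    )
+
+
+asyncio.run(main())
+```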
+
+## Handling errors
+
+When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `openai.APIConnectionError` is raised.
+
+When the API returns a non-success status code (that is, a 4xx or 5xx
+response), a subclass of `openai.APIStatusError` is raised, containing `status_code` and `response` properties.
+
+All errors inherit from `openai.APIError`.
+
+```python
+import openai
+from openai import OpenAI
+
+client = OpenAI()
+
+try:
+    client.fine_tuning.jobs.create(
+        model="gpt-3.5-turbo",
+        training_file="file-abc123",
+    )
+except openai.APIConnectionError as e:
+    print("The server could not be reached")
+    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
+except openai.RateLimitError as e:
+    print("A 429 status code was received; we should back off a bit.")
+except openai.APIStatusError as e:
+    print("Another non-200-range status code was received")
+    print(e.status_code)
+    print(e.response)
+```
+
+Error codes are as follows:
+
+| Status Code | Error Type                 |
+| ----------- | -------------------------- |
+| 400         | `BadRequestError`          |
+| 401         | `AuthenticationError`      |
+| 403         | `PermissionDeniedError`    |
+| 404         | `NotFoundError`            |
+| 422         | `UnprocessableEntityError` |
+| 429         | `RateLimitError`           |
+| >=500       | `InternalServerError`      |
+| N/A         | `APIConnectionError`       |
+
+### Retries
+
+Certain errors are automatically retried 2 times by default, with a short exponential backoff.
+Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
+429 Rate Limit, and >=500 Internal errors are all retried by default.
+
+You can use the `max_retries` option to configure or disable retry settings:
+
+```python
+from openai import OpenAI
+
+# Configure the default for all requests:
+client = OpenAI(
+    # default is 2
+    max_retries=0,
+)
+
+# Or, configure per-request:
+client.with_options(max_retries=5).chat.completions.create(
+    messages=[
+        {
+            "role": "user",
+            "content": "How can I get the name of the current day in Node.js?",
+        }
+    ],
+    model="gpt-3.5-turbo",
+)
+```
+
+### Timeouts
+
+By default, requests time out after 10 minutes. You can configure this with a `timeout` option,
+which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/#fine-tuning-the-configuration) object:
+
+```python
+import httpx
+from openai import OpenAI
+
+# Configure the default for all requests:
+client = OpenAI(
+    # 20 seconds (default is 10 minutes)
+    timeout=20.0,
+)
+
+# More granular control:
+client = OpenAI(
+    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
+)
+
+# Override per-request:
+client.with_options(timeout=5.0).chat.completions.create(
+    messages=[
+        {
+            "role": "user",
+            "content": "How can I list all files in a directory using Python?",
+        }
+    ],
+    model="gpt-3.5-turbo",
+)
+```
+
+On timeout, an `APITimeoutError` is raised.
+
+Note that requests that time out are [retried twice by default](#retries).
+
+## Advanced
+
+### Logging
+
+We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
+
+You can enable logging by setting the environment variable `OPENAI_LOG` to `debug`.
+
+```shell
+$ export OPENAI_LOG=debug
+```
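+
+Because the standard `logging` module is used, you can also enable logging programmatically. A minimal sketch (this assumes the library's loggers live under the `openai` name, which is an internal detail rather than a documented guarantee):
+
+```python
+import logging
+
+# Attach a stderr handler, then show debug-level records
+# from the (assumed) `openai` logger hierarchy.
+logging.basicConfig()
+logging.getLogger("openai").setLevel(logging.DEBUG)
+```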
+
+### How to tell whether `None` means `null` or missing
+
+In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
+
+```py
+if response.my_field is None:
+    if 'my_field' not in response.model_fields_set:
+        print('Got json like {}, without a "my_field" key present at all.')
+    else:
+        print('Got json like {"my_field": null}.')
+```
+
+### Accessing raw response data (e.g. headers)
+
+The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
+
+```py
+from openai import OpenAI
+
+client = OpenAI()
+response = client.chat.completions.with_raw_response.create(
+    messages=[{
+        "role": "user",
+        "content": "Say this is a test",
+    }],
+    model="gpt-3.5-turbo",
+)
+print(response.headers.get('X-My-Header'))
+
+completion = response.parse()  # get the object that `chat.completions.create()` would have returned
+print(completion)
+```
+
+These methods return a [`LegacyAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_legacy_response.py) object. This is a legacy class, as we're changing it slightly in the next major version.
+
+For the sync client this will mostly be the same, with the exception
+that `content` & `text` will be methods instead of properties. In the
+async client, all methods will be async.
+
+A migration script will be provided & the migration in general should
+be smooth.
+
+#### `.with_streaming_response`
+
+The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
+
+To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
+
+As such, `.with_streaming_response` methods return a different [`APIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object, and the async client returns an [`AsyncAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object.
+
+```python
+with client.chat.completions.with_streaming_response.create(
+    messages=[
+        {
+            "role": "user",
+            "content": "Say this is a test",
+        }
+    ],
+    model="gpt-3.5-turbo",
+) as response:
+    print(response.headers.get("X-My-Header"))
+
+    for line in response.iter_lines():
+        print(line)
+```
+
+The context manager is required so that the response will reliably be closed.
+
+### Making custom/undocumented requests
+
+This library is typed for convenient access to the documented API.
+
+If you need to access undocumented endpoints, params, or response properties, the library can still be used.
+
+#### Undocumented endpoints
+
+To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
+HTTP verbs. Options on the client (such as retries) will be respected when making this
+request.
+
+```py
+import httpx
+
+response = client.post(
+    "/foo",
+    cast_to=httpx.Response,
+    body={"my_param": True},
+)
+
+print(response.headers.get("x-foo"))
+```
+
+#### Undocumented request params
+
+If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
+options.
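+
+For example (the parameter and header names below are placeholders for illustration):
+
+```py
+from openai import OpenAI
+
+client = OpenAI()
+
+completion = client.chat.completions.create(
+    messages=[{"role": "user", "content": "Say this is a test"}],
+    model="gpt-3.5-turbo",
+    # These undocumented values are passed through to the API as-is.
+    extra_query={"my_query_param": "value"},
+    extra_body={"my_body_param": "value"},
+    extra_headers={"x-my-header": "true"},
+)
+```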
+
+#### Undocumented response properties
+
+To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
+can also get all the extra fields on the Pydantic model as a dict with
+[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
+
+### Configuring the HTTP client
+
+You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
+
+- Support for proxies
+- Custom transports
+- Additional [advanced](https://www.python-httpx.org/advanced/#client-instances) functionality
+
+```python
+import httpx
+from openai import OpenAI, DefaultHttpxClient
+
+client = OpenAI(
+    # Or use the `OPENAI_BASE_URL` env var
+    base_url="http://my.test.server.example.com:8083",
+    http_client=DefaultHttpxClient(
+        proxies="http://my.test.proxy.example.com",
+        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
+    ),
+)
+```
+
+### Managing HTTP resources
+
+By default, the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
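+
+For example, a minimal sketch of both approaches:
+
+```py
+from openai import OpenAI
+
+client = OpenAI()
+try:
+    ...  # make requests
+finally:
+    client.close()  # explicitly release the underlying connections
+
+# Or let a context manager close the client on exit:
+with OpenAI() as client:
+    ...  # make requests
+```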
+
+## Microsoft Azure OpenAI
+
+To use this library with [Azure OpenAI](https://learn.microsoft.com/azure/ai-services/openai/overview), use the `AzureOpenAI`
+class instead of the `OpenAI` class.
+
+> [!IMPORTANT]
+> The Azure API shape differs from the core API shape, which means that the static types for responses / params
+> won't always be correct.
+
+```py
+from openai import AzureOpenAI
+
+# gets the API Key from environment variable AZURE_OPENAI_API_KEY
+client = AzureOpenAI(
+    # https://learn.microsoft.com/azure/ai-services/openai/reference#rest-api-versioning
+    api_version="2023-07-01-preview",
+    # https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource
+    azure_endpoint="https://example-endpoint.openai.azure.com",
+)
+
+completion = client.chat.completions.create(
+    model="deployment-name",  # e.g. gpt-35-instant
+    messages=[
+        {
+            "role": "user",
+            "content": "How do I output all files in a directory using Python?",
+        },
+    ],
+)
+print(completion.to_json())
+```
+
+In addition to the options provided in the base `OpenAI` client, the following options are also available:
+
+- `azure_endpoint` (or the `AZURE_OPENAI_ENDPOINT` environment variable)
+- `azure_deployment`
+- `api_version` (or the `OPENAI_API_VERSION` environment variable)
+- `azure_ad_token` (or the `AZURE_OPENAI_AD_TOKEN` environment variable)
+- `azure_ad_token_provider`
+
+An example of using the client with Microsoft Entra ID (formerly known as Azure Active Directory) can be found [here](https://github.com/openai/openai-python/blob/main/examples/azure_ad.py).
+
+## Versioning
+
+This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
+
+1. Changes that only affect static types, without breaking runtime behavior.
+2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals)_.
+3. Changes that we do not expect to impact the vast majority of users in practice.
+
+We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
+
+We are keen for your feedback; please open an [issue](https://www.github.com/openai/openai-python/issues) with questions, bugs, or suggestions.
+
+## Requirements
+
+Python 3.7 or higher.
diff --git a/dev-python/openai/openai-1.10.0.ebuild b/dev-python/openai/openai-1.32.0.ebuild
similarity index 88%
rename from dev-python/openai/openai-1.10.0.ebuild
rename to dev-python/openai/openai-1.32.0.ebuild
index 979ea31ed4..f40612d8f5 100644
--- a/dev-python/openai/openai-1.10.0.ebuild
+++ b/dev-python/openai/openai-1.32.0.ebuild
@@ -1,4 +1,4 @@
-# Copyright 1999-2023 Gentoo Authors
+# Copyright 1999-2024 Gentoo Authors
 # Distributed under the terms of the GNU General Public License v2
 
 EAPI=8
@@ -37,3 +37,8 @@ REQUIRED_USE="${PYTHON_REQUIRED_USE}"
 #	wandb? ( datalib )"
 
 RESTRICT="test"
+
+src_prepare() {
+	cp "${FILESDIR}/${PV}-README.md" ./README.md || die
+	eapply_user
+}
diff --git a/www-apps/seafile-pro-server/Manifest b/www-apps/seafile-pro-server/Manifest
index 657c8e39ba..e3f280c586 100644
--- a/www-apps/seafile-pro-server/Manifest
+++ b/www-apps/seafile-pro-server/Manifest
@@ -1,2 +1,2 @@
 DIST seafile-pro-server_10.0.15_x86-64_CentOS.tar.gz 205551703 BLAKE2B 5dbc9380a0b5c5844163d0dfe456b1ae5b5b2f5400f77e3b46438b0eaa470fb3cea2df1c3b0a4d2a18e1ebb2b3986fd60fe9a57d0967829a4c92524e9d90d0a7 SHA512 22b199e56dcebf2725f63d4eac3427ae99efa5b163bacf81fd48da47391a040bc91d3fd1347d47c090163ce673342f3a89bd33f0332c510ee3f22ee24ec3a223
-DIST seafile-pro-server_11.0.6_x86-64_CentOS.tar.gz 202919105 BLAKE2B 21ad9c1e3d50d8a45badf528661bbde67a68430fb2658e3a4510886a242cfefb8470f0fa3edd023165ea99c886907a5d97b90349a34d08d01714eb2bc3e90b87 SHA512 05d854131717b14189d7108bc2dcd7cffaa05e89601005ccfbc1ad48b140f75aea03e4f7e4f102242d57512393e81d29dbce97e1dfef4fd018c9836e8987f440
+DIST seafile-pro-server_11.0.7_x86-64_CentOS.tar.gz 202964860 BLAKE2B 694af477335ad746c45ee37e07cc159e4ed192c48c7ad608d2db0ef8a03c7ab0dcfaea0e8025ef34ae8cfb354323e3d58608d2ce142b5ad1f64eb27179af1b48 SHA512 22d3d7226f4a90ccecd5efcd1da18b51986e0898b3f5e0b6b10555843291f7245a5d4d8c635b6323555a736a71da3bcf6d0f2b2526b93ce6f650d8bb9f529ed5
diff --git a/www-apps/seafile-pro-server/seafile-pro-server-11.0.6.ebuild b/www-apps/seafile-pro-server/seafile-pro-server-11.0.7.ebuild
similarity index 98%
rename from www-apps/seafile-pro-server/seafile-pro-server-11.0.6.ebuild
rename to www-apps/seafile-pro-server/seafile-pro-server-11.0.7.ebuild
index bf92ebdb7e..a4257154cb 100644
--- a/www-apps/seafile-pro-server/seafile-pro-server-11.0.6.ebuild
+++ b/www-apps/seafile-pro-server/seafile-pro-server-11.0.7.ebuild
@@ -53,7 +53,7 @@ DEPEND="${RDEPEND}"
 REQUIRED_USE="${PYTHON_REQUIRED_USE}"
 
 src_prepare() {
-	eapply "${FILESDIR}"/pillow-10.patch
+	#eapply "${FILESDIR}"/pillow-10.patch
 	#match with cffi in RDEPEND section
 	# sed -e "s|1.14.0|${CFFI_PV}|" -i seahub/thirdpart/cffi/__init__.py || die "sed failed"
 	rm -r seahub/thirdpart/{cffi*,requests*}