Merge pull request #11755 from nextcloud/fix/ai-5min
fix(admin_manual/ai): Add notes about 5min limitation
DaphneMuller authored Apr 22, 2024
2 parents 334f6e5 + e5522e4 commit 2d10a74
Showing 3 changed files with 3 additions and 0 deletions.
1 change: 1 addition & 0 deletions admin_manual/ai/app_context_chat.rst
@@ -71,3 +71,4 @@ Known Limitations
* Language models are likely to generate false information and should thus only be used in situations that are not critical. It's recommended to only use AI at the beginning of a creation process and not at the end, so that the AI output serves as a draft, for example, and not as the final product. Always check the output of language models before using it.
* Make sure to test whether this app meets your use case's quality requirements
* Customer support is available upon request; however, we can't solve false or problematic output, most performance issues, or other problems caused by the underlying model. Support is thus limited to bugs directly caused by the implementation of the app (connectors, API, front-end, AppAPI)
* Due to technical limitations that we are in the process of mitigating, each task currently incurs a time cost of between 0 and 5 minutes in addition to the actual processing time
1 change: 1 addition & 0 deletions admin_manual/ai/app_llm2.rst
@@ -74,3 +74,4 @@ Known Limitations
* Make sure to test whether the language model you are using meets your use case's quality requirements
* Language models are notorious for their high energy consumption; if you want to reduce the load on your server, you can choose smaller or quantized models in exchange for lower accuracy
* Customer support is available upon request; however, we can't solve false or problematic output, most performance issues, or other problems caused by the underlying model. Support is thus limited to bugs directly caused by the implementation of the app (connectors, API, front-end, AppAPI)
* Due to technical limitations that we are in the process of mitigating, each task currently incurs a time cost of between 0 and 5 minutes in addition to the actual processing time
1 change: 1 addition & 0 deletions admin_manual/ai/app_stt_whisper2.rst
@@ -78,3 +78,4 @@ Known Limitations
* Make sure to test whether the language model you are using meets your use case's quality requirements
* Language models are notorious for their high energy consumption; if you want to reduce the load on your server, you can choose smaller or quantized models in exchange for lower accuracy
* Customer support is available upon request; however, we can't solve false or problematic output, most performance issues, or other problems caused by the underlying model. Support is thus limited to bugs directly caused by the implementation of the app (connectors, API, front-end, AppAPI)
* Due to technical limitations that we are in the process of mitigating, each task currently incurs a time cost of between 0 and 5 minutes in addition to the actual processing time
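
A note on the limitation added in all three files above: the 0 to 5 minute overhead is most plausibly a consequence of Nextcloud's background job scheduling, where a queued AI task is only picked up on the next background job run and the admin manual recommends invoking cron.php every 5 minutes. That causal link is an assumption, not something stated in this commit. Assuming a typical installation under /var/www/nextcloud with system cron configured for the web server user, the recommended crontab entry looks like this:

    */5 * * * * php -f /var/www/nextcloud/cron.php

With that schedule, a task submitted right after a cron run can wait nearly the full 5 minutes before processing starts, which matches the 0 to 5 minute range described in the notes.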
