
Commit

update imagens
JoseRFJuniorLLMs committed Jul 24, 2024
1 parent 710f0d6 commit a2a6181
Showing 72 changed files with 146 additions and 189 deletions.
153 changes: 48 additions & 105 deletions .obsidian/workspace.json
@@ -4,117 +4,59 @@
"type": "split",
"children": [
{
"id": "4333aaa5b745cfbe",
"id": "ac456ecc1433b9da",
"type": "tabs",
"children": [
{
"id": "e78b3bb3461a1de2",
"id": "1572df8c48a273a7",
"type": "leaf",
"state": {
"type": "markdown",
"state": {
"file": "RAG/Multmodal RAG.md",
"file": "Vector Index.md",
"mode": "source",
"source": false
}
}
},
{
"id": "4937a6094eea5804",
"id": "af6dc11d19f83bba",
"type": "leaf",
"state": {
"type": "markdown",
"state": {
"file": "RAG/RAG Retrieval e Graph.md",
"file": "Multi-Scale Transformer (MST).md",
"mode": "source",
"source": false
}
}
},
{
"id": "00937658fbf6529a",
"type": "leaf",
"state": {
"type": "release-notes",
"state": {
"currentVersion": "1.6.7"
}
}
},
{
"id": "fb82a77b843b4b66",
"id": "915fb20d19dced5c",
"type": "leaf",
"state": {
"type": "markdown",
"state": {
"file": "RAG/Agentic RAG.md",
"file": "Overfitting.md",
"mode": "source",
"source": false
}
}
},
{
"id": "d101b9bb8fa77856",
"id": "bbb698fbf438c5d5",
"type": "leaf",
"state": {
"type": "markdown",
"state": {
"file": "ML.md",
"mode": "source",
"source": false
}
}
},
{
"id": "e1ff809cf858d0e9",
"type": "leaf",
"state": {
"type": "markdown",
"state": {
"file": "Cross Encoders.md",
"mode": "source",
"source": false
}
}
},
{
"id": "f7cb6a2ccca033b9",
"type": "leaf",
"state": {
"type": "markdown",
"state": {
"file": "Correlation vs Covariance.md",
"mode": "source",
"source": false
}
}
},
{
"id": "5055bf9c581497e6",
"type": "leaf",
"state": {
"type": "markdown",
"state": {
"file": "Covariance.md",
"mode": "source",
"source": false
}
}
},
{
"id": "bc057e155e0216e1",
"type": "leaf",
"state": {
"type": "markdown",
"state": {
"file": "Correlation.md",
"file": "Layer-Wise Training.md",
"mode": "source",
"source": false
}
}
}
],
"currentTab": 7
"currentTab": 1
}
],
"direction": "vertical"
@@ -164,7 +106,7 @@
}
],
"direction": "horizontal",
"width": 400.5
"width": 345.5
},
"right": {
"id": "5612a97ef8df680b",
@@ -180,7 +122,7 @@
"state": {
"type": "backlink",
"state": {
"file": "Covariance.md",
"file": "Multi-Scale Transformer (MST).md",
"collapseAll": false,
"extraContext": false,
"sortOrder": "alphabetical",
@@ -197,7 +139,7 @@
"state": {
"type": "outgoing-link",
"state": {
"file": "Covariance.md",
"file": "Multi-Scale Transformer (MST).md",
"linksCollapsed": false,
"unlinkedCollapsed": true
}
@@ -220,7 +162,7 @@
"state": {
"type": "outline",
"state": {
"file": "Covariance.md"
"file": "Multi-Scale Transformer (MST).md"
}
}
}
@@ -241,44 +183,45 @@
"command-palette:Open command palette": false
}
},
"active": "5055bf9c581497e6",
"active": "af6dc11d19f83bba",
"lastOpenFiles": [
"Correlation vs Covariance.md",
"Covariance.md",
"Correlation.md",
"Pasted image 20240722183124.png",
"Cross Encoders.md",
"Pasted image 20240722183028.png",
"ML.md",
"Pasted image 20240722182643.png",
"RAG/Agentic RAG.md",
"Overfitting.md",
"Multi-Scale Transformer (MST).md",
"Layer-Wise Training.md",
"Vector Index.md",
"Img/Gen_AI.png",
"Img/Advanced_RAG.png",
"Img/Self_Learning_LLMs.png",
"Img/Modular_RAG_Framework.png",
"Img/Advanced_RAG_Tecniques.png",
"Img/Embedding_Models_RAG.png",
"Img/Developing_LLM.png",
"Img/Semantic_Search.png",
"Img/Knowledge_Graph_RAG_Single_Store.png",
"Img/Two_Stage_Retrieval_System.png",
"RAG/RAG.md",
"RAG/RAG Retrieval e Graph.md",
"img/RAG_vs_Vector_RAG.png",
"Pasted image 20240719194205.png",
"img/ML_ENCODER.png",
"ML Ecoder.md",
"RAG/Multmodal RAG.md",
"README.md",
"img/rag_query_response.png",
"LLMs_road.png",
"img/VETOR_HYBRID.png",
"LLMs Road Map.md",
"Relative Absolute Error (RAE).md",
"img/rag_reranking.png",
"RAG/RAG Pain Points.md",
"RAG/Advanced RAG LimaIndex & Claude 3.md",
"RAG/Adaptive RAG.md",
"LLMs/Self-Learning LLMs.md",
"LLMs/Methods for improving LLMs.md",
"LLMs/LLMs APP.md",
"LLMs/How LLMs Are Built.md",
"LLMs/Enhancing LLMs.md",
"LLMs/Components of an LLMs.md",
"LLMs/Agent LLMs Calls.md",
"ML/ML.md",
"ML",
"LLMs/LLMs Road Map.md",
"Augmentation.md",
"AUC.md",
"Anomaly Detection in Microsoft Azure.md",
"Anomaly Detection.md",
"AI frameworks.md",
"Accuracy.md",
"RAG/RAG Retrieval e Graph.md",
"RAG/RAG Re-Ranking.md",
"RAG/RAG Query Guardrails vs Response Guardrails.md",
"RAG/Types of Embedding Models for RAG.md",
"RAG/Two-Stage Rretrieval System.md",
"RAG/Text Splitting RAG.md",
"RAG/Text & Knowledge Graph Embeddings.md",
"RAG/Systematic RAG Workflow.md",
"RAG/RAG vs FineTuning.md",
"RAG/RAG Using Llama 3.md",
"RAG/RAG Stack.md",
"LLMs/LLM models.md",
"RAG/RAG Retrieval Sources.md",
"FineTuning",
"LangChain",
"LLMs",
2 changes: 1 addition & 1 deletion Correlation vs Covariance.md
@@ -1,3 +1,3 @@
![[Pasted image 20240722183124.png]]
![[Correlation_vs_Covariance.png]]

[[Correlation]] vs [[Covariance]]
2 changes: 1 addition & 1 deletion Cross Encoders.md
@@ -1,3 +1,3 @@
![[Pasted image 20240722183028.png]]
![[CrossEncoders.png]]

[[Cross Encoders]]
File renamed without changes.
Empty file added Improving search.md
Empty file.
Empty file added Integration.md
Empty file.
2 changes: 1 addition & 1 deletion LLMs/Agent LLMs Calls.md
@@ -1,4 +1,4 @@
![[20240715222340.png]]LLM Agents are a capable framework bringing LLM performance to the next level.
![[Agent_Framework_Executing_LLM.png]]LLM Agents are a capable framework bringing LLM performance to the next level.

Agent framework leverages a Large Language Model to act as a decision engine capable of solving complex tasks by multi-stage reasoning.

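A minimal sketch of the decision-engine loop this note describes: the LLM either calls a tool or returns a final answer, and each observation is fed back for the next reasoning step. `call_llm` and the tool registry are hypothetical stand-ins, not APIs from the notes in this commit.

```python
# Illustrative agent loop: the LLM acts as a decision engine that either
# calls a tool or returns a final answer. `call_llm` is a stand-in for any
# chat-completion client; replace it with a real API call.
from typing import Callable, Dict


def call_llm(prompt: str) -> str:
    """Stand-in for an LLM call; a real implementation would query a model."""
    return "FINAL: (model output would appear here)"


TOOLS: Dict[str, Callable[[str], str]] = {
    "search": lambda q: f"(search results for '{q}')",
    "calculator": lambda expr: str(eval(expr)),  # toy example only
}


def run_agent(task: str, max_steps: int = 5) -> str:
    context = f"Task: {task}"
    for _ in range(max_steps):
        decision = call_llm(context)
        if decision.startswith("FINAL:"):                    # model decided to answer
            return decision.removeprefix("FINAL:").strip()
        tool_name, _, tool_input = decision.partition(":")   # e.g. "search: query"
        observation = TOOLS.get(tool_name.strip(), lambda x: "unknown tool")(tool_input.strip())
        context += f"\n{decision}\nObservation: {observation}"  # multi-stage reasoning
    return "No answer within step budget."


print(run_agent("What is 2 + 2?"))
```
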
2 changes: 1 addition & 1 deletion LLMs/Developing an LLMs.md
@@ -1,4 +1,4 @@
![[20240715221155.png]]
![[Developing_LLM.png]]

Stage 1: Building
Stage 2: Pre-training
Empty file added LLMs/LLM models.md
Empty file.
Empty file added LLMs/LLM-Generated Content.md
Empty file.
2 changes: 1 addition & 1 deletion LLMs/LLMs APP.md
@@ -1 +1 @@
![[20240716000730.png]]
![[first_llm.png]]
File renamed without changes.
2 changes: 1 addition & 1 deletion LLMs/LLMs, chunking strategies.md
@@ -1,4 +1,4 @@
![[20240715234606.png]]
![[LLM_Chunking_Strategies.png]]

[[LLMs, chunking strategies]]

2 changes: 1 addition & 1 deletion LLMs/LLMs.md
@@ -13,4 +13,4 @@ Chunking
Embedding
Vector Database

![[20240715210016.png]]
![[Gen_AI.png]]
2 changes: 1 addition & 1 deletion LLMs/Self-Learning LLMs.md
@@ -1,4 +1,4 @@
![[20240715215838.png]]
![[Self_Learning_LLMs.png]]

Self-learning LLM framework enables an LLM to independently learn previously unknown knowledge through self-assessment of their own hallucinations.

Empty file added Layer-Wise Training.md
Empty file.
1 change: 0 additions & 1 deletion ML.md

This file was deleted.

1 change: 1 addition & 0 deletions ML/ML.md
@@ -0,0 +1 @@
![[Choosing_The_Right.png]]
11 changes: 11 additions & 0 deletions Multi-Scale Transformer (MST).md
@@ -0,0 +1,11 @@
The MST is a type of transformer that processes text at different scales and levels of detail. This allows the model to capture relevant information at multiple levels, from individual words up to sentences and entire texts.

The authors also used a technique called "[[Layer-Wise Training]]" to train the model's embeddings in separate layers. This helps prevent [[Overfitting]] and lets the model learn more complex relationships between words and texts.

The combination of these techniques allowed GPT-4 to generate high-quality embeddings that capture relevant information about the context in which words and texts are used.

In addition, GPT-4 also uses a technique called "Knowledge Distillation" to transfer knowledge from the trained model to smaller models. This allows it to generate high-quality embeddings even when large amounts of training data are not available.

It is important to note that GPT-4 is a variation of GPT-3 and uses the same embedding techniques as the original model, with some adjustments to improve performance and efficiency.
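
A minimal PyTorch sketch of greedy layer-wise training in the spirit of this note: each transformer block is unfrozen and trained in turn while earlier blocks stay frozen. The model, data, and objective below are toy stand-ins, not a description of GPT-4's actual training setup.

```python
# Illustrative greedy layer-wise training: each transformer block is trained
# in turn while the earlier blocks stay frozen.
import torch
import torch.nn as nn

d_model, n_layers = 64, 3
blocks = nn.ModuleList(
    [nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True) for _ in range(n_layers)]
)
head = nn.Linear(d_model, d_model)           # toy reconstruction head
x = torch.randn(8, 16, d_model)              # dummy batch: (batch, seq, dim)

for depth in range(1, n_layers + 1):
    # Freeze everything, then unfreeze only the block being added at this stage.
    for p in blocks.parameters():
        p.requires_grad = False
    for p in blocks[depth - 1].parameters():
        p.requires_grad = True

    params = list(blocks[depth - 1].parameters()) + list(head.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)

    for _ in range(10):                       # a few toy steps per stage
        h = x
        for block in blocks[:depth]:
            h = block(h)
        loss = nn.functional.mse_loss(head(h), x)   # toy objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"stage {depth}: loss {loss.item():.4f}")
```

Training one small block at a time keeps each stage's optimization problem narrow, which is one way a layer-wise schedule can help curb overfitting when data is limited.
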
2 changes: 1 addition & 1 deletion NLP/Semantic Search.md
@@ -1,4 +1,4 @@
![[20240715221527.png]]
![[Semantic_Search.png]]


[[Text Embedding]]: The first step is to convert the text data (documents, queries, etc.) into high-dimensional vector representations using a pre-trained language model like BERT, GPT, or custom models. These vectors capture the semantic meaning of the text in a dense numerical form.
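
A minimal sketch of the embedding-and-ranking step described in this note, assuming a toy hashed bag-of-words embedder in place of a pre-trained model such as BERT:

```python
# Illustrative semantic search: embed documents and a query, then rank by
# cosine similarity. The toy hashing embedder stands in for a real model.
import numpy as np

DIM = 256

def embed(text: str) -> np.ndarray:
    """Toy embedding: hashed bag of words. A real system would call a model."""
    vec = np.zeros(DIM)
    for token in text.lower().split():
        vec[hash(token) % DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

docs = [
    "Vector databases store dense embeddings for fast retrieval.",
    "Cross encoders re-rank candidate passages for a query.",
    "Overfitting happens when a model memorizes its training data.",
]
doc_vecs = np.stack([embed(d) for d in docs])

query = "How are embeddings stored for retrieval?"
scores = doc_vecs @ embed(query)             # cosine similarity (vectors are unit-norm)
for idx in np.argsort(-scores):
    print(f"{scores[idx]:.3f}  {docs[idx]}")
```
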
Empty file added Overfitting.md
Empty file.
Empty file added Query Guardrails.md
Empty file.
2 changes: 1 addition & 1 deletion RAG/AI Agents Use Case.md
@@ -1 +1 @@
![[20240715222611.png]]
![[AI_Real_World.png]]
2 changes: 1 addition & 1 deletion RAG/Advanced RAG LimaIndex & Claude 3.md
@@ -1 +1 @@
![[20240716000446.png]]
![[Advanced_RAG_LIamaIndex_Claude3.png]]
2 changes: 1 addition & 1 deletion RAG/Advanced RAG Techniques.md
@@ -1 +1 @@
![[20240715220423.png]]
![[Advanced_RAG_Tecniques.png]]
2 changes: 1 addition & 1 deletion RAG/Advanced RAG.md
@@ -1 +1 @@
![[20240715214731.png]]
![[Advanced_RAG.png]]
2 changes: 1 addition & 1 deletion RAG/Improving RAG Pipeline.md
@@ -1,2 +1,2 @@

![[improving_rag_pipeline.png]]
![[improving_rag_pipeline.png]]
3 changes: 2 additions & 1 deletion RAG/Knowledge Graph RAG with SingleStore.md
@@ -1 +1,2 @@
![[20240715221947.png]]
![[Knowledge_Graph_RAG_Single_Store.png]]

2 changes: 1 addition & 1 deletion RAG/Modular RAG Framework.md
@@ -1 +1 @@
![[20240715220117.png]]
![[Modular_RAG_Framework.png]]
3 changes: 3 additions & 0 deletions RAG/RAG Query Guardrails vs Response Guardrails.md
@@ -0,0 +1,3 @@
![[rag_query_response.png]]

[[Query Guardrails]] vs [[Response Guardrails]]
4 changes: 2 additions & 2 deletions RAG/RAG Re-Ranking.md
@@ -1,8 +1,8 @@
![[rag_reranking.png]]

LLMs can acquire new information in at least two ways:
1. Fine-tuning
2. RAG (retrieval augmented generation)
1. [[Fine-tuning]]
2. [[RAG]] (retrieval augmented generation)

Retrieval-augmented generation (RAG) is the practice of extending the “memory” or knowledge of LLM by providing access to information from an external data source.

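A minimal sketch of retrieve-then-re-rank as described in this note, assuming toy scoring functions: a cheap lexical score stands in for first-stage retrieval and a pairwise scorer stands in for a cross-encoder re-ranker.

```python
# Illustrative two-stage retrieval with re-ranking: a cheap first-stage score
# produces candidates, and a (stand-in) re-ranker reorders the top ones before
# they are passed to the LLM.
from difflib import SequenceMatcher

def first_stage_score(query: str, doc: str) -> float:
    """Cheap lexical overlap score (stand-in for a bi-encoder / vector index)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d)

def rerank_score(query: str, doc: str) -> float:
    """Stand-in for a cross-encoder that scores the query-document pair jointly."""
    return SequenceMatcher(None, query.lower(), doc.lower()).ratio()

corpus = [
    "RAG extends an LLM's knowledge with an external data source.",
    "Fine-tuning updates the model weights on new data.",
    "Re-ranking reorders retrieved passages by relevance to the query.",
    "Chunk size affects how much context each retrieved passage carries.",
]

query = "How does re-ranking improve retrieved passages?"
candidates = sorted(corpus, key=lambda d: first_stage_score(query, d), reverse=True)[:3]
reranked = sorted(candidates, key=lambda d: rerank_score(query, d), reverse=True)

for doc in reranked:
    print(f"{rerank_score(query, doc):.3f}  {doc}")
```
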
2 changes: 1 addition & 1 deletion RAG/RAG Retrieval Sources.md
@@ -1,4 +1,4 @@
![[20240715235013.png]]
![[RAG_Retrieval_Sources.png]]

[[Unstructured Data]] (Text): This includes plain text documents, web pages, and other free-form textual sources.

12 changes: 6 additions & 6 deletions RAG/RAG Stack.md
@@ -1,4 +1,4 @@
![[20240716000053.png]]
![[RAG_Stack.png]]

But to make RAG work perfectly, here are some key points to consider:
1. Quality of External Knowledge Source
@@ -7,12 +7,12 @@ But to make RAG work perfectly, here are some key points to consider:

3. [[Chunking]] Size & Retrieval Strategy: Experiment with different chunk sizes to find the optimal length for context retrieval.

4. Integration with Language Model: The way the retrieved information is integrated with the language model's generation process is crucial.
4. [[Integration]] with Language Model: The way the retrieved information is integrated with the language model's generation process is crucial.

5. Evaluation & Fine-tuning: Evaluating the performance of the RAG model on relevant datasets and tasks is important to identify areas for improvement.
5. [[Evaluation]] & Fine-tuning: Evaluating the performance of the RAG model on relevant datasets and tasks is important to identify areas for improvement.

6. Ethical Considerations: Ensure that the external knowledge source is unbiased and does not contain offensive or misleading information.
6. [[Ethical Considerations]]: Ensure that the external knowledge source is unbiased and does not contain offensive or misleading information.

7. Vector database: Having a vector database that supports fast ingestion, retrieval performance, hybrid search is utmost important.
7. [[Vector Database]]: Having a vector database that supports fast ingestion, retrieval performance, hybrid search is utmost important.

8. LLM models: Consider LLM models that are robust and fast enough to build your RAG application.
8. [[LLM models]]: Consider LLM models that are robust and fast enough to build your RAG application.
