
docs(aiml): initial readme goals for ai apis for network configuration #467

Merged 2 commits on Dec 9, 2024
1 change: 1 addition & 0 deletions USECASES.md
@@ -89,6 +89,7 @@ The diagram below shows the packet pipeline and packet processing layers and the
![VNIC and NVME Offload](doc/images/API-VNIC-NVME-Use-Case.png)

The table below provides the datapaths where each one has a specific objective and combining all of these objectives results in the above diagram.

| | Objective | Datapath Service Chain |
| - | :-------- | :--------------------- |
| 1 | Basic NIC | Host ↔ VNIC ↔ IP ↔ Eth ↔ Wire |
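The service-chain rows in the table above can be modeled as ordered lists of stages. A minimal illustrative sketch (the stage names come from the table; the `describe` helper is hypothetical, not part of any OPI API):

```python
# Illustrative only: models a datapath service chain as an ordered list of stages.
BASIC_NIC = ["Host", "VNIC", "IP", "Eth", "Wire"]

def describe(chain):
    """Renders a service chain in the table's arrow notation."""
    return " <-> ".join(chain)

print(describe(BASIC_NIC))  # Host <-> VNIC <-> IP <-> Eth <-> Wire
```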
66 changes: 66 additions & 0 deletions aiml/README.md
@@ -1 +1,67 @@
# OPI AI/ML APIs

## Overview

The goal of the Open Programmable Infrastructure AI/ML APIs is to define a common interface for the configuration
and management of AI/ML services and network topologies on DPU/IPUs.

The AI network topologies allow underlying Ethernet connectivity to create a fabric over which GPU clusters can communicate.

### Network Topology View

### DPU/IPU Network Fabric Path

The following diagram shows the various objects, as seen by the DPU/IPU, that help manage network fabric connectivity and
offer network services.

```mermaid
block-beta
columns 5
block:T
S["GPU"]
end
space
space
space
block:A
columns 1
W["DPU/IPU"]
block:B
columns 1
Z["Policy"]
end
block:C
columns 1
X["Tunnel"]
end
block:D
columns 1
Y["IP"]
end
block:E
columns 1
V["Ethernet"]
end
block:F
columns 1
U["Port/Interface"]
end
end
space:4
if(["Physical Interface"])
T-- "Interface Object" -->A
A-->if
style W fill:#0000,stroke:#0000,stroke-width:0
```

The data traffic path from the GPU traverses the DPU/IPU via the PCIe interface between the GPU and the DPU/IPU. The
configuration is handled by the interface objects and covers the Policy (QoS parameters), the tunnel, the IP address assignment,
the Ethernet configuration settings, and any interface object settings for items such as RDMA and MTU size.
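Since the AI/ML APIs are still at the goal stage, no concrete interface exists yet. As a sketch only, the following hypothetical Python model (all class and field names are illustrative, not part of any OPI specification) shows how the per-interface objects from the diagram might be grouped:

```python
# Hypothetical sketch of the interface objects from the diagram above.
# None of these names are defined by OPI; they mirror the diagram's layers.
from dataclasses import dataclass, field

@dataclass
class Policy:
    qos_priority: int = 0          # e.g. traffic priority for GPU flows

@dataclass
class Tunnel:
    encap: str = "vxlan"           # assumed fabric encapsulation
    vni: int = 0

@dataclass
class IpConfig:
    address: str = ""
    prefix_len: int = 24

@dataclass
class EthernetConfig:
    mtu: int = 9000                # jumbo frames are common on AI fabrics

@dataclass
class InterfaceObject:
    """Per-interface configuration as seen by the DPU/IPU."""
    name: str
    policy: Policy = field(default_factory=Policy)
    tunnel: Tunnel = field(default_factory=Tunnel)
    ip: IpConfig = field(default_factory=IpConfig)
    ethernet: EthernetConfig = field(default_factory=EthernetConfig)
    rdma_enabled: bool = True      # RDMA offload for GPU-to-GPU traffic

def configure_gpu_interface(name: str, address: str, vni: int) -> InterfaceObject:
    """Builds the configuration for one GPU-facing PCIe interface."""
    iface = InterfaceObject(name=name)
    iface.ip.address = address
    iface.tunnel.vni = vni
    iface.policy.qos_priority = 5
    return iface

iface = configure_gpu_interface("gpu0", "10.0.0.1", vni=100)
print(iface.ethernet.mtu)  # 9000
```

In a real implementation these objects would likely be protobuf messages configured over gRPC, as in other OPI API areas, but that choice is not stated in this README.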

## Demos

## Clients

## Reference implementations

## Documentation for reference to other specifications and implementations