L3 Packet Forwarding (L3fwd)

The L3 Packet Forwarding pipeline (name: l3fwd) models a basic L3 router, with MAC destination checking, ether_type lookup, L3 routing, and next-hop processing. The aim is to benchmark the raw L3 routing capabilities of a data-plane technology.

Note that the implemented L3 functionality is rather limited and the pipeline cannot be used in place of a fully fledged IP router: L3fwd does not do MAC learning, does not include an ARP responder, does not perform full RFC 1812 IP header checking, does not implement IPv6 at all, etc. Moreover, the pipeline does not support VRFs, and VLAN support is still a TODO.

Static pipeline

In the upstream direction, the pipeline receives L3 packets encapsulated in an L2 header from the downlink port of the SUT, performs an L3 lookup, and does group processing before forwarding to the uplink port. The downstream pipeline is effectively the same, but the IP lookup tables and group tables are separate between the upstream and downstream directions.

The steps performed during upstream/downstream processing are as follows (see the sketch after the list):

  • MACfwd: a basic MAC table lookup that checks whether the L2 header of the received packet contains the router’s own MAC address(es); if so, the packet is forwarded to the ARPselect module, otherwise it is dropped (subject to the fakedrop policy)
  • ARPselect: direct ARP packets to the infra (currently unimplemented) and IPv4 packets to the L3FIB for L3 processing; drop everything else (subject to the fakedrop policy)
  • L3FIB: perform longest-prefix matching on an IP lookup table and forward packets to the appropriate group table entry for next-hop processing, or drop them if no matching L3 entry is found
  • Group: Group Table for next-hop processing, which comprises the following steps: rewrite the source MAC address to that of the egress port, rewrite the destination MAC address to the (statically generated) next-hop MAC address, decrement the TTL, recompute the IPv4 header checksum, and send the packet to the outgoing port
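
To make the per-packet processing concrete, here is a minimal Python sketch of the upstream fast path. It is illustration only: the packet representation, the table layouts, and the MAC/IP values are assumptions for the example, not TIPSY internals.

import ipaddress

ROUTER_MACS = {"aa:bb:cc:dd:ee:01"}   # hypothetical router MAC address(es)
FAKEDROP = False                      # see the fakedrop parameter below

# L3FIB: prefix -> group table index; the longest matching prefix wins
l3fib = {
    ipaddress.ip_network("10.0.0.0/8"): 0,
    ipaddress.ip_network("10.1.0.0/16"): 1,
}

# Group table: one next-hop action set per entry
group_table = [
    {"smac": "aa:bb:cc:dd:ee:01", "dmac": "de:ad:be:ef:00:01", "port": 1},
    {"smac": "aa:bb:cc:dd:ee:01", "dmac": "de:ad:be:ef:00:02", "port": 2},
]

def drop(pkt):
    # subject to the fakedrop policy: forward anyway instead of dropping
    return ("uplink", pkt) if FAKEDROP else None

def process_upstream(pkt):
    # MACfwd: the packet must carry the router's own MAC address
    if pkt["dmac"] not in ROUTER_MACS:
        return drop(pkt)
    # ARPselect: only IPv4 proceeds to the L3FIB (ARP would go to the infra)
    if pkt["ethertype"] != 0x0800:
        return drop(pkt)
    # L3FIB: longest-prefix match on the destination IP address
    dst = ipaddress.ip_address(pkt["dst_ip"])
    matches = [n for n in l3fib if dst in n]
    if not matches:
        return drop(pkt)
    nh = group_table[l3fib[max(matches, key=lambda n: n.prefixlen)]]
    # Group: next-hop processing
    pkt["smac"], pkt["dmac"] = nh["smac"], nh["dmac"]
    pkt["ttl"] -= 1                   # checksum update shown separately below
    return (nh["port"], pkt)

# Example: a packet to 10.1.2.3 matches the /16 and egresses on port 2.

The IPv4 header checksum recomputation elided above is the standard Internet checksum (RFC 1071); a sketch, assuming access to the raw 20-byte IPv4 header:

def ipv4_checksum(header: bytes) -> int:
    # one's-complement sum over 16-bit words, checksum field zeroed first
    words = [int.from_bytes(header[i:i + 2], "big")
             for i in range(0, len(header), 2)]
    words[5] = 0                      # bytes 10-11 hold the old checksum
    total = sum(words)
    while total >> 16:                # fold the carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF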

Note that the traffic trace generated for the L3fwd pipeline will exercise the L3 pipeline via the MACfwd - ARPselect - L3FIB - Group path, i.e., the traces will not contain ARP packets, packets with foreign MAC destination addresses, or L3 headers other than IPv4. Further note that ARP entries are statically set at the time of pipeline setup (but may dynamically change in an l3-table-update event, see below).

Figure: the L3fwd pipeline (./fig/l3fwd_pipeline.png)

Dynamic scenarios

The L3fwd pipeline defines the following dynamic scenarios.

  • l3-table-update: an entry is added to/removed from the L3FIB
  • group-table-update: an entry is added to/removed from the Group table
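
As an illustration of the first scenario, the sketch below applies a single l3-table-update against the toy L3FIB from the earlier sketch; the event encoding and the choice of random /24 prefixes are assumptions for the example, not the actual TIPSY trace format.

import random
import ipaddress

def l3_table_update(l3fib, group_table_size):
    # randomly remove an existing entry or add a new one
    if l3fib and random.random() < 0.5:
        prefix = random.choice(list(l3fib))
        del l3fib[prefix]
        return {"action": "del", "prefix": str(prefix)}
    prefix = ipaddress.ip_network(
        "10.%d.%d.0/24" % (random.randrange(256), random.randrange(256)))
    l3fib[prefix] = random.randrange(group_table_size)  # points at a next-hop
    return {"action": "add", "prefix": str(prefix)}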

Pipeline configuration

A sample TIPSY configuration for the L3fwd pipeline is shown below:

{
    "pipeline": {
        "name": "l3fwd",
        "upstream-l3-table-size": 100000,
        "upstream-group-table-size": 14,
        "downstream-l3-table-size": 20,
        "downstream-group-table-size": 1,
        "fluct-l3-table": 3,
        "fluct-group-table": 0,
        "fakedrop": false
    }
}

The parameters specific to the L3fwd pipeline are as follows (see the sketch after the list):

  • name: name of the pipeline, must be set to l3fwd for the L3fwd pipeline
  • upstream-l3-table-size: number of destination entries (prefixes) in the L3FIB lookup table, upstream direction
  • upstream-group-table-size: number of group table entries (next-hops), upstream direction
  • downstream-l3-table-size: number of destination entries (prefixes) in the L3FIB lookup table, downstream direction
  • downstream-group-table-size: number of group table entries (next-hops), downstream direction
  • fluct-l3-table: number of l3-table-update events in the L3FIB per second
  • fluct-group-table: number of group-table-update events in the Group Table per second
  • fakedrop: whether to actually drop unmatched packets (false) or send them immediately to the output port (true) for correct rate measurements
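
Since fluct-l3-table and fluct-group-table are per-second rates, the mean period between consecutive updates is 1/rate seconds. A minimal sketch of reading the pipeline section and deriving that period (the file name is hypothetical):

import json

with open("pipeline.json") as f:      # hypothetical configuration file
    conf = json.load(f)["pipeline"]

assert conf["name"] == "l3fwd"        # per the name parameter above
rate = conf["fluct-l3-table"]         # l3-table-update events per second
if rate > 0:
    print("one L3FIB update every %.3f s on average" % (1.0 / rate))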

OVS Implementation: Caveats and considerations

BESS Implementation: Caveats and considerations