Computer Networking: A Top-Down Approach (7th Edition)
- Contents
- 1.1 what is the Internet?
- Overview
- 1.2 Network Edge
- 1.3 Network Core
- 1.4 Evaluation Metrics in networks
- 1.5 Simplified OSI Model
- 2.1 Structure of Network Application
- 2.2 Protocol of Application Layer
- 2.3 Web Servers
- 2.4 Mail Servers
- 2.5 DNS Servers
- 2.6 video streaming and content distribution networks
- 3.1 transport-layer VS network-layer
- 3.2 transport-layer services
- 3.3 transport-layer protocol
- 3.4 transport-layer segment
- 3.5 principles of reliable data transfer
- 3.6 principles of congestion control
- 3.7 TCP congestion control
- Network of networks
- connection between networks
ex. mobile network, home networks, institutional networks
- end hosts: running network apps on terminals
- interconnection devices: router, switch, ...
- links: copper, fiber, radio, wireless...
- Operating Software
- application programs
- protocols: rules for communications
network edge(end host->edge router)---(link)---network core(router->ISP)----network edge
- network edge ⊃ end hosts = end systems ⊃ clients and servers
- network core: connects edges ⊃ interconnected devices: routers, switches
- physical media: wired, wireless communication links
- bandwidth ⬆ internet speed ⬆
- end host -> Access Network -> edge router => ISP(Internet Service Provider)
- connects subscribers to a particular service provider and, through the carrier network, to other networks such as the Internet.
- (central office) shared / dedicated
dedicated access network
- DSL modem -> splitter
- DSL phone line: only one user monopolizes this connection
- voice(~ 4kHz) VS data(4kHz ~)
- (central office) DSL Access Multiplexer
- voice->telephone network VS data->ISP
shared access network
- cable modem -> splitter
- coaxial cable: cable company provides 1 signal to many users
- how? by FDM(Frequency Division Multiplexing): divide the broadband into multiple channels (video-cnn, ... & data)
- (central office) CMTS = Cable Modem Termination System = cable company
- HFC(Hybrid Fiber Coax)
- ISP
- wired devices, (wireless devices ->) WAP, wired Ethernet => router
- cable or DSL modem, Optical Network Terminal -> splitter
- FTTH(Fiber To The Home)
- central office or Headend
- ISP
- Wireless Access Point: a device that creates a wireless Local Area Network. An access point connects to a wired router, switch, or hub via an Ethernet cable, and projects a Wi-Fi signal to a designated area.
- AP, institutional mail and web servers => Ethernet switch
- (gateway) Institutional router
- Institutional link
- ISP
my network edge | network core | opponent's network edge |
---|---|---|
[end host(wireless)---(wireless)---WAP---(wire)---edge router] | (routers) | [edge router -----end host] |
- shared network
- WiFi(Wireless Fidelity):wireless LAN within the building(coverage:35m)
- cellular network(3G,4G(LTE),5G): wireless WAN provided by cellular operator(coverage:Nkm)
the physical materials that are used to store or transmit information in data communications
- guided media: signals propagate in solid(wired) media
- TP(twisted-pair) cable: two insulated copper wires
- behind the ethernet
- Coaxial cable: insulated copper conductor
- provide broadband but too heavy to use now...
- Fiber-optic Cable: glass fiber transmitting pulse
- high speed, low error rate => good for electromagnetic noise
- TP(twisted-pair) cable: two insulated copper wires
- unguided media: signals propagate freely(wireless) by electromagnetic wave
- radio link types: WiFi, cellular, satellite, terrestrial microwave...
- connecting network edges by access network by link
- the mesh of interconnected routers
- 2 functions of network core
- routing: Get a path from source host -> destination host by using routing algorithms
- forwarding: Move packets from the before router -> the next router by using forwarding table
circuit switching | packet switching |
---|---|
physical path | no physical path |
call setup, resource reservation (in advance, the entire bandwidth is reserved) | no call setup, no resource reservation |
how to allocate channel? FDM(frequency), TDM(time) | how to transmit? store-and-forward: the router buffers(waits for) all bits of a packet before forwarding it to the next router |
all packets use the same path | packets travel independently |
many users sharing a link | full link capacity during the end-end delay |
low speed, only 10 users | 3.5x speed, but beyond 35 users.. low QoS(Quality of Service) |
traditional telephone networks | handling data |
host < access ISP < regional ISP < IXP < Tier 1 ISP or Content Provider
- Internet eXchange Points & peering link: connects between competitor ISPs
- Tier 1 commercial ISP e.g. AT&T: national & global coverage
- Content Provider e.g. Google: private network that connects its own data center
delay, packet loss, throughput
Delay taken to deliver a packet in the route of "source => nodal processing -> queueing -> transmission -> propagation => destination"
- N = no. of packets
- M = no. of hops
- a packet/sec = average packet arrival rate
- L bits = size of packet (< MTU)
- R bps = link transmission rate = link capacity = link bandwidth
- d meters = distance = length of physical link
- s = signal speed
- processing delay: time taken to check bit errors and the destination address in the packet header before forwarding a packet
- dproc -> depends on the quality of the router
- queueing delay: time a packet waits in a router's queue until it can be transmitted
- dqueue ∝ traffic intensity = I = La/R : highly variable -> depends on the #. of network users
- I ≈ 0 -> avg dqueue = small
- I ≈ 1 -> avg dqueue = large
- I > 1 -> avg dqueue = infinite
- transmission delay: time needed to push the packet's bits onto the link
- dtrans = L/R -> depends on the link transmission rate
- propagation delay: time for a bit to travel from one end of the physical link to the other
- dprop = d/s -> depends on the link length and signal speed
- => End-To-End delay ≈ (M + N) * L/R
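The delay components above can be combined into a quick per-hop calculator; the packet size, link rate, and distance below are made-up illustrative numbers, not from the notes:

```python
def nodal_delay(L_bits, R_bps, d_meters, s_mps, d_proc=0.0, d_queue=0.0):
    """Per-hop delay = processing + queueing + transmission + propagation."""
    d_trans = L_bits / R_bps   # time to push all L bits onto the link
    d_prop = d_meters / s_mps  # time for one bit to cross the link
    return d_proc + d_queue + d_trans + d_prop

# hypothetical: 1,500-byte packet, 10 Mbps link, 100 km, signal speed 2e8 m/s
delay = nodal_delay(L_bits=12_000, R_bps=10e6, d_meters=100_000, s_mps=2e8)
# dtrans = 1.2 ms, dprop = 0.5 ms -> 1.7 ms for this hop
```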
- How many packets are lost in transmission = PLR(Packet Loss Rate) (<-> PDR(Packet Delivery Rate))
- when arriving packets exceed the queue capacity of a buffer, they are dropped(loss) => host: re-transmission / network: wasted resources / user: perceived delay
- = how much traffic that link can transmit between sender <-> receiver in unit time (bits/sec)
- instantaneous: throughput at a given point in time
- average: throughput on the average
- throughput ∝ bottleneck link == min(Rs, Rc, R/#. connection)
- usually the network core(R) is built not to be the bottleneck.
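The min(Rs, Rc, R/#connections) bottleneck rule can be sketched directly (the rates below are hypothetical):

```python
def avg_throughput(Rs, Rc, R, n_connections):
    """Per-connection throughput = the slowest of: server access link Rs,
    client access link Rc, and the fair share R/n of the shared core link."""
    return min(Rs, Rc, R / n_connections)

# hypothetical: Rs = 2 Mbps, Rc = 1 Mbps, core R = 5 Mbps shared by 10 connections
t = avg_throughput(2e6, 1e6, 5e6, 10)  # core fair share 0.5 Mbps is the bottleneck
```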
- why layering? modularization => maintenance 👍, system update 👍 in complex system
layer | explanation | protocol | Protocol Data Unit for encapsulation | controlled by |
---|---|---|---|---|
application | support network applications | HTTP, SMTP, DNS, FTP | message | app developer |
transport | data transfer between processes | TCP, UDP | + segment | OS |
network | router finds path between hosts | IP, routing protocols | + datagram | OS |
link | switch transfers data between neighboring nodes | Ethernet, WiFi | + frame | OS |
physical | hub defines the means to transmit bits on the wire (cable, radio) | - | + bit | OS |
network apps(ex. gmail, game, youtube, zoom, netflix) work only on end systems not on network core devices
model | data consumer | data provider | for scaling | e.g. |
---|---|---|---|---|
client-server model | client: can be on/off, has a dynamic IP address | server: should be always on, has a permanent IP address | servers↑ data centers↑ | |
PeerToPeer model | arbitrary end systems (peers): can be on/off, have dynamic IP addresses | the same peers serve each other | peers↑(self-scalability) | file distribution(BitTorrent), VoIP(Skype) |
- peer = end systems which work equally in equal protocol layer
- file size F
- N: variable #. client
- us: upload capacity of server
- ui: upload capacity of peer i
- di: download capacity of peer i
- dmin: the lowest download capacity between peers
metric | client-server | P2P |
---|---|---|
1 server -> N clients | 1 server -> 1 client & N peers -> N peers(redistribute) | |
(server)time to send N copies | NF/us | F/us & NF/(us+ ∑ui) |
(client)time to download copies | F/dmin | F/dmin |
=> time to distribute file to clients | Dc-s≥max{NF/us, F/dmin} | Dp-p≥max{F/us, NF/(us+ ∑ui), F/dmin} |
(graph)Distribution-time/N | steeply linear | steadily curved |
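The two lower bounds in the table can be evaluated directly; the file size and capacities below are illustrative example values, not from the notes:

```python
def d_client_server(N, F, us, dmin):
    """Server uploads N copies serially; the slowest client limits download."""
    return max(N * F / us, F / dmin)

def d_p2p(N, F, us, u_peers, dmin):
    """Peers redistribute chunks, so aggregate upload capacity grows with N."""
    return max(F / us, F / dmin, N * F / (us + sum(u_peers)))

# illustrative: F = 15 Gbits, us = 30 Mbps, each peer uploads 2 Mbps, dmin = 2 Mbps
F, us, dmin, N = 15e9, 30e6, 2e6, 30
dcs = d_client_server(N, F, us, dmin)    # bounded by N*F/us: grows linearly in N
dp2p = d_p2p(N, F, us, [2e6] * N, dmin)  # bounded by F/dmin here: levels off
```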
torrent: peer group send&receive chunks
- client
- file divided into 256Kb chunks
- get peer list from server
- requesting chunks
- asks each peer for its list of chunks
- requests the rarest chunks first
- server: sending chunks: tit-for-tat
- every 30 secs: peer A randomly selects peer B, starts sending chunks to B (optimistic unchoke)
- every 10 secs: peer B re-evaluates its top 4 providers; if A is among them, B starts sending chunks to A
- likewise, peer A re-evaluates its top 4 providers every 10 secs
- open protocols: SMTP, HTTP, FTP, Telnet
- proprietary protocols: skype
- type: request/response message
- syntax: fields of message
- semantics: how to interpret fields
- rules
description | app. proto | trans. proto |
---|---|---|
web | HTTP | TCP(reliable) |
e-mail | SMTP, POP3, IMAP, HTTP | TCP(reliable) |
remote terminal access | Telnet | TCP(reliable) |
file transfer | FTP | TCP(reliable) |
Domain Name System | DNS | UDP(fast speed), TCP |
video streaming | RTP, HTTP | UDP(fast speed), TCP |
ex | data integrity | timing | throughput |
---|---|---|---|
web, email, file transfer | no loss | delay ok | elastic |
streaming multimedia(video/audio/games), internet telephony(one-time transaction) | loss-tolerant | time-sensitive | minimum throughput guarantee |
- www: Webpage > base html file(=frame) > objects > url(=loc of obj file)
- HTTP layers: HTTP > TCP > IP > ethernet, WiFi
- HyperText Transfer Protocol
- Uniform Resource Locator = hostname + pathname
- (non-persistent HTTP) first object = 2RTT + α
- new TCP connection socket is created to initiate tcp connection
- 1RTT for tcp connection request/response
- the socket deleted to terminate tcp connection
- 1RTT + file transmission time for http request(⊃URL) + response(⊃base HTML file)
- second object = 2RTT + α
- 1RTT for tcp
- 1RTT + α for http
- 4RTT+α => long latency
- need parallel TCP connections <- 4 sockets(∈ OS) needed => overhead for OS
- (persistent HTTP) first object = 2RTT + α
- 1RTT for tcp: two tcp connection sockets created
- 1RTT for http: keep two sockets
- second object = 1RTT + α
- 1RTT + α for http
- 3RTT+α
- need only 1 TCP connection <- 1 sockets(∈ OS) needed => no overhead for OS
client can get recommendations, keep user session state(shopping cart, log-on) in cookie file (∋ cookie header line ∋ user-id e.g. amazon id, ebay id)
- HTTP request w/ cookie file
- HTTP response msg
- if first access: provides set-cookie=$id (creates ID entry in the backend db)
- else: provides cookie-specific action
HTTP message format
HTTP request message
- (1 line)request line: method + URL(after hostname) + version + cr + lf - ex. GET /100sun/1.md HTTP/1.0\r\n
- (1<= line)header lines
- field name + : + value + cr + lf
- ex. Host: www.github.com\r\n
- ex. user-agent, accept-language, keep-alive(how long)
- cr + lf (end of header lines)
- ex. \r\n
- field name + : + value + cr + lf
- body: optional - like POST
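The request format above can be assembled by hand; the host and path are just the example values from these notes:

```python
def build_get_request(host, path):
    """Assemble a minimal HTTP/1.0 GET request: a request line, header
    lines, then the empty CR+LF line that ends the headers."""
    lines = [
        f"GET {path} HTTP/1.0",  # request line: method + URL + version
        f"Host: {host}",         # header line: field name + ':' + value
        "Connection: close",
    ]
    return "\r\n".join(lines) + "\r\n\r\n"  # blank line = end of headers

req = build_get_request("www.github.com", "/100sun/1.md")
```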
- how to reduce delay?
- Increase Access Link Speed: expensive(2s)
- Install Local Web Cache: cheaper and faster(1.2s) but need version checking
- Install Local Web Cache + GET method
access link rate ↑ Uaccess link ↓ total delay ↓
assumptions
- 1Gbps LAN
- data object size = 0.1Mbits
- data request rate = 15/s
- RTT from institutional router to origin servers = 2s
- access link rate: 1.54Mbps => upgraded to 154Mbps
consequences
- data rate = data object size * data request rate = 0.1 * 15 = 1.5Mbps
- ULAN = data rate / LAN availability = 1.5Mbps / 1Gbps = 0.15%
- Uaccess link = data rate / access link rate = 1.5Mbps / 1.54Mbps ≈ 97% => (after upgrade) 1.5Mbps / 154Mbps ≈ 0.97%
delays
- Internet delay ≈ RTT = 2s
- Access delay = queueing delay: I = La/R = 1.5Mbps / 1.54Mbps ≈ 1 → dqueue → ∞ (~ minutes); after the upgrade I ≈ 0.01 → msecs
- LAN delay = μs
=> total delay: 2s + minutes + μs ≈ minutes; (after increasing the access link rate) total delay: 2s + msecs + μs ≈ 2s
assumptions
- cache hit rate: 0.4
consequences
- 40% requests satisfied at proxy servers
- LAN delay = μs * 0.4
- 60% requests satisfied at origin servers
- data rate = 1.5Mbps * 0.6 = 0.9Mbps
- Uaccess link = 99% * 0.6 = 58%
- Internet delay = 2s * 0.6
- Access delay ≈ 0 ∵ Uaccess link is less than 0.7 -> it's fine
- LAN delay = μs
=> total delay = 2s * 0.6 + μs * 0.4 ≈ 1.2s
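The whole caching calculation can be replayed in a few lines (numbers taken from the assumptions above):

```python
# numbers from the assumptions above
object_size = 0.1e6        # bits
request_rate = 15          # requests/sec
access_rate = 1.54e6       # bps
origin_rtt = 2.0           # seconds
hit_rate = 0.4

data_rate = object_size * request_rate                   # 1.5 Mbps
u_no_cache = data_rate / access_rate                     # ~0.97 -> huge queueing delay
u_with_cache = (1 - hit_rate) * data_rate / access_rate  # ~0.58 -> negligible delay

# misses pay the 2 s origin RTT; hits are served in microseconds on the LAN
total_delay = (1 - hit_rate) * origin_rtt                # ~1.2 s
```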
web caches = proxy server
- what is proxy server?
- client for origin server && server for client
- ex? - HTTP request/response
- originally) client <-> proxy server <-> origin server
- if same request) client <-> proxy server
- why web caching?
- to reduce response time for client
- to reduce overhead for origin server
- to support more users for origin server
- to reduce traffic on the access link for the local ISP
- Conditional GET method
- why? objects in web cache have to be up-to-date as same as the original server
- how?
- HTTP request ∋ last update date of caches
- HTTP response ∋ whether cache is up-to-date + data
- User Agents: mail reader program e.g. outlook
- mail servers: gmail, … ⊃ 1 message queue, N user mailboxes
- protocols: SMTP, POP3, IMAP, …
=> UA: write -> (SMTP) -> message queue of mail server -> (TCP < SMTP) -> mailbox of mail server -> (POP, IMAP, HTTP) -> UA: read
Simple Mail Transfer Protocol: delivery to receiver’s server
- direct transfer
- handshaking(TCP connection first) -> message transfer -> closure
SMTP | HTTP |
---|---|
port 25 | port 80 |
user -> push data -> server | server -> pull object -> user |
many objects in one message (multipart) | 1 object per message |
both: ASCII command(phrase)-response(status code + phrase) interaction | |
both: use TCP connection ∵ reliability | |
mail access protocol
POP3 | IMAP |
---|---|
Post Office Protocol ver3 | Internet Mail Access Protocol |
1. download & delete mode(mail readable only on the retrieving PC) 2. download & keep mode(copies on different clients => readable on N clients) | keep all messages in 1 server(=> synchronization) |
after downloading, terminate TCP connection | after downloading, keep TCP connection |
no mail folder organization | allows user to organize messages in folders |
stateless across sessions | keeps user state across sessions |
- HTTP: used in the web-based emails to pull webpages objects e.g. gmail, hotmail
Domain Name System
- mapping service: hostname -> (DNS) -> IP address (32 bit)
- host aliasing: alias name(typed URL) -> (DNS) -> canonical name(real URL)
- mail server aliasing: provide mail server of specific domain
- load distribution: replicated Web servers(many IP addresses correspond to 1 hostname)
- (<->no centralized DNS ∵ single point of failure, traffic volume, distant centralized db, no scaling)
- local DNS server ≒ default name servers ≒ proxy servers ⊃ mapping service
- DNS hierarchy ⊃ root, TLD, authoritative
- Root DNS server: total 13
- TLD(Top Level Domain) server: com, org, top-level country domains(kr)
- authoritative DNS server: amazon.com, google.com(hostname) ⊃ mapping service(IP address)
: (iterative query) between the local DNS server and each DNS hierarchy server
- client -> local <=> (root -> TLD -> authoritative) => client: access to that IP address
-> heavy load on local DNS server
: (recursive query) between upper-level and lower-level servers
- client -> local => root <-> TLD <-> authoritative => client: access to that IP address
-> heavy load on root DNS server => caching
: Local DNS server caches entries about TLD server
- client -> local <=> (TLD -> authoritative) => client: access to that IP address
- Cached entries can be out-of-date => mapping entries ⊃ TimeToLive entries
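The TTL mechanism above can be sketched as a toy local-resolver cache (the class name and the example entry are made up for illustration):

```python
import time

class DnsCache:
    """Toy local-resolver cache: each mapping carries a TTL and is treated
    as a miss once it expires, so stale entries age out."""
    def __init__(self):
        self._entries = {}  # hostname -> (ip, expiry timestamp)

    def put(self, hostname, ip, ttl_seconds):
        self._entries[hostname] = (ip, time.monotonic() + ttl_seconds)

    def get(self, hostname):
        entry = self._entries.get(hostname)
        if entry is None:
            return None  # miss -> would query TLD/authoritative servers
        ip, expiry = entry
        if time.monotonic() > expiry:
            del self._entries[hostname]  # expired: treat as a miss
            return None
        return ip

cache = DnsCache()
cache.put("example.com", "93.184.216.34", ttl_seconds=300)
```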
- video traffic >= 50% of downstream residential ISP traffic
- then how to stream video content to thousands of simultaneous users?
- mega server: doesn't scale, heterogeneity(capability diff)
- store/serve many copies of videos at many geographically distributed sites => CDN
how does CDN DNS select "good" CDN node to stream to client? let client decide
- give client a list of several CDN servers
- client picks "best"
- Netflix uploads studio master to Amazon cloud
- create many versions of the movie (different encodings) in the cloud
- upload copies of the many versions from the cloud to CDNs(Akamai, Limelight, Level-3 CDN)
- when client requests(browses) video
- cloud returns the manifest file addressing three 3rd party CDNs host/stream Netflix content
- client sends HTTP to 1 of CDN
- CDN sends streaming
logical communication between...
- transport layer between processes: p1, p2 <-> p1, p2
- network layer between hosts: source <-> destination
- 1 IP datagram (∋ IP address) > 1 transport-layer segment (∋ port#)
- a set of APIs.
- Processes send & receive Messages from app. via Socket.
- like a door: application -> (socket) -> transport -> ...
- after the client creates a socket, the OS allocates a host-local port#
- 1 client > 1 socket
- mux: at sender, from socket to segment, by adding port# to transport header
- demux: at receiver, from segment to appropriate socket, detecting path by IP address + port#
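Mux/demux can be seen with two real UDP sockets on the loopback interface: the OS delivers the segment to whichever socket is bound to the destination port (ports here are OS-assigned; a sketch, not a full client/server):

```python
import socket

# two UDP sockets on the loopback interface: the OS demultiplexes the
# incoming segment to whichever socket is bound to the destination port
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
recv_sock.settimeout(5)
port = recv_sock.getsockname()[1]

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", ("127.0.0.1", port))  # dest IP + port# = demux key

data, addr = recv_sock.recvfrom(2048)    # delivered to the socket bound on `port`
send_sock.close()
recv_sock.close()
```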
Transmission Control Protocol | User Datagram Protocol |
---|---|
connection-oriented, handshaking | connectionless, no handshaking |
reliable, in-order delivery | each UDP segment handled independently -> unordered delivery, can be lost => unreliable |
error control: retransmit until no data error(checksum, ARQ); flow control: sender considers receiver's capacity(rwnd) | no connection establishment -> no connection state to keep => no delay, fast speed |
congestion control | no congestion control => sender can push data at full rate (routers/switches may be overloaded) |
for demux, requires IP address & port# of source + dest; why? 1 app > many processes(by fork) > many sockets, or 1 process > many threads > many sockets | for demux, requires IP address & port# of dest only; why? 1 app > 1 process > 1 socket |
web, email, file transfer | streaming multimedia apps, DNS, high-reliability-required apps(∵ reliability added at app layer) |
- only mux and demux services => small header size ≈ 40% of TCP header
- (16bit)length = header length(=4*16bits=8bytes) + payload length
- (32bit)sender adds seq#(=the current seq#=n)
- (32bit)receiver adds cumulative ack#(=the next expected seq#=n + MSS)
- (4bit)header length : variable (20 bytes + options)
- receiver adds rwnd size(os autoadjust)
- rwnd guarantees that receiver buffer won't overflow
- rcv buffer = buffered data + free buffer space(=rwnd)
- ACK: to check validity of ACK#, ACK==0 means ignore ACK# (∵ first packet)
- SYN(SYNchronize): to establish tcp connection(handshake); SYN==1 means a connection-setup segment
- RST(ReSeT): to abort/reset a connection
- FINale: to close tcp connection, FIN==1 means the last packet from sender
- Error-free
- Assume connection already established
B. Seq=78, ACK=42, data = 'B'
A. Seq=42, ACK=79, data = 'C' (user sends 'C')
B. Seq=79, ACK=43, data = 'C' (host ACKs receipt of 'C' / echoes back 'C')
A. Seq=43, ACK=80 (host ACKs receipt of echoed 'C')
- seq # = the last received ACK #
- ack # = the last received seq # + size of data(< MSS)
- A's initial seq# is x(451)
- B's initial seq# is y(103)
- MSS = 512
A -> B : seq# 451, no ack#, data 512 bytes, SYN=1, ACK=0, FIN=0 : handshaking 1
B -> A : seq# 103, ack# 963, data 512 bytes, SYN=1, ACK=1, FIN=0 : handshaking 2
A -> B : seq# 963, ack# 615, data 512 bytes, SYN=0, ACK=1, FIN=0 : handshaking 3
============EST finished, no more handshaking===============
B -> A : seq# 615, ack# 1475, data 154 bytes, SYN=0, ACK=1, FIN=0
A -> B : seq# 1475, ack# 1629, data 1 byte, SYN=0, ACK=1, FIN=1
===============FIN_WAIT_1===============
B -> A : seq# 1629, ack# 1476, data 1 byte, SYN=0, ACK=1, FIN=1
===============FIN_WAIT_2===============
===============TIMED_WAIT to get the last ACK bit, after got both FIN bits===============
A -> B : seq# 1476, ack# 1478, no data, SYN=0, ACK=1, FIN=1
===============closed connection===============
- A transmits 1111 bytes to B
- B transmits 666 bytes to A
- there can be errors below the transport layer, but the transport layer makes the app layer see an error-free channel
- so let's handle every situation in which an error can occur
- Receiver received packet, but it's wrong data. => bit error
- packet loss, ACK loss, premature timeout, delayed ACK => packet loss
- sender calc checksum and put it in the checksum field
- receiver calc checksum and compare it with the checksum field
- receiver adds its total sum to the sender's checksum field
- result == 1111...1111 -> no error => send ACKnowledgement
- result has any 0 bit -> error => don't send ACK
- c.f. compensating errors (a 1->0 flip and a 0->1 flip) can cancel out -> undetectable
- split the segment data into 16-bit words
- sum all the 16-bit words; whenever the sum overflows 16 bits, wrap the carry around; repeat to the end of the data
- checksum = 1's complement of the total sum
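The wraparound-and-complement procedure above, as a minimal sketch:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum with carry wraparound, then the
    one's complement of the total."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # wrap the carry around
    return ~total & 0xFFFF

def verify(data: bytes, checksum: int) -> bool:
    """Receiver side: data sum + checksum must be all 1s if error-free."""
    total = ~internet_checksum(data) & 0xFFFF     # recompute the data sum
    return total + checksum == 0xFFFF

c = internet_checksum(b"\x01\x02\x03\x04")        # words 0x0102 + 0x0304
```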
= packet retransmission method
- timeout: the sender waits a set period for an ACK; when it elapses, it retransmits
- sequence number
- => ordered delivery: sender adds the seq# to the packet
- => data duplication prevention: receiver can detect when sender retransmitted packet
- L = 1KB, R = 1Gbps, Dprop = 15ms
Usender
= N * (time just for sending / total time)
= N * {Dtrans / (RTT + Dtrans)}
= N * {(L/R) / (2*Dprop + L/R)}
= N * 0.027%
- stop-and-wait method: 1 data packet at a time => Usender = 0.027%
- duplicate packet: discard it and send its ACK again
- pipelining method: N data packets at a time using window => Usender = N * 0.027% => throughput↑
- duplicate packet: not exist
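The utilization formula above, parameterized by window size so it covers both stop-and-wait (N=1) and pipelining:

```python
def sender_utilization(L_bits, R_bps, rtt_seconds, window=1):
    """Fraction of time the sender is busy: `window` packets per RTT + Dtrans.
    window=1 is stop-and-wait; window=N is pipelining."""
    d_trans = L_bits / R_bps
    return window * d_trans / (rtt_seconds + d_trans)

# L = 1 KB = 8000 bits, R = 1 Gbps, RTT = 2 * 15 ms = 30 ms
u_stop_and_wait = sender_utilization(8000, 1e9, 0.030)        # ~0.027%
u_pipelined = sender_utilization(8000, 1e9, 0.030, window=3)  # 3x higher
```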
behavior | go-back-N | selective repeat |
---|---|---|
sender sends | unack'ed packets up to N, in order | unack'ed packets up to N, out of order(=> receiver buffer restores order) |
sender sets timer for | the first unack'ed packet | each unack'ed packet |
on sender timeout(x) | retransmit up to N packets: all packets in window with seq# >= x | retransmit 1: only that unack'ed packet |
receiver sends | cumulative ACKs only (receiver window = 1) | individual ACKs (receiver window = N) |
after packet loss | discard all out-of-order packets and re-ACK the last in-order packet | buffer all out-of-order packets |
- sender can send up to N packets (window size N) without waiting for their ACKs
- sendBase: the oldest unack'ed seq#; the window slides forward past acked seq#s when their ACKs arrive
- window size↑ throughput↑ packets#. to retransmit↑
- window size ∝ network congestion, receiver buffer overflow
- sequence number space >= 2 * window size ∵ otherwise, when a whole window of ACKs is lost, retransmitted old packets could be accepted as new data
- point-to-point: one sender, one receiver( <=> multi-casting protocol)
- Full-duplex connection: bi-directional data flow <=> both a and b can be either sender or receiver
- (go-back-N) cumulative ACKs
- (selective repeat) buffering packet
(fast retransmit) when the sender receives triple duplicate cumulative ACKs(ack# 100), even before timeout, it retransmits the packet(seq# 100)
- TimeoutInterval = EstimatedRTT + safety margin
- EstimatedRTT = EWMA of SampleRTTs = (1 - α) * EstimatedRTT + α * SampleRTT (typically α = 0.125)
- note: after a retransmission (ACK not received), SampleRTT is not measured and the timeout interval is doubled -> throughput↓
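A sketch of the estimator with the standard TCP gains (α = 0.125, β = 0.25) and the TimeoutInterval = EstimatedRTT + 4·DevRTT safety margin:

```python
class RttEstimator:
    """EWMA RTT estimation as in TCP: TimeoutInterval = EstimatedRTT +
    4 * DevRTT, where 4 * DevRTT is the safety margin."""
    def __init__(self, alpha=0.125, beta=0.25):
        self.alpha, self.beta = alpha, beta
        self.estimated_rtt = None
        self.dev_rtt = 0.0

    def observe(self, sample_rtt):
        if self.estimated_rtt is None:
            self.estimated_rtt = sample_rtt  # first sample seeds the estimate
            return
        self.dev_rtt = ((1 - self.beta) * self.dev_rtt
                        + self.beta * abs(sample_rtt - self.estimated_rtt))
        self.estimated_rtt = ((1 - self.alpha) * self.estimated_rtt
                              + self.alpha * sample_rtt)

    def timeout_interval(self):
        return self.estimated_rtt + 4 * self.dev_rtt

est = RttEstimator()
for sample in (0.100, 0.120, 0.110):  # seconds
    est.observe(sample)
```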
- flow control: point-to-point issue
- congestion control: global issue
- network-assisted: detecting loss by feedback of router (single bit || explicit rate)
- end-to-end: detecting loss by timeout || 3 dup. ACKs
- timeout / retransmission ⇑
- app. layer: original data < trans. layer: original data + retransmitted data
- goodput ⇓
= End-To-End congestion control
cwnd ∝ perceived network congestion
- starts at 1 MSS
- slow start
- 1RTT x2 exponentially
- ssthresh(slow start threshold)
- by OS
- loss event
- CA(Congestion Avoidance): AIMD(Additive Increase Multiplicative Decrease)
- 1RTT 1MSS linearly
- threshold
- timeout : cwnd -> 1 MSS (TCP Tahoe)
- 3 dup ACKs : cwnd -> cwnd/2 (TCP Reno)
- TCP Reno or Tahoe
- avg TCP throughput = 3/4 W/RTT bytes/sec
- W = avg. cwnd
- TCP Fairness
- additive increase (slope 1) <-> multiplicative decrease (halving) => two competing flows converge to an equal bandwidth share
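The slow-start / AIMD dynamics can be traced with a toy simulation (a simplified Reno: fast recovery is reduced to cwnd = ssthresh after a 3-dup-ACK loss):

```python
def reno_cwnd_trace(rounds, ssthresh=8.0, loss_rounds=()):
    """cwnd per RTT round, in MSS units: slow start doubles cwnd until
    ssthresh, then AIMD adds 1 MSS per RTT; on a 3-dup-ACK loss,
    ssthresh = cwnd/2 and cwnd restarts from ssthresh (simplified Reno)."""
    cwnd, trace = 1.0, []
    for r in range(rounds):
        trace.append(cwnd)
        if r in loss_rounds:    # 3-dup-ACK loss event this round
            ssthresh = cwnd / 2
            cwnd = ssthresh     # multiplicative decrease
        elif cwnd < ssthresh:
            cwnd *= 2           # slow start: exponential growth
        else:
            cwnd += 1           # congestion avoidance: additive increase
    return trace

trace = reno_cwnd_trace(8, ssthresh=8.0, loss_rounds={5})
# -> [1, 2, 4, 8, 9, 10, 5, 6]
```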
- segment --(encapsulation)-> datagram --(decapsulation)-> segment
- protocol in host and router
- in routing processor
- calc a path by routing algorithms/protocols -> push the resulting forwarding table to the input ports
- SW, speed↓
- in switching fabric
- forwarding packets by forwarding table by IP protocols, from input port to output port
- HW, speed↑
for guaranteed delivery
- for individual datagrams: time delay
- for a flow of datagrams: in-order / min bandwidth / arrival time spacing
port | input port | output port |
---|---|---|
datapath (physical<->link<->network) | input link -> frame(∋header) -> datagram -> switch fabric | switch fabric -> datagram -> frame(∋header) -> output link |
why buffering? when queueing? | input line speed > switch fabric speed; HOL(Head Of the Line) blocking | switch fabric speed > output line speed |
how? | switching: 1. memory: 2 bus crossings, speed⇓ 2. bus: 1 bus crossing, speed↓ 3. crossbar: interconnection network, speed↑ | scheduling: 1. FIFO(drop: tail/priority/random) 2. priority 3. RR: fair 4. WFQ(weighted fair queueing): RR+priority |
- bus : cannot forward simultaneously
- crossbar: via interconnection network
Internet Protocol
interface between host/router <-> link
- ex. WiFi, LAN
- src: fragmentation according to the MTU of each network link
- dest: reassemble by 1. ID 2.fragflag=0 (last) 3.fragment offset (data bytes #/8)
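The fragmentation rule above (offsets in 8-byte units, fragflag=0 on the last piece) on the classic 4000-byte / MTU-1500 example:

```python
def fragment(total_length, header=20, mtu=1500):
    """Split one IP datagram into fragments that fit the link MTU.
    Offsets count 8-byte units of the original payload; fragflag=0
    marks the last fragment."""
    payload = total_length - header
    max_data = (mtu - header) // 8 * 8  # per-fragment data, multiple of 8
    frags, offset = [], 0
    while payload > 0:
        data = min(max_data, payload)
        payload -= data
        frags.append({"length": data + header,
                      "fragflag": 1 if payload > 0 else 0,
                      "offset": offset // 8})
        offset += data
    return frags

# classic example: 4000-byte datagram, 20-byte header, 1500-byte MTU
frags = fragment(4000)
# -> lengths 1500, 1500, 1040; offsets 0, 185, 370; fragflags 1, 1, 0
```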
32bit IP Address = same subnet part + diff host part
- subnet = a group of interfaces that can reach each other without passing through a router (detach the routers to identify the subnets)
- address classes
- A=0__(7)__ , B=10__(14)__ , C=110__(21)__
- subnet mask: 255.255.255.0/24
- subnetting
- network ID + subnet ID + host ID
- CIDR (Classless Inter Domain Routing)
- fixed: by admin
- dynamic: DHCP (Dynamic Host Configuration Protocol)
- w/ plug and play
- by yiaddr via DHCP server
- to get its address, first hop address, DNS server address
- NAT(Network Address Translation)
- 1 public address (WAN) <---(router: NAT)---> many private addresses (LAN), distinguished by segment port#
- by mapping entry in NAT translation table
128bit IP Address
tunneling: IPv6 datagram carried as the payload of an IPv4 datagram
- traditional: each router, interact, each forwarding table
- SDN(Software Defined Network): one remote controller, no per-router interaction, each router runs a local control agent(CA)
- global routing algorithm
- broadcast LSAs(Link State Advertisements), then run Dijkstra's algorithm on the collected link states
- when neighbor change / LSA change / Periodically
- D(v) = min(D(v), D(w)+c(w, v))
- but oscillations possible
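The relaxation step quoted above, as a runnable sketch on a made-up 4-node topology:

```python
import heapq

def dijkstra(graph, source):
    """Link-state shortest paths: repeatedly settle the closest unsettled
    node w, relaxing D(v) = min(D(v), D(w) + c(w, v)) for its neighbors."""
    dist = {source: 0}
    heap = [(0, source)]
    settled = set()
    while heap:
        d_w, w = heapq.heappop(heap)
        if w in settled:
            continue
        settled.add(w)
        for v, cost in graph[w].items():
            if d_w + cost < dist.get(v, float("inf")):
                dist[v] = d_w + cost
                heapq.heappush(heap, (dist[v], v))
    return dist

# made-up 4-node topology with symmetric link costs
g = {"u": {"v": 2, "x": 1},
     "v": {"u": 2, "x": 3, "w": 3},
     "x": {"u": 1, "v": 3, "w": 1},
     "w": {"v": 3, "x": 1}}
dist = dijkstra(g, "u")  # u reaches w via x at cost 2
```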
- decentralized routing algorithm
(Bellman-Ford equation) Dx(y) = min over neighbors v of { c(x, v) + Dv(y) }
- OSPF(Open Shortest Path First) : intra-AS routing
Link State algorithm
- RIP(Routing Information Protocol) : intra-AS routing
Distance Vector algorithm
- BGP(Border Gateway Protocol) : inter-AS routing
Path Vector