Wednesday, May 3, 2023

"Mastering the OSI Model and Networking Protocols: A Comprehensive Guide to Network Communication"

 

OSI Model :

The OSI (Open Systems Interconnection) model is a conceptual framework used to describe the way data is transmitted and received between devices in a networked environment. The model is divided into seven layers, each with its own unique set of functions and protocols:



OSI layers and their protocols:

  • Application layer: NFS, SNMP, SMTP, FTP, SSH, NTP, TELNET, DHCP, DNS, HTTP
  • Presentation layer: SSL (Secure Sockets Layer)
  • Session layer: SDP, RPC, SMB
  • Transport layer: TCP, UDP, DCCP
  • Network layer: ICMP, ARP, NAT, VRRP, HSRP, OSPF, RIP, IP
  • Data link layer: ARP, LACP, IEEE 802, DTP, STP, PAgP, VLAN, VTP, Token Ring
  • Physical layer: Ethernet physical layer varieties such as 10BASE-T, 10BASE2, 10BASE5, 100BASE-TX, 100BASE-FX, 1000BASE-T, 1000BASE-SX, and others

 

ICMP Protocol

ICMP stands for Internet Control Message Protocol. It is a network layer protocol used for error reporting, and it runs primarily on network devices such as routers. Because different types of errors can occur at the network layer, ICMP is used to report these errors and to help debug them. For example, if a sender transmits a message to some destination but a router along the path cannot deliver it, the router sends an ICMP message back to the sender saying that the destination could not be reached.

The IP protocol itself has no error-reporting or error-correcting mechanism, so it relies on ICMP messages to convey this information.

The ICMP messages are usually divided into two categories:


  • Error-reporting messages

An error-reporting message is sent when a router encounters a problem while processing an IP packet; the router reports the problem back to the source.

  • Query messages

Query messages help a host obtain specific information about another host. For example, if a client wants to know whether a server is alive, it sends an ICMP echo request (a query message) to the server.

ICMP Message Format

The message format starts with two fields that identify the message: the type, which defines the kind of message, and the code, which defines the subtype of that message. For error messages, the type and code together tell the receiver exactly which error occurred.

The ICMP message contains the following fields:

  • Type: An 8-bit field that defines the ICMP message type (for example, 8 for an echo request, 0 for an echo reply, and 3 for destination unreachable). In ICMPv6, type values 0 to 127 are error messages and values 128 to 255 are informational messages.
  • Code: An 8-bit field that defines the subtype of the ICMP message.
  • Checksum: A 16-bit field used to detect whether the message was corrupted in transit.
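As a concrete illustration of query versus error-reporting messages, here is a minimal sketch from a Cisco IOS router (the device name R1 and the address 192.168.10.1 are hypothetical):

R1# debug ip icmp
R1# ping 192.168.10.1
! The ping sends ICMP Echo Requests (type 8), which are query messages.
! Echo Replies (type 0) come back if the host is reachable; if a router in the
! path cannot deliver the packet, it returns a Destination Unreachable (type 3)
! error-reporting message instead.
R1# undebug all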

 SNMP: Simple Network Management Protocol

SNMP is an application layer protocol used to manage nodes, like servers, workstations, routers, switches, etc., on an IP network. SNMP enables network admins to monitor network performance, identify network glitches, and troubleshoot them. The SNMP protocol consists of three components: a managed device, an SNMP agent, and an SNMP manager.

The SNMP agent resides on the managed device. The agent is a software module that has local knowledge of management information and translates that information into a form compatible with the SNMP manager. The SNMP manager presents the data obtained from the SNMP agent, helping network admins manage nodes effectively.

Currently, there are three versions of SNMP: SNMP v1, SNMP v2, and SNMP v3. Both versions 1 and 2 have many features in common, but SNMP v2 offers enhancements such as additional protocol operations. SNMP version 3 (SNMP v3) adds security and remote configuration capabilities to the previous versions.
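As a rough sketch of how this looks on a Cisco IOS device (the community string, group, user, credentials, and the manager address 192.168.1.100 are all made-up values; exact syntax varies by IOS release):

! Read-only SNMPv2c access plus a trap receiver
snmp-server community NETMON ro
snmp-server host 192.168.1.100 version 2c NETMON
!
! SNMPv3 with authentication (SHA) and encryption (AES-128)
snmp-server group ADMINGRP v3 priv
snmp-server user snmpadmin ADMINGRP v3 auth sha AuthPass123 priv aes 128 PrivPass123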

ARP: Address Resolution Protocol

The Address Resolution Protocol helps map IP addresses to physical machine addresses (or a MAC address for Ethernet) recognized in the local network. A table called an ARP cache is used to maintain a correlation between each IP address and its corresponding MAC address. ARP offers the rules to make these correlations, and helps convert addresses in both directions.

Advantages

  • MAC addresses need not be known or memorized, as the ARP cache contains all the MAC addresses and maps them automatically with IPs.

Disadvantages

  • ARP is susceptible to security attacks called ARP spoofing attacks.
  • When using ARP, a hacker might sometimes be able to stop traffic altogether. This is also known as an ARP denial-of-service attack.
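For reference, the ARP cache can be inspected and managed directly on a Cisco IOS device; a minimal sketch (the addresses and MAC are hypothetical), including a static entry, which is one common way to protect critical mappings from spoofing:

Router# show ip arp
! Flush dynamic entries so they are re-learned
Router# clear arp-cache
! Pin a critical host to a known MAC address with a static ARP entry
Router(config)# arp 192.168.10.5 0000.1111.2222 arpa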

 

VLAN :

VLANs (Virtual LANs) are logical groupings of devices in the same broadcast domain. VLANs are usually configured on switches by placing some interfaces into one broadcast domain and some interfaces into another. Each VLAN acts as a subgroup of the switch ports in an Ethernet LAN.

VLANs can spread across multiple switches, with each VLAN being treated as its own subnet or broadcast domain. This means that frames broadcast onto the network will be switched only between the ports within the same VLAN.

A VLAN acts like a physical LAN, but it allows hosts to be grouped together in the same broadcast domain even if they are not connected to the same switch. Here are the main reasons why VLANs are used:

  • VLANs increase the number of broadcast domains while decreasing their size.
  • VLANs reduce security risks by reducing the number of hosts that receive copies of frames that the switches flood.
  • You can keep hosts that hold sensitive data on a separate VLAN to improve security.
  • You can create more flexible network designs that group users by department instead of by physical location.
  • Network changes are achieved with ease by just configuring a port into the appropriate VLAN.

The following topology shows a network with all hosts inside the same VLAN:


Without VLANs, a broadcast sent from host A would reach all devices on the network. Each device will receive and process broadcast frames, increasing the CPU overhead on each device and reducing the overall security of the network.

By placing interfaces on both switches into a separate VLAN, a broadcast from host A would reach only devices inside the same VLAN, since each VLAN is a separate broadcast domain. Hosts in other VLANs will not even be aware that the communication took place. This is shown in the picture below:


NOTE

To reach hosts in a different VLAN, a router is needed.

 

R1:-

hostname R1
!
interface FastEthernet0/0
 no ip address
 duplex auto
 speed auto
!
interface FastEthernet0/0.10
 encapsulation dot1Q 10
 ip address 192.168.10.254 255.255.255.0
!
interface FastEthernet0/0.20
 encapsulation dot1Q 20
 ip address 192.168.20.254 255.255.255.0

SW1:-

hostname SW1
!
interface FastEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
interface FastEthernet0/2
 switchport access vlan 10
 switchport mode access
!
interface FastEthernet0/3
 switchport access vlan 20
 switchport mode access
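Assuming the router-on-a-stick setup above, the VLAN assignments and the trunk can be verified with the usual show commands (output varies by platform):

SW1# show vlan brief
! VLAN 10 should list Fa0/2 and VLAN 20 should list Fa0/3
SW1# show interfaces trunk
! Fa0/1 should appear as a trunk carrying VLANs 10 and 20
R1# show ip interface brief
! The dot1Q subinterfaces should be up with the gateway addresses 192.168.10.254 and 192.168.20.254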

STP:

The Spanning Tree Protocol (STP) is a network protocol used in topologies with redundant links to avoid loops. It ensures that there is only one active path between any two network devices, which prevents loops from forming in the network.

For STP to function, the switches first elect a root bridge, and then one of the available paths toward the root is chosen as the active path; any further redundant links are blocked. The selection is based on several factors, including the lowest bridge ID, the lowest path cost, and the shortest path to the root. The STP algorithm then decides which links can safely stay active and which ones need to be disabled.

Spanning Tree Protocol supports five port states: forwarding, learning, listening, blocking, and disabled. STP BPDUs use only two bits of the flags octet (Topology Change and Topology Change Acknowledgement).

STP has some disadvantages, such as a slow convergence time and an inability to adjust quickly to changes in the network topology. To overcome these restrictions, the Rapid Spanning Tree Protocol (RSTP) was created.

Advantages of STP:

·       It is a mature protocol that has been widely used in networks for many years.

·       It can handle complex topologies and prevent network loops by blocking redundant links.

·       It provides a stable network topology by ensuring that only one path is active at any given time.

·       It is supported by most network devices and can be configured easily.

·       It does not require special hardware or software.

Disadvantages of STP:

·       It has a slow convergence time, which can cause network downtime and performance issues.

·       It can lead to inefficient use of network resources by blocking links even when they are not actually causing a network loop.

·       It cannot detect changes in the network topology quickly and may cause network instability.

·       It may require manual configuration and management in large networks.




To see why loops are a problem, consider two switches, SW1 and SW2, connected to each other with two cables, with host H1 attached to SW1 and host H2 attached to SW2:

1.        H1 sends an ARP request because it’s looking for the MAC address of H2. An ARP request is a broadcast frame.

2.        SW1 will forward this broadcast frame on all of its interfaces, except the interface on which it received the frame.

3.        SW2 will receive both broadcast frames.

Now, what does SW2 do with those broadcast frames?

1.        It will forward it from every interface except the interface where it received the frame.

2.        This means that the frame that was received on interface Fa0/0 will be forwarded on Interface Fa1/0.

3.         The frame that was received on Interface Fa1/0 will be forwarded on Interface Fa0/0.

Do you see where this is going? We have a loop! Both switches will keep forwarding over and over again until the following happens:

  • You fix the loop by disconnecting one of the cables.
  • One of your switches crashes because it is overburdened with traffic.

Ethernet frames don’t have a TTL (Time to Live) value, so they will loop around forever. Besides ARP requests, many other frames are broadcast, and whenever a switch doesn’t know a destination MAC address, the frame is flooded out of all ports.

Spanning-tree helps us create a loop-free topology by blocking certain interfaces. Let’s take a look at how spanning-tree works. Here’s an example:


The STP algorithm is responsible for identifying active redundant links in the network and blocking one of these links, thus preventing possible network loops. The operation of STP is as follows:

·       STP-enabled switches exchange BPDU messages to agree upon the "root bridge"; this process is called Root Bridge Election.

·       Once the root bridge is elected, every switch has to determine which of its ports will communicate with the root bridge. Therefore, Root Port Election takes place on every network switch.

·       Finally, Designated Port Election takes place in order to have only one active path towards every network segment.

STP has several different modes, like Per-VLAN Spanning Tree (PVST) and rapid-PVST. PVST is usually the default setting that runs spanning tree on any specified VLAN. Rapid-PVST is essentially the same as PVST, but with faster convergence time. The main objective they accomplish is the same: preventing switching loops.
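As a sketch of how this is typically applied on Cisco switches (the switch name and VLAN number are assumptions), Rapid PVST+ can be enabled and the root bridge pinned by lowering its priority:

SW1(config)# spanning-tree mode rapid-pvst
! Give SW1 the lowest priority so it is elected root bridge for VLAN 10
SW1(config)# spanning-tree vlan 10 priority 4096
SW1(config)# end
! Verify the root bridge ID, port roles, and port states
SW1# show spanning-tree vlan 10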

Difference between STP and RSTP:

  • IEEE standard: STP is standardized as 802.1D; RSTP as 802.1w.
  • BPDUs: In STP, only the root bridge generates BPDUs (Bridge Protocol Data Units) and the other switches relay them; in RSTP, every bridge can generate and forward its own BPDUs.
  • Port roles: STP has three port roles (Root Port, Designated Port, Blocked Port); RSTP has four (Root Port, Designated Port, Alternate Port, Backup Port).
  • Port states: STP has five port states (Forwarding, Learning, Listening, Blocking, Disabled); RSTP has three (Forwarding, Learning, Discarding).
  • Link types: STP does not define link types; RSTP defines two (shared link and point-to-point link).
  • Convergence: STP provides slower network convergence; RSTP provides significantly faster network convergence.
  • Flag bits: STP uses bit 0 for TCN (Topology Change Notification) and bit 7 for TCA (Topology Change Acknowledgement); RSTP uses bit 0 for TCN, bit 1 for Proposal, bits 2-3 for the port role, bit 4 for Learning, bit 5 for Forwarding, bit 6 for Agreement, and bit 7 for TCA.

  VRRP :

VRRP (Virtual Router Redundancy Protocol) is very similar to HSRP (Hot Standby Router Protocol) and can be used to create a virtual gateway. It is defined by the IETF in RFC 3768.

Hosts are usually connected to an external network through a default gateway. If the gateway fails, the hosts connected to it will not be able to communicate with the external network, causing service interruptions.

VRRP provides a better option: it groups multiple devices into a virtual router whose IP address is configured as the default gateway address, backing up the physical gateway. If the current gateway fails, VRRP elects another device in the group to forward traffic, thereby ensuring reliable network communication.
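A minimal configuration sketch for a two-router VRRP group on Cisco IOS (interface names, addresses, and the group number are assumptions; newer releases may use a different VRRPv3/fhrp command style):

R1(config)# interface GigabitEthernet0/0
R1(config-if)# ip address 192.168.1.2 255.255.255.0
! Group 1 presents the virtual gateway address 192.168.1.254 to the hosts
R1(config-if)# vrrp 1 ip 192.168.1.254
! A higher priority makes R1 the master (default is 100); VRRP preempts by default
R1(config-if)# vrrp 1 priority 110

R2(config)# interface GigabitEthernet0/0
R2(config-if)# ip address 192.168.1.3 255.255.255.0
R2(config-if)# vrrp 1 ip 192.168.1.254

If R1 fails, R2 transitions from Backup to Master and keeps answering for 192.168.1.254, so the hosts' default gateway never changes.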

 

VRRP states

  • Initialize: VRRP is unavailable. A device in the Initialize state does not process VRRP Advertisement packets. A device usually enters this state when it starts or when it detects a fault.
  • Master: A VRRP device in the Master state takes over all the forwarding tasks of the virtual router and periodically sends VRRP Advertisement packets.
  • Backup: A VRRP device in the Backup state does not take over the forwarding tasks of the virtual router; it receives the periodic VRRP Advertisement packets from the master device to determine whether the master is working properly.



HSRP:



  •  SW1 and SW2 are multilayer switches. The 192.168.1.0/24 subnet belongs to VLAN 1 and there is one host device.
  • There is a layer two switch in between SW1, SW2, and H1 to connect the 192.168.1.0/24 segment.
  • IP address 192.168.1.254 will be used for the virtual gateway address.
  • The multilayer switches are connected with layer three interfaces to an upstream router called R3.

SW1 & SW2:

(config)#interface Vlan 1
(config-if)#standby 1 ip 192.168.1.254

Use the standby command to configure HSRP. 192.168.1.254 will be the virtual gateway IP address. The “1” is the HSRP group number; it doesn’t matter which number you pick, just make sure it’s the same on both devices.

Use the show standby command to verify your configuration. There are a couple of interesting things here:

  • We can see the virtual IP address here (192.168.1.254).
  • It also shows the virtual MAC address (0000.0c07.ac01).
  • You can see which router is active or in standby mode.
  • The hello time is 3 seconds and the hold time is 10 seconds.
  • Preemption is disabled.

The active router will respond to ARP requests from computers and will actively forward their packets. It sends hello messages to the routers that are in standby mode. Routers in standby mode listen for these hello messages; if they don’t receive anything from the active router, they wait for the hold time to expire before taking over. The hold time is 10 seconds by default, which is pretty slow; we’ll see how to speed this up below.
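To speed up failover, the hello and hold timers can be lowered and preemption enabled; a sketch under the same assumptions (HSRP group 1 on interface Vlan 1, applied to both switches):

SW1(config)# interface Vlan 1
! Let the switch with the highest priority reclaim the active role when it recovers
SW1(config-if)# standby 1 preempt
SW1(config-if)# standby 1 priority 110
! Hello every 1 second, hold time 3 seconds (defaults are 3 and 10 seconds)
SW1(config-if)# standby 1 timers 1 3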

 

 

 

 

Each HSRP router will go through a number of states before it ends up as an active or standby router. This is what will happen:

  • Initial: This is the first state when HSRP starts. You’ll see this just after you configure HSRP or when the interface has just been enabled.
  • Listen: The router knows the virtual IP address and will listen for hello messages from other HSRP routers.
  • Speak: The router will send hello messages and will join the election to see which router will become active or standby.
  • Standby: The router didn’t become the active router but will keep sending hello messages. If the active router fails, it will take over.
  • Active: The router actively forwards packets from clients and sends hello messages.
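Which state each switch has settled into can be checked at any time (a sketch, continuing the earlier group 1 example):

SW1# show standby brief
! One line per group: interface, group number, priority, state (Active/Standby), virtual IP
SW1# show standby
! Detailed view: timers, preemption, virtual MAC address, and the active/standby routers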

 

HSRP vs. VRRP vs. GLBP (comparison table not shown)


LAG :-

A LAG (Link Aggregation Group) is a concrete instance of link aggregation. A Link Aggregation Group is formed when multiple ports are connected in parallel between two switches and configured as a LAG. Bundling multiple links between the two switches expands the available bandwidth.

It also provides link-level redundancy during network failures and load balances traffic. If one link fails, the remaining links between the two switches keep running and take over the traffic that would have traversed the failed link, so no data packets are lost.

A static LAG is the original way to realize link bundling and load balancing, with no protocol involved. It is also called manual mode because of how it works: users manually create a port-channel and add member interfaces to that port-channel.

After the aggregated link is established, all member links are active links that forward data packets. If one active link fails, the remaining active links load balance the traffic. However, this mode can only detect physical disconnections of its member links; it cannot detect other faults such as link-layer failures or incorrect cabling.

LACP is a protocol for automatically configuring and maintaining a LAG. In LACP mode, the port-channel is created through LACP negotiation. LACP provides a standard negotiation mechanism so that a switching device can automatically form and bring up the aggregated link according to its configuration. After the aggregated link is formed, LACP is responsible for maintaining the link status; when the link aggregation conditions change, LACP adjusts or removes the aggregated link. If one active link fails, the system selects a backup link to replace it as an active link, so the number of links participating in data forwarding remains unchanged. In addition, this mode can detect not only disconnections of its member links but also other faults such as link-layer failures and incorrect cabling.

The two primary types of LAGs are static (also known as manual) and dynamic. Dynamic LAGs use Link Aggregation Control Protocol (LACP) to negotiate settings between the two connected devices.
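For completeness, here is a sketch of a static (manual) LAG on Cisco switches; the interface range and channel-group number are assumptions, and the LACP version is shown later in this post:

Switch1(config)# interface range Gi0/0 - 1
! "mode on" builds a static port-channel with no negotiation protocol
Switch1(config-if-range)# channel-group 2 mode on
Switch1(config-if-range)# interface port-channel 2
Switch1(config-if)# switchport mode trunk
! The same configuration must be applied on the peer switch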

Benefits of link aggregation:

·   Increased reliability and availability. If one of the physical links in the LAG goes down, traffic is dynamically and transparently reassigned to one of the other physical links.

·   Better use of physical resources. Traffic can be load-balanced across the physical links.

·   Increased bandwidth. The aggregated physical links deliver higher bandwidth than each individual link.

·   Cost effectiveness. A physical network upgrade can be expensive, especially if it requires new cable runs. Link aggregation increases bandwidth without requiring new equipment.

 

LACP :-

Link Aggregation Control Protocol is an IEEE standard defined in IEEE 802.3ad. LACP lets devices send Link Aggregation Control Protocol Data Units (LACPDUs) to each other to establish a link aggregation connection. You still need to configure the LAG on each device, but LACP helps prevent one of the most common problems that can occur during the process of setting up link aggregation: misconfigured LAG settings. If the devices detect that they cannot establish a link aggregation connection, they do not try to establish it, and the link shows as “down” in the admin interface.

Another useful feature of LACP is that when one member link stops sending LACPDUs (if the cable is unplugged, for example), it is removed from the LAG. This helps to minimize packet loss.

Both devices must support LACP for you to set up a dynamic LAG between those devices. We recommend using LACP instead of a static LAG whenever both devices support LACP.

 

To set up link aggregation between two devices in your network:

1.   Make sure that both devices support link aggregation.

2.   Configure the LAG on each of the two devices.

3.   Make sure that the LAG that you create on each device has the same settings for port speed, duplex mode, flow control, and MTU size (on some devices, this setting might be called jumbo frames).

4.   Make sure that all ports that are members of a LAG have the same virtual local area network (VLAN) memberships.
If you want to add a LAG to a VLAN, set up the LAG first and then add the LAG to the VLAN; do not add individual ports.

Do not connect the devices to each other using more than one Ethernet cable until after you set up the LAG on each device. If you form multiple connections between the two devices and neither device has loop prevention, you create a network loop. Network loops can slow or stop normal traffic on your network.

5.   Note which ports on each device you add to the LAG, and make sure that you connect the correct ones.
The LAG issues an alert and rejects the configuration if port members have different settings for port speed, duplex mode, or MTU size, or if you accidentally connect ports that are not members of the LAG.

6.   Use Ethernet or fiber cable to connect the ports that you added to the LAG on each device.

7.   Verify that the port LED for each connected port on each NETGEAR switch is blinking green.

8.   Verify in the admin interface for each device that the link is UP.

 

LACP (LINK AGGREGATION CONTROL PROTOCOL) :

LACP is an open standard protocol and published under the 802.3ad specification. It uses the multicast address of 01-80-c2-00-00-02.

Switch/Router Ports can form an EtherChannel when they are in different LACP modes as per the below criteria –

§  A port in the active mode can form an EtherChannel with another port that is in the active or passive mode.

§  A port in the passive mode cannot form an EtherChannel with another port that is also in the passive mode because neither port starts LACP negotiation.

 

A port in active mode initiates the negotiation with the other side to form an EtherChannel, while a port in passive mode indicates that it can use LACP but only responds to requests and does not send any itself.

LACP modes negotiation –

  • Active + Active: an EtherChannel forms (Yes).
  • Active + Passive: an EtherChannel forms (Yes).
  • Passive + Passive: no EtherChannel forms, because neither port starts LACP negotiation.

 

EtherChannel configuration between Switch1 and Switch2 using LACP modes is shown below –

Switch1(config)#interface range Gi0/0-3
Switch1(config-if-range)#channel-group 1 mode active
Switch1(config-if-range)#interface port-channel 1
Switch1(config-if)#switchport mode trunk

Switch2(config)#interface range Gi0/0-3
Switch2(config-if-range)#channel-group 1 mode passive
Switch2(config-if-range)#interface port-channel 1
Switch2(config-if)#switchport mode trunk

Note – Switch1 is configured in active mode and Switch2 in passive mode so that they negotiate and form the EtherChannel.
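Once both sides are configured and cabled, the bundle can be verified; a sketch (flag letters and exact output vary by IOS version):

Switch1# show etherchannel summary
! Po1 should be flagged as Layer 2 and in use, with Gi0/0-3 listed as bundled members
Switch1# show lacp neighbor
! Lists the partner system ID and LACP activity for each member port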

 

 PAGP (PORT-AGGREGATION PROTOCOL) :

PAgP is a Cisco-proprietary EtherChannel technology. It uses the multicast address 01-00-0C-CC-CC-CC for communication.

Switch/router ports can form an EtherChannel when they are in different PAgP modes as per the below criteria –

§  A port in the desirable mode can form an EtherChannel with another port that is in the desirable or auto mode.

§  A port in the auto mode can form an EtherChannel with another port in the desirable mode.

 

A port in desirable mode actively sends requests to the other side to see whether it is also using PAgP. A port in auto mode is willing to use PAgP but does not send requests itself.

PAgP mode negotiation –

  • Desirable + Desirable: an EtherChannel forms (Yes).
  • Desirable + Auto: an EtherChannel forms (Yes).
  • Auto + Auto: no EtherChannel forms, because neither port starts PAgP negotiation.

 

EtherChannel configuration between Switch1 and Switch2 using PAgP is shown below –

Switch1(config)#interface range Gi0/0-3
Switch1(config-if-range)#channel-group 1 mode desirable
Switch1(config-if-range)#interface port-channel 1
Switch1(config-if)#switchport mode trunk

Switch2(config)#interface range Gi0/0-3
Switch2(config-if-range)#channel-group 1 mode auto
Switch2(config-if-range)#interface port-channel 1
Switch2(config-if)#switchport mode trunk

Note – Switch1 is configured in desirable mode and Switch2 in auto mode so that they negotiate and form the EtherChannel.

 

 

 







