OSPFv2: Database and SPF

When a router receives LSAs from its neighbors, it places them into the link-state database (LSDB) for the corresponding area.

The router then follows a procedure to calculate the best loop-free path to each destination. This computation, the shortest-path first (SPF, or Dijkstra) algorithm, uses three databases to evaluate which routes are the most favoured for installation in the routing table.

Each router applies this algorithm by calculating the least-cost path from the root (itself) to each node that has been discovered via an LSA and installed in the LSDB.
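The SPF computation over an LSDB can be sketched with a minimal Dijkstra implementation. The graph model and the router names and costs below are invented for illustration; a real OSPF implementation works over LSA-derived structures rather than a plain dictionary.

```python
import heapq

def spf(lsdb, root):
    """Dijkstra SPF over an LSDB modelled as {node: {neighbour: cost}}.
    Returns the least-cost distance from the root to every reachable node."""
    dist = {root: 0}
    visited = set()
    heap = [(0, root)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        for neighbour, link_cost in lsdb.get(node, {}).items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                heapq.heappush(heap, (new_cost, neighbour))
    return dist

# Three routers: R1-R2 cost 10, R2-R3 cost 10, R1-R3 cost 30
lsdb = {
    "R1": {"R2": 10, "R3": 30},
    "R2": {"R1": 10, "R3": 10},
    "R3": {"R1": 30, "R2": 10},
}
print(spf(lsdb, "R1"))  # {'R1': 0, 'R2': 10, 'R3': 20}
```

Note that R3 is reached via R2 at cost 20 rather than over the direct cost-30 link, which is exactly the loop-free least-cost behaviour described above.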

 

What is The Internet of Things?

First, some history on The Internet…

The Internet has been with us for almost 50 years. The original concept was created in 1968 by the American Advanced Research Projects Agency (ARPA), now known as DARPA.

The first demonstration of the technology was a pioneering piece of work, created as a concept for sharing digital resources between computer systems that were geographically separate from each other.

The name for this first digital network was ARPANET, and interestingly the packet-switching technologies used in 1969 are still fundamentally the same as those used within the Internet today.

This interesting diagram shows the evolution of The ARPANET from its inception in 1969 until 1977. Initially the network was connected between universities in a small geographic area in western America, before expanding further into universities in eastern and central regions.

[Diagram: ARPANET existed before The Internet]

The birth of TCP/IP and “The Internet Suite”

In the 1970s several significant advances were made, one of which was email, which was developed by Ray Tomlinson, who at the time was a programmer working for a US company called BBN Technologies.

In 1974 a technical proposal was developed by a researcher called Vinton Cerf, conceived to change the architecture of how ARPANET-style networks would connect together: connection-oriented communication would be controlled directly between the hosts attached to the network, rather than centrally by the network itself, as the existing method did.

This development was the beginning of the “inter-network” and was the formula for the TCP/IP protocol that is still the fundamental transport communication layer within The Internet today.

The arrival of TCP/IP was a very important step in the history of The Internet, as it allowed hosts connected to the network to communicate directly; transmitting data at a layer that was logically independent of the physical transmission paths provided by the underlying internetwork.

Evolution into the commercial Internet…

The ARPANET continued to evolve through three decades of innovation, which eventually led to the commercial Internet as we know it today.

VRF Table Label

This command is primarily used to overcome a security feature within Junos that prevents a multiaccess/broadcast attachment circuit from being advertised into MP-BGP. Under normal operation a PE router will only announce a multipoint interface when it receives a dynamic route that uses the link as a next-hop, or a locally configured static route that uses the multiaccess interface as its next hop.

In cases where there is no static or dynamic routing the vrf-table-label command can be added to the VRF configuration. This overrides the default security feature and forces the interface to be exported into MP-BGP without the need for a dependent route.

When the command is applied to a VRF it also adds a second behaviour that changes the way VPN labels are allocated in the core network. Normally each prefix is given a different VPN label; when vrf-table-label is applied, a single VRF label is used for all exported prefixes, which reduces the number of labels consumed in the provider network.

When vrf-table-label is added to the VRF configuration, a software-based LSI interface is created. This interface provides an additional lookup step when a VPNv4 packet enters the PE, which is useful where ingress firewall filters, CoS or additional policy are present: the LSI interface handles the first lookup before a second IP lookup is carried out through the PFE.

On older Juniper routers the software lookup was used when a tunnel-services PIC was not present; the tunnel-services PIC provides a hardware-based logical vt- interface for additional secondary lookup capabilities. On newer MX routers tunnel services is built into the line card.

The example below shows a VRF with the command configured, along with output confirming the LSI interface:

routing-instances {
    green-vpn-a {
        instance-type vrf;
        interface ae0.20;
        route-distinguisher 172.25.1.1:200;
        vrf-target target:45501:200;
        vrf-table-label;
    }
}

lab123@r1> show interfaces terse lsi
Interface               Admin Link Proto    Local    Remote
lsi                     up    up
lsi.0                   up    up   inet
                                   iso

Understanding and configuring Rib Groups

Summary

RIB groups (or Routing Table Groups) are similar to the Auto Export feature, allowing a PE to share local routes across multiple VRFs (or RIBs).

The end goal of Auto Export and a Rib Group is the same; however, the latter is more flexible, allowing more specific configuration whereby individual routes can be leaked between different VRFs based on configuration options and policy. This is more powerful than the more basic VRF import/export statements that Auto Export relies on.

Rib Groups are also bound to specific routing protocols, static routes or directly connected routes, so by nature the policies for leaking routes are more structured and defined.

How to configure a Rib Group

The first configuration step is applied at the global routing-options level to define a Rib Group name. Within that hierarchy, statements declare which RIB (or VRF) routes are taken from and where they are placed. The “import-rib” statement does this: copies of routes are taken from the first RIB listed in the square brackets and placed into the second.

Furthermore, the optional import-policy statement links the Rib Group to a policy that selects which routes are leaked. This is where a Rib Group becomes more flexible: Auto Export relies on the policies applied within the VRF import/export statements, which are fundamental to the structure of the wider VPN topology and should therefore be kept as simple as possible.

In the following configuration green-vpn-a.inet.0 will place a route into green-vpn-b.inet.0:

routing-options {
    rib-groups {
        green-a-to-b {
            import-rib [ green-vpn-a.inet.0 green-vpn-b.inet.0 ];
            import-policy rib-group-green-vpn-policy;
        }
    }
    router-id 172.25.1.1;
    autonomous-system 45501;
}

The import into green-vpn-b.inet.0 is also passed through a policy called rib-group-green-vpn-policy, which has been explicitly configured to only leak a direct 1.1.1.1/32 route:

policy-options {
    policy-statement rib-group-green-vpn-policy {
        term a {
            from {
                protocol direct;
                route-filter 1.1.1.1/32 exact accept;
            }
        }
        term z {
            then reject;
        }
    }
}

The Rib Group is then applied within the routing-options hierarchy of the source VRF, which in this case is green-vpn-a. In addition, the “interface-routes” statement is added so that the PE also copies the directly connected routes in the source VRF that serve as next hops for the leaked routes. This is an important aspect of the configuration: without it the leaked routes would become hidden, because the protocol next hop would not be available in the destination RIB.

Because rib groups operate at a protocol level, the rib group must also be applied to the relevant routing protocols within the source VRF; this allows routes to be picked up from the source protocol and delivered into the rib group. In this configuration the rib group green-a-to-b is placed at the BGP inet unicast level and will therefore match all routes received across that BGP neighbour session.
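A minimal sketch of how this might look in the source VRF, continuing the example above; the BGP group name CE is an assumption, while the rib group and VRF names follow the earlier configuration:

```
routing-instances {
    green-vpn-a {
        routing-options {
            interface-routes {
                rib-group inet green-a-to-b;
            }
        }
        protocols {
            bgp {
                group CE {                  /* assumed group name */
                    family inet {
                        unicast {
                            rib-group green-a-to-b;
                        }
                    }
                }
            }
        }
    }
}
```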

One important aspect of rib groups that differs from auto export is that the BGP split-horizon rule is not considered when leaking routes. When configuring rib groups it is considered best practice to change the vrf-export policy on the destination table to stop the announcement of the leaked routes, protecting against potential routing loops or suboptimal routing. This may not always be required; it depends on the VPN topology and the requirements for route leaking.

http://kb.juniper.net/InfoCenter/index?page=content&id=KB16133&smlogin=true

PHP and the use of Implicit and Explicit Null

Penultimate Hop Popping (Implicit Null)

As MPLS packets pass through an LSP, the tail-end router (the ultimate router) will by default instruct the penultimate router to pop the outer MPLS shim label from packets destined for itself, i.e. the final hop of the LSP. This feature is referred to as Penultimate Hop Popping (PHP) and reduces label and control-plane overhead: the shim label is removed at the penultimate router, so the final router only needs to process the VPN label.

The tail-end router terminating the LSP signals a label value of 3 to the penultimate router, which interprets it as an instruction not to push an MPLS shim label onto any packet destined for the final LSP hop. This is also referred to as Implicit Null and is the default behaviour in Junos.

Explicit Null

When Explicit Null is configured, a label value of 0 is signalled to the penultimate router, instructing it to retain an MPLS shim label on packets destined for the final hop. Explicit Null may be desirable in certain cases, for example to preserve the CoS values attributed to the LSP, which would otherwise be discarded at the penultimate router with PHP enabled.

Configuration Examples

When enabled, Explicit Null causes the tail-end router to signal a label value of 0, so the feature should be applied at that point in the network.

For RSVP the feature is configured under the MPLS protocol hierarchy and applies to all egress LSPs signalled by RSVP:

set protocols mpls explicit-null

To apply the feature to LDP, the statement is configured directly under the LDP protocol hierarchy and applies to all LDP-signalled LSPs:

set protocols ldp explicit-null

Using MP-BGP to signal VPLS

Overview

VPLS is a multipoint Layer 2 VPN technology used to emulate an Ethernet broadcast domain across a service provider network. The technology is in many ways similar to traditional Layer 2 VPNs, however the fundamental difference with VPLS is the use of Ethernet MAC addresses to learn the source of frames within the VPLS, storing them in VPLS MAC tables on PE routers.

Using attachment circuits VPLS can support Ethernet untagged or Ethernet-VLAN tagged encapsulation for either a transparent service or a provider provisioned VLAN tagged service.

Draft Kompella BGP Signalling Approach (RFC 4761)

Juniper’s preferred signalling approach is the MP-BGP auto-provisioning scheme, based on the premise that reusing the existing MP-BGP infrastructure avoids the many targeted LDP sessions that complex VPLS topologies could otherwise require.

In addition, a VPLS signalled using MP-BGP can be over-provisioned, allowing new customer connectivity to be added by simply adding the respective VRF configuration to the PE router serving the newly provisioned site.

Inter-AS operations are also harmonised with the MP-BGP approach, allowing VPLS domains to be easily extended over existing Option B or Option C Interprovider VPNs.

VPLS Instances and BGP NLRI Information

The VRF table handles all of the label allocations and label blocks for each VPLS instance, and a VT interface is created for the transport of label-switched packets ingressing and egressing the PE.

Each VRF table is populated with information received from other PEs by use of the L2 VPN NLRI. This attribute delivers labels and encapsulation so that each local PE can map local site IDs to remote sites.

The site ID, layer 2 encapsulation, logical attachment circuits and label base parameters are used to associate inbound and outbound traffic to each logical VPLS attachment circuit.

The VPLS label mapping process uses the same approach as the L2VPN NLRI. VPLS label mapping information is distributed for each VPN site between PE routers: each PE advertises the label associations for all of its local attachment circuits in one label block, and the BGP L2VPN calculation is used by each remote site to derive the labels to use for inter-site transport.

BGP NLRI Extended Community

The L2VPN NLRI extended community carries the route target, encapsulation type, MTU (of PE to CE link), control flags and site-preference (used for multi-homing). The MTU must match on each PE to CE link within the VPLS as fragmentation is not supported. The preference value is copied into the BGP local preference field and relates to the preference given to different sites, which is used for resilience or multi-homing.

Each PE receives the L2VPN NLRI extended community from all remote PEs associated with the VPLS. For a PE to calculate the egress label used for forwarding to a remote site, the following calculation is used:

remote label base + local site ID – remote label offset.
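For illustration, the calculation above can be sketched in Python; the label base, offset and site ID values below are invented examples, not values Junos would necessarily allocate:

```python
def egress_label(remote_label_base: int, local_site_id: int,
                 remote_label_offset: int) -> int:
    """VPN label a PE uses to reach a remote VPLS site: the local
    site ID indexes into the contiguous label block advertised by
    the remote PE."""
    return remote_label_base + local_site_id - remote_label_offset

# Invented example: remote label base 800000, offset 1, local site ID 3
print(egress_label(800000, 3, 1))  # 800002
```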

For both the BGP and LDP approaches, Junos creates a logical tunnel interface within the PFE for each remote VPLS site. This allows ingress frames to pass through the PFE twice: the first pass pops the MPLS VPN label from the frame, and the second pass carries out MAC address learning and forwarding using the VPLS forwarding tables contained within the VRF.

This approach is different to the L2VPN operation which simply binds labels to attachment circuits and does not rely on MAC learning and forwarding, due to the point to point nature of a traditional L2VPN.

Configuring the Attachment Circuits

To provision a VPLS instance the following parameters must be included in the configuration:

  • Attachment circuit interfaces
  • VPLS routing instance
  • Route Target Community
  • Site ID (unique value in the context of each VPLS)
  • Site range (maximum number of sites to which the local site can connect, this is defined by the label range)
  • Remote sites (labels for remote sites are learnt dynamically via BGP NLRI process)
  • Encapsulation must be VPLS

It’s possible to configure a VPLS using a pre-provisioned VLAN scheme which is presented directly to the CE. The benefit of this approach is that it allows over-provisioning; however, the control of VLAN tagging in this scenario is governed by the PE.

In the example below unit 513 is encapsulated with vlan-id 513 and family vpls. In this scenario the physical interface must be set to vlan-tagging. The example uses the flexible-ethernet-services encapsulation type which allows for multiple per-unit encapsulation types, allowing the interface to support other transport types such as L2VPN using CCC.

It’s worth noting that L2VPN circuits in Junos that use CCC or VPLS physical encapsulation must use VLAN IDs of 512 or above.

[vpls-vlan-attachment configuration example]
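A minimal sketch of the attachment-circuit configuration described above; the interface name ge-0/0/2 and the vlan-vpls unit encapsulation are assumptions, while the unit number, VLAN ID and family follow the text:

```
interfaces {
    ge-0/0/2 {                                 /* assumed interface */
        vlan-tagging;
        encapsulation flexible-ethernet-services;
        unit 513 {
            encapsulation vlan-vpls;           /* assumed unit encapsulation */
            vlan-id 513;
            family vpls;
        }
    }
}
```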

For transparent attachment circuits a different encapsulation type is used. This approach gives the CE the flexibility to push any VLAN tag into a frame entering the VPLS.

interfaces {
    ge-0/0/2 {
        encapsulation ethernet-vpls;
        unit 0 {
            family vpls;
        }
    }
}

VRF Configuration

For BGP NLRI signalling a standard VRF is configured, but with a VPLS instance type. Interfaces are configured as the attachment circuits within the VPLS; only one routing instance is created per VPLS. If an additional interface is configured within the routing instance it is used for multi-homing CE sites into the VPLS. Route targets are used in the same manner as in L2 and L3 VPNs. The VPLS protocol is configured with a site-range value that defines the maximum number of sites to which the local site can connect.

routing-instances vpn-a {
    instance-type vpls;
    interface ge-0/0/1.515;
    vrf-target target:65001:100;
    protocols {
        vpls {
            site-range 20;
            site ce-a {
                site-identifier 1;
            }
        }
    }
}

VPLS Multi-Homing

Multi-homing allows two PEs to connect to a CE device to provide resilience. To prevent a loop, the downstream switch has a primary and a secondary forwarding path towards the PEs. This is controlled via BGP, with a preference feature configured within each PE routing instance that is carried into BGP in the local-preference attribute.

The primary PE is configured as follows:

routing-instances vpn-a {
    instance-type vpls;
    interface ge-0/0/1.515;
    route-distinguisher 192.168.2.2:100;
    vrf-target target:65001:100;
    protocols {
        vpls {
            site-range 20;
            site ce-b {
                site-identifier 2;
                multi-homing;
                site-preference 300;
            }
        }
    }
}

The backup PE is configured with a lower site-preference:

routing-instances vpn-a {
    instance-type vpls;
    interface ge-0/0/1.515;
    route-distinguisher 192.168.2.3:100;
    vrf-target target:65001:100;
    protocols {
        vpls {
            site-range 20;
            site ce-b {
                site-identifier 2;
                multi-homing;
                site-preference 100;
            }
        }
    }
}

Primary and Backup Interfaces

If a PE has multiple VPLS interfaces towards one CE device then the primary and backup feature can be used to prevent loops.

routing-instances vpn-a {
    instance-type vpls;
    interface ge-0/0/1.515;
    interface ge-0/0/2.515;
    interface ge-0/0/3.515;
    vrf-target target:65001:100;
    protocols {
        vpls {
            site-range 20;
            site ce-a {
                site-identifier 1;
                interface ge-0/0/1.515;
            }
            site ce-c {
                site-identifier 3;
                active-interface primary ge-0/0/2.515;
                interface ge-0/0/2.515;
                interface ge-0/0/3.515;
            }
        }
    }
}

BGP L2VPN

Overview

This approach uses MP-BGP to signal the VPN across the MPLS backbone. The Kompella approach allows auto-provisioning of circuits within a VPN mesh: circuit IDs, VPN labels and notification of the other routers in the mesh are all handled by the BGP signalling. The BGP L2VPN uses the Martini encapsulation approach, in which a Martini control word is carried instead of the native Layer 2 header when frames cross the MPLS core.

Standards for L2 VPNs

draft-kompella-l2vpn-l2vpn (BGP L2), RFC 4447 (LDP L2 Circuit), RFC 4761 (BGP VPLS), RFC 4762 (LDP VPLS)

BGP NLRI and Control Plane

The PE router provisions the customer circuit as a logical unit towards the CE device, within a VRF created for each CE device. The PE provisions four elements for each CE site: local site ID, logical interface, interface encapsulation and a label base. The label base is used to associate inbound traffic with the locally provisioned circuit. The PE then receives MP-BGP NLRI updates from remote sites containing the information for their circuits: remote site ID, remote label base and Layer 2 encapsulation.

The VPN NLRI is a subset of the information contained in the connection table of each VRF. One VPN NLRI is sent per VPN site, and the combination of local and remote NLRI information allows the PE to map traffic and circuits across the LSPs connecting the PEs together.

The following process allows the PE routers to exchange NLRIs and map labels to local circuits within their VRF connection tables. For a single circuit, both PEs send NLRIs to each other with their respective label base, block, offset and site ID. For auto-provisioning, the order in which circuits are provisioned towards the CE is important, as labels are allocated locally from the label block in circuit order.

To provision the L2 VPN the following must be met:

  • A VRF is configured for each local CE site
  • Import and export route targets are configured
  • A site id must specifically identify the local site in context of that particular VPN
  • A label range (or label block) is defined (this defines the maximum number of CE devices remotely connected by the L2 VPN)
  • The label base is defined and assigned to the first sub-interface ID. The router reserves a set of contiguous labels defined by the label range/block
  • Sub interfaces are configured and a label from the label block is assigned contiguously to each sub interface and advertised outwards to remote VPN members (VLANs)

One VPN can connect many sites together by using multiple interfaces towards each CE and adding them to the VRF configuration as sites.

Layer 2 NLRI

An NLRI is sent per label block and contains a circuit status vector that can detect a failure of a specific local circuit. When a circuit is detected as failed, a BGP NLRI is sent to the remote PEs to inform them that the PE-to-CE circuit has failed.

Configuration of a p2mp L2VPN with over-provisioning

The following configuration demonstrates a point-to-multipoint L2VPN that connects to two remote sites and has been over-provisioned for two additional sites. The local site is represented by the site-identifier, and each remote site is sequentially assigned a site ID in the order that the sub-interfaces are configured within the VPN. The sequential allocation skips the local site ID so that it is not duplicated. The same process happens at the remote sites, which always begin the allocation from 1 and skip their own local site ID when allocating sub-interfaces to site IDs.

Additional interfaces representing other sites in the VPN are over-provisioned in preparation for the deployment of those sites, which are configured with the respective ordered site IDs. If all interfaces are ordered correctly the over-provisioning will work, and sites and interfaces will be allocated appropriately.
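The ordered allocation described above can be sketched in Python; this is an illustration of the skip-the-local-site rule, not a dump of any Junos mechanism, and the interface names are taken from the example below:

```python
def map_subinterfaces_to_sites(local_site_id, subinterfaces):
    """Assign each sub-interface a remote site ID in configuration
    order, skipping the local site's own ID (no circuit is needed
    back to the local site itself)."""
    mapping = {}
    remote_id = 1
    for ifl in subinterfaces:
        if remote_id == local_site_id:
            remote_id += 1  # skip our own site ID
        mapping[ifl] = remote_id
        remote_id += 1
    return mapping

# Site 1 with four ordered sub-interfaces: .512 maps to site 2, and so on
print(map_subinterfaces_to_sites(
    1, ["ge-0/0/0.512", "ge-0/0/0.513", "ge-0/0/0.514", "ge-0/0/0.515"]))
# Site 2 with one sub-interface: .512 maps back to site 1
print(map_subinterfaces_to_sites(2, ["ge-0/0/0.512"]))
```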

routing-instances vpn-a {
    instance-type l2vpn;
    interface ge-0/0/0.512;
    interface ge-0/0/0.513;
    interface ge-0/0/0.514;
    interface ge-0/0/0.515;
    route-distinguisher 172.25.0.1:100;
    vrf-target target:65001:100;
    protocols {
        l2vpn {
            encapsulation-type ethernet-vlan;
            site CE-A {
                site-identifier 1;
                interface ge-0/0/0.512;
                interface ge-0/0/0.513;
                interface ge-0/0/0.514;
                interface ge-0/0/0.515;
            }
        }
    }
}

The configuration below shows how site 2 would be configured to connect the logical .512 circuit back to site 1.

routing-instances vpn-a {
    instance-type l2vpn;
    interface ge-0/0/0.512;
    route-distinguisher 172.25.0.2:100;
    vrf-target target:65001:100;
    protocols {
        l2vpn {
            encapsulation-type ethernet-vlan;
            site CE-B {
                site-identifier 2;
                interface ge-0/0/0.512;
            }
        }
    }
}

The interface must be configured with vlan-ccc encapsulation at both the physical and logical levels. When using CCC encapsulation, VLAN IDs must be 512 or above.

interfaces {
    ge-0/0/1 {
        vlan-tagging;
        encapsulation vlan-ccc;
        unit 10 {
            encapsulation vlan-ccc;
            vlan-id 10;
        }
    }
}

routing-instances {
    L2VPN_A {
        instance-type l2vpn;
        interface ge-0/0/1.10;
        vrf-target target:65001:100;
        protocols {
            l2vpn {
                encapsulation-type vlan-ccc;
                site CE_A {
                    site-identifier 1;
                    interface ge-0/0/1.10;
                }
            }
        }
    }
}

Interworking Protocol

Interworking is a feature within Junos that allows Layer 2 VPNs to be stitched together. Traditionally a loopback was performed within the tunnel-services PIC to provide the stitching; this is now handled in software via the logical iw0 interface, which removes some of the bandwidth constraints associated with the traditional approach.

The interworking between the two VPNs is carried out on an intermediary PE router. To configure the feature, two logical units are created on the iw0 interface, with a peer-unit statement under each unit to link the units together.

interfaces {
    iw0 {
        unit 100 {
            encapsulation vlan-ccc;
            vlan-id 100;
            peer-unit 101;
        }
        unit 101 {
            encapsulation vlan-ccc;
            vlan-id 101;
            peer-unit 100;
        }
    }
}
The protocol must be enabled:

protocols {
    l2iw;
}

The routing instances that require interworking must have the logical iw0 units associated at the VRF level and at the l2vpn site level. The routing instance has no association with the interfaces that form part of the existing VPN, as the feature is purely enabled to interconnect two existing VPN sites from different routing instances.

routing-instances vpn-a {
    instance-type l2vpn;
    interface iw0.100;
    route-distinguisher 172.25.0.1:100;
    vrf-target target:65001:100;
    protocols {
        l2vpn {
            encapsulation-type ethernet-vlan;
            site CE-3 {
                site-identifier 3;
                interface iw0.100;
                remote-site 1;
            }
        }
    }
}

routing-instances vpn-b {
    instance-type l2vpn;
    interface iw0.101;
    route-distinguisher 172.25.0.1:101;
    vrf-target target:65001:101;
    protocols {
        l2vpn {
            encapsulation-type ethernet-vlan;
            site CE-3 {
                site-identifier 3;
                interface iw0.101;
                remote-site 2;
            }
        }
    }
}

CoS and Path Selection