Implicit and Explicit Null

Section 1: Implicit Null (Penultimate Hop Popping)

By default, as MPLS packets pass through an LSP, the tail-end MPLS router instructs the penultimate router not to push an MPLS Shim Label onto packets destined towards itself, i.e. the final hop of the LSP; the penultimate router simply pops the outer label instead.

This behaviour is often referred to as Penultimate Hop Popping (PHP). It reduces label and control-plane overhead by popping the MPLS Shim Label at the penultimate router, so the final PE router need only process the inner VPN label before sending packets onward to an attached device.

The process works by the tail-end router signalling the reserved label value of 3 to the penultimate router. This value is never carried in the data plane; the penultimate router interprets it as an instruction not to push an MPLS Shim Label onto packets destined for the final hop.

This action is the default behaviour in Junos, and applies to both LDP and RSVP signalled LSPs.
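The per-hop label operations for a packet carrying a VPN service label can be sketched as follows (the transport label values are purely illustrative):

Ingress PE     push [ 299952 | VPN 16 ] onto the IP packet
Transit P      swap outer label 299952 -> 299920
Penultimate P  pop outer label (as instructed by the signalled label 3)
Egress PE      pop VPN label 16, forward the IP packet to the CE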

Section 1.1: Explicit Null

The opposite behaviour is to retain an MPLS Shim Label on packets destined for the final router within the LSP, which requires a change to the configuration.

In certain MPLS networks this may be desirable, for example to maintain end-to-end QoS markings carried in the EXP bits of the Shim Label, which the tail-end PE could then use to apply some sort of policy or action.

When Explicit Null is configured, the reserved label value of 0 is signalled to the penultimate router, which instructs this downstream router to keep an MPLS Shim Label (with value 0) on packets that are destined for the final hop.
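With Explicit Null the outer label is therefore retained all the way to the tail end; the per-hop operations can be sketched as follows (transport label values are purely illustrative):

Ingress PE     push [ 299952 | VPN 16 ] onto the IP packet
Transit P      swap outer label 299952 -> 299920
Penultimate P  swap outer label 299920 -> 0 (Explicit Null)
Egress PE      pop label 0 (its EXP bits still readable), pop VPN label 16, forward the IP packet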

Section 2: Demonstrating on the CLI

We will first look at the configuration using LDP as the signalling protocol, using a simple topology of four MPLS-enabled PEs signalling LSPs between each other. Once that is understood we will follow with an example using RSVP.

Figure 1: Network Topology


Section 2.1: LDP Configuration

By default Junos uses Implicit Null to signal MPLS LSPs. Figures 2.1 and 2.2 show the LDP database from the perspective of DC1-PE1 and DC2-PE1 respectively; the LDP database is a reflection of the LDP control plane, so the reference to “output” refers to the label that the local PE is signalling downstream, and the reference to “input” refers to the label that the local PE has received from its LDP neighbour.

Under normal operation, signalling between PE1 in DC1 and PE1 in DC2 uses a label value of 3 for both input and output. This value instructs each PE not to push an MPLS Shim Label onto packets being sent to one another.

Figure 2.1: Labels received from DC2-PE1

lab@lab-dc1-pe1> show ldp database 
Input label database, 172.16.1.1:0--172.16.2.1:0
Labels received: 2
  Label     Prefix
 299920      172.16.1.1/32
      3      172.16.2.1/32

Output label database, 172.16.1.1:0--172.16.2.1:0
Labels advertised: 2
  Label     Prefix
      3      172.16.1.1/32
 299952      172.16.2.1/32

Figure 2.2: Labels received from DC1-PE1

lab@lab-dc2-pe1> show ldp database    
Input label database, 172.16.2.1:0--172.16.1.1:0
Labels received: 2
  Label     Prefix
      3      172.16.1.1/32
 299952      172.16.2.1/32

Output label database, 172.16.2.1:0--172.16.1.1:0
Labels advertised: 2
  Label     Prefix
 299920      172.16.1.1/32
      3      172.16.2.1/32

Section 2.2: Changing LDP to Explicit Null

When enabled, Explicit Null will signal a label value of 0 downstream from each tail-end PE. Let’s now apply the relevant command to the LDP configuration on both DC1-PE1 and DC2-PE1 and see what happens to the LDP control plane; the command is applied directly under the LDP protocol hierarchy, which globally sets all configured LSPs to Explicit Null.

set protocols ldp explicit-null

Figure 3.1: Labels received from DC1-PE1

As predicted once the above command has been applied we now see DC1-PE1 signalling and receiving a label value of 0.

lab@lab-dc1-pe1> show ldp database 
Input label database, 172.16.1.1:0--172.16.2.1:0
Labels received: 2
  Label     Prefix
 299888      172.16.1.1/32
      0      172.16.2.1/32

Output label database, 172.16.1.1:0--172.16.2.1:0
Labels advertised: 2
  Label     Prefix
      0      172.16.1.1/32
 299920      172.16.2.1/32

Figure 3.2: Labels received from DC2-PE1

As expected the input and output labels are also reflected on DC2-PE1.

lab@lab-dc2-pe1> show ldp database 
Input label database, 172.16.2.1:0--172.16.1.1:0
Labels received: 2
  Label     Prefix
      0      172.16.1.1/32
 299920      172.16.2.1/32

Output label database, 172.16.2.1:0--172.16.1.1:0
Labels advertised: 2
  Label     Prefix
 299888      172.16.1.1/32
      0      172.16.2.1/32

Section 3.1: RSVP Configuration

For RSVP signalled LSPs we will first observe the Implicit Null default behaviour, signalling a label value of 3.

Figure 4.1: Labels sent and received from DC1-PE1

From the output below we can see that the LSP from DC1-PE1 to DC2-PE1 is signalled with a label value of 3, shown in the “Labelout” column; this is interpreted by the remote PE as Implicit Null.

In the opposite direction DC2-PE1 has established an LSP towards the local PE, informing the local PE that it will receive packets with Implicit Null applied (using a label value of 3), as shown in the “Labelin” column.

lab@lab-dc1-pe1> show rsvp session    
Ingress RSVP: 1 sessions
To              From            State   Rt Style Labelin Labelout LSPname 
172.16.2.1      172.16.1.1      Up       0  1 FF       -        3 dc1-pe1-to-dc2-pe1
Total 1 displayed, Up 1, Down 0

Egress RSVP: 1 sessions
To              From            State   Rt Style Labelin Labelout LSPname 
172.16.1.1      172.16.2.1      Up       0  1 FF       3        - dc2-pe1-to-dc1-pe1
Total 1 displayed, Up 1, Down 0

Transit RSVP: 0 sessions
Total 0 displayed, Up 0, Down 0

lab@lab-dc1-pe1> 

Figure 4.2: Labels sent and received from DC2-PE1

From the perspective of DC2-PE1 we effectively see the same behaviour in reverse. The local PE is signalling an LSP to DC1-PE1 with a label value of 3, meaning all packets entering this LSP will have Implicit Null applied.

The LSP arriving from DC1-PE1 is signalled with a Labelin value of 3, informing the local PE that Implicit Null is applied to packets arriving on the LSP.

lab@lab-dc2-pe1> show rsvp session 
Ingress RSVP: 1 sessions
To              From            State   Rt Style Labelin Labelout LSPname 
172.16.1.1      172.16.2.1      Up       0  1 FF       -        3 dc2-pe1-to-dc1-pe1
Total 1 displayed, Up 1, Down 0

Egress RSVP: 1 sessions
To              From            State   Rt Style Labelin Labelout LSPname 
172.16.2.1      172.16.1.1      Up       0  1 FF       3        - dc1-pe1-to-dc2-pe1
Total 1 displayed, Up 1, Down 0

Transit RSVP: 0 sessions
Total 0 displayed, Up 0, Down 0

lab@lab-dc2-pe1> 

Section 3.2: Changing RSVP to Explicit Null

Let’s now apply the same command to the RSVP configuration on both DC1-PE1 and DC2-PE1 and observe the results; in the case of RSVP the command is applied under the MPLS protocol hierarchy, where all RSVP signalled LSPs are configured in Junos.

set protocols mpls explicit-null

Figure 5.1: Labels sent and received from DC1-PE1

As predicted once the above command has been applied we now see DC1-PE1 signalling and receiving a label value of 0.

lab@lab-dc1-pe1> show rsvp session    
Ingress RSVP: 1 sessions
To              From            State   Rt Style Labelin Labelout LSPname 
172.16.2.1      172.16.1.1      Up       0  1 FF       -        0 dc1-pe1-to-dc2-pe1
Total 1 displayed, Up 1, Down 0

Egress RSVP: 1 sessions
To              From            State   Rt Style Labelin Labelout LSPname 
172.16.1.1      172.16.2.1      Up       0  1 FF       0        - dc2-pe1-to-dc1-pe1
Total 1 displayed, Up 1, Down 0

Transit RSVP: 0 sessions
Total 0 displayed, Up 0, Down 0

Figure 5.2: Labels sent and received from DC2-PE1

As expected the input and output labels are also reflected on DC2-PE1.

lab@lab-dc2-pe1> show rsvp session 
Ingress RSVP: 1 sessions
To              From            State   Rt Style Labelin Labelout LSPname 
172.16.1.1      172.16.2.1      Up       0  1 FF       -        0 dc2-pe1-to-dc1-pe1
Total 1 displayed, Up 1, Down 0

Egress RSVP: 1 sessions
To              From            State   Rt Style Labelin Labelout LSPname 
172.16.2.1      172.16.1.1      Up       0  1 FF       0        - dc1-pe1-to-dc2-pe1
Total 1 displayed, Up 1, Down 0

Transit RSVP: 0 sessions
Total 0 displayed, Up 0, Down 0

Manipulating the AS

Within Junos, several tools are available to manipulate the AS Path, which can be useful when BGP is deployed as a PE to CE routing protocol across multiple VPN sites. Due to the built-in loop-prevention mechanisms of BGP, a VPN site announcing a BGP route into a VRF may cause an AS loop to be detected when the route is advertised into the same AS number again at another VPN site. Several features are available in Junos to overcome this limitation.

AS Override

AS Override is used by a BGP speaker (normally a PE router) to replace a remote AS number in the path with its own AS number before advertising the route onward, so the route arrives downstream with two identical AS numbers in the path. For example, a route originated in AS 2 would normally be received at a second AS 2 site with a path of [1 2] and be rejected; with AS Override it arrives with [1 1] and is accepted. This feature is often used in MPLS networks that provide connectivity to a single AS distributed geographically across multiple VPN sites.

Depending on the requirements, AS Override can be configured under the BGP global, group or neighbour configuration. The following example shows it applied at the neighbour level, overriding the remote AS of 2 with the local AS of 1.

protocols {
    bgp {
        group ASN1 {
            neighbor 1.1.1.2 {
                local-address 1.1.1.1;
                family inet {
                    unicast;
                }
                as-override;
                peer-as 2;
            }
        }
    }
}

AS Loops

AS Loops is another approach to overcoming AS path loop detection when multiple VPN sites announce BGP routes from the same AS. The feature allows a CE device to accept a BGP route that has been advertised with the CE router’s local AS in the path.

The following example demonstrates how the AS loops function is configured on the CE router. The command instructs the CE to accept routes with its own AS in the path up to the specified number of times.

protocols {
    bgp {
        group ASN2 {
            neighbor 1.1.1.1 {
                local-address 1.1.1.2;
                family inet {
                    unicast;
                }
                autonomous-system loops 1;
                peer-as 1;
            }
        }
    }
}

Configuration is also required on the PE router, to allow the PE to announce a BGP route to a neighbour whose AS already appears in the path.

protocols {
    bgp {
        group ASN1 {
            neighbor 1.1.1.2 {
                local-address 1.1.1.1;
                family inet {
                    unicast;
                }
                advertise-peer-as;
                peer-as 2;
            }
        }
    }
}

 

Internet Access Options

According to RFC 4364 there are several methods for accessing the public Internet using MPLS VPNs. Each approach uses either “VRF aware” or “non VRF aware” procedures for delivering Internet access to a CE device.

Option 1.1 (Third Party Provider)

In this scenario the provider simply does not participate in any Internet gateway service, and access is provided by a third-party network.

Option 1.2 (PE provides Layer 2 VPN towards Internet Gateway)

In this scenario a separate peering router connects the provider’s network to the Internet. Access for the customer CE is provided using an MPLS Layer 2 VPN, using CCC Ethernet encapsulation to present the attachment circuit to the CE. The attachment circuit does not necessarily need a separate physical interface, as it can be presented as a different logical interface over the customer’s existing circuit.

Option 2.1 (Separate interfaces for VPN and Internet Gateway)

In this scenario the CE router connects to the PE using two separate logical interfaces. The interface that connects to the Internet is part of the global routing table on both the CE and PE, and is therefore non VRF aware. A second, VRF aware, interface between the CE and PE carries the customer’s VPN routes as part of their internal routing domain.

The PE carries all customer public routes within its global routing table and advertises them to the Internet. A default route, or a full or partial Internet table, is advertised from the PE to the CE over the non VRF aware interface. Using a rib-group (or a static route with a next-hop table statement), each CE that requires Internet access exports the default route into the internal VPN; to provide inbound routing from the Internet, the customer’s public address space is exported from the internal VPN into the CE’s global routing table.

Option 2.2 (Internet routes within the VRF table on the PE)

In this scenario the CE router again connects to the PE using two separate logical interfaces, one non VRF aware and one VRF aware. All outbound traffic is carried over the VRF aware interface; when it arrives at the PE, a static default route with a next-hop table statement directs the outbound traffic to the global routing table.

Traffic arriving at the PE from the Internet is directed to the CE over the non VRF aware interface. Either a rib-group or static next-hop table configuration routes traffic towards the customer VPN table.

Option 2.3 (Single interface for VPN and Internet access)

In this scenario a single VRF aware interface is used between the CE and the PE. All public and private routes are carried within the VPN; if BGP is used between the CE and PE, the public routes can be tagged with a community, so that when they arrive at the PE they can be exported into the global routing table using a rib-group that matches on this community. All other private VPN routes remain within the customer domain.

To carry outbound traffic outside of the customer VRF, a default route with a next-hop table statement is configured on the PE. The next-hop table is set to the global routing table, which allows outbound traffic to reach the Internet.
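As a sketch, this default route might look like the following on the PE; the VRF name vpn-a is hypothetical, and the Junos keyword for the next-hop table statement is next-table:

routing-instances {
    vpn-a {
        routing-options {
            static {
                route 0.0.0.0/0 next-table inet.0;
            }
        }
    }
}

The next-table statement points the route lookup at the global table inet.0, so traffic leaving the VRF follows the PE’s Internet routing.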

Option 3 (Central hub site with separate interfaces for VPN and Internet gateway)

In this scenario a central CE router connects both a non VRF and a VRF interface to the PE router. For outbound traffic, a default route is generated by the PE in its global routing table and sent to the central CE over the non VRF interface. The default route is then given a next-hop table statement within the CE’s VRF table; this default route is also advertised to all remote CEs within the VPN to provide outbound connectivity for the whole VPN domain.

Inbound traffic is routed to the PE within its global routing table, and customer Internet routes are advertised by the central CE towards the PE using the non VRF interface. The central CE then uses the next-hop table statement to forward inbound Internet traffic into the VPN table, providing connectivity for the rest of the VPN domain.

BGP Independent Domain

What Exactly Does This Feature Do?

Imagine a scenario where a service provider has a requirement to peer their PE to a remote device using iBGP, and this remote device could be a customer CE, or a third party router that needs to send traffic across the service provider core to another remote site.

As the customer routes are sent over the provider core, the service provider’s AS would be added to the AS path of the customer routes before these routes are announced to the end site.

There are two issues with this setup. Firstly, the BGP split horizon rule would prevent the routes from being announced to the remote site, as the customer AS would already exist in the path. Secondly, the provider (or customer) may wish for the routes to be announced transparently between the two end points, without the provider AS being seen in the path.

How To Implement An Independent Domain

The VRF on the PE router is configured with the CE router’s AS number; however, this AS is not carried within the AS path attribute when the customer prefixes are carried across the provider’s core.

The reason for this is that the customer AS is removed from the AS path attribute and added to another attribute called ATTRSET; this effectively means that the customer AS is carried independently through the provider’s core, and does not affect the AS path used within the provider’s MPLS network.

When the routes arrive at a remote PE that is configured with BGP independent domain, the original customer AS is taken from the ATTRSET and placed back into the AS path.

Configuration Example

The configuration is applied to the customer VRF under the routing-options hierarchy. The “independent-domain” statement is appended to the AS number of the VRF, which in this case is 65501.

The command should be applied on all PE routers that form part of the VPN and the independent domain.

routing-instances {
    vpn-a {
        instance-type vrf;
        route-distinguisher 172.21.0.2:4;
        vrf-import vpn-a-import;
        vrf-export vpn-a-export;
        routing-options {
            autonomous-system 65501 independent-domain;
        }
    }
}

Wrapping It Up

This feature is a pretty straightforward way of getting around a problem; however, in most operational scenarios it would probably be wiser to use some sort of Layer 2 connection and allow the customer to peer directly between the two end points.

If the remote device is a managed CE, then there might be a case for using BGP independent domain; however, other methods, such as AS Override, offer similar solutions, which I’ll be discussing in another article.

 

RIB Groups

Section 1: The Requirement For RIB Groups

This article is going to take you through the process of setting up a RIB group, and will help to clear up any misunderstanding of how this feature works, and how it differs from other methods of route leaking.

Due to the way in which Juniper uses certain terminology, this feature can seem confusing and convoluted. What’s more confusing is that both auto-export and RIB groups can provide route leaking, so why bother having both?

Well, the answer is simply that RIB groups are more powerful than traditional route leaking methods. RIB groups leak routes during the router’s RIB-in/import process, as opposed to the more commonly understood method of using route-target import and export policies between VRFs.

The benefit of using a RIB group over traditional configuration is that by decoupling route leaking from the VRF import/export policy and applying it at the protocol level (which is how a RIB group works), you have more flexibility to apply granular policy that is not convoluted or tied into another process the router is handling.

So by design, it makes more sense to use VRF policies for their sole purpose (controlling the routes that are imported into and exported from MP-BGP), and to apply route leaking policies at a local protocol level, using a different set of policies and commands.

Section 2: Setting Up The Lab

In order to demonstrate how this works I’m going to use the lab environment detailed below to leak a number of routes between two VRFs.

The lab will consist of these elements:

  • Two PE routers known as r2 and r4 connected using MP-BGP
  • Two VRFs known as vpn-b and vpn-c configured on each of the PEs
  • Site A CPE2 attached to the r2 PE
  • Site B CPE1 attached to the r4 PE

The IP addresses and associated VRFs shown below will be used to simulate route leaking between the two VRFs known as vpn-b and vpn-c:

  • Site A CPE2 Lo.10 172.18.0.1/32 inside vpn-b
  • Site B CPE1 Lo.11 172.18.1.2/32 inside vpn-c

Figure 2.1: Lab Topology

The diagram below shows the network topology with each CPE attached to each PE. The CPE in site A has an interface in vpn-b, and the CPE in site B has an interface in vpn-c.


The purpose of the lab will be to provide connectivity between the loopback addresses on each of the CPEs; this will require bidirectional route leaking on each local PE, using a RIB group.

I will document each step of the configuration, and will explain how each step relates to the process of route leaking within the RIB group.

At the end of the lab some pings will be sent between the two CPEs, to confirm the RIB group configuration has been successful.

Section 3: Creating The RIB Group

Under the global routing-options hierarchy, the RIB group should be named first, within the rib-groups stanza. The RIB group can be referenced at various protocol and RIB levels, so keeping the main policy and configuration at a global level makes sense.

Once the RIB group has been named, subsequent parameters define the source RIB that routes are taken from, and the destination RIB(s) they are copied to.

As shown in figure 3.1, the RIB group is defined on PE r2, with the import-rib statement listing the RIBs that are in scope for route leaking.

The router interprets the first RIB in the square brackets (vpn-b.inet.0) as the source table (where routes are copied from), and any subsequent RIBs as the destination tables the routes will be copied to.

Note: The configuration created here includes an optional import policy, which is used to explicitly state the prefix and protocol that will be imported into the RIB group.

Figure 3.1

routing-options {
    rib-groups {
        vpn-b-to-vpn-c {
            import-rib [ vpn-b.inet.0 vpn-c.inet.0 ];
            import-policy rib-group-vpn-b-to-c;
        }
    }
}

Figure 3.2

The import-policy referenced in the above configuration is shown below. Applying this gives granularity to the routes that are going to be leaked into vpn-c.

Without this policy all routes in vpn-b would be leaked into vpn-c.

policy-options {
    policy-statement rib-group-vpn-b-to-c {
        term 10 {
            from {
                protocol bgp;
                route-filter 172.18.0.1/32 exact;
            }
            then accept;
        }
        term 100 {
            then reject;
        }
    }
}

Figure 3.3

I have included some outputs of the current VRFs on PE r2. The outputs specifically query the route that I wish to leak into vpn-c.

lab@r2# run show route table vpn-b.inet.0 172.18.0.1/32 exact
vpn-b.inet.0: 11 destinations, 12 routes (11 active, 0 holddown, 0 hidden)
+ = Active Route, – = Last Active, * = Both
172.18.0.1/32 *[BGP/170] 03:54:32, localpref 100
AS path: 65501 I, validation-state: unverified
> to 172.17.0.2 via ge-0/0/0.30

Figure 3.4

As we can see PE r2’s vpn-c table currently has no route to 172.18.0.1/32, because this route is learnt via an attached CPE in vpn-b.

lab@r2# run show route table vpn-c.inet.0 172.18.0.1/32 exact
[edit]
lab@r2#

Section 4: Applying The Configuration To BGP

As previously explained route leaking using RIB groups is carried out at the protocol level, as routes enter RIB-in.

Based on that theory, the configuration shown in figure 4.1 will be applied to the BGP protocol in vpn-b, under the respective address family; applying it at the address-family level makes sense, as Junos stores address families in different RIBs, such as inet.0 for IPv4 and inet6.0 for IPv6.

The configuration calls the RIB group previously set up under the global routing-options hierarchy, thus inheriting all policy defined within that configuration.

Figure 4.1

routing-instances vpn-b {
    protocols {
        bgp {
            group vpn-b {
                neighbor 172.17.0.2 {
                    description vpn-b-test-ebgp;
                    mtu-discovery;
                    family inet {
                        unicast {
                            rib-group vpn-b-to-vpn-c;
                        }
                    }
                    peer-as 65501;
                }
            }
        }
    }
}

Figure 4.2

Once the above configuration has been committed, the vpn-c routing table on the r2 PE can be checked again.

The output shown below confirms that the route leaking configuration has been successful, as vpn-c now has a route to the loopback address of site A CPE2.

This isn’t the end of the task though, as there are a number of additional steps needed before a ping can succeed from site B CPE1 (via vpn-c) to the route that has just been leaked from vpn-b.

lab@r2# run show route table vpn-c.inet.0 172.18.0.1/32 exact
vpn-c.inet.0: 5 destinations, 5 routes (5 active, 0 holddown, 0 hidden)
+ = Active Route, – = Last Active, * = Both
172.18.0.1/32 *[BGP/170] 01:14:50, localpref 100
   AS path: 65501 I, validation-state: unverified
> to 172.17.0.2 via ge-0/0/0.30
[edit]
lab@r2#

Section 5: Next Hop Interfaces

The previous steps have enabled the loopback address 172.18.0.1/32 from site A CPE2 to be successfully leaked into vpn-c, with a next hop address of 172.17.0.2, via ge-0/0/0.30.

This next hop interface presents a problem though, as it exists exclusively in vpn-b; we need this next hop available in vpn-c, otherwise routing to the destination loopback address in vpn-b is not going to be possible.

Figure 5.1

To confirm the current situation, the output below from PE r2 shows there is currently no route in vpn-c to the next hop address of 172.17.0.2.

lab@r2# run show route table vpn-c 172.17.0.2
[edit]
lab@r2#

Section 6: Understanding Interface Routes

Because RIB groups act on protocols at RIB-in, some additional configuration is needed to leak the connected next hop interface route into vpn-c; this is taken care of using the interface-routes statement.

We know that RIB groups operate at a protocol RIB-in level, so it should be expected that when handling connected routes the commands will look slightly different to those used when importing from a dynamic routing protocol.

Figure 6.1

Below you can see the interface-routes statement being applied under the routing-options hierarchy within the source routing instance, which in our case is vpn-b.

This command will use the previously defined RIB group policy, set up under the global routing-options hierarchy, to match any directly connected routes within vpn-b, thus leaking them into vpn-c.

routing-instances vpn-b {
    routing-options {
        interface-routes {
            rib-group inet vpn-b-to-vpn-c;
        }
    }
}

Figure 6.2

Because the interface-routes command references the main RIB group configuration, it will of course inherit any policy previously applied to the RIB group.

To allow the next hop address to leak into vpn-c, a change is also required to the initial policy I created to control which routes can be leaked into vpn-c.

Shown below is an additional term (term 20) that has been added to that policy to enable this requirement.

policy-options {
    policy-statement rib-group-vpn-b-to-c {
        term 20 {
            from {
                protocol direct;
                route-filter 172.17.0.0/30 exact;
            }
            then accept;
        }
    }
}

Figure 6.3

Once the configuration is committed, vpn-c has acquired the next hop route 172.17.0.0/30.

lab@r2# run show route table vpn-c 172.17.0.0
vpn-c.inet.0: 6 destinations, 6 routes (6 active, 0 holddown, 0 hidden)
+ = Active Route, – = Last Active, * = Both
172.17.0.0/30 *[Direct/0] 00:47:50
> via ge-0/0/0.30
[edit]
lab@r2#

Section 7: Final Steps

Theoretically vpn-c now has all of the required routes to forward packets towards the CPE2 loopback address in vpn-b at site A.

To establish end to end connectivity, routing has to work in both directions; to achieve this, the steps taken above will need to be implemented in reverse, so that a loopback address originated in vpn-c can also be reached from the CPE at site A.

Because the fundamental steps of route leaking have been detailed in the previous sections I’m going to summarise this configuration in a few figures below.

Figure 7.1

Here you can see the creation of the reverse RIB group on PE r2, where everything is effectively reversed to leak vpn-c routes back into vpn-b.

routing-options {
    rib-groups {
        vpn-c-to-vpn-b {
            import-rib [ vpn-c.inet.0 vpn-b.inet.0 ];
            import-policy rib-group-vpn-c-to-b;
        }
    }
}

Figure 7.2

The RIB group policy that stipulates route leaking of the site B loopback address and next hop interface in vpn-c is shown below.

policy-options {
    policy-statement rib-group-vpn-c-to-b {
        term 10 {
            from {
                protocol bgp;
                route-filter 172.18.1.2/32 exact;
            }
            then accept;
        }
        term 20 {
            from {
                protocol direct;
                route-filter 172.17.1.4/30 exact;
            }
            then accept;
        }
        term 100 {
            then reject;
        }
    }
}

Figure 7.3

The final configuration step is to apply the RIB group on PE r2 under the BGP protocol hierarchy in vpn-c, as shown below.

routing-instances vpn-c {
    protocols {
        bgp {
            group vpn-c {
                neighbor 172.17.1.6 {
                    description vpn-c-test-ebgp;
                    mtu-discovery;
                    family inet {
                        unicast {
                            rib-group vpn-c-to-vpn-b;
                        }
                    }
                    peer-as 65503;
                }
            }
        }
    }
}

Section 8: Testing Connectivity

Now that everything is in place we can test connectivity, using a ping sourced from the CPE at site A, from its test loopback address to the test loopback address of the CPE at site B.

The results of which are shown below in figure 8.1.

Figure 8.1

lab123@site-a-cpe-1# …rce 172.18.0.1 routing-instance vpn-b
PING 172.18.1.2 (172.18.1.2): 56 data bytes
64 bytes from 172.18.1.2: icmp_seq=0 ttl=62 time=24.225 ms
64 bytes from 172.18.1.2: icmp_seq=1 ttl=62 time=20.661 ms
64 bytes from 172.18.1.2: icmp_seq=2 ttl=62 time=20.334 ms
64 bytes from 172.18.1.2: icmp_seq=3 ttl=62 time=20.884 ms

Appendix A: Link-State Protocols (OSPF/ISIS)

Because route leaking with RIB groups is applied at the RIB in (or import) level, configuring this feature with link-state protocols is effectively the same process as when using BGP.

For this reason I have not included an example of how to do this with OSPF or IS-IS. One can assume the configuration is much the same: you simply create the RIB group at the global routing-options hierarchy and then apply it directly under the protocol hierarchy.
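As a sketch, applying the same RIB group to OSPF within the source routing instance might look like this (the interface name is illustrative):

routing-instances vpn-b {
    protocols {
        ospf {
            rib-group vpn-b-to-vpn-c;
            area 0.0.0.0 {
                interface ge-0/0/2.0;
            }
        }
    }
}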

Appendix B: Notes regarding BGP Split Horizon

One important aspect of RIB groups that differs from auto-export is that the BGP split horizon rule is not considered when leaking routes. When configuring RIB groups it’s considered best practice to change the vrf-export policy on the destination table to stop the announcement of the leaked routes; this protects against potential routing loops or sub-optimal routing. It may not always be required, depending on the VPN topology and the requirements for applying route leaking.
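As an illustrative sketch, assuming the destination VRF uses an export policy named vpn-c-export (a hypothetical name), a term such as the following could be added ahead of the existing terms to stop the leaked prefix from being announced:

policy-options {
    policy-statement vpn-c-export {
        term block-leaked {
            from {
                route-filter 172.18.0.1/32 exact;
            }
            then reject;
        }
        /* existing terms attaching the vpn-c route target follow */
    }
}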
