New Member
Posts: 6
Registered: ‎02-21-2018

Traffic Policy not Applied to Interface


I'm using the EdgeRouter Pro-8. I'm trying to set a drop-tail queue on interface eth7. Prior to setting the queue, the queue depth appears to be about 8 ms worth of packets, and it behaves like a drop-tail queue: after congesting the interface, pings to the far side of the link consistently took 8 ms, as expected.

 

I need to reduce the queue depth to 100 packets to improve latency during congestion. Following the instructions at https://help.ubnt.com/hc/en-us/articles/216787288-EdgeRouter-Quality-of-Service-QoS-#3, I entered these commands in the CLI:

set traffic-policy drop-tail policy1 queue-limit 100
set traffic-policy drop-tail policy1 description "limit queue 100"
set interfaces ethernet eth7 traffic-policy out policy1

I then congested the interface with TCP traffic, but the ping time under congestion increased to 12 ms. It should have been roughly 100 packets * 1500 bytes * 8 bits/byte / 1 Gb/s = 1.2 ms. The output of

show queueing

showed 0 dropped packets, which should not be the case.
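
For reference, here is how I double-checked whether the policy actually got attached, by looking at the underlying Linux qdisc (my understanding, which may be off, is that EdgeOS builds traffic-policies on top of Linux tc, so the installed qdisc should show up here):

sudo tc qdisc show dev eth7       # should list whatever qdisc the policy installed
sudo tc -s qdisc show dev eth7    # same, with sent/dropped/backlog counters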

 

Is there another step to applying the traffic policy?

Veteran Member
Posts: 7,217
Registered: ‎03-24-2016
Kudos: 1858
Solutions: 820

Re: Traffic Policy not Applied to Interface

First, reconsider:

Is the ER port congested.....or is some upstream device congested?

 

For example, if the ER has its 1 Gb/s port connected to a DSL modem, the DSL modem is the bottleneck, not the ER port.

For any type of QoS to be effective, you've got to "own the queue."
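
If the bottleneck is upstream, the usual fix is to shape the ER's egress slightly below the upstream rate so the queue moves back onto the ER. Rough sketch, using the shaper syntax from the QoS article linked above (the 90mbit figure is just a placeholder for ~90% of the real uplink rate, and "wan-shaper"/eth0 are made-up names):

set traffic-policy shaper wan-shaper bandwidth 90mbit
set traffic-policy shaper wan-shaper default bandwidth 100%
set traffic-policy shaper wan-shaper default queue-type fair-queue
set interfaces ethernet eth0 traffic-policy out wan-shaper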

New Member
Posts: 6
Registered: ‎02-21-2018

Re: Traffic Policy not Applied to Interface

That's definitely true. I'm testing the router before putting it in production, because I need a small drop-tail queue. My current test setup has 5 computers plugged directly into the EdgeRouter on eth0-5, and eth7 is connected to another router with 4 computers directly plugged in. The 5 on the EdgeRouter send as fast as they can (1 Gb/s each) to the 4 connected to the second router, over the eth7 interface.

 

There is no queueing on the second router, because the other 4 don't send anything back, so almost the entire latency (minus a fraction of a millisecond) should come from the queuing delay on eth7.
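
In case the details matter, the traffic pattern is equivalent to something like the following iperf3 setup (the address is just a placeholder for one of my test hosts):

iperf3 -s                             # on each of the 4 receivers
iperf3 -c 10.0.7.11 -t 300            # on each of the 5 senders, TCP as fast as it will go
iperf3 -c 10.0.7.11 -u -b 1G -t 300   # UDP flood variant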

Veteran Member
Posts: 7,217
Registered: ‎03-24-2016
Kudos: 1858
Solutions: 820

Re: Traffic Policy not Applied to Interface

Internal packet processing takes time; a Gb/s eth port might not be the bottleneck.

Note ping isn't offloaded.

 

8 ms is pretty bad for a local network; pinging through a single L3 switch over here takes <1 ms.

 

New Member
Posts: 6
Registered: ‎02-21-2018

Re: Traffic Policy not Applied to Interface

Let me clarify: without any traffic the ping takes 0.2 ms. With congestion, the ping experiences the queuing delay, which is roughly the drain time of the full queue in this setup. The delay is the same whether the 5 computers send UDP floods or initiate TCP connections to the other 4.

Veteran Member
Posts: 7,217
Registered: ‎03-24-2016
Kudos: 1858
Solutions: 820

Re: Traffic Policy not Applied to Interface

OK

 

Note the default pfifo_fast queue is done in hardware.

Most QoS stuff in the ER is done in software, probably drop-tail too.

This costs CPU and, even worse, breaks offload.
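
fwiw, you can check what is currently offloaded with (output differs per model):

show ubnt offload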

 

New Member
Posts: 6
Registered: ‎02-21-2018

Re: Traffic Policy not Applied to Interface

Are you suggesting that QoS done in software is worse than no QoS?

 

As a side point, do you know if the more expensive routers (Cisco, Juniper, etc.) do QoS, such as drop tail, in hardware?

New Member
Posts: 6
Registered: ‎02-21-2018

Re: Traffic Policy not Applied to Interface


Follow-up to my initial post. It seems like none of the "normal" queueing techniques (drop tail, RED, etc.) work. This is true whether I configure them using EdgeOS commands in the CLI, the web GUI, or actual Linux tc with the command:

sudo tc qdisc add dev eth7 root pfifo limit 100

However, for some reason I don't understand, I'm able to get different queue depths if I use the bfifo queue through tc, although at first glance it appears to be in addition to whatever the default queuing is.
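
For reference, the bfifo variant that did change the depth was something along these lines (bfifo's limit is in bytes, so 100 full-size packets works out to roughly 150000 bytes):

sudo tc qdisc add dev eth7 root bfifo limit 150000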

Veteran Member
Posts: 7,217
Registered: ‎03-24-2016
Kudos: 1858
Solutions: 820

Re: Traffic Policy not Applied to Interface

To see if the commands work.....try setting the ethernet speed to 10 Mb/s first.

Now ethernet is the bottleneck, and you can test your queues without CPU load getting in the way.
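
On EdgeOS that should be something like the following, from configure mode (if the speed isn't auto you'll likely have to pin the duplex as well):

set interfaces ethernet eth7 speed 10
set interfaces ethernet eth7 duplex full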

 

For just improving latency, you can use smart-queue.... (but I can't stop repeating myself....it also won't work when you don't own the queue)
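
Something along these lines, going by the QoS article linked earlier (the rates are placeholders and should sit a bit below the real bottleneck rate):

set traffic-policy smart-queue sq1 wan-interface eth7
set traffic-policy smart-queue sq1 upload rate 900mbit
set traffic-policy smart-queue sq1 download rate 900mbit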

New Member
Posts: 6
Registered: ‎02-21-2018

Re: Traffic Policy not Applied to Interface

I've realized since yesterday that it's not necessary to use bfifo. It turns out that when you set txqueuelen and then use Linux tc to set the queue type and parameters, you can see changes:

sudo tc qdisc del dev eth7 root                   # clears the existing qdisc (not sure if necessary)
sudo ifconfig eth7 txqueuelen 100                 # must be > 0
sudo tc qdisc add dev eth7 root pfifo limit 4000  # would normally be a 48 ms drop-tail queue at 1 Gb/s

The actual value of txqueuelen doesn't seem to have any effect beyond being non-zero. Normally the limit specified through tc overrides txqueuelen, which seems to be happening here. I have no clue why a txqueuelen of 0 was causing problems with tc.
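
A quick way to confirm the installed qdisc and watch its drop counters, in case anyone wants to reproduce this:

sudo tc -s qdisc show dev eth7    # shows the pfifo limit plus sent/dropped/backlog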

 

As I mentioned before, this queueing appears to be in addition to the default queueing (maybe NIC buffers?). I've estimated the unavoidable buffering at approximately 1050 packets, since the total queueing delay is around 62 ms with a pfifo limit of 4000 packets. So to get a 48 ms buffer (the often-used 6 MB buffer for GbE interfaces), I can set a pfifo limit of 2950 packets.
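
Rough arithmetic behind those numbers, at 1 Gb/s line rate with 1500-byte packets:

one 1500-byte packet = 12000 bits, which drains in about 12 us at 1 Gb/s
62 ms observed delay / 12 us per packet ≈ 5100-5200 packets of total buffering
minus the 4000 packets from pfifo ≈ 1050-1200 packets of hidden buffering
48 ms target = 4000 packets, so pfifo limit ≈ 4000 - 1050 = 2950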

 

This still doesn't solve the problem of buffer sizes smaller than about 1000 packets. Does anyone have an idea why this is happening?

Veteran Member
Posts: 7,217
Registered: ‎03-24-2016
Kudos: 1858
Solutions: 820

Re: Traffic Policy not Applied to Interface

fwiw, with the commands below, I can set/see txqueuelen on an ER-X port:

admin@ERX:~$ sudo ip link set eth3 txqueuelen 100
admin@ERX:~$ sudo ip link show eth3
7: eth3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 100
    link/ether 80:2a:a8:5d:05:a3 brd ff:ff:ff:ff:ff:ff
    alias TEST

Note this has changed the length from the default 1000 to 100.
