QOS Settings for HMPElements

From VESupport


Settings for the config files:

           <setting name="QosSettings" serializeAs="Xml">
             <value>
               <QosSettings>
                 <JitterSettings>
                   <LogAllPortsIntervalMs>20000</LogAllPortsIntervalMs>
                   <LogAlarmedPortsIntervalMs>5000</LogAlarmedPortsIntervalMs>
                   <LogSpecificPort>iptB1T1</LogSpecificPort>
                   <LogSpecificPortTrigger>6</LogSpecificPortTrigger>
                   <DebouncerSettings>
                     <TimeIntervalMs>100</TimeIntervalMs>
                     <FaultThreshold>10000</FaultThreshold>
                     <FailTimeMs>4000</FailTimeMs>
                     <FailTimePercent>50</FailTimePercent>
                     <RecoveryTimeMs>4000</RecoveryTimeMs>
                     <RecoveryTimePercent>50</RecoveryTimePercent>
                   </DebouncerSettings>
                 </JitterSettings>
               </QosSettings>
             </value>
           </setting>
           <setting name="JitterBufferMinStart" serializeAs="String">
               <value>0</value>
           </setting>

"LogAllPortsIntervalMs" controls how often the jitter value is printed to the QOS log for each channel; only active channels are printed. 20000 is every 20 seconds. If the jitter on a port rises high enough to set the alarm, logging for that port speeds up to the "LogAlarmedPortsIntervalMs" interval.

For both settings, 0 will disable logging.

"LogSpecificPort" will list the jitter for one channel for EVERY packet that comes in. At 50pps that's 50 log entries per second, so this should be used sparingly. It only works for an existing pinned-up call; once the call is over, the logging stops.

To start logging for a specific port, you must set the "LogSpecificPortTrigger" value. Set it to anything other than its current value. By changing the value, HMPE knows to start logging again for the specified port.
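The trigger mechanism can be sketched roughly as follows. This is my interpretation of the described behavior, not actual HMPE code; the class and method names are illustrative:

```python
class SpecificPortLogger:
    """Illustrative sketch: per-packet logging for LogSpecificPort
    (re)starts whenever LogSpecificPortTrigger differs from the
    value last seen in the config."""

    def __init__(self):
        self.last_trigger = None
        self.logging = False

    def on_config_read(self, trigger):
        # Any change to the trigger value (re)arms per-packet logging.
        if trigger != self.last_trigger:
            self.last_trigger = trigger
            self.logging = True

    def on_call_ended(self):
        # Logging stops when the pinned-up call ends.
        self.logging = False
```

So bumping the trigger from 6 to 7 (or any other new value) in the config is enough to restart the per-packet logging.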

The "DebouncerSettings" values determine the alarm state:

"FailTimeMs" divided by "TimeIntervalMs" gives the sample count; in this case that's 4 sec / 100 ms, or 40 samples. If 50% ("FailTimePercent") of those 40 samples are over the threshold, the alarm is set and the logging increases.

The recovery settings work the same way in reverse: when enough samples fall back under the threshold, the alarm is cleared and logging returns to normal.
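The debouncer logic described above can be sketched like this. This is a minimal illustration of the windowed fail/recover calculation, assuming the default config values; function and variable names are my own, not HMPE's:

```python
from collections import deque

def make_debouncer(time_interval_ms=100, fault_threshold_us=10000,
                   fail_time_ms=4000, fail_time_percent=50,
                   recovery_time_ms=4000, recovery_time_percent=50):
    """Sketch of the jitter alarm debouncer (illustrative, not HMPE code)."""
    fail_samples = fail_time_ms // time_interval_ms          # 4000/100 = 40
    recovery_samples = recovery_time_ms // time_interval_ms  # 4000/100 = 40
    window = deque(maxlen=max(fail_samples, recovery_samples))
    alarmed = False

    def sample(jitter_us):
        nonlocal alarmed
        window.append(jitter_us > fault_threshold_us)  # True = over threshold
        if not alarmed:
            recent = list(window)[-fail_samples:]
            # Alarm when >= FailTimePercent of the window is over threshold.
            if len(recent) == fail_samples and \
               sum(recent) * 100 >= fail_time_percent * fail_samples:
                alarmed = True
        else:
            recent = list(window)[-recovery_samples:]
            # Recover when >= RecoveryTimePercent of the window is under it.
            if len(recent) == recovery_samples and \
               (recovery_samples - sum(recent)) * 100 >= \
               recovery_time_percent * recovery_samples:
                alarmed = False
        return alarmed

    return sample
```

With the defaults, 40 consecutive samples over 10000 µs set the alarm, and enough clean samples afterward clear it again.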

"FaultThreshold" is the jitter value that triggers an alarm. Note that this value is in uSec, so 10000 is 10ms.

The log entries look like this:

15/02/02 - 18:38:10.904 000B StrandProvider.MainC Jitter:	iptB1T3		False	882419254329	-626117954	19482	1037
15/02/02 - 18:38:12.134 000B StrandProvider.MainC Jitter:	iptB1T3		True	882420484129	-626108194	19543	5307
15/02/02 - 18:38:12.684 000B StrandProvider.MainC Jitter:	iptB1T3		False   882421034147	-626103714	19571	3568
15/02/02 - 18:38:13.914 000B StrandProvider.MainC Jitter:	iptB1T50	False	882422264217	158800		24009	7
15/02/02 - 18:38:32.684 000B StrandProvider.MainC Jitter:	iptB1T3		False   882441033990	-625943714	20571	94
15/02/02 - 18:38:33.914 000B StrandProvider.MainC Jitter:	iptB1T50	False	882442264042	318800    	25009 	6


After the "Jitter:" label, the first column is the port name, the second is the alarm status, the third is the receive time of the packet (in uSec), the fourth is the time stamp from the packet, then the sequence number, and then the jitter.

I made the columns "tab" delimited so that they can be put into Excel and a graph of the jitter can be made.
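A log line can be pulled apart with a short script before importing into Excel. This is a hypothetical helper, splitting on whitespace so it tolerates the occasional space-padded column; the dictionary keys are my names for the columns described above:

```python
def parse_jitter_line(line):
    """Parse one 'Jitter:' log entry into its columns (illustrative)."""
    # Everything after the 'Jitter:' label is the tab-delimited payload.
    _prefix, _, payload = line.partition("Jitter:")
    port, alarm, rx_time_us, pkt_ts, seq, jitter = payload.split()
    return {
        "port": port,
        "alarmed": alarm == "True",
        "rx_time_us": int(rx_time_us),       # packet receive time, uSec
        "packet_timestamp": int(pkt_ts),     # time stamp from the packet
        "sequence": int(seq),
        "jitter": int(jitter),
    }
```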

Be advised that the graph will look similar to a Wireshark graph but won't be exact, because Wireshark captures the time the packet arrives at the NIC, not the time the app gets the packet after it is filtered by the firewall and passed through the Windows UDP stack. In my tests the graphs are shaped the same and you can generally see the same peaks and valleys.

As a bonus...

The JitterBufferMinStart value is a value from 0 to 5; 0 disables this feature. Be advised that it is experimental. What it does is require HMPE to collect up to 5 buffers of data before I start to transcode the data. Normally the system tries its best to reduce latency between streams, but here it deliberately introduces latency in order to reduce or eliminate jitter.

Correct settings are: 1=20ms, 2=40ms, ... 5=100ms (assuming a ptime of 20).
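The mapping is just the buffer count times the ptime. A tiny illustrative helper (not HMPE code) makes the arithmetic explicit:

```python
def min_start_latency_ms(jitter_buffer_min_start, ptime_ms=20):
    """Added startup latency for a JitterBufferMinStart value of 0-5.

    0 disables the feature (no added latency). Illustrative only.
    """
    if not 0 <= jitter_buffer_min_start <= 5:
        raise ValueError("JitterBufferMinStart must be between 0 and 5")
    return jitter_buffer_min_start * ptime_ms
```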

Using this setting might help with your customers that are having difficulty. It can be changed on the fly in the config file without stopping the system, but it will not take effect until a stream is restarted or the transcoder catches up with the incoming stream. At that point I re-read the value from the config and start to buffer again until I get the specified number of packets. Be advised that it is a system-wide setting, so it adds latency to calls even if they are not experiencing jitter. It could also have an adverse effect on conferencing if set too high.
