myEvalvid-NT (myEvalvid Network Trace)



1)     Previous studies often use publicly available real video traces to evaluate their proposed network mechanisms in a simulation environment. Results are usually presented using different performance metrics, such as packet/frame loss rate and packet/frame jitter. Nevertheless, packet loss and jitter are network-level performance metrics and may be insufficient to adequately rate the quality perceived by a (human) end user.


2)     How best to simulate and evaluate the quality of video delivered over a simulated network is a recurring open issue in network simulation forums.


3)     Many studies have adopted the publicly available MPEG-4 traffic traces. But to the best of my knowledge, no tool-set is publicly available for performing a comprehensive evaluation of delivered video quality from these traffic traces in a network simulation environment.


[What I have done]

Referring to the Evalvid system, I have developed the myEvalvid-NT version.


[Overview of myEvalvid-NT]

(Figure: the myEvalvid-NT evaluation framework)
In this framework, we can use two different input sources: a network traffic trace downloaded from a public trace archive, or a trace generated by encoding your own raw YUV video.

        MyTrafficTrace2: This agent extracts the frame type and the frame size from the video trace file. It then fragments each video frame into smaller segments and sends these segments to the lower UDP layer at the appropriate time, according to the user settings specified in the simulation script file.
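The fragmentation step can be sketched in a few lines of Python. This is a minimal illustration, assuming a maximum packet payload of 1000 bytes; the actual limit is whatever you set in the simulation script.

```python
def fragment_frame(frame_size, max_payload=1000):
    """Split a video frame of frame_size bytes into segments of at
    most max_payload bytes, as MyTrafficTrace2 does before handing
    them to the lower UDP layer."""
    segments = []
    remaining = frame_size
    while remaining > 0:
        segment = min(remaining, max_payload)
        segments.append(segment)
        remaining -= segment
    return segments

# A 2500-byte I frame becomes three packets:
print(fragment_frame(2500))  # → [1000, 1000, 500]
```

A frame is only decodable at the receiver if every one of these segments arrives, which is why per-packet bookkeeping matters in the agents below.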

        MyUDP: Essentially, MyUDP is an extension of the UDP agent. This new agent allows users to specify the output file name of the sender trace file, and it records the timestamp, packet ID, and payload size of each transmitted packet.

        MyEvalvid_Sink2: MyEvalvid_Sink2 is the receiving agent for the fragmented video frame packets sent by MyUDP. This agent likewise records the timestamp, packet ID, and payload size of each received packet in the user-specified file.
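Because both agents record the timestamp, packet ID, and payload size, the loss and delay figures reported later can be obtained by joining the two trace files on the packet ID. A minimal Python sketch, assuming whitespace-separated "timestamp packet_id payload_size" records (the exact column layout may differ in your build):

```python
def delay_stats(sender_lines, receiver_lines):
    """Join sender and receiver trace records on packet ID and return
    (loss_rate, average_delay, maximum_delay).  Each record is assumed
    to be 'timestamp  packet_id  payload_size'."""
    def parse(lines):
        records = {}
        for line in lines:
            timestamp, packet_id, _size = line.split()
            records[int(packet_id)] = float(timestamp)
        return records

    sent = parse(sender_lines)
    received = parse(receiver_lines)
    # A packet missing from the receiver trace was lost in the network.
    delays = [received[pid] - sent[pid] for pid in sent if pid in received]
    loss_rate = 1 - len(delays) / len(sent)
    return loss_rate, sum(delays) / len(delays), max(delays)

sender = ["0.000 1 1000", "0.100 2 1000", "0.200 3 500"]
receiver = ["0.020 1 1000", "0.250 3 500"]          # packet 2 was lost
loss, avg, mx = delay_stats(sender, receiver)
print(round(loss, 4), round(avg, 4), round(mx, 4))  # → 0.3333 0.035 0.05
```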

       The decodable frame rate: Standard MPEG encoders generate three distinct frame types, namely I, P, and B frames. Due to the hierarchical structure of MPEG, I frames are more important than P frames, which in turn are more important than B frames. A frame is therefore considered decodable if, and only if, all the fragmented packets of that frame, and of every frame it depends on, are completely received. The decodable frame rate (Q) is then defined as the number of decodable frames divided by the total number of frames sent by the video source. The larger the Q value, the better the video quality perceived by the end user.
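This definition can be illustrated with a short Python sketch. The dependency model below is a simplification I am assuming for illustration (frames in decoding order; a P frame depends on the most recent reference frame, a B frame on the two most recent reference frames); the real tool derives the dependencies from the actual GOP structure.

```python
def decodable_frame_rate(frames):
    """Q = (number of decodable frames) / (total frames sent).

    frames: list of (frame_type, all_packets_received) tuples in
    decoding order, with frame_type in {'I', 'P', 'B'}."""
    decodable = 0
    last_ref_ok = prev_ref_ok = False   # decodability of the two most
                                        # recent reference (I/P) frames
    for frame_type, received in frames:
        if frame_type == 'I':
            ok = received
            prev_ref_ok, last_ref_ok = last_ref_ok, ok
        elif frame_type == 'P':
            ok = received and last_ref_ok
            prev_ref_ok, last_ref_ok = last_ref_ok, ok
        else:                           # 'B' frame
            ok = received and last_ref_ok and prev_ref_ok
        if ok:
            decodable += 1
    return decodable / len(frames)

# Losing a single I frame makes the whole dependent GOP undecodable:
gop = [('I', False), ('P', True), ('B', True), ('B', True)]
print(decodable_frame_rate(gop))  # → 0.0
```

This also shows why identical packet loss rates can yield very different Q values: losses concentrated on I frames drag down far more frames than losses on B frames.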



        I provide two examples to demonstrate the usefulness of this evaluation framework. If you want to run the examples on your own computer, please complete the setup steps described here first.


[Example1---using the network trace]

1.      Download the trace. (Take the StarWarsIV trace as an example)


2.      Open it and remove the first two (header) lines.
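For instance, the header lines can be stripped with a few lines of Python (the file names below are illustrative; adjust them to the trace you downloaded):

```python
def strip_header(in_path, out_path, n=2):
    """Copy a trace file, dropping its first n (header) lines."""
    with open(in_path) as src:
        body = src.readlines()[n:]
    with open(out_path, 'w') as dst:
        dst.writelines(body)

# strip_header('StarWarsIV.dat', 'StarWarsIV_noheader.dat')
```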


3.      Change the path to ns-allinone-2.28/ns-2.28/myexample/myEvalvid_NT/example1.


4.      Run the simulation script. (A video is transmitted from a wired node to a mobile node through an Access Point. The wireless channel adopts a random uniform error model; the parameter settings for the wireless channel can be found in Video transmission over wireless error channels.)

The packet loss rate: (163682 - 161975)/163682 = 1707/163682 = 0.0104 ≈ 0.01


5.      Before evaluating the delivered video quality, we have to convert the trace file format.


6.      Evaluate the trace files.

From the simulation results, we can find that

a.      The total packet loss rate is 1707/163682 = 0.0104

b.      The I frame packet loss rate is 298/163682 = 0.00182

c.      The P frame packet loss rate is 451/163682 = 0.002755

d.      The B frame packet loss rate is 958/163682 = 0.005852

e.      The total frame loss rate is 1695/89998 = 0.0188

f.      The I frame loss rate is 293/89998 = 0.00325

g.      The P frame loss rate is 447/89998 = 0.004966

h.      The B frame loss rate is 955/89998 = 0.01061

i.      The decodable frame rate (Q) = 0.907087

j.      The average delay is 0.024069 seconds and the maximum delay is 0.266918 seconds
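The aggregate figures above can be reproduced from the raw per-type loss counts; a small Python check using the numbers reported in this example:

```python
# Raw counts taken from the example-1 results above.
sent_packets, sent_frames = 163682, 89998
lost_packets = {'I': 298, 'P': 451, 'B': 958}   # lost packets per frame type
lost_frames  = {'I': 293, 'P': 447, 'B': 955}   # lost frames per frame type

total_pkt_loss = sum(lost_packets.values()) / sent_packets
total_frm_loss = sum(lost_frames.values()) / sent_frames
print(f"total packet loss rate: {total_pkt_loss:.4f}")  # → 0.0104
print(f"total frame loss rate:  {total_frm_loss:.4f}")  # → 0.0188
for t in 'IPB':
    print(f"{t} frame loss rate: {lost_frames[t] / sent_frames:.6f}")
```

Note that the per-type packet loss rates sum to the total packet loss rate (298 + 451 + 958 = 1707), and likewise for the frame counts (293 + 447 + 955 = 1695).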


7.      Change the error rate to 0.1 and repeat the simulation.

The decodable frame rate (Q) is 0.406776.



[Example2---encoding your own raw YUV video]

In this example, I will show how to encode your own raw YUV video and measure Q. I will also show the relationship between Q and PSNR.

1.      Change the path to ns-allinone-2.28/ns-2.28/myexample/myEvalvid_NT/example2.


2.      Encode foreman_qcif.yuv with the ffmpeg encoder.



3.      Create an ISO MP4 file containing the video samples (frames) and a hint track that describes how to packetize the frames for transport with RTP.


4.      Send the hinted MP4 file via RTP/UDP to a specified destination host and save the sent information to a file (the traffic trace file).


5.      Run the simulation script.


6.      Evaluate the trace files.



7.      Use the myEvalvid system to get the distorted video and calculate the average PSNR.



8.      Use yuvviewer.exe to view the distorted video.


9.      Set the error rate to different values, repeat steps 5 to 8, and you will see the relationship between Q and PSNR.


Last modified date: 2006/3/6


Author: Chih-Heng, Ke (柯志亨)