An Adaptive Cross-layer Mapping Algorithm for MPEG-4 Video Transmission over IEEE 802.11e EDCA



   It is easy to run all the simulations in this research work. Just download the Cygwin and NS-2 package from here and execute it; it will decompress the files automatically. Then use the Cygwin and NS-2 provided in this package to prepare the simulation environment. If you do not know how to install it, please refer to this page. The simulation script and related tools can be found here. After re-compiling NS-2, you can run the simulation script directly.


1. Introduction

To support the varying Quality-of-Service (QoS) requirements of emerging applications, a new standard, IEEE 802.11e [1], has been specified. The 802.11e standard defines four access categories (ACs) with different transmission priorities. The transmission priority is the probability of successfully earning the chance to transmit when individual ACs compete for access to the wireless channel; the higher the transmission priority, the better the opportunity to transmit. However, the unavoidable burst losses, excessive delays, and limited bandwidth of a wireless channel make efficient multimedia transmission over wireless networks challenging. Consequently, several advanced mechanisms based on 802.11e were proposed to support multimedia transmission, and in particular video transmission quality. Most of these mechanisms improved performance by adjusting the operation of the 802.11e MAC, such as the Contention Window size, the TXOPlimit, and the data transmission rate. However, they did not take the significance of a specific traffic type (such as video) into consideration, which limits the performance improvements that can be obtained.

For video traffic, the significance of the encoded video data varies. Prioritized transmission of hierarchically coded video is expected to play an important role in supporting multimedia services in wireless networks. However, 802.11e provides QoS through traffic differentiation, with all video data placed in the same access category. As a result, the channel access mechanism and the transmission scheme do not take the significance of the video data into consideration. If the transmission mechanism exploited the characteristics of the video content by considering the significance information generated at the application layer, the video data would receive prioritized service and the perceived quality at the receiver side could be improved.


2. Background


2.1 MPEG-4 video structure

The MPEG-4 standard defines three types of video frames for the compressed video stream: the I (Intra-coded) frame, the P (Predictive-coded) frame, and the B (Bi-directionally predictive-coded) frame. The I frame is encoded independently and decoded by itself; it is simply a frame coded as a still image, without any relationship to previous or successive frames. The P frame is encoded using prediction from the preceding I or P frame in the video sequence, so it requires the information of the most recent I or P frame for encoding and decoding. The B frame is encoded using predictions from both the preceding and the succeeding I or P frames. According to these coding relations, the I frame is the most important frame type in an MPEG-4 video stream, and the P frame is more important than the B frame.

Figure 1: Prediction encoding of MPEG-4, GOP (N=9, M=3)


        The video sequence can be decomposed into smaller units, GOPs (Groups of Pictures), each resembling a deterministic periodic sequence of frames (as shown in Figure 1). A GOP pattern is characterized by two parameters, G(N, M): N is the I-to-I frame distance and M is the I-to-P frame distance. For example, G(9, 3) means that the GOP includes one I frame, two P frames, and six B frames. The second I frame in the figure marks the beginning of the next GOP. The arrows indicate that decoding the B and P frames depends on the preceding or succeeding I or P frames.
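The GOP pattern described above can be generated mechanically from (N, M); a small Python sketch (frame types in display order, not transmission/decode order):

```python
def gop_pattern(n, m):
    """Frame types of one GOP, in display order, for G(N, M):
    N = I-to-I frame distance, M = I-to-P frame distance."""
    return "".join("I" if i == 0 else ("P" if i % m == 0 else "B")
                   for i in range(n))

print(gop_pattern(9, 3))   # -> IBBPBBPBB (one I, two P, six B, as in the text)
```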


2.2 IEEE 802.11e Enhanced Distributed Channel Access (EDCA)

IEEE 802.11e EDCA classifies traffic into four different ACs (as illustrated in Figure 2): AC_VO (for voice traffic), AC_VI (for video traffic), AC_BE (for best-effort traffic), and AC_BK (for background traffic). To simplify the notation, we refer to AC_VO as AC3, AC_VI as AC2, AC_BE as AC1, and AC_BK as AC0. Each AC has its own buffered queue and behaves as an independent backoff entity. The priority among ACs is determined by AC-specific parameters, called the EDCA parameter set, which includes the minimum Contention Window size (CWmin), the maximum Contention Window size (CWmax), the Arbitration Inter-Frame Space (AIFS), and the Transmission Opportunity limit (TXOPlimit). The parameter values recommended by the standard are shown in Table 1.

Figure 2: Four access categories in IEEE 802.11e


Table 1: 802.11e EDCA parameter set.

AC     Designation     CWmin              CWmax              AIFSN    TXOPlimit (802.11b)
AC3    Voice           (aCWmin+1)/4 - 1   (aCWmin+1)/2 - 1   2        3.264 ms
AC2    Video           (aCWmin+1)/2 - 1   aCWmin             2        6.016 ms
AC1    Best Effort     aCWmin             aCWmax             3        0
AC0    Background      aCWmin             aCWmax             7        0

Figure 3 demonstrates the operations of 802.11e EDCA. The AC with the smallest AIFS has the highest priority, and a station must defer for its corresponding AIFS interval. The smaller the parameter values (such as AIFS, CWmin, and CWmax), the greater the probability of gaining access to the medium. Each AC within a station behaves like an individual virtual station: it contends for access to the medium and independently starts its backoff procedure after detecting the channel idle for at least an AIFS period. When a collision occurs among different ACs within the same station, the higher-priority AC is granted the opportunity to transmit, while the lower-priority AC suffers a virtual collision, similar to a real collision outside the station.
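The AIFS interval itself is derived from a per-AC integer, AIFSN, as AIFS[AC] = AIFSN[AC] × SlotTime + SIFS. A quick Python illustration, using 802.11b DSSS timings (SIFS = 10 µs, SlotTime = 20 µs) and the standard's default AIFSN values:

```python
SIFS_US, SLOT_US = 10, 20   # 802.11b DSSS timings, in microseconds

def aifs_us(aifsn):
    """AIFS[AC] = AIFSN[AC] * SlotTime + SIFS, in microseconds."""
    return aifsn * SLOT_US + SIFS_US

# Default AIFSN values per AC: smaller AIFS -> higher access priority.
for ac, aifsn in [("AC_VO", 2), ("AC_VI", 2), ("AC_BE", 3), ("AC_BK", 7)]:
    print(ac, aifs_us(aifsn), "us")
```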

Figure 3: IEEE 802.11e EDCA operations.



3. QoS mapping algorithm for video stream over IEEE 802.11e EDCA


3.1 Static Mapping

To support QoS transmission of hierarchically coded video over an IEEE 802.11e network, a cross-layer design architecture has been proposed [2]. As shown in Figure 4, the authors proposed a mapping algorithm, based on the traffic specification of IEEE 802.11e EDCA, in which encoded H.264 video data is allocated to AC queues of different precedence according to its coding significance. However, the mapping is static, not adaptive: when the network load is light, video data mapped to a lower-priority AC suffers unnecessary transmission delays and packet losses. Accordingly, if MPEG-4 video streams are transmitted using the mapping algorithm proposed in [2], the I frame is always mapped to AC[2], the P frame to AC[1], and the B frame to AC[0]. Even when the AC[2] queue is empty (meaning the video traffic load is light), such a static mapping results in unnecessary transmission delays, as well as high packet loss if AC[1] and AC[0] are almost full at the same time.

Figure 4: Cross Layer architecture in [2].


3.2 Dynamic Mapping

We propose an adaptive cross-layer mapping algorithm for improving MPEG-4 video transmission quality over an IEEE 802.11e wireless network [3]. In the proposed cross-layer approach, MPEG-4 video packets are dynamically mapped to the appropriate AC based on both the significance of the video data and the network traffic load. By exploiting this cross-layer mapping approach, we can prioritize the transmission of essential video data and improve queue space utilization.

Figure 5: Architecture of adaptive cross-layer mapping scheme


Figure 5 depicts the cross-layer architecture and shows the significance information of the video data being passed from the application layer to the MAC layer. To guarantee the quality of the delivered video, the proposed mapping algorithm dynamically allocates the video to the most appropriate AC at the MAC layer according to both the significance of the video frame type and the network traffic load. For an MPEG-4 video stream, the loss of a more important video frame deteriorates the delivered video quality more severely. For example, the loss of one I frame makes all frames in the same GOP undecodable, while the loss of one B frame affects only itself. Based on the significance of the video frames, the channel access priorities used to prioritize the transmission opportunity at the MAC layer are set with the I frame highest, the P frame below the I frame but above the B frame, and the B frame lowest. To keep important video data in higher-priority AC queues as far as possible, we assign different downward mapping probabilities, denoted Prob_TYPE, to the video frame types according to their coding significance. If allocating a frame to a lower-priority queue is inevitable, less significant frames should be moved down with higher probability than important ones, so less important frame types are assigned larger Prob_TYPE values. As a result, for the MPEG-4 codec the downward mapping probabilities of the three frame types satisfy Prob_B > Prob_P > Prob_I, and all of the probabilities are between 0 and 1.

Furthermore, to adapt dynamically to changes in the network traffic load, we use the MAC queue length as an indication of the current load. According to the IEEE 802.11e specification, MPEG-4 video packets transmitted over an IEEE 802.11e wireless network are placed in the AC2 category, which has a better opportunity to access the channel than the lower-priority ACs. The tradeoff is that when the video load increases, this queue fills rapidly and drops occur. For this reason, the proposed mapping algorithm re-arranges the most recently received video packets into other available lower-priority queues while the AC2 queue is filling up. We adopt two parameters, threshold_low and threshold_high, to predictively avoid upcoming congestion by performing queue management in advance. These two parameters enter the algorithm through the following expression:

Prob_New = Prob_TYPE × (qlen(AC[2]) − threshold_low) / (threshold_high − threshold_low)            (1)

In this function, the original predefined downward mapping probability of each video frame type, Prob_TYPE, is adjusted according to the current queue length and the threshold values; the result is a new downward mapping probability, Prob_New. The higher Prob_New is, the greater the opportunity for the packet to be mapped into a lower-priority queue. Table 2 lists the notations used in the proposed adaptive cross-layer mapping algorithm.
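As a numeric illustration, assuming the linear scaling form Prob_New = Prob_TYPE × (qlen(AC[2]) − threshold_low) / (threshold_high − threshold_low) between the two thresholds:

```python
def prob_new(prob_type, qlen_ac2, th_low, th_high):
    """Scale the per-type downward-mapping probability by AC[2] occupancy
    (assumed linear form of expression (1))."""
    if qlen_ac2 <= th_low:
        return 0.0              # light load: never map downward
    if qlen_ac2 >= th_high:
        return prob_type        # heavy load: full per-type probability
    return prob_type * (qlen_ac2 - th_low) / (th_high - th_low)

# Prob_B = 0.9, queue halfway between thresholds 10 and 40:
print(prob_new(0.9, 25, 10, 40))   # -> 0.45
```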


Table 2: Parameter notations in our proposed adaptive mapping algorithm

Prob_TYPE         Downward mapping probability of each video packet type (e.g. Prob_I, Prob_P, Prob_B)
Prob_New          Newly calculated downward mapping probability
threshold_low     The lower threshold of the queue length
threshold_high    The upper threshold of the queue length
qlen(AC[2])       The queue length of access category 2


When a video packet arrives:

if (qlen(AC[2]) < threshold_low)
        video packet → AC[2];
else if (qlen(AC[2]) < threshold_high) {
        RN = a random number generated from Uniform(0.0, 1.0);
        if (RN > Prob_New)
                video packet → AC[2];
        else
                video packet → AC[1];
}
else {        /* qlen(AC[2]) >= threshold_high */
        RN = a random number generated from Uniform(0.0, 1.0);
        if (RN > Prob_TYPE)
                video packet → AC[1];
        else
                video packet → AC[0];
}


Figure 6: The proposed adaptive cross-layer mapping algorithm


In the mapping algorithm shown in Figure 6, when a video packet arrives, the queue length of AC2 (qlen(AC[2])) is first checked against the two threshold values, threshold_low and threshold_high. If the queue length is below threshold_low (light load), the video data is mapped to AC[2] regardless of its frame type. If the queue length exceeds threshold_high (heavy video traffic load), the video data is directly mapped to the lower-priority queues AC[1] or AC[0]. When the queue length of AC[2] falls between threshold_low and threshold_high, the mapping decision is made based on both the mapping probability (Prob_TYPE) and the current queue occupancy, as given by formula (1). Hence, the video packet is mapped to AC[2], AC[1], or AC[0] according to the calculated downward mapping probability. By exploiting such a priority scheme and queue-length management strategy, the transmissions are prioritized and the video drop rate is minimized, along with efficient utilization of network resources.
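The decision procedure of Figure 6 can be sketched in Python as follows. The threshold and probability values are the ones set in the TCL script on this page (threshold_low = 10, threshold_high = 40, Prob_I = 0, Prob_P = 0.6, Prob_B = 0.9); the linear scaling used for Prob_New is an assumption consistent with the description of formula (1):

```python
import random

# Values from the simulation script on this page.
PROB_TYPE = {"I": 0.0, "P": 0.6, "B": 0.9}

def map_video_packet(frame_type, qlen_ac2, th_low=10, th_high=40, prob_type=None):
    """Return the access category index (2, 1, or 0) for an arriving video packet."""
    p = (prob_type or PROB_TYPE)[frame_type]
    if qlen_ac2 < th_low:                        # light load: everything stays in AC[2]
        return 2
    if qlen_ac2 < th_high:                       # medium load: probabilistic downward mapping
        p_new = p * (qlen_ac2 - th_low) / (th_high - th_low)   # assumed linear form of (1)
        return 2 if random.random() > p_new else 1
    return 1 if random.random() > p else 0       # heavy load: AC[1] or AC[0]
```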



4.     Simulation setting


4.1 Simulation topology

        To evaluate the performance of the proposed cross-layer mapping algorithm, we conducted simulations using the widely adopted network simulator NS-2 integrated with EvalVid. The results of the proposed mapping algorithm are compared with those of IEEE 802.11e EDCA [1] and the static mapping algorithm in [2]. The video source used in the simulations is the YUV QCIF (176 x 144) sequence Foreman. Each video frame was fragmented into packets before transmission, and the maximum transmission packet size over the simulated network is 1000 bytes. Table 4.1 shows the numbers of video frames and packets of the video source. Figure 7 presents the simulation topology used in the experiments. There are eight ad-hoc wireless nodes, one of which is the video server and another the video receiver. The data rate of the wireless link is 1 Mbps.


Table 4.1. The numbers of video frames and packets of the video source.

Frame number    Packet number

Figure 7: Network topology used in our simulation tests


Two kinds of scenarios are used in the simulations to evaluate the video transmission performance:

·         Scenario 1: only a video stream is transmitted from the video sender node to the video receiver node. In this scenario, the performance evaluation focuses on queue space utilization by observing the queue length variation of each AC.

·         Scenario 2: light and heavy loading cases are used, including different loads of voice traffic (64 kbps, in AC[3]), CBR (in AC[1]), and TCP (in AC[0]). Traffic flows were randomly generated and transmitted over the simulation environment. In this scenario, we analyze the received video quality to evaluate the efficacy of the proposed scheme under various network loading conditions.


4.2 TCL file

proc getopt {argc argv} {

        global opt

        lappend optlist nn

        for {set i 0} {$i < $argc} {incr i} {

                set opt($i) [lindex $argv $i]

        }
}




# Command-line arguments:
#1 mapping type (0: 802.11e, 1: static mapping, 2: dynamic mapping);

#2 number of voice flows (AC_3) ;

#3 number of video flows (AC_2) ;

#4 number of TCP flows  (AC_0) ;

#5 number of CBR flows  (AC_1) ;


getopt $argc $argv


set packetSize   1500

set max_fragmented_size   1024


set val(chan)           Channel/WirelessChannel    ;# channel type

set val(prop)           Propagation/TwoRayGround   ;# radio-propagation model

set val(netif)          Phy/WirelessPhy            ;# network interface type

set val(mac)            Mac/802_11e                ;# MAC type

set val(ifq)            Queue/DTail/PriQ         ;# interface queue type

set val(ll)             LL                         ;# link layer type

set val(ant)            Antenna/OmniAntenna        ;# antenna model

set val(ifqlen)         50                         ;# max packet in ifq

set val(rp)               AODV

set opt(choice)    $opt(0)

set opt(voiceflow)       $opt(1)

set opt(videoflow)         $opt(2)

set opt(TCPflow)    $opt(3)

set opt(CBRflow) $opt(4)


Mac/802_11e set dataRate_          1.0e6           ;# 1Mbps

Mac/802_11e set basicRate_         1.0e6           ;# 1Mbps


set ns [new Simulator]


set f [open out.tr w]               ;# all-events trace file

$ns trace-all $f


# set up topography object

set topo       [new Topography]

$topo load_flatgrid 500 500


# Create God

create-god 2


# create channel

set chan [new $val(chan)]


$ns node-config -adhocRouting $val(rp) \

                -llType $val(ll) \

                -macType $val(mac) \

                -ifqType $val(ifq) \

                -ifqLen $val(ifqlen) \

                -antType $val(ant) \

                -propType $val(prop) \

                -phyType $val(netif) \

                -channel $chan \

                -topoInstance $topo \

                -agentTrace OFF \

                -routerTrace OFF \

                -macTrace OFF \

                -movementTrace OFF


for {set i 0} {$i < 2} {incr i} {

        set node_($i) [$ns node]

        $node_($i) random-motion 0
}



#MH(0) --> MH(1)  only two hosts

$node_(0) set X_ 30.0

$node_(0) set Y_ 30.0

$node_(0) set Z_ 0.0


$node_(1) set X_ 200.0

$node_(1) set Y_ 30.0

$node_(1) set Z_ 0.0


#1st priority traffic (VoIP)

for {set i 0} {$i < $opt(voiceflow) } {incr i} {

        set udpA_($i) [new Agent/UDP]

        $udpA_($i) set prio_ 0

        $ns attach-agent $node_(0) $udpA_($i)

        set nullA_($i) [new Agent/Null]

        $ns attach-agent $node_(1) $nullA_($i)

        $ns connect $udpA_($i) $nullA_($i)


        set voip_($i) [new Application/Traffic/CBR]

        $voip_($i) attach-agent $udpA_($i)

        $voip_($i) set packet_size_ 160

        $voip_($i) set rate_ 64k

        $voip_($i) set random_ false

        $ns at 5.0 "$voip_($i) start"

        $ns at 50.0 "$voip_($i) stop"
}



#2nd priority traffic (Video, Foreman)

for {set i 0} {$i < $opt(videoflow) } {incr i} {

        set udp($i) [new Agent/my_UDP]

        $ns attach-agent $node_(0) $udp($i)

        $udp($i) set packetSize_ $packetSize

        $udp($i) set_filename sd_foreman_$i

        set null($i) [new Agent/myEvalvid_Sink]

        $ns attach-agent $node_(1) $null($i)

        $ns connect $udp($i) $null($i)

        $null($i) set_filename rd_foreman_$i


        set original_file_name($i)

        set trace_file_name($i) video($i).dat

        set original_file_id($i) [open $original_file_name($i) r]

        set trace_file_id($i) [open $trace_file_name($i) w]


        set pre_time 0

        set totalByte_I 0

        set totalByte_P 0

        set totalByte_B 0

        set totalPkt_I 0

        set totalPkt_P 0

        set totalPkt_B 0


        while {[eof $original_file_id($i)] == 0} {

            gets $original_file_id($i) current_line

            scan $current_line "%d%s%d%d%f" no_ frametype_ length_ tmp1_ tmp2_

            set time [expr int(($tmp2_ - $pre_time)*1000000.0)]

            if { $frametype_ == "I" } {
              set type_v 1
              set prio_p 1
              set totalByte_I [expr int($totalByte_I + $length_)]
              set totalPkt_I [expr int($totalPkt_I + $tmp1_)]
            }

            if { $frametype_ == "P" } {
              set type_v 2
              set prio_p 1
              set totalByte_P [expr int($totalByte_P + $length_)]
              set totalPkt_P [expr int($totalPkt_P + $tmp1_)]
            }

            if { $frametype_ == "B" } {
              set type_v 3
              set prio_p 1
              set totalByte_B [expr int($totalByte_B + $length_)]
              set totalPkt_B [expr int($totalPkt_B + $tmp1_)]
            }

            if { $frametype_ == "H" } {
              set type_v 1              ;# the MP4 header is counted as an I frame
              set prio_p 1
              set totalByte_I [expr int($totalByte_I + $length_)]
              set totalPkt_I [expr int($totalPkt_I + $tmp1_)]
            }

            puts $trace_file_id($i) "$time $length_ $type_v $prio_p $max_fragmented_size"
            set pre_time $tmp2_
        }



        set totalPkt  [expr int($totalPkt_I+$totalPkt_P+$totalPkt_B)]

        set totalByte  [expr int($totalByte_I+$totalByte_P+$totalByte_B)]


        close $original_file_id($i)

        close $trace_file_id($i)

        set end_sim_time $tmp2_

        puts "end_sim_time: $end_sim_time"


        set trace_file($i) [new Tracefile]

        $trace_file($i) filename $trace_file_name($i)

        set video($i) [new Application/Traffic/myEvalvid]

        $video($i) attach-agent $udp($i)

        $video($i) attach-tracefile $trace_file($i)


        $ns at [expr 10.0 ] "$video($i) start"

        $ns at [expr 50.0] "$video($i) stop"

        $ns at [expr 50.0] "$null($i) closefile"

        $ns at [expr 50.0] "$null($i) printstatus"
}




#3rd priority traffic (CBR)

for {set i 0} {$i < $opt(CBRflow) } {incr i} {

        set udpB_($i) [new Agent/UDP]

        $udpB_($i) set prio_ 2

        $ns attach-agent $node_(0) $udpB_($i)

        set nullB_($i) [new Agent/Null]

        $ns attach-agent $node_(1) $nullB_($i)

        $ns connect $udpB_($i) $nullB_($i)


        set cbr_($i) [new Application/Traffic/CBR]

        $cbr_($i) attach-agent $udpB_($i)

        $cbr_($i) set packet_size_ 200

        $cbr_($i) set rate_ 125k

        $cbr_($i) set random_ false

        $ns at 20.0 "$cbr_($i) start"

        $ns at 35.0 "$cbr_($i) stop"
}



#4th priority traffic (TCP)

for {set i 0} {$i < $opt(TCPflow) } {incr i} {

        set tcp($i) [new Agent/TCP]

        $tcp($i) set prio_ 3

        $ns attach-agent $node_(0) $tcp($i)

        set sink($i) [new Agent/TCPSink]

        $ns attach-agent $node_(1) $sink($i)

        $ns connect $tcp($i) $sink($i)

        set ftp($i) [new Application/FTP]

        $ftp($i) set type_ FTP

        $ftp($i) attach-agent $tcp($i)

        $ns at 15.0 "$ftp($i) start"

        $ns at 30.0 "$ftp($i) stop"
}



set n0_ifq [$node_(0) set ifq_(0)]

$n0_ifq set choice $opt(choice)

$n0_ifq set threshold1 10       ;# low threshold of queue length

$n0_ifq set threshold2 40       ;# high threshold of queue length

$n0_ifq set prob0 0             ;# Prob_I

$n0_ifq set prob1 0.6           ;# Prob_P

$n0_ifq set prob2 0.9           ;# Prob_B


$ns at 0.0 "record4"


set f4 [open queuelength_choice_$opt(choice)_voice_$opt(voiceflow)_video_$opt(videoflow)_FTP_$opt(TCPflow)_CBR_$opt(CBRflow).txt w]


proc record4 {} {

        global ns n0_ifq f4

        set time 0.01                   ;# sampling interval in seconds

        set now [$ns now]

        puts $f4 "$now      [$n0_ifq set qlen0]  [$n0_ifq set qlen1]  [$n0_ifq set qlen2]  [$n0_ifq set qlen3]"

        $ns at [expr $now+$time] "record4"
}



for {set i 0} {$i < 2} {incr i} {

        $ns initial_node_pos $node_($i) 30

        $ns at 50.0 "$node_($i) reset";
}



$ns at 50.0 "finish"

$ns at 50.1 "puts \"NS EXITING...\"; $ns halt"




proc finish {} {

        global ns f

        $ns flush-trace

        close $f
}



puts "Starting Simulation..."

$ns run


5.     Simulation Results


Part I. Only video traffic


·          In this simulation, we observe how video packets are mapped at the MAC layer. Three mapping algorithms are compared: IEEE 802.11e [1], static mapping [2], and dynamic mapping [3].


(1)   IEEE 802.11e:      ./ns 802_11e.tcl 0 0 2 0 0

(2)   Static mapping:    ./ns 802_11e.tcl 1 0 2 0 0

(3)   Dynamic mapping:   ./ns 802_11e.tcl 2 0 2 0 0



There are five parameters in the simulation. The first is the choice of mapping algorithm (0: IEEE 802.11e, 1: static mapping, 2: dynamic mapping). The following four give the numbers of traffic flows for voice (AC[3]), video (AC[2]), TCP (AC[0]), and CBR (AC[1]). In this simulation, we transmit two video flows over the IEEE 802.11e network without any other traffic. A new file (queuelength_choice_0_voice_0_video_2_FTP_0_CBR_0.txt) is then created; it is the trace of the queue lengths. The first field is the time stamp, and the next four fields are the queue lengths of AC[3], AC[2], AC[1], and AC[0]. You can then change the mapping algorithm to static or dynamic by setting the mapping-choice parameter. Figure 8 shows the queue length variation for the three mapping algorithms (queuelength_choice_0_voice_0_video_2_FTP_0_CBR_0.txt, queuelength_choice_1_voice_0_video_2_FTP_0_CBR_0.txt, queuelength_choice_2_voice_0_video_2_FTP_0_CBR_0.txt). As shown in Figure 8 (a), in 802.11e all video packets are mapped to the same queue (AC[2]). Both the static and dynamic mapping algorithms use three queues (AC[2], AC[1], AC[0]) to transmit video packets, as shown in Figures 8 (b) and (c). The dynamic mapping provides better utilization of the high-priority queue than the static mapping.
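The queue-length trace can also be post-processed directly, e.g. to compute the average occupancy of each AC. A small sketch, assuming the five-column layout written by the record4 procedure in the script (time, then the queue lengths of AC[3], AC[2], AC[1], AC[0]):

```python
def avg_queue_lengths(lines):
    """Average queue length of AC[3], AC[2], AC[1], AC[0] from trace lines
    of the form: time qlen_AC3 qlen_AC2 qlen_AC1 qlen_AC0."""
    sums, n = [0.0] * 4, 0
    for line in lines:
        fields = line.split()
        if len(fields) < 5:       # skip blank or malformed lines
            continue
        for k in range(4):
            sums[k] += float(fields[k + 1])
        n += 1
    return [s / n for s in sums] if n else sums

# e.g.: avg_queue_lengths(open("queuelength_choice_2_voice_0_video_2_FTP_0_CBR_0.txt"))
```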


The figures below can be produced with gnuplot: run "startxwin.bat" and then "gnuplot", and plot the corresponding columns of the queue-length trace files. With similar steps, you can get the following three figures.

Figure 8 (a): Queue space utilization of 802.11e EDCA (Foreman)

Figure 8 (b): Queue space utilization of static mapping (Foreman)

Figure 8 (c): Queue space utilization of adaptive mapping (Foreman)


Part II. Four traffic flows


        To evaluate the performance of the different mapping algorithms, we apply a light load and a heavy load in the simulations. Under light load there are 5 voice flows, 1 video flow, 1 CBR flow, and 1 TCP flow. Under heavy load there are 1 voice flow, 3 video flows, 1 CBR flow, and 1 TCP flow. The simulation results for video frame loss and average PSNR are presented in Tables 3 and 4.


·          Light load

(1)   ./ns 802_11e.tcl 0 5 1 1 1



(2)   ./et.exe sd_foreman_0 rd_foreman_0 1 1

 (Run the awk command first to replace "H" with "I" in the sender trace file.)
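Any text tool can perform this substitution; a minimal Python sketch (the sample line below is illustrative, not the exact sd_ trace layout):

```python
def h_to_i(line):
    """Replace a standalone frame-type field "H" with "I" in one trace line.
    Note: rejoining with single spaces collapses the original column spacing."""
    return " ".join("I" if tok == "H" else tok for tok in line.split())

print(h_to_i("0.1 1 H 1000"))   # -> 0.1 1 I 1000
```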


By comparing the sending and receiving traces, we can find the packet and video frame losses. The video quality is also evaluated by the Decodable Frame Rate (Q). The result shows that the total number of lost video frames is 0, and the Decodable Frame Rate is 1.00.
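The Decodable Frame Rate counts a frame as decodable only if all of its packets arrive and the frames it depends on are themselves decodable. A simplified Python sketch (B-frame dependencies reduced to the preceding I/P reference only; the input format is hypothetical):

```python
def decodable_frame_rate(frames):
    """frames: (type, received) tuples in decode order, e.g. ("I", True).
    I: decodable if fully received.
    P: decodable if received and the previous I/P reference is decodable.
    B: decodable if received and the previous I/P reference is decodable
       (the succeeding reference is ignored in this simplification)."""
    ref_ok = False          # is the most recent I/P reference decodable?
    ok = 0
    for ftype, received in frames:
        if ftype == "I":
            dec = received
            ref_ok = dec
        elif ftype == "P":
            dec = received and ref_ok
            ref_ok = dec
        else:               # B frame
            dec = received and ref_ok
        ok += dec
    return ok / len(frames)

# Losing the P frame also makes the dependent B frame undecodable:
print(decodable_frame_rate([("I", True), ("B", True), ("P", False), ("B", True)]))  # -> 0.5
```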


(3)   ./etmp4.exe sd_foreman_0 rd_foreman_0 foreman_qcif.mp4 foreman_qcife


        This step produces the MP4-format video after the network simulation. The name of the produced video file follows the final parameter (foreman_qcife) of this step.


(4)   ./ffmpeg.exe -i foreman_qcife.mp4 foreman_qcife.yuv


This step produces a video file in YUV format from the video file of step (3). The YUV file is used to calculate the PSNR of the video quality in the next step. You may be asked to input "y" during this step.


(5)   ./avgpsnr.exe 176 144 420 foreman_qcif.yuv foreman_qcife.yuv


        The average PSNR of the resulting video is 34.89.
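avgpsnr.exe compares the two YUV sequences sample by sample; the underlying computation is the standard PSNR over 8-bit samples, which can be sketched as:

```python
import math

def psnr(ref, deg, peak=255.0):
    """PSNR in dB between two equal-length sequences of 8-bit samples."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, deg)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)
```

For a whole sequence, the per-frame PSNRs of the luminance plane are typically averaged to obtain the reported figure.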


You can repeat the simulation steps for the other two mapping algorithms to obtain the results shown in Table 3.

Table 3: The average PSNR and number of frames lost (light load)

                      Average PSNR    Frame loss number
                                      I frame   P frame   B frame
802.11e EDCA          34.89           0         0         0
Static mapping        -               -         -         -
Adaptive mapping      -               -         -         -

·          Heavy load

(1)   ./ns 802_11e.tcl 0 1 3 1 1

Then, change the mapping algorithm to static and dynamic, and repeat the simulation steps to obtain the results in Table 4. Because all steps are the same as in the light-load case, the step figures for the heavy load are omitted.

Table 4: The average PSNR and number of frames lost (heavy load)

                      Average PSNR    Frame loss number
                                      I frame   P frame   B frame
802.11e EDCA          -               -         -         -
Static mapping        -               -         -         -
Adaptive mapping      -               -         -         -



[1] IEEE Std 802.11e-2005, "Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, Amendment 8: Medium Access Control (MAC) Quality of Service Enhancements," November 2005.

[2] A. Ksentini, M. Naimi, and A. Gueroui, "Toward an improvement of H.264 video transmission over IEEE 802.11e through a cross-layer architecture," IEEE Communications Magazine, Jan. 2006.

[3] C. H. Lin, C. K. Shieh, C. H. Ke, N. Chilamkurti, and S. Zeadally, "An Adaptive Cross-layer Mapping Algorithm for MPEG-4 Video Transmission over IEEE 802.11e WLAN," Telecommunication Systems (Springer), special issue on Mobility Management and Wireless Access, vol. 42, no. 3-4, pp. 223-234, 2009.



If you have any questions about this experiment, please mail the authors.



If you use this tool in your publications, please cite the source of the tool in your published material.


Last modified: 2010/10/17




1. Chih-Heng Ke (柯志亨) -- Henry




Assistant Professor, Department of Computer Science and Information Engineering, National Quemoy University


2. Cheng-Han Lin (林政翰)



Postdoctoral Research Fellow, National Kaohsiung University of Applied Sciences