Chapter 4. Experimental Results and Analysis


4.1 Default Values for Input Parameters
4.2 Performance of Messenger with respect to Node Speed
4.3 Performance of Messenger with respect to Pause Time
4.4 Performance of Messenger with respect to Sliding Window Size (SWS)
4.5 Performance of Messenger with/without Receipt Packets
4.6 Performance of Messenger with/without Cache Structure
4.7 Summary

           We have performed a series of experiments to evaluate the performance of the ad hoc messenger application with respect to the following factors: speed, pause time, Sliding Window Size (SWS), use of receipt packets, and use of cache structure. These factors allow us to assess the sensitivity of our wireless ad hoc messenger, implemented with the DSR protocol, to alternative designs and network environments. In particular, the first two parameters (speed and pause time) reflect the network operating environment, while the last three (SWS, use of receipt packets, and use of cache structure) represent design alternatives in implementing our ad hoc messenger application.
           First, we vary the speed and pause time to control the stability of the network topology generated by the random waypoint model. In the random waypoint model, each node moves to a randomly generated next location at a randomly generated speed, and then pauses at that location for a randomly generated pause time; the cycle then repeats. In manipulating the network environment, when the speed varies (between 0 m/s and 12 m/s), the pause time is set to 0 seconds to remove its effect on the changing network topology. More specifically, since the random waypoint model produces a fairly stable network topology even at a very high speed when the pause time is relatively long (for example, over 20 seconds), the default pause time is set to 0 seconds to eliminate this effect. Likewise, when the pause time varies (between 15 seconds and 960 seconds), the speed is fixed at one default value, 80 m/s, to eliminate its effect. We select 80 m/s as the default speed so that the pause time alone controls the stability of the network topology, since each node moves from its current position to the next position very quickly.
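           The random waypoint cycle described above can be sketched as follows. This is a minimal illustration only; the function name and parameters are ours, not part of the messenger implementation.

```python
import random

def random_waypoint_trace(width, height, speed_range, pause_range, steps):
    """Sketch of the random waypoint model: each node repeatedly picks a
    random next location, moves there at a randomly chosen speed, then
    pauses for a randomly chosen time before repeating the cycle."""
    x, y = random.uniform(0, width), random.uniform(0, height)
    trace = []
    for _ in range(steps):
        nx, ny = random.uniform(0, width), random.uniform(0, height)
        speed = random.uniform(*speed_range)              # m/s
        pause = random.uniform(*pause_range)              # seconds
        dist = ((nx - x) ** 2 + (ny - y) ** 2) ** 0.5
        travel_time = dist / speed if speed > 0 else float("inf")
        trace.append((nx, ny, speed, pause, travel_time))
        x, y = nx, ny
    return trace

# e.g. a 200 m x 200 m area, speeds of 2-12 m/s, pause time fixed at 0 s
trace = random_waypoint_trace(200, 200, (2, 12), (0, 0), steps=5)
```

Setting the pause range to (0, 0), as in the example, reproduces the "pause time = 0" configuration used when speed is the varied parameter.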
           Second, we vary the Sliding Window Size (SWS) parameter to test the system’s capability to allow multiple data packets to be transmitted simultaneously. When the network bandwidth is abundant and connectivity is stable, increasing SWS will allow more data packets to be transmitted simultaneously and thus will increase throughput due to parallelism. However, in an ad hoc network where nodes are resource constrained and bandwidth is limited, increasing SWS may cause more errors in packet delivery and may have an adverse effect on throughput. We will evaluate the effect of SWS on the performance of our wireless ad hoc messenger in a dynamically changing network topology.
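           The sliding-window idea can be illustrated with the following sketch, which keeps up to SWS data packets outstanding at once. The `transmit` callback standing in for the real send-and-ACK exchange is hypothetical, not our actual code.

```python
def send_with_window(packets, sws, transmit):
    """Keep up to `sws` data packets outstanding at once; a window slot is
    freed when a packet is acknowledged. `transmit` returns True when the
    ACK arrives in time and False when the packet must be retransmitted
    (every packet is assumed to eventually succeed)."""
    queue = list(packets)   # packets not yet sent (or queued for resend)
    in_flight = []          # sequence numbers currently unacknowledged
    delivered = []
    while queue or in_flight:
        # fill the window: at most `sws` packets outstanding at a time
        while queue and len(in_flight) < sws:
            in_flight.append(queue.pop(0))
        seq = in_flight.pop(0)
        if transmit(seq):           # ACK received: the window slides
            delivered.append(seq)
        else:                       # timed out or lost: queue a resend
            queue.insert(0, seq)
    return delivered

# With a perfectly reliable channel, all packets arrive in order:
assert send_with_window(range(5), sws=2, transmit=lambda seq: True) == [0, 1, 2, 3, 4]
```

A larger `sws` fills the window with more simultaneous packets, which is exactly where the parallelism gain and the contention cost discussed above trade off.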
           Third, we examine the effect of using receipt packets to detect route errors more quickly for hop-to-hop communication. Even though the IEEE 802.11 MAC protocol retries transmission a number of times, a receipt packet is still needed to inform the source node of the occurrence of a route error. However, in a bandwidth-limited and resource-constrained ad hoc network, the use of receipt packets may degrade the system performance needlessly because of the higher traffic load introduced into the network. We will evaluate the effect of receipt packets on the system performance.
           Finally, we examine the effect of using cache. The use of cache has been reported to improve the performance of the DSR protocol. When cache is used, a source node's route cache can store more than one route acquired from Route Discovery Reply Packets returned by the destination node (as replies to a Route Discovery packet). We design an experiment to evaluate whether using cache is beneficial or harmful in the presence of a changing network topology.

4.1 DEFAULT VALUES FOR INPUT PARAMETERS

4.1.1 Fixed Input Parameters

           The fixed default input values in all experiments are as follows:

  • Simulation Area: Width (m) * Height (m) = 200 (m)*200(m) = 40,000 m²

  • Power Range (Transmission Range): 100 m

  • Total Number of Data Packets to Transmit: 100

4.1.2 Changeable Input Parameters

               The default values for changeable input parameters in all experiments are as follows:

  • Speed: 80 m/sec.

  • Pause Time: 60 sec.

  • All nodes are Connected with Multi-hops (ACM): On

  • Use of Receipt Packets: Off

  • Sliding Window Size (SWS): 1

  • Use of Cache Structure: On

           The above input parameters are changeable in order to set up a specific scenario for each experiment. More specifically, five input variables are manipulated to establish a specific scenario, including the speed, pause time, use of receipt packets, SWS, and use of cache structure. The speed and pause time are manipulated to control the rate at which a network topology changes. The use of receipt packets, SWS, and the use of cache structure are manipulated as design alternatives to the DSR protocol implemented in our wireless ad hoc messenger.

    4.2 PERFORMANCE OF MESSENGER WITH RESPECT TO NODE SPEED

               This section reports the effect of node speed on the performance of the wireless ad hoc messenger application.

    4.2.1 Scenario

               In this experiment, all parameters assume their default values except the speed parameter, which changes from 0 m/s to 12 m/s in increments of 2 m/s:

  • Speed: 0, 2, 4, 6, 8, 10, 12 (m/s)

               As explained in Chapter 3 (Methodology), the random waypoint mobility model generates a stable network even at a high speed if the pause time is long (i.e., over 20 seconds). In order to eliminate the possibility that a stable network is generated because of a long pause time, the pause time is set to 0 seconds. Therefore, in this experiment, “speed” is the only parameter that controls the network stability. In general, a low speed will generate a stable network while a high speed will generate a frequently changing topology.

    4.2.2 Results

    4.2.2.1 Speed vs. Average Latency to Find a New Route

               Figure 4.1 shows how increasing the speed affects the average latency to find a new route. Figure 4.1 displays the mean value obtained from three different experiment rounds using three different configuration files for the network topology. Correspondingly, Figure 4.2 displays the results for these three rounds.
               As expected, the average latency increases as the speed increases, since the random waypoint model produces a very dynamic network when the node speed is high, resulting in a high average latency to find a new route.

    4.2.2.2 Speed vs. Average Latency to Deliver a Data Packet

               Figure 4.3 shows that the average latency to deliver a data packet increases as the speed increases. The reason again is that the random waypoint model produces a more fluctuating network at a high node speed. Thus, it takes more time for a data packet to reach the destination node in an unstable network topology.

    4.2.2.3 Speed vs. Delivery Ratio of Data Packets

               Figure 4.4 shows that a network topology generated with a high node speed and a short pause time negatively influences the delivery ratio of data packets. That is, as the speed increases, the network becomes more and more dynamic, and the delivery ratio of data packets decreases because more packets need to be retransmitted due to errors.

    4.2.2.4 Speed vs. Normalized Control Overhead

               Figure 4.5 demonstrates that the normalized control overhead increases as the node speed increases. The reason is again due to the fact that the random waypoint model generates a highly dynamic network with a high node speed and a short pause time, causing more control packets to be used to deliver the same number of data packets. That is, because there will be more broken routes when nodes move with a high speed, so more control packets, such as Route Discovery packets, may be used to repair broken routes.

    4.2.2.5 Speed vs. Throughput

               Figure 4.6 shows the negative effect of increasing the node speed on system throughput since a network with nodes moving with a high speed can break routes easily, thus causing the number of data packets delivered per second to be decreased.

    4.2.3 Summary of Result

               The experiment to test the effect of node speed on the performance of our wireless ad hoc messenger shows that our wireless ad hoc messenger performs better when the speed is low. In this experiment, the node speed is the only factor to control the stability of network topology using the random waypoint model. We observe that as the speed decreases, the network generated is more stable and our wireless ad hoc messenger will result in a shorter latency to find a new route, a shorter latency to deliver a data packet, a higher delivery ratio of data packets, a lower normalized control overhead, and a higher throughput.

    4.3 PERFORMANCE OF MESSENGER WITH RESPECT TO PAUSE TIME

    4.3.1 Scenario

               In this experiment, all parameters assume their default values except the pause time parameter, which varies between 15 seconds and 960 seconds as follows:

  • Pause Time: 15, 30, 60, 120, 240, 480, 960 (seconds)

    4.3.2 Results

    4.3.2.1 Pause Time vs. Average Latency to Find a New Route

               Figure 4.7 shows the mean average latency obtained from three experiment rounds in which the pause time is the x-parameter and the average latency for finding a new route is the y-parameter. Figure 4.7 illustrates that a source node takes the most time to find a new route when the pause time is short, which is 15 seconds in this experiment. When the pause time increases to 60 seconds, the average latency decreases dramatically. However, the latency increases again when the pause time increases further.
               The reason for this latency behavior is as follows. When the pause time is sufficiently long (>240 seconds in this case), the network generated by the random waypoint model is very stable, so the latency to find a new route is attributed entirely to the first Route Discovery process, performed before any cache entry is available in the source node. When the pause time decreases to 60 seconds in this experiment scenario, the average latency to find a new route is computed over both the first Route Discovery process and subsequent route searches that can use cache entries; as a result, the average latency is at its minimum. As the pause time decreases further to 15 seconds, the network generated is too dynamic for any cache entries to be useful, and a new Route Discovery process, which takes a long time to find a new route in a dynamic network, is required, resulting in a very high average latency.

               We verify the reason given above by reporting the latency to find the “first” route, as shown in Figure 4.8. When the pause time is 15 seconds, the network topology changes very frequently, causing all cache entries stored in the source node to be invalid. Thus, when the pause time is 15 seconds, it takes more time to find a new route than to find the first route because of the extra time spent trying invalid cache entries. This is indicated by the very long “average” latency compared with the much smaller latency for finding the first route when the pause time is 15 seconds. When the pause time increases to 60 seconds, we see that the average latency is at its minimum because most cache entries now are valid and can be utilized to decrease the time to find a route when performing a Route Discovery process. This is also verified in Figure 4.8, where we see that the latency of the first route is higher than the average latency when the pause time is 60 seconds. Finally, when the pause time is 240 seconds or higher, the average latency to find a new route and the latency to find the first route are the same, as shown in Figure 4.8. Since the Route Discovery process, when the pause time is sufficiently high (>240 seconds), is performed only in the initial Route Discovery stage (because the network topology generated is very stable and does not change throughout the entire experimental period), the source node does not need to perform any further Route Discovery after the initial one.

    4.3.2.2 Pause Time vs. Average Latency to Deliver a Data Packet

               Figure 4.9 shows that the average latency to deliver a data packet decreases as the pause time increases. The average latency drops dramatically once the pause time becomes sufficiently high (>240 seconds), beyond which the network generated becomes very stable and the latency to deliver a data packet becomes very low.

    4.3.2.3 Pause Time vs. Delivery Ratio of Data Packets

               Figure 4.10 shows that the delivery ratio of all transmitted data packets noticeably increases as pause time increases. When the pause time is sufficiently high (>240 seconds), the delivery ratio of data packets is almost 100%. That is, when 100 data packets are sent, 100 data packets are successfully delivered without retransmission.

    4.3.2.4 Pause Time vs. Normalized Control Overhead

               Figure 4.11 shows that as the pause time increases the normalized control overhead decreases. This is because more control packets are used for performing Route Discovery and Maintenance processes when participating nodes are highly mobile.

    4.3.2.5 Pause Time vs. Throughput

               As expected, Figure 4.12 shows that throughput also increases as the pause time grows. That is, as the network becomes more stable, it takes less time for a source node to send the same amount of data packets.

    4.3.3 Summary of Result

               In conclusion, our wireless ad hoc messenger using the DSR protocol performs better in terms of the five metrics selected when the pause time is long, for which the random waypoint model generates a more stable network. More specifically, when the pause time is sufficiently long (>240 seconds in our experiment), the latency to deliver a data packet is low, the delivery ratio of data packets is high, the number of control packets used per data packet is small, and the throughput is high. An interesting result is that when the pause time is relatively long, the initial Route Discovery takes more time than the average Route Discovery latency over the entire simulation period, because a long pause time creates a stable network, so a subsequent Route Discovery can utilize valid cache entries to find valid routes and reduce the search time. On the other hand, when the pause time is very small, resulting in a very dynamic network, the initial Route Discovery takes less time than the average Route Discovery latency because most cache entries are invalid in this case, causing a subsequent Route Discovery process to spend more time trying invalid routes stored in the cache. We will examine the effect of using cache further in Section 4.6.

    4.4 PERFORMANCE OF MESSENGER WITH RESPECT TO SLIDING WINDOW SIZE (SWS)

    4.4.1 Scenario

               All parameters in this experiment assume their default values except the Sliding Window Size (SWS) parameter, which changes its value from 1 to 6 as follows:

  • SWS: 1, 2, 3, 4, 5, 6

    4.4.2 Results

    4.4.2.1 SWS vs. Average Latency to Find a New Route

               Figure 4.13 shows how much time is spent to find a new route on average as SWS grows. When SWS is 1 or 2, our wireless ad hoc messenger can sustain the resulting network traffic load, and the latency to find a new route is reasonably small. However, once SWS is larger than 2, the latency to find a new route increases. A possible reason is that the underlying MAC layer protocol for wireless transmission (IEEE 802.11) is based on RTS/CTS with CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance), so many collisions may occur when multiple packets are transmitted simultaneously. That is, when multiple packets are sent at the same time, the Collision Avoidance mechanism utilized in IEEE 802.11 may cause a high packet transmission delay when the medium is busy. Furthermore, a packet may be discarded after a given number of retransmissions, or it may be garbled and not received by the receiver. These reasons contribute to a long delay in finding a new route. In particular, when a packet arrives so late at the destination node that the ACK of the packet is not received by the sender within the end-to-end timeout period, a new Route Discovery is triggered by the source node, causing a further delay.

    4.4.2.2 SWS vs. Average Latency to Deliver a Data Packet

               Figure 4.14 shows that as SWS increases, the average latency to deliver a data packet increases. The reason is that as SWS increases, more packets are in transit and the Collision Avoidance mechanism of IEEE 802.11 tends to cause a packet to be delivered late or thrown away after a given number of retransmissions or garbled and not received by the receiver.

    4.4.2.3 SWS vs. Delivery Ratio of Data Packets

               Figure 4.15 demonstrates the effect of increasing the SWS on the delivery ratio of data packets in our wireless ad hoc messenger. It shows that the delivery ratio of data packets decreases, as SWS increases. The reason is that more packets need to be retransmitted as SWS increases because packets could arrive late or are lost due to a higher level of medium contention. Thus, more data packets are sent than those actually delivered to the receiver.

    4.4.2.4 SWS vs. Normalized Control Overhead

               Figure 4.16 shows the number of control packets incurred per data packet, as a function of SWS. As expected, more control packets are required as SWS increases. Since the RTS/CTS mechanism with CSMA/CA tends to cause more Route Discovery processes to be invoked when the network traffic increases, more control packets (i.e. Route Discovery Packets and Reply Route Discovery Packets) are required as SWS increases.

    4.4.2.5 SWS vs. Throughput

               Figure 4.17 shows the number of packets delivered per second as a function of SWS. When SWS is 2, the throughput is the highest, beyond which the throughput decreases as SWS grows. Due to parallelism, the throughput when SWS is 5 is better than the throughput when SWS is 1. However, when SWS is 6, the high latency experienced due to heavy medium contention causes the throughput to fall below the throughput at SWS equal to 1. Thus, although the data parallelism introduced through the use of SWS > 1 can help improve the throughput, the medium contention effect in IEEE 802.11 can outweigh the benefit and degrade the throughput. As a result, there exists an optimal SWS at which the throughput is maximized.

    4.4.3 Summary of Result

               This experiment tested the effect of SWS on the performance of our ad hoc messenger using DSR in terms of the five metrics selected. Since a large SWS encourages parallelism in data processing, our wireless ad hoc messenger application exhibits better performance in throughput when SWS is 2 than when SWS is 1. However, when SWS is larger than 2, the throughput decreases because of the high latency experienced in packet delivery due to the collision avoidance mechanism used in IEEE 802.11 to handle multiple packets being transmitted simultaneously. We also observe that at SWS equal to 2, the latency to find a new route is at its minimum with a similar reasoning applied. The optimal SWS size depends on the system bandwidth capacity; in our experiment we observe the optimal SWS being 2 because the latency to deliver a data packet, the delivery ratio of data packets, and the normalized control overhead all deteriorate badly beyond this optimal SWS value.

    4.5 PERFORMANCE OF MESSENGER WITH/WITHOUT RECEIPT PACKETS

    4.5.1 Scenario

               In this experiment, all parameters assume their default values except the pause time and the use of receipt packets. The pause time parameter changes from 15 seconds to 960 seconds and the use of receipt packets is turned On/Off as follows:

  • Pause Time: 15, 30, 60, 120, 240, 480, 960 (seconds)
  • Use of Receipt Packets: On or Off

    4.5.2 Results

    4.5.2.1 Pause Time vs. Average Latency to Find a New Route with/without Receipt Packets

               Figure 4.18 shows two curves for the average latency to find a new route when receipt packets are used or not used, respectively. It shows that not using receipt packets gives a shorter latency to find a new route than using receipt packets. This indicates that using receipt packets may adversely bring higher network traffic and cause each packet to arrive late. Thus, the delayed arrival of each packet leads to a longer latency in finding a new route when Route Discovery is performed.

    4.5.2.2 Pause Time vs. Average Latency to Deliver a Data Packet with/without Receipt Packets

               Figure 4.19 shows that not using receipt packets gives better performance in terms of the average latency to deliver a data packet in general. However, when the pause time is short and the network is very unstable, using receipt packets may sometimes give better performance, e.g., for the case when the pause time is 30 seconds. Also, the performance of the two experiments diverges especially when the pause time is 60 seconds. Since a system using receipt packets detects route failures at intermediate nodes more quickly than one not using receipt packets, the benefit of the rapid detection of route failures may exceed the overhead of using more control packets. When receipt packets are not used, a source node tries to resend the same data packet three times, each with an end-to-end timeout interval. After that, if the source node still has not received the ACK of the data packet sent, it performs a Route Discovery process because it regards the data packet as lost. Thus, when receipt packets are not used, the amount of network traffic is much less than when they are used. However, in the experiment without receipt packets, if there is a route failure, the source node needs to time out three times before the failure is detected. Therefore, if the network bandwidth can sustain the amount of control packets generated by using receipt packets, and the network topology is unstable, then using receipt packets can lower the average latency to deliver a data packet.
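               The source-side behavior for the case without receipt packets can be sketched as follows. The callback names are illustrative, not taken from our actual code.

```python
def send_data_packet(seq, send_and_wait, route_discovery, max_retries=3):
    """Sketch of the no-receipt-packet case: resend the same data packet
    up to three times, each attempt bounded by an end-to-end timeout;
    if no ACK ever arrives, treat the route as broken and fall back to
    a new Route Discovery. Returns the number of attempts used, or None
    when the packet is given up on."""
    for attempt in range(1, max_retries + 1):
        if send_and_wait(seq):   # ACK arrived within the end-to-end timeout
            return attempt
    route_discovery()            # all retries timed out: find a new route
    return None
```

This illustrates why a route failure is detected only after three end-to-end timeouts in this mode, whereas receipt packets would let an intermediate node report the failure after a single hop-level timeout.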

    4.5.2.3 Pause Time vs. Delivery Ratio of Data Packets with/without Receipt Packets

               Figure 4.20 shows how the pause time affects the delivery ratio of data packets when receipt packets are used or not, respectively. The two curves diverge when the pause time is at 30 seconds. When the pause time is shorter or the network topology frequently changes, using receipt packets improves the delivery ratio of data packets. However, as the network topology becomes stable, using receipt packets is not useful and can even be harmful as it introduces more control packets. While the use of receipt packets is to detect route errors by intermediate nodes for rapid detection of route failures, it is not useful in a stable network. Consequently, receipt packets should be used only in a network with relatively fast moving nodes so that the benefit of rapidly detecting route errors by intermediate nodes outweighs the disadvantage of more control packets being introduced into the network.

    4.5.2.4 Pause Time vs. Normalized Control Overhead with/without Receipt Packets

               Figure 4.21 shows two curves for the normalized control overhead when receipt packets are being used or not, respectively. This confirms that our wireless ad hoc messenger using receipt packets generates many more control packets.

    4.5.2.5 Pause Time vs. Throughput with/without Receipt Packets

               Figure 4.22 shows two curves for the throughput when receipt packets are being used or not, respectively. The result correlates well with the packet delivery ratio result. That is, when the network topology is dynamic, or participating nodes move frequently, using receipt packets detects broken routes quickly by intermediate nodes and, as a result, improves the throughput.
               However, in a stable network in which a route error does not occur often, using receipt packets increases the amount of network traffic and causes packets to be delivered out of the timeout period, and, consequently, degrades the throughput.

    4.5.3 Summary of Result

               We tested the effect of using receipt packets, which detect route errors more rapidly, on the performance of our ad hoc messenger application under stable and dynamic network topologies. The pause time was varied to generate stable or dynamic network topologies based on the random waypoint model. Since using receipt packets generates more network traffic, it may cause more data delivery delay due to the collision avoidance mechanism of IEEE 802.11. Thus, when the network is stable, using receipt packets is not beneficial and can even be harmful because of the higher traffic load introduced. However, since the use of receipt packets enables route failures to be detected more rapidly, it may be advantageous when the network topology changes very dynamically.

    4.6 PERFORMANCE OF MESSENGER WITH/WITHOUT CACHE STRUCTURE

               By default, our cache structure includes all available routes obtained from Route Discovery. When the current route is broken, it is removed from the cache of the source node. Conversely, new routes acquired from a new Route Discovery are added to the cache of the source node. When the source node uses a route in its cache, it always selects the shortest route to reach the destination. When the selected route is not valid, the source node tries alternative routes in its cache if they are available.
               When cache is not utilized, only one route is available in the source node. Thus, if the route currently in use is no longer valid, the source node must invoke a Route Discovery process again to find a new route.
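               The cache behavior described above can be sketched as follows; the class and method names are illustrative, not taken from our implementation.

```python
class RouteCache:
    """Sketch of the source node's route cache: store all routes returned
    by Route Discovery, always hand out the shortest one, and drop a
    route when it is found to be broken. With use_cache=False, only the
    most recently acquired route is kept."""

    def __init__(self, use_cache=True):
        self.use_cache = use_cache
        self.routes = []                  # each route is a list of node ids

    def add(self, route):
        if not self.use_cache:
            self.routes = [route]         # no cache: a single route only
        elif route not in self.routes:
            self.routes.append(route)

    def shortest(self):
        # the source always selects the shortest stored route, if any
        return min(self.routes, key=len) if self.routes else None

    def invalidate(self, route):
        # a broken route is removed; once the cache empties out, a new
        # Route Discovery is required
        if route in self.routes:
            self.routes.remove(route)

cache = RouteCache()
cache.add(["S", "A", "B", "D"])
cache.add(["S", "C", "D"])
assert cache.shortest() == ["S", "C", "D"]      # shortest route preferred
cache.invalidate(["S", "C", "D"])               # route broke: fall back
assert cache.shortest() == ["S", "A", "B", "D"]
```

The fallback in the usage example is exactly where the cache pays off in a stable network, and where stale entries waste time in a dynamic one.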

    4.6.1 Scenario

               In this experiment, all parameters assume their default values except the pause time and the use of cache. The pause time parameter changes from 15 seconds to 960 seconds and the use of cache is turned On/Off as follows:

  • Pause Time: 15, 30, 60, 120, 240, 480, 960 (seconds)
  • Use of Cache Structure: On or Off

    4.6.2 Results

               We first report the average number of source-destination routes maintained in the cache of the source node. Figure 4.23 shows that the average number of routes maintained is between 1 and 3 for the case in which cache is used, and between 0 and 1 for the case in which the cache structure is not utilized.

    4.6.2.1 Pause Time vs. Average Latency to Find a New Route with/without Cache Structure

               Figure 4.24 illustrates how the cache structure affects the average latency to find a new route. There is a break-even point at pause time = 60. When the pause time is small (<60) and thus the network is dynamic, the use of cache provides a higher latency. On the other hand, when the pause time is large (>60) and thus the network is stable, using cache provides a lower latency.
               The result means that when the network topology is frequently changing, the use of cache does not provide much benefit and can even degrade performance. The reason is that when the network is very dynamic, cache entries tend to become invalid, which increases the time to find a route because the source will try invalid cache entries before invoking a Route Discovery process to search for a route. As shown in Figure 4.24, our application performs better with the use of cache when the network is stable. When the network topology does not change often, the alternative routes stored in the cache help find a new route because the source node does not have to perform an expensive Route Discovery process, but can simply use a valid route in its cache.

               There are optimization techniques, such as refreshing the route cache periodically, or turning on or off the option of using cache based on the network topology status as reported in [Lou2002, Wu2002, Hu2000, Hu2002]. These optimization techniques were not explored in this experiment and will be left as future work.

    4.6.2.2 Pause Time vs. Average Latency to Deliver a Data Packet with/without Cache Structure

               Figure 4.25 shows how the use of the cache structure affects the average latency to deliver a data packet. The trend is the same as before. However, unlike the result on the average latency to find a new route, even under a stable network topology there is no significant difference in the average latency to deliver a data packet with or without the use of cache. This is because the latency to deliver a data packet represents the delay for one data packet to travel from a source node to a destination node, and there would not be much difference if the one route used remains valid for a long period of time.

    4.6.2.3 Pause Time vs. Delivery Ratio of Data Packets with/without Cache Structure

               Figure 4.26 shows the effect of cache on the delivery ratio of data packets. Again, when the pause time is small (<240 seconds) and the network is dynamic, the use of cache causes more data packets to be retransmitted because the source uses invalid routes stored in the cache to transmit data packets.

    4.6.2.4 Pause Time vs. Normalized Control Overhead with/without Cache Structure

               Figure 4.27 compares the normalized control overhead of our system when the cache structure is used vs. not used. More control packets are used under high mobility of participating nodes, especially when the cache structure is used. As explained before, when nodes in the network move frequently, the use of cache produces more route error packets. That is, more control packets are transmitted because of the use of stale route information in the cache.

    4.6.2.5 Pause Time vs. Throughput with/without Cache Structure

               Figure 4.28 shows the impact of using the cache structure on the throughput. A distinct break-even point exists. That is, the use of cache improves the throughput only when the pause time is relatively long, confirming that using cache is helpful in improving the performance of our application when the network is relatively stable.

    4.6.3 Summary of Result

               In this experiment, we tested the effect of using cache on the performance of our wireless ad hoc messenger application. The results showed that the use of cache structure as a design alternative in DSR is not helpful when the network topology is unstable and likely to change frequently. That is, when the pause time is relatively short, and thus the network topology is dynamic, using cache may even be harmful to the system performance because of the extra delay incurred through using invalid cache entries to find a route. There exists a break-even point (with respect to the degree of changes in the network topology) at which the use of cache can improve the performance of the system although the improvement obtained is in general small.

    4.7 SUMMARY

               In this chapter, we have described the experiment setup and reported results for evaluating the performance of our wireless ad hoc messenger application, with physical interpretations given. Below we summarize the experimental results obtained.

               First, as the node speed increases, the performance of our wireless ad hoc messenger decreases because as the speed increases, a more dynamic network will result, thus causing routes to be broken more easily.
               Second, as the pause time increases, the performance of our wireless ad hoc messenger also increases. The reason is that a long pause time produces a relatively stable network topology.
               Third, the increase of SWS degrades the performance of our ad hoc messenger application on the one hand due to a higher level of medium contention between multiple packets being transmitted simultaneously, but improves the performance on the other hand due to a higher degree of parallelism introduced. We discovered that there exists an optimal SWS value under which these two factors are best balanced and the system performance is maximized. In our experiment, we observed that the optimal SWS value is 2, beyond which the system performance degrades as SWS increases.
               Fourth, the use of receipt packets does not provide any benefit when the network topology is stable because of extra control packets being introduced into the system. However, when the network topology is changing rapidly, the use of receipt packets may improve the system performance because route errors can be quickly detected by intermediate nodes.
               Finally, the use of cache may improve performance if the network topology is relatively stable. However, when the network topology is rapidly changing, the use of cache could be harmful to system performance because most cache entries may be invalid, causing the system to spend more time to search for a route during a Route Discovery or Maintenance process.

    Last updated: Thursday, July 29, 2004
