
Next Issue: Volume 5, March
Previous Issue: Volume 4, September
J. Sens. Actuator Netw., Volume 4, Issue 4 (December 2015) – 6 articles , Pages 274-409

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
809 KiB  
Article
The Efficacy of Epidemic Algorithms on Detecting Node Replicas in Wireless Sensor Networks
by Narasimha Shashidhar, Chadi Kari and Rakesh Verma
J. Sens. Actuator Netw. 2015, 4(4), 378-409; https://doi.org/10.3390/jsan4040378 - 11 Dec 2015
Cited by 6 | Viewed by 6813
Abstract
A node replication attack against a wireless sensor network involves surreptitious efforts by an adversary to insert duplicate sensor nodes into the network while avoiding detection. Due to the lack of tamper-resistant hardware and the low cost of sensor nodes, replication attacks take little effort to carry out. Naturally, detecting these replica nodes is a very important task and has been studied extensively. In this paper, we propose a novel distributed, randomized sensor duplicate detection algorithm called Discard to detect node replicas in group-deployed wireless sensor networks. Our protocol is an epidemic, self-organizing duplicate detection scheme, which exhibits emergent properties. Epidemic schemes have found diverse applications in distributed computing: load balancing, topology management, audio and video streaming, computing aggregate functions, failure detection, and network and resource monitoring, to name a few. To the best of our knowledge, our algorithm is the first attempt at exploring the potential of this paradigm to detect replicas in a wireless sensor network. Through analysis and simulation, we show that our scheme achieves robust replica detection with substantially lower communication, computational and storage requirements than prior schemes in the literature. Full article
Show Figures

Figure 1: Group-deployed sensor network topology.
Figure 2: Cache structure and exchange. Note that the duplicate cache only needs to store the identity of the duplicate node and no more than two distinct locations, regardless of the number of locations in which the replica has been deployed.
Figure 3: The proportion of nodes that have not learned about the duplicate, as a function of consecutive cycles of the Discard algorithm. The expected size of the home zone is 45 nodes. The plot points are averages of 100 runs.
Figure 4: Number of incoming cache exchange requests per cycle, per sensor.
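The epidemic cache-exchange idea described in the abstract can be illustrated with a small gossip simulation. This is a toy sketch, not the authors' Discard protocol: the node model, the round structure, and the two-location cache cap (suggested by the Figure 2 caption) are all assumptions.

```python
import random

random.seed(0)  # deterministic toy run

class Node:
    """Hypothetical sensor node for an epidemic duplicate-detection sketch."""
    def __init__(self, node_id, location):
        self.id = node_id
        self.location = location
        # cache: claimed node id -> set of distinct claimed locations, capped
        # at two, mirroring the duplicate-cache note in the Figure 2 caption.
        self.cache = {node_id: {location}}

    def merge(self, other_cache):
        for nid, locs in other_cache.items():
            known = self.cache.setdefault(nid, set())
            for loc in locs:
                if len(known) < 2:
                    known.add(loc)

    def detected_replicas(self):
        # An identity claiming two distinct locations is flagged as a replica.
        return {nid for nid, locs in self.cache.items() if len(locs) > 1}

def gossip_round(nodes):
    """One epidemic cycle: every node exchanges caches with a random peer."""
    for node in nodes:
        peer = random.choice([n for n in nodes if n is not node])
        node.merge(peer.cache)
        peer.merge(node.cache)

# Eight honest nodes plus a replicated identity "r" deployed at two locations.
nodes = [Node(f"n{i}", (i, 0)) for i in range(8)]
nodes += [Node("r", (1, 1)), Node("r", (5, 5))]

for _ in range(10):  # a handful of cycles lets the rumor spread
    gossip_round(nodes)

aware = sum(1 for n in nodes if "r" in n.detected_replicas())
print(f"{aware}/{len(nodes)} nodes have detected the replica 'r'")
```

Because the exchange is bidirectional and every node initiates one exchange per cycle, knowledge of the conflicting locations spreads roughly exponentially, which is the property the abstract attributes to epidemic schemes.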
402 KiB  
Article
Colorful Textile Antennas Integrated into Embroidered Logos
by Asimina Kiourti and John L. Volakis
J. Sens. Actuator Netw. 2015, 4(4), 371-377; https://doi.org/10.3390/jsan4040371 - 8 Dec 2015
Cited by 25 | Viewed by 13486
Abstract
We present a new methodology to create colorful textile antennas that can be embroidered within logos or other aesthetic shapes. Conductive threads (e-threads) have already been used in former embroidery approaches, but only in the single color of the corresponding conductive material, viz. silver or copper. So far, they have not been adapted to ‘print’ colorful textile antennas. For the first time, we propose an approach to create colorful electronic textile shapes. In brief, the embroidery process uses an e-thread in the bobbin case of the sewing machine to embroider the antenna on the back side of the garment. Concurrently, a colorful assistant yarn is threaded through the needle of the embroidery machine and used to secure, or ‘couch’, the e-threads onto the fabric. In doing so, a colorful shape is generated on the front side of the garment. The proposed antennas can be unobtrusively integrated into clothing or other accessories for a wide range of applications (e.g., wireless communications, Radio Frequency IDentification, sensing). Full article
Show Figures

Graphical abstract
Figure 1: Former technologies used to realize wearable logo-shaped antennas: (a) rigid antennas © John Wiley and Sons [9]; (b) copper tape antennas [10]; (c) conductive ink antennas [10,11]; (d) conductive fabric antennas © IEEE [12]; (e) other embroidered antenna © IEEE [13]; versus (f) proposed technology.
Figure 2: Proposed technology for colorful logo antennas: (a) STEP 1: antenna design; (b) STEP 2: digitization; (c) STEP 3: embroidery of conductive parts; (d) STEP 4: embroidery of non-conductive parts.
Figure 3: Side view of proposed colorful vs. former unicolor textile antennas.
Figure 4: Measured performance of the textile and copper tape dipole prototypes: (a) reflection coefficient, |S11|; (b) E-plane realized gain radiation pattern at 2.4 GHz; (c) H-plane realized gain radiation pattern at 2.4 GHz.
2964 KiB  
Article
Critical Infrastructure Surveillance Using Secure Wireless Sensor Networks
by Michael Niedermeier, Xiaobing He, Hermann De Meer, Carsten Buschmann, Klaus Hartmann, Benjamin Langmann, Michael Koch, Stefan Fischer and Dennis Pfisterer
J. Sens. Actuator Netw. 2015, 4(4), 336-370; https://doi.org/10.3390/jsan4040336 - 25 Nov 2015
Cited by 8 | Viewed by 10606
Abstract
In this work, a secure wireless sensor network (WSN) for the surveillance, monitoring and protection of critical infrastructures was developed. To guarantee the security of the system, the main focus was the implementation of a unique security concept, which includes both security on the communication level and mechanisms that ensure functional safety during operation. While there are many theoretical approaches in various subdomains of WSNs—like network structures, communication protocols and security concepts—the construction, implementation and real-life application of these devices are still rare. This work deals with these aspects, covering all phases from concept generation to operation of a secure wireless sensor network. While the key focus of this paper lies on the security and safety features of the WSN, the detection, localization and classification capabilities resulting from the interaction of the nodes’ different sensor types are also described. Full article
(This article belongs to the Special Issue Security Issues in Sensor Networks)
Show Figures

Figure 1: Overall system structure.
Figure 2: The visualization program Spyglass.
Figure 3: Hardware components of a clusterhead.
Figure 4: Overview of the different sensor nodes and iSense module components.
Figure 5: Architecture of the Testbed Runtime software.
Figure 6: Communication security measures.
Figure 7: Status byte used for error flag signaling.
Figure 8: Channel mask used to signal jammed frequencies.
Figure 9: Information flow of detection, localization and classification algorithms.
Figure 10: Results of the geophone event picking algorithm.
Figure 11: Experimental evaluation of the detection ranges of the different PIR sensor types. Green marks the detection area specified in the datasheet and red the actual detection area for walks at normal speed. (a) AMN34111 (Single). (b) AMN31111 (Multi). (c) AMN33111 (Multi). (d) Multi-PIR. (e) Long-Range-PIR: IS392.
Figure 12: Typical PIR sensor activity of a Multi-PIR sensor node when an object passes the node from right to left and vice versa.
Figure 13: Field test of the AMR sensor.
Figure 14: Sensor data with fast car movement (50 km/h).
Figure 15: Sensor data with slow car movement (5 km/h).
Figure 16: Adaptive frequency hopping and communication robustness test setup.
Figure 17: Absorber hall with jamming device and clusterhead.
Figure 18: Spyglass renderings of the maps for the field tests performed within the scope of the system development (grid size is 5 m × 5 m). (a) Large field test; (b) Small field test.
Figure 19: Events that occurred during the AFH and jamming tests.
Figure 20: Example of a walking test during the second field trial.
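The status byte and jammed-channel mask mentioned in the captions of Figures 7 and 8 suggest a compact bitmask encoding. The sketch below is a hypothetical layout, not the paper's: it merely assumes that the 16 channels (11–26) of the IEEE 802.15.4 2.4 GHz band each map to one bit of a 16-bit mask.

```python
# Hypothetical channel-mask encoding for jammed-frequency signaling.
# IEEE 802.15.4 defines 16 channels (11-26) in the 2.4 GHz band, which fit
# naturally into a 16-bit mask: bit k set <=> channel 11+k is jammed.

def encode_mask(jammed_channels):
    mask = 0
    for ch in jammed_channels:
        if not 11 <= ch <= 26:
            raise ValueError(f"channel {ch} outside 802.15.4 2.4 GHz range")
        mask |= 1 << (ch - 11)
    return mask

def decode_mask(mask):
    return [11 + k for k in range(16) if mask & (1 << k)]

mask = encode_mask([11, 15, 26])
print(f"mask = 0x{mask:04x}, jammed = {decode_mask(mask)}")  # mask = 0x8011
```

A node receiving such a mask can exclude the flagged channels from its hopping sequence, which is the behavior the adaptive frequency hopping tests (Figures 16–19) evaluate.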
855 KiB  
Article
Lesson Learned from Collecting Quantified Self Information via Mobile and Wearable Devices
by Reza Rawassizadeh, Elaheh Momeni, Chelsea Dobbins, Pejman Mirza-Babaei and Ramin Rahnamoun
J. Sens. Actuator Netw. 2015, 4(4), 315-335; https://doi.org/10.3390/jsan4040315 - 5 Nov 2015
Cited by 38 | Viewed by 11672
Abstract
The ubiquity and affordability of mobile and wearable devices have enabled us to continually and digitally record our daily life activities. Consequently, we are seeing the growth of data collection experiments in several scientific disciplines. Although these have yielded promising results, mobile and wearable data collection experiments are often restricted to a specific configuration that has been designed for a unique study goal. These approaches do not address all the real-world challenges of “continuous data collection” systems. As a result, there have been few discussions or reports about the issues faced when implementing these platforms in a practical situation. To address this, we have summarized our technical and user-centric findings from three lifelogging and Quantified Self data collection studies, which we have conducted in real-world settings, for both smartphones and smartwatches. In addition to (i) privacy and (ii) battery-related issues, based on our findings we recommend that further works consider (iii) implementing multivariate reflection of the data; (iv) resolving uncertainty and data loss; and (v) minimizing the manual intervention required by users. These findings provide insights that can be used as a guideline for further Quantified Self or lifelogging studies. Full article
(This article belongs to the Special Issue Mobile Computing and Applications)
Show Figures

Figure 1: Smartwatch manual data collection interface.
Figure 2: The number of participants who stopped providing their mood or sleep data in mobileStd 2. This figure illustrates the problem of required manual intervention in a voluntary setting, where participants receive no reward for taking part in the experiment.
Figure 3: Overview of five sensor data streams for all users collected in mobileStd 2, based on time of day.
Figure 4: Three-day visualization of user lifelog data in mobileStd 2.
Figure 5: Five examples of lifelogging and Quantified Self applications currently available on the market: (a) Apple Health; (b) Google Fit; (c) Fitbit; (d) Sony Lifelog; and (e) Huawei Wear. All of these applications are capable of collecting data from multiple sources of information, but only Apple Health provides multivariate visualization of its historical data. Google Fit also fuses different physical activities, but uses separate visualizations for other data such as heart rate.
6499 KiB  
Article
On Optimal Multi-Sensor Network Configuration for 3D Registration
by Hadi Aliakbarpour, V. B. Surya Prasath and Jorge Dias
J. Sens. Actuator Netw. 2015, 4(4), 293-314; https://doi.org/10.3390/jsan4040293 - 4 Nov 2015
Cited by 4 | Viewed by 7566
Abstract
Multi-sensor networks provide complementary information for various tasks like object detection, movement analysis and tracking. One of the important ingredients for efficient multi-sensor network actualization is the optimal configuration of sensors. In this work, we consider the problem of optimal configuration of a network of coupled camera-inertial sensors for 3D data registration and reconstruction for human movement analysis. For this purpose, we utilize a genetic algorithm (GA) based optimization which involves geometric visibility constraints. Our approach obtains an optimal configuration maximizing visibility in smart sensor networks, and we provide a systematic study using edge visibility criteria, a GA for optimal placement, and extension from 2D to 3D. Experimental results on both simulated data and real camera-inertial fused data indicate we obtain promising results. The method is scalable and can also be applied to other smart networks of sensors. We provide an application in distributed coupled video-inertial sensor based 3D reconstruction for human movement analysis in real time. Full article
(This article belongs to the Special Issue 3D Wireless Sensor Network)
Show Figures

Graphical abstract
Figure 1: Investigation of the criteria for visibility of a general convex polygon. (a) An exemplary convex polygon is observed by two cameras. The images are shown from the top view of the inertial reference plane π_ref. (b) The registration of the polygon corresponding to the left picture. The registration includes the object and some extra areas (colored in red) which do not belong to the polygon. This red area appears because the lowest edge of the polygon is not visible.
Figure 2: The involved vectors. Green vectors l_i and r_i, respectively, indicate the left and right tangents (bounding vectors) of a camera c_i. The bisector vector b_i of each camera's pair of bounding tangents l_i and r_i is shown in red. n_i stands for the normal of the edge e_i. After performing the registration process based on the proposed algorithm, the area colored in red also becomes registered as a part of the object.
Figure 3: Defined function to measure the cost between a camera and a polygon edge. The maximum cost is equal to λ and occurs when α ≤ π/2, in other words when the edge is invisible to the camera.
Figure 4: Defined function to measure the cost between a camera and a polygon edge. The maximum cost is equal to λ and occurs when α ≤ π/2, in other words when the edge is invisible to the camera.
Figure 5: The local minima problem for a triangular polygon and three cameras. Using just the cost value for each gene (camera) regardless of the other genes (cameras) in the same chromosome (camera network) can lead to one edge being perfectly observed by many cameras while other edges are lacking. In this case, all three cameras observe the edge e_12 (the line between v_1 and v_2) with cost values at zero, since n_1 is opposite to their bisector vectors (b_1, b_2 and b_3), whereas the two other edges (e_23 and e_31) are not observed at all, since their cost values cannot be zero. The second part of Algorithm 4 is dedicated to eliminating this problem using the penalty function in Equation (2).
Figure 6: Extension of the proposed algorithm to search for an optimal camera placement from 2D to 3D. In the 3D case, instead of considering the normals of the edges of the polygon, the normal vectors of the faces must be considered. Moreover, the position part of each gene (p) must be considered as a 3D vector. The rest of the algorithm is the same as in the 2D case.
Figure 7: Results for camera placement optimization using the proposed GA. (a–c) depict three different samples. In each sample, a polygon with k vertices is randomly generated and the purpose of the algorithm is to search for an optimal coverage using n_c cameras. The convergences for the samples are plotted in (d). The vertical axis depicts the cost value for the fittest chromosome in each iteration, divided by the number of genes (n_c). The dimension of the search space is 1200 × 1200 cm².
Figure 8: Results for camera placement optimization using the proposed GA. (a–c) depict three different samples. In each sample, a polygon with k vertices is randomly generated and the purpose of the algorithm is to search for an optimal coverage using n_c cameras. The convergences for the samples are plotted in (d). The vertical axis depicts the cost value for the fittest chromosome in each iteration, divided by the number of genes (n_c). The dimension of the search space is 1200 × 1200 cm².
Figure 9: Results for camera placement optimization using the proposed GA. (a–c) depict three different samples. In each sample, a polygon with k vertices is randomly generated and the purpose of the algorithm is to search for an optimal coverage using n_c cameras. The convergences for the samples are plotted in (d). The vertical axis depicts the cost value for the fittest chromosome in each iteration, divided by the number of genes (n_c). The dimension of the search space is 1200 × 1200 cm².
Figure 10: Experimental setup for a smart sensor. An AVT Prosilica GC650C camera coupled with an Xsens MTx inertial sensor, mounted on the wall. We set up a network of these smart sensors around the room (videos are available at https://www.youtube.com/watch?v=rPibqw4cAxc or in the Supplementary file, and more details are available at the website: http://sites.google.com/site/hdakbarpour/research).
Figure 11: Virtual camera via fusion and downward-looking co-ordinate system.
Figure 12: Illustration of the 3D registration framework using the homography concept. (a) A scene including a human and three cameras is depicted. π_k is one inertial-based virtual world plane. The cameras c_1, c_2 and c_3 observe the scene. (b) The registration layer (top view of the plane π_k of (a)). Each camera can be interpreted as a light source, and our GA-based optimal configuration (Algorithm 5) was used to obtain the final placements and 3D reconstruction experimental results.
Figure 13: (a) Our 3D human movement analysis system is implemented using CUDA-enabled GP-GPU, enabling real-time performance; (b) Processing time with respect to the number of inertial Euclidean planes and the size (cm²) of each inertial plane.
Figure 14: Dynamic movement of a person under our 3D reconstruction framework.
Figure 15: Online, real-time streaming of our 3D reconstruction results for the dynamic movement of a person's leg.
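The GA formulation outlined in the abstract and figure captions (cameras as genes, a per-edge visibility cost saturating at λ, and a penalty against piling every camera onto one edge) can be sketched roughly as follows. This is not the paper's Algorithm 4/5: the cost shape, the min-per-edge fitness standing in for the penalty function, and all parameters are assumptions.

```python
import math
import random

LAMBDA = 10.0  # maximum (invisible-edge) cost, playing the role of the paper's λ

def edge_cost(cam, mid, normal):
    """Cost between a camera position and a polygon edge given by its midpoint
    and unit outward normal. When the camera lies behind the edge (α ≤ π/2 in
    the paper's notation) the edge is invisible and the cost saturates at LAMBDA."""
    dx, dy = cam[0] - mid[0], cam[1] - mid[1]
    d = math.hypot(dx, dy) or 1e-9
    cos_a = (dx * normal[0] + dy * normal[1]) / d
    if cos_a <= 0.0:               # behind the edge: invisible
        return LAMBDA
    return LAMBDA * (1.0 - cos_a)  # better alignment -> lower cost

def fitness(chromosome, edges):
    # Each edge is charged its best (lowest-cost) camera, so a chromosome
    # cannot win by pointing every camera at a single edge; this is a crude
    # stand-in for the penalty function of Figure 5's caption.
    return sum(min(edge_cost(c, m, n) for c in chromosome) for m, n in edges)

def evolve(edges, n_cams=3, pop=40, gens=60, span=1200.0):
    rnd = random.Random(0)
    new = lambda: [(rnd.uniform(0, span), rnd.uniform(0, span)) for _ in range(n_cams)]
    population = [new() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: fitness(c, edges))
        elite = population[: pop // 4]          # keep the fittest quarter
        population = elite + [
            [(x + rnd.gauss(0, 40), y + rnd.gauss(0, 40)) for x, y in rnd.choice(elite)]
            for _ in range(pop - len(elite))    # Gaussian mutation of elites
        ]
    return min(population, key=lambda c: fitness(c, edges))

# A triangle (cf. Figure 5): edge midpoints with unit outward normals.
edges = [((600, 200), (0, -1)), ((450, 500), (-0.8, 0.6)), ((750, 500), (0.8, 0.6))]
best = evolve(edges)
print("best fitness:", round(fitness(best, edges), 2))
```

With the min-per-edge fitness, a configuration leaving any edge unobserved pays the full λ for that edge, so the search is pushed toward spreading the cameras around the polygon rather than into the local minimum of Figure 5.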
593 KiB  
Article
Performance Comparison of a Novel Adaptive Protocol with the Fixed Power Transmission in Wireless Sensor Networks
by Debraj Basu, Gourab Sen Gupta, Giovanni Moretti and Xiang Gui
J. Sens. Actuator Netw. 2015, 4(4), 274-292; https://doi.org/10.3390/jsan4040274 - 8 Oct 2015
Cited by 3 | Viewed by 6941
Abstract
In this paper, we compare the performance of a novel adaptive protocol with the fixed power transmission protocol using experimental data when the distance between the transmitter and the receiver is fixed. In the fixed power transmission protocol, corresponding to the distance between the sensor and the hub, there is a fixed power level that provides the optimal (minimum) energy consumption while maintaining a threshold Quality of Service (QoS) parameter. This value is bounded by the available output power levels of a given radio transceiver. The proposed adaptive power control protocol tracks and supersedes that energy expenditure by using an intelligent algorithm to ramp the output power level up or down as and when required. This protocol does not use channel side information in terms of received signal strength indication (RSSI) or link quality indication (LQI) for channel estimation to decide the transmission power. It also controls the number of allowed retransmissions for error correction. Experimental data have been collected at different distances between the transmitting sensor and the hub. The energy consumption at the fixed power level is at least 25% more than that of the proposed adaptive protocol for a comparable packet success rate. Full article
Show Figures

Figure 1: State transition diagram of the adaptive algorithm.
Figure 2: The curves behave differently depending on the value of R. A low R value indicates slow back-off, while a high R indicates fast back-off. When the number of successes is 0, the probability of transition is 0. This drop-off algorithm takes into account all previous successes, indicating that it also uses past history while dropping off.
Figure 3: Comparison of the PSR, efficiency, and average cost of successful transmission when the distance between the sensor and the hub is 14 m. The minimum cost at fixed power is achieved at 0 dBm. The PSR of fixed power at 0 dBm is almost the same as the PSRs of the adaptive protocol. The adaptive protocol consumes 55% less energy than at 0 dBm when the value of R is 0.5. The efficiency of the fixed power transmission (0 dBm) is slightly higher than that of the adaptive protocol at R = 0.5.
Figure 4: Comparison of the PSR, efficiency and average cost of successful transmission when the distance between the sensor and the hub is 18 m. The minimal cost of fixed power transmission is achieved at −6 dBm, primarily because of similar PSR and efficiency as at 0 dBm. In terms of energy efficiency, the adaptive protocol consumes 30% less energy than the fixed power transmission at −6 dBm when R is 1. The efficiency of the adaptive protocol at R = 1 is higher than that of fixed power transmission at −6 dBm.
Figure 5: Comparison of the efficiency and average cost of successful transmission based on the PSR when the distance between the sensor and the hub is 20 m. The minimal cost of fixed power transmission is achieved at 0 dBm. In this case the PSR of fixed power at 0 dBm is the same as the PSRs of the adaptive protocol. In terms of energy efficiency, the adaptive protocol consumes 55% less energy than the fixed power transmission at 0 dBm when R = 1. The efficiency of the fixed power transmission is slightly higher than that of the adaptive protocol at R = 1.
Figure 6: Comparison of the efficiency and average cost of successful transmission based on the PSR when the distance between the sensor and the hub is 24 m, with data collected during the busy hour. The minimum energy consumption of fixed power is achieved at 0 dBm, primarily because it has a much higher PSR and efficiency than at −6 dBm. The adaptive protocol consumes 6% less energy than the fixed power transmission at 0 dBm when R = 0.5. The efficiency of the fixed power transmission at 0 dBm is slightly higher than that of the adaptive protocol at R = 0.5.
Figure 7: Comparison of the efficiency and average cost of successful transmission based on the PSR when the distance between the sensor and the hub is 24 m, with data collected during non-busy hours. The minimum energy consumption of fixed power is achieved at 0 dBm. The adaptive protocol consumes 29% less energy than the fixed power transmission at 0 dBm when R = 1. The efficiencies of the adaptive protocol (at R = 1) and the fixed power transmission (0 dBm) are comparable.
Figure 8: Comparison of the efficiency and average cost of successful transmission based on the PSR, with data collected during a gathering in a house. The minimum energy consumption of fixed power is achieved at 0 dBm. In terms of energy efficiency, the adaptive protocol consumes 26% less energy than the fixed power transmission at 0 dBm when R = 0.5. The protocol efficiencies of both fixed (at 0 dBm) and adaptive (R = 0.5) transmission are the same.
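The adaptive behavior described in this abstract (ramp power up on failure, probabilistic drop-off after successes governed by R, and a limited number of retransmissions) can be sketched as follows. The power-level set, the retry limit, and the drop-off rule p = 1 − exp(−R·successes) are assumptions chosen to match the Figure 2 caption (zero drop probability at zero successes; higher R backs off faster); this is not the authors' protocol.

```python
import math
import random

# Hypothetical transceiver output levels in dBm (the paper's radio and its
# exact level set are not specified here).
POWER_LEVELS = [-25, -15, -10, -7, -5, -3, -1, 0]
MAX_RETRIES = 3  # assumed limit on retransmissions per packet

class AdaptiveTx:
    """Sketch of an adaptive power protocol: ramp up on failure, back off
    probabilistically after successes, with no RSSI/LQI channel feedback."""
    def __init__(self, R, rng=None):
        self.R = R
        self.level = 0          # index into POWER_LEVELS, start at the minimum
        self.successes = 0      # consecutive successes since the last change
        self.rng = rng or random.Random(0)

    def send(self, channel_ok):
        """channel_ok(power_dbm) -> bool models the (unknown) channel."""
        for _ in range(1 + MAX_RETRIES):
            if channel_ok(POWER_LEVELS[self.level]):
                self.successes += 1
                # Drop-off probability grows with the success streak; R sets
                # how quickly (cf. the Figure 2 caption).
                p_drop = 1.0 - math.exp(-self.R * self.successes)
                if self.level > 0 and self.rng.random() < p_drop:
                    self.level -= 1          # back off one power step
                    self.successes = 0
                return True
            # Failure: ramp up one step and retry.
            self.level = min(self.level + 1, len(POWER_LEVELS) - 1)
            self.successes = 0
        return False

tx = AdaptiveTx(R=0.5)
ok = lambda p: p >= -7   # toy channel: delivery needs at least -7 dBm
results = [tx.send(ok) for _ in range(50)]
print(f"success rate: {sum(results)/len(results):.2f}, "
      f"final level: {POWER_LEVELS[tx.level]} dBm")
```

In this toy channel the transmitter hovers around the cheapest workable level (−7 dBm) instead of pinning itself at 0 dBm, which is the energy-saving behavior the measurements above quantify.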