
2022 | Book

Wireless Algorithms, Systems, and Applications

17th International Conference, WASA 2022, Dalian, China, November 24–26, 2022, Proceedings, Part III


About this book

The three-volume set constitutes the proceedings of the 17th International Conference on Wireless Algorithms, Systems, and Applications, WASA 2022, which was held in Dalian, China, during November 24–26, 2022. The 95 full and 62 short papers presented in these proceedings were carefully reviewed and selected from 265 submissions. The contributions are organized in topical sections covering theoretical frameworks and analysis of fundamental cross-layer protocol and network design and performance issues; distributed and localized algorithm design and analysis; information and coding theory for wireless networks; localization; mobility models and mobile social networking; underwater and underground networks; vehicular networks; PHY/MAC/routing protocols; and algorithms, systems, and applications of edge computing.

Table of Contents

Frontmatter

Theoretical Frameworks and Analysis of Fundamental Cross-Layer Protocol and Network Design and Performance Issues

Frontmatter
DC-Gossip: An Enhanced Broadcast Protocol in Hyperledger Fabric Based on Density Clustering

Low transaction efficiency remains one of the primary constraints on the development of permissioned blockchains. To enhance the communication performance of blockchain, the majority of research focuses on optimizing the local architecture of the blockchain and improving consensus. In practice, increasing the block dissemination capability at the network layer can significantly improve transaction efficiency. We find that the redundancy and instability of the gossip protocol, the broadcast method used in Hyperledger Fabric, have a significant impact on communication performance. In this work, we introduce the idea of density clustering and propose the DC-Gossip broadcast protocol, constructing a stable, densely connected network architecture for the blockchain network layer. This architecture effectively reduces propagation latency and ensures the integrity of the distributed ledger. In our experiments with Fabric, DC-Gossip reduces latency by more than 19% after 40 blocks are propagated in a stable network environment with more than 100 nodes, and by 14% in a dynamic network under identical circumstances.

Zhigang Xu, Kangze Ye, Xinhua Dong, Hongmu Han, Zhongzhen Yan, Xingxing Chen, Duoyue Liao, Haitao Wang
A Time Utility Function Driven Scheduling Scheme for Managing Mixed-Criticality Traffic in TSN

With the development of the industrial Internet, IEEE Time-Sensitive Networking (TSN) has attracted increasing attention due to its capability of providing deterministic network performance. Unlike most existing studies that consider only a single type of traffic, our work addresses the scheduling problem of mixed-criticality traffic in TSN. A time utility function (TUF) is a utility curve that measures the quality of service (QoS) of a stream with respect to its end-to-end delay. In this paper, we introduce a variety of TUFs for different streams in TSN according to their specific timing requirements. To match the transmission protocol of TSN, we first categorize mixed-criticality traffic into periodic and aperiodic streams, and then design a novel scheduling scheme aiming to maximize the total TUF value of all streams. We compare our proposed scheme with two benchmark schemes, and evaluation results show that it outperforms the counterparts, especially under the worst-case network settings.

Jinxin Yu, Changyan Yi, Tong Zhang, Fang Zhu, Jun Cai
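
To make the time utility function idea concrete, the following minimal Python sketch shows a hard-deadline (step) TUF, a soft (linearly decaying) TUF, and the total-utility objective a scheduler would maximize. The curve shapes and millisecond deadlines are illustrative assumptions, not the paper's definitions.

```python
# Illustrative TUF sketch; the concrete curves used in the paper are not reproduced here.

def step_tuf(delay_ms: float, deadline_ms: float, value: float = 1.0) -> float:
    """Hard-deadline TUF: full utility before the deadline, zero afterwards."""
    return value if delay_ms <= deadline_ms else 0.0

def linear_tuf(delay_ms: float, deadline_ms: float, value: float = 1.0) -> float:
    """Soft TUF: utility decays linearly from `value` at delay 0 to 0 at the deadline."""
    if delay_ms >= deadline_ms:
        return 0.0
    return value * (1.0 - delay_ms / deadline_ms)

def total_utility(streams) -> float:
    """Sum the TUF values of all streams, i.e. the objective a scheduler would maximize."""
    return sum(tuf(delay, deadline) for tuf, delay, deadline in streams)

if __name__ == "__main__":
    streams = [(step_tuf, 3.0, 5.0), (linear_tuf, 4.0, 10.0)]
    print(total_utility(streams))  # 1.0 + 0.6 = 1.6
```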

Distributed and Localized Algorithm Design and Analysis

Frontmatter
Distributed Anti-manipulation Incentive Mechanism Design for Multi-resource Trading in Edge-Assistant Vehicular Networks

In response to the vast and ever-changing task demands of vehicle terminals, the edge-assistant vehicular network (EAVN), supported by mobile computation offloading (MCO), constitutes a new paradigm for improving system performance. Existing edge resource trading mechanisms in EAVNs are all centrally processed and suffer from several critical drawbacks of centralized systems, which motivates the design of distributed trading mechanisms. In this paper, we propose an efficient distributed reverse combinatorial auction-based trading mechanism with an anti-manipulation check, namely DRCA, to solve the joint multi-task offloading and multi-resource allocation problem in EAVNs with overlapping areas and to prevent participants from manipulating the auction results. We prove that DRCA achieves the property of faithfulness and analyze its network complexity. Moreover, compared with existing auction-based mechanisms, DRCA achieves suboptimal social welfare with relatively low system overhead.

Dongyu Guo, Yubin Zhou, Shenggang Ni

Information and Coding Theory for Wireless Networks

Frontmatter
Communication Optimization in Heterogeneous Edge Networks Using Dynamic Grouping and Gradient Coding

Communication load in heterogeneous edge networks is becoming heavier because of the excessive computation and delay caused by straggler dropout, leading to high electricity costs and serious greenhouse gas emissions. To create a green edge environment, we focus on mitigating computation and straggler dropout to improve communication efficiency during distributed training. We propose a novel scheme named Dynamic Grouping and Heterogeneity-aware Gradient Coding (DGHGC) to reduce the average iteration time, which serves as the metric reflecting how well computation and straggler dropout are mitigated. Specifically, DGHGC first uses static grouping to distribute stragglers evenly across groups. Then, considering the nonuniform distribution of nodes caused by straggler dropout during training, a dynamic grouping based on the dropout frequency of stragglers is employed; it tolerates more stragglers by examining a dropout threshold and thereby refines the static grouping. In addition, DGHGC applies heterogeneity-aware gradient coding to allocate a reasonable amount of data to stragglers based on their computing capacity and to encode gradients so as to prevent stragglers from dropping out. Numerical results demonstrate that the average iteration time of DGHGC is reduced substantially compared with state-of-the-art benchmark schemes.

Yingchi Mao, Jun Wu, Xiaoming He, Ping Ping, Jianxin Huang
Design on Rateless LDPC Codes for Reliable WiFi Backscatter Communications

This paper designs a rateless low-density parity-check (LDPC) code for information transmission in WiFi backscatter communications. Since WiFi traffic is bursty and has low anti-jamming capability, reliability becomes a problem; the encoding is therefore crucial when WiFi signals are used as the excitation in backscatter communications. A rateless LDPC code can not only address these two shortcomings, but also adapt to the link state and adjust the bit rate without knowing the channel state information, ensuring that transmission resources are not wasted and computational resources are saved. We conduct simulation experiments, and the results show that the rateless LDPC code still performs well under a restricted number of retransmissions. Furthermore, the proposed scheme works against the intermittent nature of WiFi excitation signals.

Sicong Xu, Xin He, Fan Wu, Guiping Lin, Panlong Yang
Design of Physical Layer Coding for Intermittent-Resistant Backscatter Communications Using Polar Codes

Backscatter communications enable the connection of large numbers of Internet of Things (IoT) devices due to their extremely low power consumption. As the number of IoT devices increases, effective and reliable communication between devices becomes a key factor in offering services with the quality desired by the IoT. However, due to the impact of noise and the low power of the backscatter signal itself, system performance is usually unreliable. To this end, we propose an integrated cyclic redundancy check and Polar coding scheme (CRC-Polar) to improve the performance of ambient backscatter communications. The performance is evaluated in terms of bit error rate over the following aspects: excitation source time intervals, excitation source signal-to-noise ratios, coding rates, and code lengths. We conduct extensive computer simulations on the Matlab platform to verify that the designed method better enhances the excitation source signal transmission process. The experimental results show that our proposed CRC-Polar scheme can effectively improve the communication reliability of backscatter communication over medium and long distances and effectively reduce the influence of environmental factors on communication quality.

Xing Guo, Binbin Liang, Xin He
MEBV: Resource Optimization for Packet Classification Based on Mapping Encoding Bit Vectors

Packet classification plays a key role in network security systems such as firewalls and QoS. Packet classification assigns packets to different categories according to a set of predefined rules. When traditional classification algorithms are implemented on FPGAs, memory resources are wasted in storing a large number of identical rule subfields, redundant length subfields, and useless wildcards in the rules. At the same time, coarse handling of range matching causes rule expansion. These problems seriously waste memory resources and pose a huge challenge for FPGAs with limited hardware resources. Therefore, a field mapping encoding bit vector (MEBV) scheme is proposed. It consists of a field-splitting-and-recombination architecture that accurately divides each field into four mapping preparation fields according to the matching method, field reuse rate, and wildcard ratio, together with four mapping encoding algorithms that compress the rule length in order to save resources. Experimental results show that for a 1K OpenFlow 1.0 ruleset, the algorithm achieves a significant reduction in memory resources while maintaining high throughput and supporting range matching, saving an average of 38% in memory consumption.

Feng Guo, Ning Zhang, Qian Zou, Qingshan Kong, Zhiqiang Lv, Weiqing Huang
NT-RP: A High-Versatility Approach for Network Telemetry Based on FPGA Dynamic Reconfigurable Pipeline

Network telemetry provides more accurate and reliable services for intelligent network control by actively pushing fresh status information with the help of the data plane. However, most existing network telemetry methods are difficult to deploy effectively in business environments due to the lack of runtime reconfigurability, huge time-space overhead, and a high probability of information loss. In this work, we propose a high-versatility approach for network telemetry based on an FPGA dynamic reconfigurable pipeline, called NT-RP, to maintain the balance between measurement accuracy and overhead in different scenarios. NT-RP can change its processing logic at runtime to spontaneously obtain the different network measurements desired by users. Benefiting from a distributed cyclic storage strategy and a telemetry function integration mechanism, NT-RP greatly reduces the overhead during measurement and mitigates the telemetry information loss caused by packet loss. The FPGA implementation of NT-RP is evaluated on a real network testbed consisting of several programmable nodes. Experimental results show that the performance impact of NT-RP in large-traffic scenarios is less than 1%. It is not only able to change the telemetry task successfully during operation, but also performs more accurate network measurements with little telemetry information occupancy.

Deyu Zhao, Guang Cheng, Yuyu Zhao, Ruixing Zhu
An Effective Comprehensive Trust Evaluation Model in WSNs

Wireless Sensor Networks (WSNs) are vulnerable to many security threats from compromised nodes. A trust management system is an effective method to detect malicious behaviors in WSNs. In this paper, an effective Comprehensive Trust Evaluation Model (CTEM) for WSNs is proposed that considers two kinds of trust: direct trust and indirect trust. Direct trust is assessed by monitoring a node's data collection, energy consumption, and data forwarding. More significantly, entropy theory is introduced to measure the uncertainty of direct trust. Indirect trust is integrated into a comprehensive trust value when the uncertainty of direct trust is sufficiently high, so as to mitigate the one-sidedness of direct trust. CTEM can not only reduce the computation overhead of nodes but also prolong the lifetime of the network. Simulation results show that the proposed strategy can defend against internal attacks and achieves better performance than some typical trust evaluation mechanisms.

Chengxin Xu, Wenshuo Ma, Xiaowu Liu
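
A common way to measure the uncertainty of a trust value with entropy, and to blend in indirect trust only when that uncertainty is high, can be sketched as below. The binary-entropy formulation, threshold, and weights are assumptions for illustration; the paper's exact definitions are not reproduced.

```python
import math

def trust_uncertainty(direct_trust: float) -> float:
    """Binary entropy of a direct trust value in [0, 1]; 1.0 means maximal uncertainty."""
    t = min(max(direct_trust, 1e-12), 1.0 - 1e-12)  # clamp to avoid log(0)
    return -(t * math.log2(t) + (1.0 - t) * math.log2(1.0 - t))

def comprehensive_trust(direct: float, indirect: float,
                        threshold: float = 0.8, weight: float = 0.5) -> float:
    """Blend in indirect trust only when the uncertainty of direct trust is high."""
    if trust_uncertainty(direct) >= threshold:
        return weight * direct + (1.0 - weight) * indirect
    return direct

# Example: a nearly neutral direct trust (0.55) is highly uncertain, so indirect trust is used.
print(comprehensive_trust(0.55, 0.9), comprehensive_trust(0.95, 0.2))
```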
Precise Code Clone Detection with Architecture of Abstract Syntax Trees

In the field of code clone detection, there are token-based similarity methods and abstract syntax tree-based methods. The former consumes fewer resources and detects faster, while the latter consumes more space and is less efficient. Moreover, few tools scale to large code bases. To address these challenges, an approach is proposed that detects code clones using the similarity of tokens together with the architecture of abstract syntax trees. The syntax tree architecture preserves the precision of detecting clone pairs, while the method retains the speed of token-based similarity matching. The approach first parses the tokens of the code fragments and extracts the features of their syntax trees. When matching candidates, it quickly eliminates unqualified parts based on the architecture and then compares similarity in detail. Finally, the results are output according to the input threshold range. The experiments confirm that the method substantially improves the precision of code clone detection while keeping the recall rate unabated.

Xin Guo, Ruyun Zhang, Lu Zhou, Xiaozhen Lu
Multi-view Pre-trained Model for Code Vulnerability Identification

Vulnerability identification is crucial for cyber security in the software-related industry. Early identification methods require significant manual effort in crafting features or annotating vulnerable code. Although recent pre-trained models alleviate this issue, they overlook the rich structural information contained in the code itself. In this paper, we propose a novel Multi-View Pre-Trained Model (MV-PTM) that encodes both sequential and multi-type structural information of the source code and uses contrastive learning to enhance code representations. Experiments conducted on two public datasets demonstrate the superiority of MV-PTM. In particular, MV-PTM improves on GraphCodeBERT by 3.36% on average in terms of F1 score.

Xuxiang Jiang, Yinhao Xiao, Jun Wang, Wei Zhang

Localization

Frontmatter
Discover the ICS Landmarks Based on Multi-stage Clue Mining

In recent years, the rapidly growing landscape of industrial control system (ICS) devices has made ICS geolocation increasingly important. However, IP-based geolocation cannot provide highly accurate geographical locations for ICS devices: commercial databases only provide coarse mappings between IP hosts and physical locations, and measurement-based geolocation relies on a large number of high-quality landmarks. In this paper, we present a novel framework called OSI-Geo for high-quality landmark mining of ICS devices. The main idea is that the open-source information exposed by ICS devices contains many location-indicating clues that can be used to find their physical locations. OSI-Geo automatically collects location-indicating clues to generate ICS landmarks at large scale. We conduct real-world experiments to validate the effectiveness and performance of our method. The results show that OSI-Geo can accurately collect clues with over 99% recall and precision. Based on these clues, 36,872 stable landmarks covering 162 countries and 5,596 cities are obtained, of which 30,290 (82%) are fine-grained landmarks accurate at least to street level. The accuracy of IP geolocation is improved significantly with these ICS landmarks. Thus, OSI-Geo achieves effective landmark mining for ICS devices.

Jie Liu, Jinfa Wang, Peipei Liu, Hongsong Zhu, Limin Sun

Mobility Models and Mobile Social Networking

Frontmatter
Dynamic Mode-Switching-Based Worker Selection for Mobile Crowd Sensing

With the popularization of intelligent devices, mobile crowd sensing (MCS) has garnered considerable interest as a novel means of sensing data acquisition. Active and continuous worker engagement in tasks is a critical concern for sustainability when selecting workers to accomplish tasks in continuous MCS, and previous worker selection approaches cannot ensure a sufficiently large workforce for it. This paper proposes a framework for dynamic mode-switching-based worker selection called DMWS. DMWS allows temporarily low-quality workers to improve their competitiveness through hybrid mode switching based on task completion quality, ensuring long-term sustainability; they therefore have the opportunity to be selected by the MCS platform again. The ultimate objective is to maximize space coverage at the lowest possible cost by increasing worker participation. As evidenced by experimental results on two real-world data sets, DMWS outperforms other methods in terms of space coverage under budget constraints.

Wei Wang, Ning Chen, Songwei Zhang, Keqiu Li, Tie Qiu
A Distributed Simulator of Mobile Ad Hoc Networks

Simulating a Mobile Ad Hoc Network (MANET), before deployment or while the system is running, provides a priori design validation and insightful observation of the real system. However, existing simulation tools mainly rely on centralized rather than distributed deployment and therefore cannot truly replicate the real system settings. In this paper, we present a DIstributively deployable Simulation tool for MANets (DISMAN) that accurately simulates MANETs in a fully distributed fashion, allowing the emulation to scale with the number of network nodes without sacrificing accuracy. DISMAN is a fully functional tool that can be integrated with Kubernetes and supports link-layer simulation (e.g., bandwidth limitation, delay, packet loss) as well as multi-path and multi-hop transmission. DISMAN is based on a four-layer architecture whose top layer is a graphical user interface (GUI) for presentation and interaction. We further evaluate DISMAN with micro- and macro-benchmarks and show that it is easy to use and can assist MANET design through high-level qualitative and quantitative simulations.

Xiaowei Shu, Hao Wang, Zhou Xu, Kaiwen Ning, Guowei Wu
Social-Network-Assisted Task Selection for Online Workers in Spatial Crowdsourcing: A Multi-Agent Multi-Armed Bandit Approach

The popularity of smart devices and the availability of wireless networks have brought considerable attention to Spatial Crowdsourcing (SC). Existing studies mainly focus on solutions for different optimization objectives of the SC platform, ignoring the entitlements of workers. This paper starts from the perspective of workers and investigates how to select suitable tasks for each online worker such that everyone maximizes their individual profit. Since the profit is related to the completion degree of tasks, which is determined by a priori unknown parameters, we model the problem as a Multi-Agent Multi-Armed Bandit (MAMAB) problem. We propose a Payment-Estimation-Based Solution (PEBS), allowing workers to sequentially make task selection decisions based on their observations and estimations. Specifically, PEBS first utilizes the social network among workers to help them learn task information from historical data. Then, it introduces the probability-matching idea of Thompson Sampling (TS) to estimate workers' profits and handle the task selection problem. Finally, extensive simulations show that our proposed mechanism is efficient in optimizing the individual profit of workers.

Qinghua Sima, Yu-E Sun, He Huang, Guoju Gao, Yihuai Wang
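
The probability-matching idea of Thompson Sampling mentioned above can be illustrated with a minimal Beta-Bernoulli sketch for one worker choosing among tasks; it assumes binary completion feedback and uniform priors, and does not reproduce the paper's reward model or the social-network-assisted initialization.

```python
import random

class TaskSelector:
    """Minimal Beta-Bernoulli Thompson Sampling sketch (illustrative assumptions only)."""

    def __init__(self, num_tasks: int):
        # One Beta(alpha, beta) posterior per task, starting from uniform Beta(1, 1) priors.
        self.alpha = [1.0] * num_tasks
        self.beta = [1.0] * num_tasks

    def select(self) -> int:
        # Probability matching: sample a completion-quality estimate per task, pick the best.
        samples = [random.betavariate(a, b) for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, task: int, success: bool) -> None:
        # Update the posterior of the chosen task with the observed completion outcome.
        if success:
            self.alpha[task] += 1.0
        else:
            self.beta[task] += 1.0

# Example round: a worker picks a task, observes the outcome, and updates its estimates.
selector = TaskSelector(num_tasks=3)
chosen = selector.select()
selector.update(chosen, success=True)
```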
Privacy-Aware Task Allocation Based on Deep Reinforcement Learning for Mobile Crowdsensing

Mobile crowdsensing (MCS) is a new paradigm for data collection, data mining, and intelligent decision-making using large-scale mobile devices. An efficient task allocation method is the key to high MCS performance. Traditional greedy or ant colony algorithms assume that workers and tasks are fixed, which is unsuitable for situations where the location and number of workers and tasks change dynamically. Moreover, existing task allocation methods usually have the central server collect the information of workers and tasks for decision-making, which easily leads to leakage of workers' privacy. In this paper, we propose a task allocation method with privacy protection using deep reinforcement learning (DRL). First, task allocation is modeled as a multi-objective dynamic programming problem that aims to maximize the benefits of both workers and the platform. Second, we use DRL to train and learn the model parameters. Finally, a local differential privacy method is used to add random noise to sensitive information, and the central server trains the whole model to obtain the optimal allocation strategy. Experimental results on a simulated data set show that, compared with traditional methods and other DRL-based methods, our proposed method improves significantly on different evaluation metrics and can protect workers' privacy.

Mingchuan Yang, Jinghua Zhu, Heran Xi, Yue Yang
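
The local differential privacy step the abstract mentions is typically realized with the Laplace mechanism; below is a minimal sketch of perturbing a sensitive value before it is reported. The sensitivity and epsilon values are placeholders, and the mapping to the paper's specific worker attributes is an assumption.

```python
import math
import random

def laplace_perturb(value: float, sensitivity: float, epsilon: float) -> float:
    """Report value + Laplace(0, sensitivity/epsilon) noise (standard Laplace mechanism)."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5  # Uniform(-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))  # inverse-CDF sample
    return value + noise

# Example: a worker perturbs a reported location coordinate before uploading it.
noisy_x = laplace_perturb(value=116.35, sensitivity=0.01, epsilon=1.0)
```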
Information Sources Identification in Social Networks Using Deep Convolutional Neural Network

With the ubiquity of electronic communication devices, detecting information sources is critical for reducing the damage caused by malicious sources. However, contemporary research identifies sources and information propagation through social network topology or mathematical inference. In this paper, we borrow neural network training techniques and propose a deep convolutional neural network to identify sources in social networks. Initially, we use 20% of the data set as the training set for the proposed model. Subsequently, we employ a bi-graph to classify the trained sources into truth or rumor vertices. Finally, we use our proposed model on the remaining 80% of the data set to evaluate our identification mechanism. The experimental results show that our method can identify more than 85% of information sources and that the classification accuracy reaches 80% in both the testing and training processes. These results further indicate that our model can effectively and accurately identify information sources with reasonable computation cost.

Jiale Wang, Jiahui Ye, Wenjie Mou, Ruihao Li, Guangliao Xu

Underwater and Underground Networks

Frontmatter
MineTag: Exploring Low-Cost Battery-Free Localization Optical Tag for Mine Rescue Robot

Inertial navigation relies on localization base stations to correct cumulative errors for mine rescue robots, but explosion-proof safety requirements hinder the application of conventional powered base stations in harsh coal mine environments. Therefore, we propose MineTag, a novel localization base station for the self-positioning of coal mine robots, built from low-cost, battery-free optical tags deployed via a differ-neighbor strategy. The main innovation of the tag is to modulate light retro-reflection with a light absorption mechanism, allowing the tag to reflect a specific light intensity without a power source. Based on the topological relationship of the tags, we propose a novel tag recognition algorithm based on trajectory matching to determine which tag the robot is under. Finally, we implemented MineTag and evaluated its performance in a real coal mine. Experimental results show that MineTag achieves tag recognition accuracy above 95%, and 98% of localization errors are 2.6 m or less.

Xiaojie Yu, Xu Yang, Yuqing Yin, Shouwan Gao, Pengpeng Chen, Qiang Niu
TSV-MAC: Time Slot Variable MAC Protocol Based on Deep Reinforcement Learning for UASNs

With the increasing variety and number of ocean applications, the underwater transmission of heterogeneous ocean data has become a hot spot in research on underwater acoustic sensor networks (UASNs). However, due to a lack of flexibility in time slot allocation, existing multiple access control (MAC) protocols for UASNs cannot be effectively applied to the transmission of heterogeneous ocean data. To solve this problem in UASNs with heterogeneous ocean data, we propose a time slot variable MAC protocol (TSV-MAC) based on deep reinforcement learning. In TSV-MAC, a long short-term memory (LSTM) deep learning model is constructed and trained by considering the usage efficiency of time slots and the data collection conditions of underwater nodes. The trained LSTM model is then applied to predict the generation and transmission of data from each underwater node, and a Q-learning model is adopted to allocate a suitable number of time slots to each node. The TSV-MAC protocol periodically updates the time slot allocation table so that UASNs can adapt to dynamically generated data packets. Finally, the effectiveness of the protocol is verified by extensive simulation results.

Yuchen Wu, Yao Liu, Zhao Zhao, Chunfeng Liu, Wenyu Qu
Localization for Underwater Sensor Networks Based on a Mobile Beacon

In Underwater Sensor Networks (UWSNs), the location information of sensor nodes is essential for making measured data meaningful. However, UWSNs have a complex node deployment environment: node mobility caused by ocean currents and other factors leads to larger ranging errors and prevents some nodes from receiving enough data packets. In this paper, a Localization algorithm based on a Single Mobile Beacon (LSMB) is proposed. LSMB exploits the attenuation law of signal strength and the geometric relationship between a sensor node and the path of the mobile beacon, reducing the impact of random error on distance measurement. On this basis, by analyzing the overall movement trends of sensor nodes, this paper studies the counter-current and downstream movements of the mobile beacon, so as to make LSMB suitable for dynamic marine environments. Simulations show that the algorithm reduces the impact of node mobility on localization and achieves a small average localization error.

Ying Guo, Longsheng Niu, Rui Zhang, Hongtang Cao, Jingxiang Xu
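
Ranging from a signal-strength attenuation law is commonly done by inverting a log-distance path loss model; the sketch below illustrates that step and averages estimates from several beacon broadcasts to damp random error. The model form, path-loss exponent, and reference values are assumptions, not the paper's calibrated parameters, and the geometric construction along the beacon path is omitted.

```python
def rssi_to_distance(rssi_db: float, rssi_d0_db: float, d0: float = 1.0, n: float = 1.7) -> float:
    """Invert the log-distance law RSSI(d) = RSSI(d0) - 10*n*log10(d/d0) to estimate distance."""
    return d0 * 10.0 ** ((rssi_d0_db - rssi_db) / (10.0 * n))

def average_distance(rssi_samples, rssi_d0_db: float, n: float = 1.7) -> float:
    """Average distance estimates from several beacon broadcasts to reduce random error."""
    estimates = [rssi_to_distance(r, rssi_d0_db, n=n) for r in rssi_samples]
    return sum(estimates) / len(estimates)

# Example: three noisy readings relative to a -40 dB reference at 1 m.
print(average_distance([-62.0, -60.5, -61.2], rssi_d0_db=-40.0))
```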

Vehicular Networks

Frontmatter
Dataset for Evaluation of DDoS Attacks Detection in Vehicular Ad-Hoc Networks

Vehicular ad-hoc networks (VANETs) are core components of the cooperative intelligent transportation system (C-ITS). Vehicles communicate with each other to obtain traffic conditions on the current road segment by broadcasting safety messages authenticated with their digital certificates. Although this method protects the system against external threats, it is ineffective against internal adversaries who possess legitimate certificates. Consequently, an increasing number of researchers have focused on intrusion detection (misbehavior detection) technology. VeReMi and its extension are the only public misbehavior datasets for VANETs, allowing researchers to compare their studies with those of others. We note that the denial of service (DoS) attacks in these datasets are insufficiently comprehensive. We therefore designed a more complete dataset than the existing ones by implementing multiple attacks, including different types of distributed denial of service (DDoS) attacks. We present the detection results of several machine learning algorithms on our proposed dataset. These results indicate that our dataset can be used as a reference for future studies to evaluate different detection methods.

Hong Zhong, Fan Yang, Lu Wei, Jing Zhang, Chengjie Gu, Jie Cui
Vehicle-Road Cooperative Task Offloading with Task Migration in MEC-Enabled IoV

Mobile edge computing (MEC) is considered a key technology for addressing computation-intensive and delay-critical applications in the Internet of Vehicles (IoV). In MEC-enabled IoV, vehicles lighten their computing load by offloading tasks to edge servers. However, the high-speed mobility of vehicles and the time-varying network environment bring tough challenges to task offloading. In addition, considering only roadside units (RSUs) or only vehicles as offloading targets wastes computing resources and increases task processing delay. To this end, we formulate the reduction of task processing delay and the improvement of service reliability as a utility maximization problem and propose a distributed vehicle-road cooperative task offloading scheme with task migration. We use both RSUs and surrounding vehicles as offloading targets and divide offloading tasks into multiple subtasks for parallel processing at the offloading targets and locally, which improves the utilization of computing resources. Meanwhile, we reduce task processing failures by migrating the computation results of offloaded subtasks. The offloading scheme is formulated as a mixed-integer nonlinear optimization problem, and a multi-agent deep Q-learning network (MADQN) algorithm is proposed to find near-optimal offloading targets and numbers of offloading subtasks. Simulation results show that the proposed approach significantly improves the total task processing speed and service reliability.

Jiarong Du, Liang Wang, Yaguang Lin, Pengcheng Qian
Freshness-Aware High Definition Map Caching with Distributed MAMAB in Internet of Vehicles

The high-definition (HD) map is the foundation for autonomous driving; it has a huge data volume and needs to be updated frequently. To ensure low download latency, HD map contents are usually pre-cached at roadside units (RSUs) or vehicles. However, the HD map contains a lot of dynamic data, and maintaining its freshness is crucial for driving safety, which existing HD map caching methods ignore. In this paper, we propose a freshness-aware HD map caching method that minimizes both download latency and loss of freshness. First, we introduce a cost function that incorporates both the download latency and the loss of freshness. Next, we formulate the HD map caching problem as an optimization problem minimizing the total cost. To reduce computation complexity, we decompose the original problem into two subproblems. Accordingly, we propose a freshness-aware vehicle request algorithm to optimize vehicle request decisions and then leverage a distributed multi-agent multi-armed bandit (MAMAB) algorithm to make optimal caching decisions. Finally, simulation results verify that the proposed freshness-aware HD map caching method outperforms other baseline methods.

Qixia Hao, Jiaxin Zeng, Xiaobo Zhou, Tie Qiu
A Scalable Blockchain-Based Trust Management Strategy for Vehicular Networks

In recent years, the dynamic environment of the Internet of Vehicles has made trust management in vehicular communication networks a research hotspot. Most existing trust management schemes in vehicular networks rely on a centralized third party; however, this limits trust management to a single node and places high demands on device performance. In order to improve the reliability of information exchanged between vehicles, we propose a scalable blockchain-based trust management strategy that employs vehicle-related objective factors to evaluate the credibility of information transmitted between vehicles and thereby determine each vehicle's trust level. We also design a consensus mechanism that lets all RSUs (Roadside Units), as nodes, maintain a consistent and reliable distributed ledger, so that vehicles can obtain global trust information more quickly during interactions. The security and performance analysis shows that our strategy has high reliability and availability.

Minghao Li, Gansen Zhao, Ruilin Lai
BP-CODS: Blind-Spot-Prediction-Assisted Multi-Vehicle Collaborative Data Scheduling

The most important requirement for Connected and Automated Vehicles (CAVs) is to ensure driving safety and prevent the loss of life and property. Vehicle blind spots can lead to incomplete or ineffective access to information, which introduces risk, while the transmission of a large amount of duplicate data leads to information redundancy and bandwidth waste. In this paper, we design BP-CODS, which uses blind-spot prediction to schedule image data between vehicles with the support of an edge server. We model the data scheduling transmission as two processes, uploading and downloading, formulate it as a set cover problem, and propose a heuristic algorithm to solve it. We conduct extensive simulation experiments in CARLA to verify the effectiveness of BP-CODS in reducing a large amount of redundant data.

Tailai Li, Chaokun Zhang, Xiaobo Zhou
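
Since the abstract casts the scheduling problem as set cover solved heuristically, a generic greedy set-cover sketch is shown below: repeatedly pick the vehicle view covering the most still-uncovered regions. The region/vehicle naming and cost model are illustrative assumptions, not the paper's actual heuristic.

```python
def greedy_cover(universe, candidate_sets):
    """Greedy set-cover heuristic over (name, covered_regions) candidates.
    Returns the chosen names and any regions left uncovered."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidate_sets, key=lambda kv: len(kv[1] & uncovered), default=None)
        if best is None or not (best[1] & uncovered):
            break  # no candidate covers any remaining region
        chosen.append(best[0])
        uncovered -= best[1]
    return chosen, uncovered

# Example: blind-spot regions 1-5 and three vehicles with overlapping fields of view.
picked, missed = greedy_cover({1, 2, 3, 4, 5},
                              [("v1", {1, 2, 3}), ("v2", {3, 4}), ("v3", {4, 5})])
print(picked, missed)
```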
Performance Analysis of Partition-Based Caching in Vehicular Networks

Partition-based caching is emerging as an appealing solution for improving the performance of content caching by increasing content diversity at the network edge. In this paper, we model and analyze a vehicular network in which vehicles obtain the requested contents from roadside units (RSUs) that adopt random linear network coding in a partition-based caching scheme. Specifically, the geographic distribution of the roads and RSUs is modeled with stochastic geometry tools. The required content can be obtained from the multiple nearest RSUs and decoded using the successive interference cancellation approach. We derive the distance distribution between the typical vehicle and the nearest RSUs, and obtain the analytical expression of the successful transmission probability of the content caching. Numerical simulations verify the analytical results and provide guidelines for applying partition-based caching in vehicular networks.

Siyuan Zhou, Wei Wu, Guoping Tan

PHY/MAC/Routing Protocols

Frontmatter
A Service Customized Reliable Routing Mechanism Based on SRv6

Reliable routing is a classic problem in the field of computer networks. After a network fault occurs, the choice of recovery path directly determines the performance of network services. This paper introduces service customization techniques into reliable routing: by meeting customized traffic protection requirements, network service quality can be ensured after fault recovery. Topology Independent Loop-free Alternate (TI-LFA), supported by SRv6, is a new reliable routing technology. In this paper, an SRv6-based service-customized reliable routing mechanism is designed for single link failures in the case of P-Q space adjacency in TI-LFA. For traffic with QoS requirements, fuzzy theory is used to make the optimal decision among SRv6 candidate protection schemes. Finally, three representative topologies are selected to build an experimental SRv6 network based on ONOS, Mininet, and a programmable data plane. The results show that, when responding to a customized network service request, the recovery path selected by the proposed mechanism is superior to that of the comparison mechanism in terms of the related QoS indicators.

Peichen Li, Deyong Zhang, Xingwei Wang, Bo Yi, Min Huang
PAR: A Power-Aware Routing Algorithm for UAV Networks

Unmanned Aerial Vehicles (UAVs) have been widely used in both military and civilian scenarios since they are low in cost and flexible in use. They can adapt to a wide variety of dangerous scenarios and complete many tasks that Manned Aerial Vehicles (MAVs) cannot undertake. In order to establish connectivity and collect data over large areas, numerous UAVs often cooperate with each other to form a UAV wireless network. Many multi-hop routing protocols have been proposed to deliver messages efficiently with high delivery ratio and low energy consumption; however, most of them do not consider that the transmit power level of UAVs is adjustable. In this paper, we propose a Power-Aware Routing (PAR) algorithm for UAV networks. PAR utilizes the pre-planned trajectory information of UAVs to compute the encounters at different power levels, and then constructs a power-aware encounter tree to calculate the transmission path with minimum energy consumption from the source to the destination within the delay constraint. Through extensive simulations, we demonstrate that, compared with three classic algorithms, PAR significantly reduces energy consumption and improves network performance while ensuring timely packet delivery.

Wenbin Zhai, Liang Liu, Jianfei Peng, Youwei Ding, Wanying Lu
Multi-Channel RPL Protocol Based on Cross-Layer Design in High-Density LLN

Massive terminal deployment in low-power and lossy networks (LLNs) has become an inevitable trend. However, traditional routing protocols cannot meet large-scale data transmission requirements. In this paper, we introduce multi-channel communication into LLNs and propose a multi-channel routing protocol based on cross-layer design (MC-RPL), which increases the data transmission capacity of the network via a parallel data transmission strategy. Specifically, we design a novel super-frame structure that decouples the communication period into a route maintenance phase and a data transmission phase; nodes can transmit data in parallel during the data transmission phase. Besides, we improve the Trickle algorithm to enhance routing maintenance efficiency during the route maintenance phase. Simulation results demonstrate the effectiveness of the MC-RPL protocol compared to the MRHOF and IRH-OF protocols.

Jianjun Lei, Tianpeng Wang, Xunwei Zhao, Chunling Zhang, Jie Bai, Zhigang Wang, Dan Wang
Routing Protocol Based on Improved Equal Dimension New Information GM(1,1) Model

Aiming at the high end-to-end transmission delay and low packet delivery rate caused by the high mobility of unmanned aerial vehicle (UAV) nodes, a routing protocol based on an improved equal-dimension new-information GM(1,1) model (IEDNI-GM) is proposed. By analyzing the motion characteristics of UAV nodes, we combine the grey prediction model and the Markov chain model to construct IEDNI-GM and predict the location of a UAV node at the next moment. Meanwhile, we exploit the advantage that a clustering structure can optimize network management: considering the motion state and the communication link state between nodes, we use the predicted node positions to calculate the link holding time, motion similarity, and expected transmission count, combine these three values into a cluster-head election indicator, and cluster the UAV nodes in the network. This clustering structure is adopted to improve the AODV routing protocol, so that the source node can find an effective communication route to the destination node. Experiments with the NS-3 network simulator show that, compared with routing protocols such as AODV and AODV-ETX, the proposed routing protocol effectively reduces the average end-to-end transmission delay, increases the packet delivery rate, and is more suitable for UANETs.

Jian Shu, Hongjian Zhao, Huanfeng Hu
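
For readers unfamiliar with the grey prediction model named in the title, the following sketch shows a standard GM(1,1) one-step prediction together with the "equal dimension, new information" rolling update (append the newest prediction, drop the oldest sample). The Markov-chain residual correction and the paper's specific improvements are not included, and the function names are hypothetical.

```python
import numpy as np

def gm11_predict(x0):
    """One-step-ahead GM(1,1) grey prediction for a short positive sequence x0."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                     # mean generating sequence
    B = np.column_stack((-z1, np.ones_like(z1)))      # least-squares design matrix
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # development coefficient a, grey input b
    n = len(x0)
    x1_next = (x0[0] - b / a) * np.exp(-a * n) + b / a
    x1_curr = (x0[0] - b / a) * np.exp(-a * (n - 1)) + b / a
    return x1_next - x1_curr                          # inverse AGO gives the predicted sample

def equal_dim_new_info(history):
    """Rolling update: append the newest prediction and drop the oldest sample."""
    pred = gm11_predict(history)
    return history[1:] + [float(pred)], float(pred)

# Example: predicting the next coordinate from a short position history.
window, nxt = equal_dim_new_info([10.2, 11.1, 12.3, 13.0, 14.1])
```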

Algorithms, Systems, and Applications of Edge Computing

Frontmatter
An Asynchronous Federated Learning Optimization Scheme Based on Model Partition

Federated learning in edge computing environments has great potential to facilitate the implementation of artificial intelligence at the edge of the network. However, because of the limited resources at the edge, placing the complete deep neural network (DNN) model at the edge for training may not be a good choice. In this paper, we study time optimization for asynchronous federated learning based on model partition: the DNN model is divided into two parts and deployed separately on the device and the edge server for training. First, we characterize the relationship between learning accuracy and iteration frequency, and build a mathematical model based on it. Because the solution space of the mathematical model is too large to be solved directly, we propose an algorithm that minimizes the total time by dynamically adjusting the model partition point and the bandwidth allocation. Simulation results show that our algorithm can reduce the time by 32% to 60% compared with three other methods.

Jing Xu, Lei Shi, Yi Shi, Chen Fang, Juan Xu
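
A minimal sketch of the partition-point idea: given profiled per-layer costs on the device and the edge plus the size of the activation crossing the split, a brute-force search picks the split minimizing one iteration's time. The per-layer timings, activation sizes, and bandwidth are assumed profiling inputs; the paper's joint bandwidth-allocation step and accuracy model are omitted.

```python
def best_partition(layer_device_ms, layer_edge_ms, layer_out_kb, uplink_kbps):
    """Try every interior split point p: layers [0..p) run on the device, the rest on the
    edge server, and the activation of layer p-1 is uploaded once per iteration."""
    n = len(layer_device_ms)
    best_p, best_t = None, float("inf")
    for p in range(1, n):
        device_t = sum(layer_device_ms[:p])
        transfer_t = layer_out_kb[p - 1] / uplink_kbps * 1000.0  # kb / (kb/s) -> ms
        edge_t = sum(layer_edge_ms[p:])
        total = device_t + transfer_t + edge_t
        if total < best_t:
            best_p, best_t = p, total
    return best_p, best_t

# Example with hypothetical profiling numbers for a 4-layer model.
print(best_partition([5, 8, 20, 30], [1, 2, 4, 6], [400, 120, 60, 10], uplink_kbps=2000))
```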
QoE and Reliability-Aware Task Scheduling for Multi-user Mobile-Edge Computing

Mobile-edge computing (MEC) has become a popular research topic in both academia and industry, since it can alleviate the computation and power limitations of mobile devices by offloading computation-intensive and energy-consuming tasks from mobile users to nearby edge servers for remote execution. Existing papers have studied related problems; however, none of them considers the reliability of MEC systems, which may suffer soft errors during execution and bit errors during offloading. In this work, we study the task offloading and scheduling problem aiming to maximize the quality of experience (QoE) of multi-user MEC systems under a given reliability requirement. We propose to decompose the original problem into i) a task offloading optimization problem, ii) a task-to-server assignment problem for ensuring the system reliability constraint, and iii) a computing resource allocation problem for maximizing system QoE. To address these sub-problems, we first obtain the optimal offloading decision using the discrete particle swarm optimization method. We then propose a reliability-optimality analysis-based task assignment heuristic and a utility-optimal resource allocation algorithm. Simulation results show that our scheme outperforms two state-of-the-art approaches and two baseline methods; the average improvement in QoE (quantified by offloading utility) achieved by our scheme is up to 63.2% under the reliability requirement.

Weiming Jiang, Junlong Zhou, Peijin Cong, Gongxuan Zhang, Shiyan Hu
EdgeViT: Efficient Visual Modeling for Edge Computing

With the rapid growth of edge intelligence, a higher level of deep neural network computing efficiency is required. Visual intelligence, as a core component of artificial intelligence, is particularly worth further exploration. As the cornerstone of modern visual modeling, convolutional neural networks (CNNs) have developed greatly in the past decades, and variants of light-weight CNNs have been proposed to address the challenge of heavy computation in mobile settings. Although CNNs' spatial inductive biases allow them to learn representations with fewer parameters across different vision tasks, these models are spatially local. To reach the next level of model performance, the vision transformer (ViT) is now a viable alternative thanks to its multi-head attention mechanism. In this work, we introduce EdgeViT, an accelerated deep visual modeling method that incorporates the benefits of CNNs and ViTs in a light-weight and edge-friendly manner. Our proposed method achieves top-1 accuracy of 77.8% using only 2.3 million parameters and 79.2% using 5.6 million parameters on the ImageNet-1k dataset. It achieves mIoU of up to 78.3 on PASCAL VOC segmentation while using only 3.1 million parameters, half of MobileViT's parameter budget.

Zekai Chen, Fangtian Zhong, Qi Luo, Xiao Zhang, Yanwei Zheng
Joint Optimization of Computation Task Allocation and Mobile Charging Scheduling in Parked-Vehicle-Assisted Edge Computing Networks

In this paper, we study the joint optimization of task allocation and charging scheduling of mobile charging vehicles (MCVs) for parked-vehicle-assisted edge computing networks. In the proposed model, a group of electric vehicles (EVs) that have been parked for a long time must be recharged to their expected energy level within a specified time frame. Meanwhile, an optimal set of parked vehicles (PVs) is selected to compute a machine learning task utilizing their hardware resources and local data while satisfying the task’s training performance requirements. Within the calculated time window, an MCV is dispatched to provide power replenishment to the PVs. By jointly deciding the task allocation and MCV charging sequence, the proposed model seeks to minimize the total energy consumption of the parked vehicular network, which includes the PV computation and MCV traveling consumption, subject to the PVs’ expected energy level, task target utility and time window. To address this joint optimization problem, a marginal-product-based algorithm is designed, where a deep reinforcement learning method is integrated to solve the MCV scheduling problem. Simulation results demonstrate that the proposed method can efficiently solve the problem and outperform the compared algorithms in terms of energy consumption.

Wenqiu Zhang, Ran Wang, Changyan Yi, Kun Zhu
A Secure Authentication Approach for the Smart Terminal and Edge Service

Smart home applications make our lives more comfortable and convenient than ever before. However, deploying smart home applications and smart terminals can pose a potential security threat to personal information and home privacy. In order to prevent illegal use of terminals and applications, it is necessary to establish secure and reliable communication between the terminal and the edge server. In this paper, we design a two-party authentication and key negotiation protocol for the smart terminal and edge service. The edge-based authentication and key negotiation scheme offloads the terminal's main computational overhead to the edge side and exploits cryptographic algorithms to achieve user anonymity and untraceability. Security is verified with BAN logic and AVISPA. We also evaluate the performance by comparing our scheme with other related schemes in terms of computational overhead. The security and performance results show that our proposed scheme is suitable for edge-assisted smart home applications.

Qian He, Jing Song, Shicheng Wang, Peng Liu, Bingcheng Jiang
End-Edge Cooperative Scheduling Strategy Based on Software-Defined Networks

With the development of the Internet of Things (IoT), more and more applications place increasingly stringent demands on latency. Traditional single-task scheduling strategies have difficulty satisfying these low-latency demands: the task scheduler usually schedules tasks to a nearby server, so task latency grows as the number of tasks grows, which in turn increases the task rejection rate. In this paper, we propose an end-edge cooperative multi-task scheduling (MTS) strategy based on an improved particle swarm optimization (IPSO) algorithm. First, we design a Software-Defined Networks controller algorithm to cluster task offloading requests. Then, we set scheduling priorities for the multi-task clusters. Finally, we take minimizing the total offloading cost of all tasks as the optimization goal while satisfying their delay requirements. The results demonstrate that the proposed strategy can effectively reduce the service cost of the system and the processing delay of tasks, which improves the success rate of task processing.

Fan Li, Ying Qiao, Juan Luo, Luxiu Yin, Xuan Liu, Xin Fan
Joint Optimization of Bandwidth Allocation and Gradient Quantization for Federated Edge Learning

Federated Edge Learning (FEEL) is becoming a popular distributed privacy-preserving machine learning (ML) framework in which multiple edge devices collaboratively train an ML model with the help of an edge server. However, FEEL usually suffers from a communication bottleneck due to the limited shared wireless spectrum and the large size of the training parameters. In this paper, we consider gradient quantization to reduce the communication traffic and aim to minimize the total training latency. Since the per-round latency is determined by both the bandwidth allocation scheme and the gradient quantization scheme (i.e., the quantization levels of the edge devices), while the number of training rounds is affected only by the latter, we propose a joint optimization of bandwidth allocation and gradient quantization. Based on an analysis of the total training latency, we first formulate the joint optimization problem as a nonlinear integer program. To solve this problem, we then consider a variant in which the per-round latency is fixed. Although this variant is proved to be NP-hard, we show that it can be transformed into a multiple-choice knapsack problem, which can be solved efficiently by a pseudo-polynomial time algorithm based on dynamic programming. We further propose a ternary search based algorithm to find a near-optimal per-round latency, so that the two algorithms together yield a near-optimal solution to the joint optimization problem. The effectiveness of our proposed approach is validated through simulation experiments.

Hao Yan, Bin Tang, Baoliu Ye
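
The abstract reduces the fixed-per-round-latency variant to a multiple-choice knapsack problem solved by pseudo-polynomial dynamic programming; below is a generic DP sketch for that problem, where each class (device) contributes exactly one option (quantization level). How quantization levels map to integer weights and values is not specified here and is an assumption of the caller.

```python
def mckp(groups, capacity):
    """Multiple-choice knapsack via DP over integer capacity.
    groups: list of classes; each class is a list of (weight, value) options, pick exactly one.
    Returns the best total value, or None if no feasible selection exists."""
    NEG = float("-inf")
    dp = [NEG] * (capacity + 1)
    dp[0] = 0.0
    for options in groups:
        new_dp = [NEG] * (capacity + 1)      # exactly one option per class must be taken
        for used in range(capacity + 1):
            if dp[used] == NEG:
                continue
            for w, v in options:
                if used + w <= capacity and dp[used] + v > new_dp[used + w]:
                    new_dp[used + w] = dp[used] + v
        dp = new_dp
    best = max(dp)
    return None if best == NEG else best

# Example: two devices, each choosing one (weight, value) quantization option, capacity 10.
print(mckp([[(3, 5.0), (6, 8.0)], [(4, 4.0), (7, 9.0)]], capacity=10))
```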
Federated Learning Meets Edge Computing: A Hierarchical Aggregation Mechanism for Mobile Devices

Federated learning (FL) has been proposed and applied in edge computing scenarios. However, the complex edge environment of wireless networks, such as limited device computing resources and unstable signals, leads to increased communication overhead and reduced performance for federated learning. Therefore, we propose a hierarchical aggregation mechanism to improve federated learning performance in resource-constrained wireless edge environments. We propose three feature models to quantify FL performance and design a fuzzy K-means clustering mechanism. We construct an optimization problem for the hierarchical aggregation process and design a cluster-based hierarchical federated learning algorithm (CluHFed), which consists of fuzzy clustering, asynchronous aggregation, and topology reconstruction. Finally, we conduct experiments with PyTorch. The results show that the proposed algorithm improves accuracy by 2.6%-35.8% and reduces the latency of FL networks by 5.9% compared with other popular federated learning algorithms.

Jiewei Chen, Wenjing Li, Guoming Yang, Xuesong Qiu, Shaoyong Guo
QoS-oriented Hybrid Service Scheduling in Edge-Cloud Collaborated Clusters

Service scenarios under edge-cloud collaboration are becoming more diverse in terms of service performance requirements. For example, smart grids require both intelligent control and long-term optimization, which poses considerable challenges for service providers in meeting quality of service (QoS) targets. However, current pioneering work has not yet explored both system utility and QoS guarantees. Therefore, this paper investigates the optimization problem of edge-cloud collaborative scheduling with QoS guarantees. First, we model the edge-cloud collaborative scheduling scenario and derive two sub-problems, namely service deployment and request dispatch. Second, we design a near-optimal scheduling algorithm based on a submodular function optimization approach, with the objective of maximizing the number of requests processed within the edge-cloud cluster under QoS constraints. Finally, our experiments verify the benefits of the proposed algorithm in terms of throughput rate, scheduling time cost, and resource utilization.

Yanli Ju, Xiaofei Wang, Xin Wang, Xinying Wang, Sheng Chen, Guoliang Wu
Deep Reinforcement Learning Based Computation Offloading in Heterogeneous MEC Assisted by Ground Vehicles and Unmanned Aerial Vehicles

Compared with traditional mobile edge computing (MEC), heterogeneous MEC (H-MEC), which is assisted by ground vehicles (GVs) and unmanned aerial vehicles (UAVs) simultaneously, is attracting more and more attention from both academia and industry. By deploying base stations (along with edge servers) on GVs or UAVs, H-MEC is better suited to network environments with dynamically changing access demands, e.g., sports matches, traffic management, and emergency rescue. However, it is non-trivial to perform real-time user association and resource allocation in large-scale H-MEC environments. Motivated by this, we propose a shared multi-agent proximal policy optimization (SMAPPO) algorithm based on the centralized training and distributed execution framework. Due to the NP-hardness of jointly optimizing user association and resource allocation in H-MEC, we adopt an actor-critic-based on-policy gradient (PG) algorithm to obtain near-optimal solutions with low scheduling complexity. In addition, considering the low sampling efficiency of PG, we introduce proximal policy optimization to increase the training efficiency through importance sampling. Moreover, we leverage centralized training and distributed execution to improve training efficiency and reduce scheduling complexity, so that each mobile device (MD) makes decisions based only on local observations and learns from other MDs' experience via a shared replay buffer. Extensive simulation results demonstrate that SMAPPO achieves more satisfactory performance than traditional algorithms.

Hang He, Tao Ren, Meng Cui, Dong Liu, Jianwei Niu
Synchronous Federated Learning Latency Optimization Based on Model Splitting

Federated Learning (FL) is a distributed machine learning approach well suited to edge computing environments. In such environments, however, making full use of the computing resources on end devices and edge servers remains a difficult problem. In particular, for synchronous federated learning, differences in computing resources among participants lead to extra time cost and resource waste. In this paper, we try to reduce the time cost and the waste of computing resources by using model splitting and task scheduling. We first establish the mathematical model and find that it cannot be solved directly. We then design an algorithm, named the Federated Learning Offloading Acceleration (FLOA) algorithm, to obtain a sub-optimal solution. The FLOA algorithm first uses a Partition Points Selection method to reduce the size of the solution space, and then proposes a task offloading method based on matching theory. Experiments and simulations show that, compared to three other calculation methods, the single iteration time under our algorithm is reduced by 47%, 28%, and 14%, respectively.

Chen Fang, Lei Shi, Yi Shi, Jing Xu, Xu Ding
CodeDiff: A Malware Vulnerability Detection Tool Based on Binary File Similarity for Edge Computing Platform

Malware detection has become a hot research topic as the Internet of Things and edge computing have grown in popularity. In particular, various malware exploits firmware vulnerabilities on hardware platforms, resulting in significant financial losses for both IoT users and edge platform providers. In this paper, we propose CodeDiff, a fresh approach to malware vulnerability detection on IoT and edge computing platforms based on binary file similarity detection. CodeDiff is an unsupervised learning method that employs both semantic and structural information for binary diffing and does not require labeled data. Using SkipGram with negative sampling, we generate the word vocabulary for the instruction data. A graph autoencoder is then used to embed both the semantic and structural information into the representation matrix of the CFG. After this, we employ an improved graph autoencoder to fuse the function structures, characteristics, and features into a fusion matrix. Finally, we propose a specific matrix comparison to obtain highly accurate similarity results in a short amount of time. Furthermore, we test the prototype on the binary datasets OpenSSL and Curl. The results reveal that CodeDiff delivers high performance on binary file similarity detection, which helps identify malware vulnerabilities and improves the security of Internet of Things platforms.

Kang Wang, Longchuan Yan, Zihao Chu, Yonghe Guo, Yongji Liu, Lei Cui, Zhiyu Hao
Multi-dimensional Data Quick Query for Blockchain-Based Federated Learning

Due to drawbacks of Federated Learning (FL) such as the vulnerability of a single central server, centralized federated learning is shifting toward decentralized federated learning, a paradigm that takes advantage of blockchain. A key enabler for the adoption of blockchain-based federated learning is selecting suitable participants to train models collaboratively. Selecting participants by storing and querying the metadata of data owners on the blockchain can ensure the reliability of the selected data owners, which helps obtain high-quality models in FL. However, querying multi-dimensional metadata on a blockchain requires traversing every transaction in each block, making the query time-consuming. An efficient query method over multi-dimensional metadata in the blockchain for selecting FL participants is absent and challenging. In this paper, we propose a novel data structure named MerkleRB-Tree to improve the query efficiency within each block. In detail, we leverage Minimal Bounding Rectangles (MBRs) and Bloom filters for querying multi-dimensional continuous-valued attributes and discrete-valued attributes, respectively. Furthermore, we adapt the idea of the skip list, placing an MBR and a Bloom filter at the head of each block, to enhance the query efficiency across blocks. The performance analysis and extensive evaluation results on the benchmark dataset demonstrate the superiority of our method in blockchain-based FL.
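The per-block pruning idea in the abstract, an MBR over continuous-valued attributes plus a Bloom filter over discrete-valued ones, can be sketched as below. The hash scheme, attribute names, and class layout are illustrative assumptions, not the paper's exact MerkleRB-Tree structure.

```python
import hashlib

class BlockSummary:
    """Per-block summary used to skip blocks that cannot contain a match."""

    def __init__(self, m_bits=1024, k_hashes=3):
        self.bits = 0
        self.m, self.k = m_bits, k_hashes
        self.mbr = None                               # [(lo, hi), ...] per continuous attribute

    def _positions(self, value):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{value}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, discrete_values, continuous_point):
        for v in discrete_values:                     # Bloom filter over discrete attributes
            for p in self._positions(v):
                self.bits |= 1 << p
        if self.mbr is None:
            self.mbr = [(x, x) for x in continuous_point]
        else:                                         # grow the minimal bounding rectangle
            self.mbr = [(min(lo, x), max(hi, x))
                        for (lo, hi), x in zip(self.mbr, continuous_point)]

    def may_match(self, discrete_values, continuous_point):
        bloom_ok = all(all(self.bits >> p & 1 for p in self._positions(v))
                       for v in discrete_values)
        mbr_ok = self.mbr is not None and all(lo <= x <= hi
                                              for (lo, hi), x in zip(self.mbr, continuous_point))
        return bloom_ok and mbr_ok                    # False => the whole block can be skipped

blk = BlockSummary()
blk.add(["image-data", "hospital-A"], (0.92, 1200.0))   # metadata of one data owner
print(blk.may_match(["image-data", "hospital-A"], (0.92, 1200.0)))  # True
print(blk.may_match(["text-data", "hospital-B"], (0.10, 50.0)))     # False (outside the MBR)
```

Chaining such summaries with skip-list-style forward pointers, as the abstract suggests, lets a query jump over runs of blocks whose summaries already rule out a match.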

Jiaxi Yang, Sheng Cao, Peng Xiangli, Xiong Li, Xiaosong Zhang
Joint Edge Server Deployment and Service Placement for Edge Computing-Enabled Maritime Internet of Things

With the growing activities of the diverse Maritime Internet of Things (MIoT), mobile edge computing (MEC) is becoming a promising paradigm to provision computation and storage for the computation-intensive tasks of marine users. Although edge server (ES) deployment and service placement are both important issues in MEC, their joint placement is often overlooked, particularly in the MIoT. In this paper, we formulate the buoy-based ES deployment and service placement (BESDSP) problem for MIoT networks, aiming to maximize the total profit while considering the location constraints of buoys, the different service request rates, the income and delay cost of services provided by ESs, as well as the characteristics of maritime channels. We then propose a heuristic approach, the genetic-BESDSP (G-BESDSP) algorithm, to solve the BESDSP problem. Simulation results demonstrate that the proposed G-BESDSP algorithm outperforms existing state-of-the-art solutions.
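A minimal sketch of the genetic-heuristic flavour described in the abstract: candidate solutions encode which buoys host an edge server, and selection, crossover, and mutation evolve the population toward higher profit. The profit function, budget, and demand values below are toy stand-ins; the real G-BESDSP objective (service request rates, delay cost, maritime channel model) is in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N_BUOYS, BUDGET, POP, GENS = 20, 6, 40, 100
demand = rng.uniform(1.0, 5.0, N_BUOYS)          # hypothetical service demand near each buoy

def profit(genome):
    """Toy objective: income from covered demand minus deployment cost."""
    deployed = genome.astype(bool)
    if deployed.sum() > BUDGET:                   # infeasible: too many ESs deployed
        return -1e9
    return demand[deployed].sum() - 0.5 * deployed.sum()

def evolve():
    pop = (rng.random((POP, N_BUOYS)) < 0.3).astype(int)
    for _ in range(GENS):
        fit = np.array([profit(g) for g in pop])
        parents = pop[np.argsort(fit)[-POP // 2:]]           # truncation selection
        children = []
        while len(children) < POP - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_BUOYS)                    # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(N_BUOYS) < 0.05                 # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([parents, children])
    fit = np.array([profit(g) for g in pop])
    return pop[fit.argmax()], fit.max()

best, best_profit = evolve()
print(best, best_profit)
```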

Chaoyue Zhang, Bin Lin, Lin X. Cai, Liping Qian, Yuan Wu, Shuang Qi
Optimal Task Offloading Strategy in Vehicular Edge Computing Based on Game Theory

In vehicular edge computing, when many vehicles request offloading services at the same time, relying only on the resources of edge servers often cannot meet the needs of delay-sensitive tasks. Most existing task offloading studies consider only pure offloading strategies for vehicles, which may not be optimal for splittable tasks. In this paper, we jointly optimize the vehicles' hybrid offloading strategy and the server's resource pricing strategy. A requested task can simultaneously be executed locally, offloaded to the edge server, and offloaded to the cloud center. We model the interaction among vehicles, the edge server, and the cloud center as a game. Based on backward induction, we prove that the game has a unique Nash equilibrium, and we propose an algorithm that converges to the equilibrium point in polynomial time. Numerical experimental results show that the proposed algorithm outperforms existing algorithms in terms of delay and cost.
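A toy illustration of the best-response iteration that underlies game formulations like this one (not the paper's exact model, pricing stage, or equilibrium proof): each vehicle repeatedly picks the offloading fraction that minimises its own cost given the congestion created by the others, and the loop ends when no one wants to deviate.

```python
import numpy as np

L = np.array([4.0, 6.0, 8.0])        # task sizes of three vehicles (hypothetical units)
F_LOCAL = np.array([1.0, 1.5, 2.0])  # local CPU speeds
F_EDGE, PRICE = 10.0, 0.05           # shared edge capacity and per-unit price

def cost(i, frac, fracs):
    """Cost of vehicle i when it offloads fraction `frac` and the others keep `fracs`."""
    load = frac * L[i] + sum(fracs[j] * L[j] for j in range(len(L)) if j != i)
    t_local = (1 - frac) * L[i] / F_LOCAL[i]
    t_edge = frac * L[i] * load / F_EDGE          # congestion: the edge slows down with total load
    return max(t_local, t_edge) + PRICE * frac * L[i]

grid = np.linspace(0, 1, 101)
fracs = np.zeros(len(L))
for _ in range(50):                               # best-response dynamics
    new = np.array([grid[np.argmin([cost(i, g, fracs) for g in grid])]
                    for i in range(len(L))])
    if np.allclose(new, fracs, atol=1e-3):
        break
    fracs = new
print("offloading fractions at the fixed point:", fracs)
```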

Zheng Zhang, Lin Wu, Feng Zeng
Aerial-Aerial-Ground Computation Offloading Using High Altitude Aerial Vehicle and Mini-drones

Unmanned Aerial Vehicles (UAVs) supported by 5G networks can play an important role in providing aerial-to-aerial and aerial-to-ground computing services to remote and isolated areas at low cost. In this paper, we present an aerial-aerial-ground network (AAGN) computing architecture based on Mobile Edge Computing (MEC) services, using a High Altitude Unmanned Aerial Vehicle (HAU) and Mini-Drones (MDs): the HAU provides computation offloading services for the MDs, while the MDs serve as edge computing servers that can be equipped with appropriate capabilities to provide computing services for User Equipments (UEs) on demand. This study focuses on the computation offloading services provided by the HAU to the MDs, where an MD offloads all or part of a task to the HAU and executes the remainder locally. The proposed AAGN framework aims to reduce the MDs' energy consumption and minimize processing delay by optimizing HAU mobility, MD scheduling, flight speed, flight angle, and task offloading, while equipping the HAU with the required computing resources. Since the resulting computation offloading problem is non-convex, we adopt Deep Deterministic Policy Gradient (DDPG) to learn the optimal offloading policy from the dynamic AAGN environment. The simulation results show the feasibility and effectiveness of the proposed AAGN environment: the DDPG algorithm achieves an optimal offloading decision policy and obtains significant improvements in delay and task offloading ratio compared with the Deep Q Network (DQN) and Actor-Critic (AC) algorithms.

Esmail Almosharea, Mingchu Li, Runfa Zhang, Mohammed Albishari, Ikhlas Al-Hammadi, Gehad Abdullah Amran, Ebraheem Farea
Meta-MADDPG: Achieving Transfer-Enhanced MEC Scheduling via Meta Reinforcement Learning

With the assistance of mobile edge computing (MEC), mobile devices (MDs) can optionally offload computationally heavy local tasks to edge servers that are generally deployed at the edge of networks. Task latency and the energy consumption of MDs can thus both be reduced, significantly improving mobile users' quality of experience. Although researchers have designed considerable MEC scheduling algorithms, most of them are trained to solve specific tasks, leaving their performance in other MEC environments uncertain. To address this issue, this paper first formulates the optimization problem of minimizing both task delay and energy consumption, and then transforms it into a Markov decision problem that is solved using the state-of-the-art multi-agent deep reinforcement learning method MADDPG. Furthermore, aiming to improve the overall performance across various MEC environments, we integrate MADDPG with meta-learning and propose Meta-MADDPG, which is carefully designed with dedicated reward functions. Evaluation results showcase the superior performance of Meta-MADDPG over state-of-the-art algorithms when confronting new environments.

Yiming Yao, Tao Ren, Meng Cui, Dong Liu, Jianwei Niu
An Evolutionary Game Based Computation Offloading for an UAV Network in MEC

With the rapid development of information technology, low-cost unmanned aerial vehicles (UAVs) have emerged. Equipped with advanced sensing and actuating technologies, they are increasingly applied to a variety of scenarios. However, given their limited computing resources and restricted battery capacity, computation-intensive or data-intensive tasks pose tough challenges. With the aid of Mobile Edge Computing (MEC), moving computation-intensive tasks from resource-constrained UAVs to edge cloud servers can significantly save energy and achieve impressive performance. This paper proposes an evolutionary game based algorithm to solve the computation offloading problem for UAVs. Through replicator dynamics, UAVs select suitable service providers to offload computation tasks, achieving a tradeoff between time delay, energy consumption, and monetary cost when network externality exists. Simulation results show that the proposed algorithm rapidly converges to the evolutionary equilibrium and achieves desirable performance.
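The replicator-dynamics step mentioned in the abstract can be sketched directly: the share of UAVs choosing each service provider grows when that provider's payoff beats the population average. The payoff model and numbers below are arbitrary stand-ins, not the paper's utility function.

```python
import numpy as np

def replicator_step(x, payoff, dt=0.1):
    """One Euler step of the replicator dynamics x_i' = x_i * (u_i - avg_u)."""
    avg = float(x @ payoff)
    x = x + dt * x * (payoff - avg)
    return x / x.sum()                       # renormalise against numerical drift

def payoffs(x):
    # Toy utilities for 3 service providers; congestion (negative network
    # externality) makes a provider less attractive as more UAVs pick it.
    base = np.array([5.0, 4.0, 3.0])
    return base - 4.0 * x

x = np.array([1 / 3, 1 / 3, 1 / 3])          # initial population shares
for _ in range(500):
    x = replicator_step(x, payoffs(x))
print("evolutionary equilibrium shares:", np.round(x, 3))
```

At the fixed point all providers still chosen by some UAVs yield the same payoff, which is exactly the evolutionary-equilibrium condition the abstract refers to.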

Qi Gu, Bo Shen
Edge Collaborative Task Scheduling and Resource Allocation Based on Deep Reinforcement Learning

With the development of the sixth generation mobile network (6G), the arrival of the Internet of Everything (IoE) is accelerating. An edge computing network is an important network architecture to realize the IoE. Yet, allocating limited computing resources on the edge nodes is a significant challenge. This paper proposes a collaborative task scheduling framework for the computational resource allocation and task scheduling problems in edge computing. The framework focuses on bandwidth allocation to tasks and the designation of target servers. The problem is described as a Markov decision process (MDP). To minimize the task execution delay and user cost and improve the task success rate, we propose a Deep Reinforcement Learning (DRL) based method. In addition, we explore the problem of the hierarchical hash rate of servers in the network. The simulation results show that our proposed DRL-based task scheduling algorithm outperforms the baseline algorithms in terms of task success rate and system energy consumption. The hierarchical settings of the server’s hash rate also show significant benefits in terms of improved task success rate and energy savings.

Tianjian Chen, Zengwei Lyu, Xiaohui Yuan, Zhenchun Wei, Lei Shi, Yuqi Fan
Improving Gaming Experience with Dynamic Service Placement in Mobile Edge Computing

Mobile cloud gaming (MCG) can provide users with high-quality gaming services anytime and anywhere, but it suffers from long network latency and huge wide-area traffic. To solve these problems, mobile edge computing (MEC) is envisioned as a promising approach that moves the relevant computing to the edge. Since the quality of experience (QoE) of gaming requires high frame rates and low network latency, the placement of service entities affects the performance of MEC-enabled MCG. In addition, users are highly mobile while enjoying MCG, so service migration is used to reduce QoE impairment, but migration increases system cost. To address these challenges, we investigate the service placement problem of MEC-enabled MCG. Considering the dynamics of the system, we propose to minimize the QoE impairment subject to a constraint on migration cost, and we design the ECP algorithm to solve the problem.

Yongqiang Gao, Zheng Xu
Cooperative Offloading Based on Online Auction for Mobile Edge Computing

In the field of edge computing, collaborative computation offloading, in which edge users offload tasks to adjacent resource-rich mobile devices in an opportunistic manner, provides a promising way to meet low-latency requirements. However, most previous work assumes that these mobile devices are willing to serve edge users, without any incentive strategy. In this paper, an online auction-based strategy is proposed, in which both users and mobile devices can interact dynamically with the system. The proposed auction strategy optimizes the long-term utility of the system in an online manner, based on attributes such as start time, length and size, resource requirements, and valuation, without knowledge of the future. Experiments verify that the proposed online auction strategy achieves the desired properties of individual rationality, truthfulness, and computational tractability. In addition, the theoretical competitive ratio indicates that the proposed online mechanism achieves long-term utility close to the offline optimum.

Xiao Zheng, Syed Bilal Hussain Shah, Liqaa Nawaf, Omer F. Rana, Yuanyuan Zhu, Jianyuan Gan
Incentive Offloading with Communication and Computation Capacity Concerns for Vehicle Edge Computing

With the popularity of intelligent vehicles, computation-intensive vehicle tasks are rising dramatically. Vehicle edge computing (VEC) is a promising technology that offloads the overloaded computation tasks of intelligent vehicles to the edge. However, VEC servers are constrained by their available computation capacity when dealing with numerous tasks. To this end, we propose multi-party cooperation to complete vehicle task offloading: computation-assisted vehicles (CAVs) with free resources assist VEC servers in serving computation-required vehicles (CRVs), so that the computation resources of both VEC servers and CAVs can be used for CRVs' task execution. To motivate the positive participation of VEC servers and CAVs, we design a resource management and pricing mechanism by quantifying their gains and costs. This design integrates and leverages the communication and computation modes among participants to describe their interactions, which constitutes two two-stage Stackelberg games. When the Nash equilibrium (NE) of each Stackelberg game is reached, no participant deviates unilaterally. Simulation results demonstrate the effectiveness of the proposed model.

Chenliu Song, Ying Li, Jianbo Li, Chunxin Lin
A Dependency-Aware Task Offloading Strategy in Mobile Edge Computing Based on Improved NSGA-II

The rapid development of mobile communications has given rise to numerous latency-sensitive and computation-intensive mobile applications. There is a huge contradiction between the high resource demands of these applications and the limited resources of mobile devices. In this regard, mobile edge computing (MEC) is a promising technology, in which computation tasks can be offloaded from mobile devices onto more capable network edges. However, the dependency between tasks makes the offloading decision highly complex. In this paper, we investigate the optimal offloading problem for completing dependency-aware tasks while minimizing latency and energy cost. An improved non-dominated sorting genetic algorithm-II (INSGA-II) is proposed to solve this multiobjective problem. Simulation results validate the advantage of the proposed algorithm in terms of low latency and low cost.
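The non-dominated sorting at the heart of NSGA-II, which INSGA-II builds on, can be sketched as follows; the two objectives here, latency and energy, are random placeholders rather than the paper's dependency-aware task model.

```python
import numpy as np

def non_dominated_sort(objs):
    """Non-dominated sorting for a minimisation problem.

    objs: (n, m) array, one row of m objective values per candidate offloading plan.
    Returns a list of fronts, each a list of row indices (front 0 = Pareto set).
    """
    n = len(objs)
    dominated_by = [set() for _ in range(n)]      # dominated_by[i]: solutions that i dominates
    dom_count = np.zeros(n, dtype=int)            # how many solutions dominate j
    for i in range(n):
        for j in range(n):
            if i != j and np.all(objs[i] <= objs[j]) and np.any(objs[i] < objs[j]):
                dominated_by[i].add(j)
                dom_count[j] += 1
    fronts, current = [], [i for i in range(n) if dom_count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

rng = np.random.default_rng(3)
candidates = rng.random((12, 2))                  # columns: [latency, energy] of each plan
print(non_dominated_sort(candidates))
```

NSGA-II then fills the next generation front by front, breaking ties within a front by crowding distance to keep the latency/energy trade-off well spread.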

Chunyue Zhou, Mingxin Zhang, Qinghe Gao, Tao Jing
Federated Reinforcement Learning Based on Multi-head Attention Mechanism for Vehicle Edge Caching

Vehicles frequently request road condition information, traffic information, and various audio-visual entertainment. Repeated downloads burden the core network and seriously affect the user experience. Edge caching is a promising technology that can effectively alleviate the pressure of repeatedly downloading content from the cloud. Existing edge cache scheduling methods all have limitations. This paper proposes an edge cache scheduling method based on federated reinforcement learning with a multi-head attention mechanism (FRLMA). Firstly, the problem is modeled as a Markov decision process, and local models are trained with a deep reinforcement learning method. Then, a federated reinforcement learning framework for the edge cooperative cache is established. In particular, the multi-head attention mechanism is introduced to weigh the contribution of each local model to the global model from multiple angles. Simulation results show that the FRLMA method converges better and outperforms the most popular current methods in terms of hit rate and average delay.
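A minimal sketch of attention-weighted aggregation in the spirit of the abstract: each head scores how close a local update is to a query (here, the current global model), and the softmax-normalised scores weight the average. The random projections and the choice of query are illustrative assumptions, not the FRLMA design itself.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def multihead_aggregate(global_w, local_ws, n_heads=4, d_k=8, seed=0):
    """Average local model vectors with multi-head attention weights.

    global_w: (d,) current global parameters (acts as the query);
    local_ws: (K, d) stacked local updates (act as keys and values).
    The random projections stand in for learned ones.
    """
    rng = np.random.default_rng(seed)
    d = global_w.shape[0]
    out = np.zeros(d)
    for _ in range(n_heads):
        proj = rng.normal(size=(d_k, d)) / np.sqrt(d)
        q = proj @ global_w                          # (d_k,) projected query
        keys = local_ws @ proj.T                     # (K, d_k) projected local models
        weights = softmax(keys @ q / np.sqrt(d_k))   # one weight per vehicle/edge client
        out += weights @ local_ws                    # value-weighted sum of local models
    return out / n_heads

rng = np.random.default_rng(4)
global_model = rng.normal(size=32)
local_models = global_model + rng.normal(0, 0.1, size=(5, 32))
print(multihead_aggregate(global_model, local_models).shape)
```

Compared with plain federated averaging, each head can emphasise a different aspect of the local updates, which is the "multiple angles" weighting the abstract describes.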

XinRan Li, ZhenChun Wei, ZengWei lyu, XiaoHui Yuan, Juan Xu, ZeYu Zhang
Research on NER Based on Register Migration and Multi-task Learning

The insufficient number of labels is currently the biggest constraint on named entity recognition (NER) technology, as only a small number of Registers (i.e., domains of language, explained in Part I) currently have corpora with sufficient labels. The linguistic features of different Registers vary greatly, so a corpus with sufficient labels cannot be applied directly to NER in other Registers. In addition, most current NER models are designed for large samples with sufficient labels and do not work well on small samples with few labels. To address these problems, this paper proposes a model, T_NER, based on the ideas of transfer learning and multi-task learning: it learns the common features of language through multi-task learning and transfers the parameters of the neurons that capture these common features, learned from multiple well-labelled source domains, to the target domain, achieving transfer learning based on parameter sharing. In baseline experiments, T_NER outperformed original models such as BiLSTM and BiGRU on a small-sample NER task; in the formal experiments, the more Registers in the source domains, the better T_NER recognized the target domain. The experiments demonstrate that T_NER can achieve NER for small samples and across Registers.
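The parameter-sharing idea in the abstract, training shared layers on several well-labelled source Registers and copying them into a target-Register model, can be sketched in a framework-agnostic way. Layer names, shapes, and the two-phase layout below are hypothetical and omit the actual training loop.

```python
import numpy as np

rng = np.random.default_rng(5)

def init_model(n_labels, d_hidden=64, d_emb=100):
    """A toy NER model: shared encoder weights + a Register-specific output head."""
    return {
        "encoder": rng.normal(0, 0.1, size=(d_emb, d_hidden)),   # language-general features
        "head": rng.normal(0, 0.1, size=(d_hidden, n_labels)),   # Register-specific classifier
    }

# 1) Multi-task phase: several well-labelled source Registers share one encoder.
shared_encoder = rng.normal(0, 0.1, size=(100, 64))
source_models = {name: init_model(n_labels=9) for name in ["news", "finance", "biomed"]}
for m in source_models.values():
    m["encoder"] = shared_encoder        # same array object => updated jointly during training

# ... joint training over all source Registers would update `shared_encoder` here ...

# 2) Transfer phase: the target Register (small sample) reuses the shared encoder;
#    only its own head is trained from scratch on the few available labels.
target_model = init_model(n_labels=5)
target_model["encoder"] = shared_encoder.copy()
print(target_model["encoder"].shape, target_model["head"].shape)
```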

Haoran Ma, Zhaoyun Ding, Dongsheng Zhou, Jinhua Wang, ShuoShuo Niu
Backmatter
Metadata
Title: Wireless Algorithms, Systems, and Applications
Editors: Lei Wang, Michael Segal, Jenhui Chen, Tie Qiu
Copyright Year: 2022
Electronic ISBN: 978-3-031-19211-1
Print ISBN: 978-3-031-19210-4
DOI: https://doi.org/10.1007/978-3-031-19211-1