YU Shuai

PhD graduate
Team: Phare
Departure date: 12/31/2018
Supervision: Rami LANGAR

Multi-user computation offloading in mobile edge computing

Mobile Edge Computing (MEC) is an emerging computing model that extends the cloud and its services to the edge of the network. For the execution of emerging resource-intensive applications in a MEC network, computation offloading is a proven paradigm for enabling such applications on mobile devices. Moreover, in view of emerging mobile collaborative applications (MCAs), the offloaded tasks can be duplicated when multiple users are in close proximity. This motivates us to design a collaborative computation offloading scheme for multi-user MEC networks. In this context, we study collaborative computation offloading schemes for three scenarios: MEC offloading, device-to-device (D2D) offloading and hybrid offloading.
In the MEC offloading scenario, we assume that multiple mobile users offload duplicated computation tasks to the network edge servers and share the computation results among themselves. Our goal is to develop optimal fine-grained collaborative offloading strategies with caching enhancements that minimize the overall execution delay on the mobile terminal side. To this end, we propose an optimal offloading with caching-enhancement scheme (OOCS) for the femto-cloud and mobile edge computing scenarios, respectively. Simulation results show that, compared to six alternative solutions from the literature, our single-user OOCS can reduce execution delay by up to 42.83% in the single-user femto-cloud case and 33.28% in the single-user mobile edge computing case. Moreover, our multi-user OOCS can further reduce delay by 11.71% compared to single-user OOCS through user cooperation.
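The per-component decision behind such fine-grained offloading can be sketched as follows; the cost model, parameter names and numbers below are illustrative assumptions, not the OOCS formulation itself:

```python
# A minimal per-component decision sketch (hypothetical model, not the thesis's):
# each component runs locally, is offloaded to the edge, or reuses a cached result.

def component_delay(cycles, data_bits, f_local, f_edge, rate, cached):
    """Pick the lowest-delay execution mode for one application component."""
    local = cycles / f_local                       # run on the device CPU
    offload = data_bits / rate + cycles / f_edge   # upload input, run at the edge
    options = {"local": local, "offload": offload}
    if cached:
        # assumed: a cached result at the edge only needs a small download
        options["cache"] = 0.1 * data_bits / rate
    mode = min(options, key=options.get)
    return options[mode], mode

# 5e8-cycle task, 2 Mb input, 1 GHz device, 10 GHz edge server, 20 Mb/s link
delay, mode = component_delay(5e8, 2e6, 1e9, 10e9, 20e6, cached=True)
```

Under these invented numbers a cached result wins, which is the intuition behind the caching enhancement: duplicated tasks from nearby users only pay the result-fetch delay.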
In the D2D offloading scenario, we assume that duplicated computation tasks are processed on specific mobile users and that computation results are shared through a Device-to-Device (D2D) multicast channel. Our goal here is to find an optimal network partition for D2D multicast offloading that minimizes the overall energy consumption on the mobile terminal side. To this end, we first propose a D2D multicast-based computation offloading framework in which the problem is modelled as a combinatorial optimization problem, and then solved using concepts from maximum weighted bipartite matching and coalitional games. Note that our proposal considers the delay constraint of each mobile user as well as the battery level, to guarantee fairness. To gauge the effectiveness of our proposal, we simulate three typical interactive components. Simulation results show that our algorithm can significantly reduce energy consumption while guaranteeing battery fairness among multiple users.
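To illustrate the matching idea, here is a toy brute-force maximum weighted bipartite matching between requesters and candidate group heads; the weights are invented and a real solver would use a polynomial-time algorithm (e.g. Hungarian) rather than this O(n!) enumeration:

```python
from itertools import permutations

# weight[i][j] = assumed energy saving when requester i receives results from
# candidate group head j over the D2D multicast channel (illustrative numbers).
weight = [[4, 1, 3],
          [2, 0, 5],
          [3, 2, 2]]

def max_weight_matching(w):
    """Try every one-to-one assignment and keep the heaviest (toy sizes only)."""
    n = len(w)
    best, best_perm = -1, None
    for perm in permutations(range(n)):   # perm[i] = head matched to requester i
        total = sum(w[i][perm[i]] for i in range(n))
        if total > best:
            best, best_perm = total, perm
    return best, best_perm

best, assignment = max_weight_matching(weight)   # → saving 11 with heads (0, 2, 1)
```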
We then extend D2D offloading to hybrid offloading with social relationships taken into account. In this context, we propose a hybrid multicast-based task execution framework for mobile edge computing, where a crowd of mobile devices at the network edge leverage network-assisted D2D collaboration for wireless distributed computing and outcome sharing. The framework is social-aware in order to build effective D2D links. A key objective of this framework is to achieve an energy-efficient task assignment policy for mobile users. To do so, we first introduce the social-aware hybrid computation offloading system model, and then formulate the energy-efficient task assignment problem taking into account the necessary constraints. We next propose a Monte Carlo Tree Search-based algorithm, named TA-MCTS, for the task assignment problem. Simulation results show that, compared to four alternative solutions from the literature, our proposal can reduce energy consumption by up to 45.37%.
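The tree-search idea can be sketched as a minimal MCTS over sequential task-to-device decisions; the energy matrix, problem size and reward shaping below are invented for illustration and this is not the thesis's TA-MCTS model:

```python
import math

# energy[t][d]: assumed energy cost of running task t on device d (toy numbers).
energy = [[3.0, 1.0, 2.0],
          [2.0, 2.5, 0.5],
          [1.5, 3.0, 1.0]]
T, D = len(energy), len(energy[0])

class Node:
    def __init__(self, depth):
        self.depth = depth
        self.children = {}            # action (device index) -> child Node
        self.visits = 0
        self.value = 0.0              # sum of backed-up rewards

def mcts(iters=500, c=1.4):
    """Assign one device per task; one tree level per task, UCB1 selection."""
    root = Node(0)
    best_val, best = float("-inf"), None
    for _ in range(iters):
        node, path, prefix = root, [root], []
        while node.depth < T:
            untried = [a for a in range(D) if a not in node.children]
            if untried:               # expansion: try a new device first
                a = untried[0]
                node.children[a] = Node(node.depth + 1)
            else:                     # selection: UCB1 over tried devices
                a = max(node.children, key=lambda x:
                        node.children[x].value / node.children[x].visits
                        + c * math.sqrt(math.log(node.visits + 1)
                                        / node.children[x].visits))
            prefix.append(a)
            node = node.children[a]
            path.append(node)
        val = -sum(energy[t][d] for t, d in enumerate(prefix))  # reward = -energy
        if val > best_val:
            best_val, best = val, list(prefix)
        for n in path:                # backpropagation
            n.visits += 1
            n.value += val
    return best, -best_val

assignment, cost = mcts()
```

On this tiny instance the search covers the whole tree and returns the minimum-energy assignment; the point of MCTS is that the same selection/expansion/backpropagation loop still yields good assignments when the tree is far too large to enumerate.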
Last but not least, these offloading decision problems are combinatorial optimization problems and are NP-hard. To overcome the resulting complexity, we formulate the offloading decision problem as a multi-label classification problem and develop a Deep Supervised Learning (DSL) method to minimize the computation and offloading overhead. As the number of fine-grained components n of an application grows, the exhaustive strategy suffers from exponential time complexity O(2^n).
Fortunately, the complexity of our learning system is only O((mn)^2), where m is the number of neurons in a hidden layer, which indicates the scale of the learning model.
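The exponential baseline is easy to see concretely; the per-component costs below are invented, and the exhaustive search is exactly the O(2^n) enumeration the DSL classifier is meant to avoid:

```python
from itertools import product

# Each of the n components either runs locally (0) or is offloaded (1);
# costs are illustrative assumptions.
local_cost  = [5, 3, 4, 2]
remote_cost = [2, 4, 1, 3]
n = len(local_cost)

strategies = list(product([0, 1], repeat=n))    # all 2**n decision vectors
best = min(strategies,
           key=lambda s: sum(remote_cost[i] if s[i] else local_cost[i]
                             for i in range(n)))
# a trained multi-label classifier would predict such a 0/1 vector directly
# from application features, skipping this exponential enumeration
```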
Defence: 05/04/2018 - 10:30 - Campus Pierre et Marie Curie, Sorbonne Université, LIP6: Corridor 24-25, room 405
Jury members:
Nadjib AITSAADI, ESIEE Paris - France [Rapporteur]
Paolo BELLAVISTA, University of Bologna - Italy [Rapporteur]
Sidi-Mohammed SENOUCI, University of Bourgogne - France
Stefano PARIS, Huawei - France
Stefano SECCI, LIP6, Sorbonne Université
Rami LANGAR, Université Paris Est Marne-la-Vallée - France

