The Adopnet team pursues three main research axes: multimedia content delivery, fixed-mobile convergence, and network monitoring.
The traffic related to multimedia content, and in particular video, has increased significantly over the past few years. This growth is expected to continue with the advent of new video formats (e.g., HEVC, multi-view, and Ultra High Definition) and the integration of multimedia into our daily lives (e.g., video in education). More generally, the world is switching from TV with a handful of broadcasters to OTT (Over-The-Top) video services with thousands of broadcasters. An even bigger challenge comes from the new features of multimedia services, such as interactivity, personalization, and adaptability.
Today’s multimedia services offer some interactive features, where the end-users can control the video consumption to some extent. Such interactivity imposes stringent requirements: cloud gaming, for example, requires an overall response time below 120 ms for an acceptable Quality of Experience (QoE). This trend is expected to strengthen in the coming years with the popularity of haptic controllers. The latency of today’s cloud architecture is not low enough to guarantee the QoE of such interactive services. To deliver content with ultra-low response time, the most appealing architecture is a Content Delivery Network (CDN) whose servers are very close to the end-users, in other words at the edges of the network. It is thus natural that network operators develop their ability to leverage devices close to the end-users.
In the meantime, the personalization of multimedia services is also a major, lasting trend. With the wide adoption of HTTP Adaptive Streaming technologies, the servers propose several representations of a given video, and it is up to the end-users to choose the representation that best matches their characteristics. The CDNs therefore have to take into account the characteristics of every end-user to prepare the content, distribute it to the edge servers, and deliver it to the end-users.
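The representation-selection step described above can be sketched with the simple throughput-based rule used by many HTTP Adaptive Streaming clients: pick the highest bitrate that fits a safety-discounted throughput estimate. The bitrate ladder and safety margin below are illustrative assumptions, not values from a specific deployment.

```python
# Minimal sketch of throughput-based representation selection in HTTP
# Adaptive Streaming. Ladder and safety margin are illustrative assumptions.

def select_representation(available_kbps, ladder_kbps, safety=0.8):
    """Pick the highest representation the client can sustain.

    available_kbps -- throughput estimated from past segment downloads
    ladder_kbps    -- bitrates of the representations advertised by the server
    safety         -- fraction of the estimate the client dares to use
    """
    budget = available_kbps * safety
    feasible = [b for b in sorted(ladder_kbps) if b <= budget]
    # Fall back to the lowest representation when nothing fits the budget.
    return feasible[-1] if feasible else min(ladder_kbps)

ladder = [350, 700, 1500, 3000, 6000]        # kbit/s, a hypothetical ladder
print(select_representation(4000, ladder))   # 3000: highest under 0.8 * 4000
```

Real clients refine this rule with buffer-level feedback, but the bitrate-versus-budget comparison above is the core of the adaptation loop.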
The objective of ADOPNET is twofold:
- to contribute to the development of new technologies that enhance multimedia delivery, for example adaptive streaming over evolved Multimedia Broadcast Multicast Services (eMBMS) and video-friendly multi-path end-to-end protocols such as Multipath TCP (MPTCP);
- to work on architectures for content delivery, for example content placement, network dimensioning, and server management in the fog.
Today, customers can access services via fixed-line networks or via radio access networks (RAN). Controlling these access networks consists both in controlling each access network individually and in allowing concurrent access to several of them. Up to now, fixed and mobile access networks have been optimized and have evolved independently, with partly contradictory trends (e.g., centralization of fixed networks, decentralization of mobile networks). Currently, there is a complete functional and physical separation between fixed-line access/aggregation networks and mobile networks.
Fixed-Mobile Convergence (FMC) at the network level focuses on the design of procedures enabling users to dynamically select one access network (or possibly several) for a given service, and enabling network operators to effectively share deployed resources (links and equipment) between fixed and mobile accesses. The advent of Digital Radio-over-Fiber technologies (and the companion Cloud-RAN concept) and the generalization of heterogeneous cellular networks increase both the dynamicity and the heterogeneity of the traffic flows that the access/aggregation networks should accommodate. This raises new issues for optical networks, which can be addressed by developing virtualization techniques, to obtain easily manageable networks, and optical switching, to combine energy efficiency and high quality of service. From a pure radio point of view, it also extends the possibility of developing multi-radio-access-technology (RAT) selection algorithms and opportunistic, energy-efficient radio resource management procedures.
- Virtualization of optical networks. Transmission over optical fibers has unique features: large bandwidth, low loss, low cost, light weight, immunity to electromagnetic interference, and corrosion resistance. However, the management of optical networks is a very challenging task. Network virtualization can make this management much more efficient, and thus lead to a very efficient use of available network resources: resources are managed as logical services rather than as physical equipment. Thanks to the high degree of manageability provided by network virtualization, network operators can improve network efficiency while maintaining high standards of flexibility, scalability, security, and availability, which in turn reduces their capital and operational costs.
- Advanced optical networks. Several forecasts have emphasized that distribution/aggregation networks, also called Metropolitan Area Networks (MAN), are particularly impacted by traffic evolution. Future MANs should fulfill several requirements: quick adaptation to varying traffic demands, efficient support of both fine-granularity and large-volume traffic demands, and possible isolation of different clients’ flows, together with excellent QoS, energy efficiency, and low Operational Expenditures (OPEX). Optical packet/burst switching (OPS/OBS) combines sub-wavelength granularity with optical transparency and is thus energy efficient. The challenge is to achieve a high multiplexing gain together with a QoS similar to the one provided by electronic switching, and to develop efficient Medium Access Control (MAC) mechanisms with contention avoidance. In the context of Fixed-Mobile Convergence, fiber-based access technologies can be used to fronthaul and backhaul the traffic generated by mobile users. Our objective is to propose dynamic and adaptive control of interfaces and routes to allow an efficient use of available resources in access and aggregation networks.
- Multiple Access Technology Selection. Different RATs, including the 3GPP and IEEE families, are now widely deployed. A key evolution will be a tighter integration of the fixed access with the different RATs. Our objective is to consider two aspects: i) the optimization of the architecture to allow a better integration of the different access technologies in a convergence perspective, and ii) the optimization of the selection algorithms.
- Radio Resource Management. Radio Resource Management (RRM) algorithms or heuristics are a key element for providing high system throughput and high mobile user satisfaction. We focus on two aspects of RRM: power allocation and scheduling. We work on RRM issues in cellular networks where part of the energy comes from renewable sources such as wind and solar. We also consider RRM proposals for cellular M2M with different QoS requirements and according to different criteria, starting with energy efficiency. We propose opportunistic scheduling techniques, which take advantage of multipath fading and multi-user diversity to provide high throughput. Our specific approach is to take into account the variability of the traffic and the queuing aspects. We propose scheduling algorithms for hybrid networks where a terminal can relay the traffic of other terminals, and propose to combine this with opportunistic routing.
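The multi-RAT selection problem mentioned above can be illustrated with a minimal utility-based heuristic: score every candidate access network with a weighted combination of its attributes and pick the best. The attributes, weights, and values below are assumptions made for this sketch, not a standardized algorithm.

```python
# Illustrative utility-based multi-RAT selection. All attribute names,
# weights and network values are invented for the example.

def select_rat(candidates, weights):
    """candidates: {name: {"throughput": Mbit/s, "delay": ms, "energy": cost}}."""
    def utility(m):
        # Throughput is a benefit; delay and energy consumption are penalties.
        return (weights["throughput"] * m["throughput"]
                - weights["delay"] * m["delay"]
                - weights["energy"] * m["energy"])
    return max(candidates, key=lambda name: utility(candidates[name]))

nets = {
    "LTE":   {"throughput": 30, "delay": 40, "energy": 2.0},
    "Wi-Fi": {"throughput": 50, "delay": 15, "energy": 1.0},
}
w = {"throughput": 1.0, "delay": 0.5, "energy": 5.0}
print(select_rat(nets, w))   # "Wi-Fi" under these weights
```

Changing the weights models different convergence policies, e.g. an energy-saving profile versus a latency-sensitive one.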
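A classic instance of the opportunistic scheduling idea in the RRM bullet is the proportional-fair rule: in each slot, serve the user with the highest ratio of instantaneous rate to smoothed average throughput, which exploits multi-user diversity while preserving long-term fairness. The sketch below uses random rates as stand-ins for channel-quality feedback; it is a textbook rule, not the team's specific algorithm.

```python
# Sketch of an opportunistic proportional-fair scheduler. Random draws
# replace real channel-quality indicator feedback.

import random

def proportional_fair(n_users, n_slots, alpha=0.1, seed=1):
    rng = random.Random(seed)
    avg = [1e-6] * n_users               # smoothed throughput per user
    served = [0] * n_users               # slots granted to each user
    for _ in range(n_slots):
        inst = [rng.uniform(0.1, 10.0) for _ in range(n_users)]  # fading
        # Serve the user with the best instantaneous-to-average ratio.
        u = max(range(n_users), key=lambda i: inst[i] / avg[i])
        served[u] += 1
        for i in range(n_users):         # exponential moving average update
            r = inst[i] if i == u else 0.0
            avg[i] = (1 - alpha) * avg[i] + alpha * r
    return served

print(proportional_fair(4, 1000))        # roughly balanced slot counts
```

With statistically identical channels the slot counts come out roughly equal; asymmetric channels shift slots toward users observed at their fading peaks.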
Network monitoring refers to the observation of the network and its traffic by means of probes of various types, and to the analysis of those measurements. The goal is to gain information about the traffic or about the state of the network and its devices.
The dramatic increase of user data traffic, due to the popularity of video content and to the increased data rates of access networks, puts high pressure on the design of probes: they should be fast enough to capture traffic without sampling, and easily configurable to cope with the dynamicity of the network and with the needs of monitoring applications. Advanced data analysis methods are needed to process measurement data and extract traffic descriptors, build traffic models, or raise alarms in case of anomalies. It is also necessary to orchestrate the measurements at different probes, to semantically analyse the different sources of measurements, and to communicate from the measurement layer to other layers in order to trigger counter-measures.
Network monitoring finds applications in various areas. A first application is the characterization of network usage (e.g., bandwidth consumption and variability). A second is the characterization of the network infrastructure, to assist the network operator in operating and maintaining the network. A third application addresses security issues; for example, the early detection of attacks distributed through botnets relies on traffic analysis at the level of different probes in the network.
- traffic monitoring acceleration for flexible and very high capacity traffic monitoring probes. We develop the concept of traffic monitoring acceleration in order to reach bit rates of dozens to hundreds of Gbit/s, with different approaches: some are based on hardware acceleration on FPGAs, others on specific capture engines and software optimization mechanisms.
- detection/localization of failures in access networks. Failures in access networks trigger hundreds of alarms, and it is very difficult to find their root causes with rule-based methods: the number of rules to maintain in order to cover every possible case is very large. It is therefore interesting to complement rule-based approaches with probabilistic approaches that model the dependencies between failures, alarms, and signal levels on the network equipment. In particular, we develop an approach based on Bayesian network modelling in order to locate failures in GPON-FTTH networks.
- traffic anomaly detection for network security. Anomalies in traffic can reveal ongoing attacks such as flooding attacks. Traffic anomaly detection involves building traffic models, continuously monitoring traffic in order to extract appropriate traffic descriptors and triggering an alarm when the observed behavior significantly diverges from the model.
- Big Data technologies for network management. One of our objectives is to design a mechanism that gathers fine-grained data about the QoS actually perceived by the end-users in (almost) real time. A related objective is to identify whether the cause of a QoS degradation is internal to the network operator or due to an actor outside its scope. With such an approach, the network operator would be able to manage its network based on the client-perceived QoS (or QoE) rather than on the traditional network-equipment QoS. This objective requires analyzing data from a huge number of sources, and thus developing statistical tools that group data flows and find correlations in subsets of data.
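As a toy illustration of the probabilistic failure-localization idea above, one can assume exactly one faulty element and combine each candidate fault's prior with the likelihood of the observed alarm pattern, naive-Bayes style. The network elements, priors, and likelihoods below are invented for the sketch; a real GPON-FTTH model is a full Bayesian network over equipment, alarms, and signal levels.

```python
# Toy single-fault posterior computation. All elements, priors and
# conditional probabilities are made-up illustrative values.

def posterior(priors, likelihood, alarms):
    """priors: {fault: P(fault)}; likelihood: {fault: {alarm: P(alarm|fault)}};
    alarms: {alarm: bool} observed pattern. Returns P(fault | alarms)."""
    scores = {}
    for f, p in priors.items():
        for a, seen in alarms.items():
            pa = likelihood[f].get(a, 0.01)      # small default probability
            p *= pa if seen else (1.0 - pa)
        scores[f] = p
    z = sum(scores.values())                     # normalize with Bayes' rule
    return {f: s / z for f, s in scores.items()}

priors = {"OLT_port": 0.2, "splitter": 0.3, "drop_fiber": 0.5}
lik = {
    "OLT_port":   {"LOS_all_ONTs": 0.9,  "LOS_one_ONT": 0.1},
    "splitter":   {"LOS_all_ONTs": 0.7,  "LOS_one_ONT": 0.3},
    "drop_fiber": {"LOS_all_ONTs": 0.05, "LOS_one_ONT": 0.9},
}
obs = {"LOS_all_ONTs": False, "LOS_one_ONT": True}
post = posterior(priors, lik, obs)
print(max(post, key=post.get))   # "drop_fiber": one isolated loss of signal
```

The same Bayes-rule computation, scaled to real dependency graphs, is what makes probabilistic localization complementary to rule bases: the alarm pattern reweights all candidate causes at once instead of matching one rule.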
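The anomaly-detection loop in the bullet above (build a model, monitor descriptors, alarm on divergence) can be sketched with an exponentially weighted moving estimate of one traffic descriptor: raise an alarm when a sample deviates from the baseline by more than k standard deviations. The descriptor, thresholds, and warm-up length are illustrative choices.

```python
# Sketch of model-based traffic anomaly detection on one descriptor
# (say, packets per second). Parameters are illustrative.

class EwmaDetector:
    def __init__(self, alpha=0.1, k=4.0, warmup=20):
        self.alpha, self.k, self.warmup = alpha, k, warmup
        self.mean, self.var, self.n = 0.0, 1.0, 0

    def update(self, x):
        """Feed one sample; return True when it looks anomalous."""
        self.n += 1
        if self.n == 1:                  # bootstrap the baseline
            self.mean = x
            return False
        dev = x - self.mean
        anomalous = (self.n > self.warmup
                     and abs(dev) > self.k * self.var ** 0.5)
        # Only learn from normal samples so an attack cannot poison the model.
        if not anomalous:
            self.mean += self.alpha * dev
            self.var = (1 - self.alpha) * (self.var + self.alpha * dev * dev)
        return anomalous

det = EwmaDetector()
for x in [100 + (i % 5) for i in range(50)]:   # steady traffic around 102
    det.update(x)
print(det.update(103), det.update(5000))       # normal sample, then a flood
```

A flooding attack shows up as a deviation of hundreds of standard deviations, while ordinary traffic variability stays inside the learned band.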
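The correlation step of the Big Data objective can be sketched as follows: given per-flow time series of a QoS indicator, group the flows whose series are strongly correlated, since flows degrading together hint at a shared cause inside the operator's network, while an isolated degradation points outside its scope. The flow values and the greedy grouping rule below are assumptions for the sketch.

```python
# Sketch of correlation-based grouping of per-flow QoS time series.
# Flow names and values are invented illustrative data.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def correlated_groups(series, threshold=0.9):
    """Greedy grouping: a flow joins the first group containing a member it
    correlates with above the threshold; otherwise it starts a new group."""
    groups = []
    for name, s in series.items():
        for g in groups:
            if any(pearson(s, series[m]) >= threshold for m in g):
                g.append(name)
                break
        else:
            groups.append([name])
    return groups

flows = {
    "flow_a": [10, 9, 8, 4, 3, 2],    # throughput degrades over time
    "flow_b": [11, 10, 8, 5, 3, 1],   # degrades the same way
    "flow_c": [9, 10, 9, 10, 9, 10],  # stable
}
print(correlated_groups(flows))       # flow_a and flow_b group together
```

At operator scale the same idea runs over streaming aggregates rather than raw lists, but the grouping logic is the statistical core.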