Among the experimental sciences, computer science has a very particular characteristic: it generates some of its most central objects of study itself, such as the internet or large software systems. De facto, computer scientists know perfectly well the atoms and cells composing these objects: they actually design, document, build, and... assemble them. For the internet, the atoms are routers, cables, and protocols. For large software, the cells are programming languages, function libraries, and compilers. Computer scientists may easily zoom in and out, from the electrical signal to the global system; they may refer to documentation, to maps defining the architecture of the object, or even to source code. These key objects of computer science are indeed engineered objects, which makes computer science a priori very different from the natural or social sciences.
Yet these objects are far from mastered; this is particularly clear for large software, whose bugs punctuate our interactions with the digital world and can cause disasters. More generally, these objects raise challenging questions regarding their structure, behavior, resilience, and performance, which cannot be answered from detailed knowledge of their construction alone. Indeed, this construction, far from strictly following the vision of a single, consistent architect, actually spans long periods of time (decades for the internet and large software), and most decisions regarding it are not centralized but distributed. From these processes emerge systems consisting of millions or billions of a priori simple building blocks, whose collective behavior escapes our understanding.
In this situation, the computer scientist's naturally bottom-up vision (from the elementary bricks to the global system) is no longer enough. It must be supplemented by a top-down approach based on measurements, probes, in silico or real-world experiments, and observation. The history of computer science has seen several revolutions stemming from such approaches. One can think of the emergence of RISC processors in the 1980s, for example, following an analysis of the processor instructions actually used in practice; or of measurements of the internet since the 1990s that constantly contradict the intuition of its designers. But top-down approaches remain largely under-considered in computer science, seen as an abdication of our position as creators, and are rarely present in courses: the emphasis is on the design of systems, not on the analysis of existing ones.
The Complex Systems department at LIP6 proposes to tackle these challenges by combining top-down and bottom-up approaches in the design and study of computer science objects. Its research is conducted by 35 permanent researchers grouped into three teams: APR (methodological aspects based on algorithmics and programming theory), ComplexNetworks (focused on structural aspects, with graphs and networks as key formalisms), and MoVe (study, design, security, and verification of complex software).
Matthieu.Latapy (at) lip6.fr