Liquid Architectures

In the Liquid Architectures research, we turn the tables on the traditional design of computing systems by enabling the underlying hardware (processors, memories, and networks) to dynamically adapt itself to the application requirements and the operating environment, either by tuning run-time parameters or by adding new functionalities via reconfigurable technologies. This is a departure from static (fixed) designs that cannot change their own functionality, and from configurable designs that only change their functionality in between applications. The main advantage lies in the efficient use of hardware resources at run-time: only the necessary hardware components are turned on (coarse-grain approach) or instantiated (fine-grain approach), running at the speed needed to meet all functional and temporal requirements. Consequently, large power/energy savings can be achieved, as only the necessary hardware needs to be online during execution. We leverage information from different sources (the application designer, the compiler, and real-time monitors) to dynamically determine the optimal set of run-time parameters for the running application(s).

Several current research trends point towards liquid architecture design. In the following, we briefly present an overview of these trends:

  • Reconfigurable computing (RC) has traditionally focused on static (re-)configuration via coarse-grain or fine-grain approaches that allow changes to the hardware and, therefore, adaptation to the functional and temporal requirements of applications. Researchers have worked on many aspects, ranging from reconfiguration technologies and reconfiguration strategies to tools performing C-to-VHDL translation. The latter aim to make RC accessible to software programmers, as opposed to hardware designers, in order to exploit the full potential of RC. The RC field is by no means finished, as new reconfiguration technologies and new applications are still being introduced every day. However, we expect many researchers to make the switch towards dynamic reconfigurability, as it is the next logical step to reduce resource utilization and to address power consumption, which has always been the Achilles' heel of RC.
  • Low-power designs have their merits when they target a specific application domain. Lengthy, manual design of dedicated hardware optimizations, combined with rigorous software tuning, is needed to meet stringent requirements. An additional caveat is that these designs remain strongly tied to their targeted domain(s). However, when requirements can be relaxed, there is a huge opportunity to exploit this freedom and arrive at more generic (parameterized) designs that can be applied to additional application domains and, thereby, amortize their high design costs over more products or product generations.
  • Multi-core designs are the result of the end of Dennard scaling combined with the continuation of Moore's law. More precisely, the number of transistors keeps increasing, but this no longer translates into increased performance. The main reason is that power density no longer remains constant: the reduction in feature sizes no longer results in lower power consumption, as voltages and currents cannot be decreased further without sacrificing reliability. As transistors effectively perform "less work", the industry has chosen to use the additional transistors to build multi-core designs, thereby relying on parallelism to further improve performance. However, parallel programming has proven to be as elusive as it was decades ago, and simply adding more cores does not yield a linear increase in performance. Moreover, the notion of dark silicon means that not all transistors can be used simultaneously. In addition, the generality of the (initially homogeneous) cores means that they may perform badly for certain applications. This led to the introduction of heterogeneous cores to deal with a varying set of applications, but how many cores should be included in a design, and how many types of (heterogeneous) cores are enough? The limitation clearly lies in the fixed design and in the fact that all design choices are made prior to fabrication. Consequently, this points to new designs in which the functionality of the cores can be adapted over (run-)time.
  • Static designs demand increasingly sophisticated software tools to translate application requirements to the underlying hardware, as ever more applications need to be supported. The field of compiler research is already vast and has only grown larger with the introduction of reconfigurable computing. As a result, the transformation of code written in a high-level programming language into binaries running on the hardware, and the associated design-space exploration (DSE), has exploded in recent years. With adaptable hardware designs, this DSE can be greatly reduced, as the compiler can rely on the hardware to further optimize its own execution. One can imagine a close interplay between compilers and the underlying liquid architecture.
  • Fault tolerance is becoming increasingly important, as smaller feature sizes make chips more susceptible to radiation and, at the same time, less reliable. The latter is usually compounded by aging, which further reduces reliability over time due to use (wear and tear). To avoid the costly inclusion of redundant hardware, researchers have started to look for new hardware and software techniques that monitor the hardware and adapt it according to its (reliability) state and the level of reliability that needs to be provided.
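The dark-silicon argument above can be made concrete with a little arithmetic. The sketch below uses purely illustrative numbers (the 100 W budget, 64-core chip, and 2.5 W per core are assumptions, not measurements) to show how a fixed chip power budget caps the fraction of cores that can be active at once when per-core power no longer shrinks with feature size:

```python
def active_core_fraction(power_budget_w, cores_on_chip, power_per_core_w):
    """Fraction of cores that can run simultaneously under the power budget."""
    powerable = power_budget_w / power_per_core_w
    return min(1.0, powerable / cores_on_chip)

# Example (assumed numbers): a 100 W budget, 64 cores, 2.5 W each ->
# only 40 cores can be on, so ~37.5% of the silicon stays dark.
frac = active_core_fraction(100.0, 64, 2.5)
print(f"active fraction: {frac:.3f}")  # 40/64 = 0.625
```

An adaptive design turns this constraint into an opportunity: instead of leaving fixed cores dark, the powered-off resources can be reconfigured or reassigned to whatever the running applications need.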

With these trends in mind, we defined the following foci within our liquid architectures research:

  • In liquid computing, we focus on the design of new parameterized processors that can change their own functionality and performance through run-time parameters. In particular, we developed a dynamically reconfigurable VLIW processor that can tune the number of issue slots per core depending on the instruction-level parallelism of the application. Unused issue slots can be turned off or combined to construct more cores in order to exploit task-level parallelism. The work is available under an academic license giving access to our designs (in VHDL) and supporting tools, e.g., back-ends for the HP VEX, gcc, Open64, CoSy, and LLVM compilers. At the same time, we employ a fine-grain reconfiguration method on FPGAs to add functionalities when needed. Finally, combined with binary translation, we intend to develop a universal binary translation engine capable of running any code.
  • In liquid memories, we focus on the design of new parameterized memory hierarchies that can dynamically change their organization to match the dynamic core mentioned above. For example, we are working on the dynamic merging and separation of caches to complement the core's requirements.
  • In liquid networking, we focus on the design of parameterized NoCs that can adapt their topology and organization to the requirements of the applications mapped on top of them. At compile-time or run-time, the network needs (e.g., bandwidth) can be determined and then provided during execution through the proper use and timely configuration of the NoC.
  • In liquid tools, we are developing the supporting toolchains for our work, including the back-end compiler, simulators, and an operating system running on our new dynamic core. Here, we focus on the communication and interplay between the tools and the underlying hardware, and on new (run-time) scheduling techniques that can fully exploit the capabilities of our liquid architecture.
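To illustrate the kind of run-time decision the dynamically reconfigurable VLIW enables, the sketch below picks an issue width from the measured instruction-level parallelism (ILP) and regroups leftover issue slots into extra narrow cores for task-level parallelism. The 8-slot fabric, the supported widths, and the policy itself are illustrative assumptions, not our actual design:

```python
# Run-time controller sketch for a reconfigurable VLIW (assumed parameters).
TOTAL_SLOTS = 8
WIDTHS = (8, 4, 2)  # supported issue widths, widest first

def choose_width(measured_ilp):
    """Smallest supported width that still covers the observed ILP."""
    for w in reversed(WIDTHS):      # try the narrow configurations first
        if w >= measured_ilp:
            return w
    return WIDTHS[0]                # ILP exceeds all widths: go widest

def configure(measured_ilp):
    """Return (issue width of the main core, number of spare 2-wide cores)."""
    width = choose_width(measured_ilp)
    spare_slots = TOTAL_SLOTS - width
    return width, spare_slots // 2  # unused slots become extra 2-wide cores

print(configure(1.6))  # low ILP  -> (2, 3): narrow core plus three spare cores
print(configure(3.2))  # high ILP -> (4, 2)
```

The same information could come from the compiler (static ILP estimates) or from real-time monitors (measured ILP per application phase), matching the sources of run-time parameters mentioned earlier.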
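The cache merging mentioned under liquid memories can be sketched as an indexing change: in split mode each core indexes its private cache, while in merged mode both caches act as one structure with twice the sets, with one extra index bit selecting the physical half. The line size and set count below are assumptions for illustration only:

```python
LINE = 64   # bytes per cache line (assumed)
SETS = 128  # sets per private cache (assumed)

def locate(addr, merged):
    """Return (physical cache half, set index) for an address."""
    if not merged:
        return 0, (addr // LINE) % SETS      # private cache of core 0 only
    idx = (addr // LINE) % (2 * SETS)        # merged: twice as many sets
    return idx % 2, idx // 2                 # extra low bit picks the half

# The same address maps to a different set in each configuration, which is
# why merging or splitting must flush or migrate the live cache lines.
print(locate(0x12340, merged=False))  # (0, 13)
print(locate(0x12340, merged=True))   # (1, 70)
```

This also hints at the cost model the run-time system must weigh: the larger merged cache reduces misses for a single wide core, but switching configurations is not free.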
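For liquid networking, the compile-time determination of network needs can be sketched as a simple provisioning pass: sum the bandwidth demand of every flow over each link of its route, then enable just enough link capacity (here modeled as lanes) to meet it, gating off unused links entirely. The link names, lane model, and numbers are hypothetical:

```python
import math

def provision(links, flows, lane_bw=1.0):
    """Return lanes to enable per link; 0 means the link can be gated off.

    links: iterable of link names.
    flows: list of (route, bandwidth), where route lists traversed links.
    """
    demand = {link: 0.0 for link in links}
    for route, bw in flows:
        for link in route:
            demand[link] += bw               # aggregate demand per link
    return {link: math.ceil(d / lane_bw) for link, d in demand.items()}

cfg = provision(
    links=["A-B", "B-C", "B-D"],
    flows=[(["A-B", "B-C"], 1.5), (["A-B", "B-D"], 0.4)],
)
print(cfg)  # {'A-B': 2, 'B-C': 2, 'B-D': 1}
```

The same computation run by a real-time monitor, instead of the compiler, would give the run-time variant: reconfiguring the NoC as the mix of mapped applications changes.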

The work described above is being used in several application domains: space, automotive, multimedia, and high-performance computing. In all these domains, energy efficiency, performance, availability, and reliability play important roles, and this is where the liquid architecture can prove its benefits.

The work is carried out by faculty members and PhD and MSc students, and it is funded from different sources, including sponsoring from companies, EU project funding, and national funding agencies.