Humans of Electrical Engineering, Mathematics and Computer Science

Said Hamdioui

There is a need for a revolution, rather than an evolution, in Computer Engineering

Said Hamdioui of TU Delft’s Computer Engineering section was recently appointed full professor of Dependable and Emerging Computer Technologies. He shared his crystal-clear perspective on the challenges his research field is facing, as well as the approaches and resources needed to address them. His work spans two lines of research: dependability and emerging computing paradigms based on emerging device technologies.

Dependability

Said: “As a result of constant-field scaling, the globalisation of the integrated circuit (IC) supply chain and the internet of things (IoT, networked devices), dependability research of the future is being shaped by major challenges, especially those relating to security, testability and reliability.”

Security: “Attacks are likely to increase in size and diversify in nature, driven by the expanding number of services available online and the increasing sophistication of cybercriminals, who are engaged in a cat-and-mouse game with security experts. The use of the cloud and data centres, the development of the IoT and the globalisation of the IC supply chain are likely to produce new online threats, as these will create new opportunities for cybercriminals. There is an urgent need to search for solid cyber-defence strategies, not only against known types of attack, but also against unknown types that hackers may develop in the future; and not only at the software level, as is the case today, but also at the hardware level. The solutions should be end-to-end (from sensors to actuators/cloud), with hardware security playing a critical role. Security intelligence (design-for-security) should be built into the chip hardware of every connected object (covering secure storage, smart data, secure processing, secure communication, and so forth), and security should extend to the run-time cooperation of all on-chip components and all connected objects.”
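To make the idea of design-for-security at the hardware level more concrete, here is a minimal sketch of one common building block: a hardware root of trust that anchors secure boot and remote attestation in the chip itself. This is an illustration under assumptions, not a description of Hamdioui's specific designs; the constants, key names and image contents are hypothetical.

```python
# Minimal sketch (illustrative, not the author's design): a hardware root of
# trust, where a connected object verifies its firmware against a digest
# anchored in immutable on-chip storage before booting, and answers
# attestation challenges with a key that never leaves the chip.
import hashlib
import hmac

# Assumption: this digest would be provisioned into one-time-programmable (OTP)
# on-chip memory at manufacturing time; here it is simply a constant.
TRUSTED_FIRMWARE_DIGEST = hashlib.sha256(b"firmware v1.0 image").hexdigest()

# Assumption: a device-unique secret key kept inside the chip, never exported.
DEVICE_SECRET_KEY = b"per-device-secret-stored-on-chip"


def secure_boot(firmware_image: bytes) -> bool:
    """Boot only if the firmware matches the digest anchored in hardware."""
    measured = hashlib.sha256(firmware_image).hexdigest()
    return hmac.compare_digest(measured, TRUSTED_FIRMWARE_DIGEST)


def attest(challenge: bytes) -> bytes:
    """Answer a remote attestation challenge using the on-chip secret key."""
    return hmac.new(DEVICE_SECRET_KEY, challenge, hashlib.sha256).digest()


if __name__ == "__main__":
    print("genuine firmware boots:", secure_boot(b"firmware v1.0 image"))
    print("tampered firmware boots:", secure_boot(b"firmware v1.0 image (malware)"))
    print("attestation response:", attest(b"nonce-123").hex()[:16], "...")
```

In a real device the digest and key would live in dedicated secure storage and the checks would run in hardware or in an isolated boot ROM; the point of the sketch is only that the trust anchor sits below the software level.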

Testability and reliability: “Technology trends, such as reduced reliability and the high cost and complexity brought by scaling, together with huge business pressures such as time-to-market, are redefining the design, manufacturing, testability and reliability paradigms of ICs. In order to sustain the IC business, we need to design reliable systems built from unreliable devices and components. For this reason, appropriate tests, design-for-testability schemes and fault-tolerant ICs/systems integrating features for lifetime extension and failure-rate reduction are currently hot research topics. Providing self-managing dependable systems, in which on-the-fly (run-time) testing and detection, diagnosis and characterisation, and on-line repair and recovery take place, is the only way to deploy technologies below 20 nm in critical applications such as the automotive industry, healthcare, the military, and so forth. Moreover, because their nature is unlike that of digital computing, new computer technologies such as resistive computing and quantum computing will need radically new test schemes. We need new fault models, test algorithms, test schemes, diagnosis approaches, yield-improvement techniques, fault-tolerant designs, and so forth.”
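To illustrate what a fault model and a test algorithm look like in practice, below is a toy sketch of the classic March C- memory test applied to a simulated memory with an injected stuck-at fault. The March C- element sequence is standard in memory testing; the memory model, its size and the injected fault are hypothetical, and real test development targets far richer fault models (coupling, dynamic and resistive-open faults, among others).

```python
# Minimal sketch (illustrative only): a March C- test run against a toy memory
# model containing one stuck-at-0 cell. The test detects the fault because a
# value written as 1 reads back as 0 in a later march element.
N = 16  # number of memory cells in the toy model


class FaultyMemory:
    """Toy memory with one stuck-at-0 cell (hypothetical fault for illustration)."""

    def __init__(self, stuck_at_zero_cell: int):
        self.cells = [0] * N
        self.stuck = stuck_at_zero_cell

    def write(self, addr: int, value: int) -> None:
        self.cells[addr] = 0 if addr == self.stuck else value

    def read(self, addr: int) -> int:
        return self.cells[addr]


def march_c_minus(mem: FaultyMemory) -> bool:
    """Return True if a fault is detected.

    March C- elements: {up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); down(r0)}.
    """
    up, down = range(N), range(N - 1, -1, -1)
    elements = [
        (up,   [("w", 0)]),
        (up,   [("r", 0), ("w", 1)]),
        (up,   [("r", 1), ("w", 0)]),
        (down, [("r", 0), ("w", 1)]),
        (down, [("r", 1), ("w", 0)]),
        (down, [("r", 0)]),
    ]
    for order, ops in elements:
        for addr in order:
            for op, expected in ops:
                if op == "w":
                    mem.write(addr, expected)
                elif mem.read(addr) != expected:  # observed value differs from expectation
                    return True
    return False


if __name__ == "__main__":
    print("fault detected:", march_c_minus(FaultyMemory(stuck_at_zero_cell=5)))
```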

Emerging computer technologies

Said: “Today’s and emerging applications are extremely demanding in terms of storage and computing power. Data-intensive/big-data applications and the IoT will transform the future; they will not only affect every aspect of our lives, but also transform the IC and computer world. Emerging applications require a level of computing power that was typical of supercomputers only a few years ago, but within the constraints on size, power consumption and guaranteed response time that are typical of embedded applications. Today’s computer architectures and the device technologies used to manufacture them are facing major challenges that make them incapable of delivering the required functionalities and features. If we want computing systems to continue to deliver sustainable benefits for the foreseeable future, we have to explore alternative computing architectures and notions in light of emerging device technologies. Computation-in-memory and neuromorphic computing are examples of alternative computing notions, while memristive devices are an example of an emerging device technology with the potential to enable such computing paradigms.”


Computation-in-memory (CIM) architecture: “This is a foundational breakthrough in computer architecture that we developed; it integrates the processing units and the memory in the same physical location. This entails moving the working set into the computing core itself, rather than into a cache as is the case in modern computers. The CIM architecture drastically alleviates the memory/communication bottleneck of today’s architectures, provides huge scalable bandwidth, has almost zero leakage, and supports massive parallelism. This breakthrough has the potential to solve data-intensive application problems in minutes rather than days (or, in the worst case, never). It represents a major step towards scalable, ultra-low-power computation-in-memory architectures, not only for data-intensive applications, but also for low-power applications such as IoT devices (for example, wearables). Our aim is to design and demonstrate the huge potential of this architecture.”
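One way to see why computing inside the memory avoids data movement is the memristive crossbar: weights stored as device conductances perform a matrix-vector multiplication in a single analog read. The sketch below is a numerical illustration of that principle only, with made-up array dimensions and values; it does not describe the group's actual CIM design.

```python
# Minimal sketch (assumptions, not the group's actual design): a memristive
# crossbar computing a matrix-vector product "in memory". Weights are stored as
# conductances; applying input voltages on the rows makes each column current
# sum the products (Ohm's law + Kirchhoff's current law), so the operands never
# travel to a separate processor.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x3 weight matrix stored as device conductances (in siemens).
conductances = rng.uniform(1e-6, 1e-4, size=(4, 3))

# Input vector applied as voltages on the word lines (in volts).
voltages = np.array([0.2, 0.0, 0.5, 0.1])

# Each bit-line current is sum_i V_i * G_ij: one analog "read" of the array
# yields the entire matrix-vector product at once.
bitline_currents = voltages @ conductances

# The same result computed the conventional way, for comparison.
assert np.allclose(bitline_currents, conductances.T @ voltages)
print("column currents (A):", bitline_currents)
```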

Neuromorphic computing: “The basic idea of neuromorphic computing is to exploit the massive parallelism of brain-inspired circuits to create low-power, fault-tolerant information-processing systems. It offers alternative ways to build (embedded) artificial intelligence. The challenge is to understand, design, build and use new architectures for nano-electronic systems that bring together the best brain-inspired information-processing concepts (algorithms and architectures) and nanotechnology hardware.”
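As a small, concrete example of the brain-inspired building blocks such systems are made of, here is a sketch of a leaky integrate-and-fire neuron, one of the simplest spiking-neuron models used in neuromorphic hardware. The parameters and input are invented for illustration; in a neuromorphic system, very many such neurons would operate in parallel, with synaptic weights possibly stored in memristive devices.

```python
# Minimal sketch (illustrative, with made-up parameters): a leaky
# integrate-and-fire (LIF) neuron. The membrane potential integrates the input
# current, leaks towards its resting value, and emits a spike whenever it
# crosses the threshold, after which it is reset.
import numpy as np


def lif_neuron(input_current, dt=1.0, tau=20.0, v_threshold=1.0, v_reset=0.0):
    """Return the membrane-potential trace and the spike times."""
    v, trace, spikes = v_reset, [], []
    for t, i_in in enumerate(input_current):
        v += dt / tau * (-(v - v_reset) + i_in)   # leaky integration
        if v >= v_threshold:                      # threshold crossing -> spike
            spikes.append(t)
            v = v_reset                           # reset after spiking
        trace.append(v)
    return np.array(trace), spikes


if __name__ == "__main__":
    current = np.concatenate([np.zeros(20), 1.5 * np.ones(80)])  # step input
    _, spike_times = lif_neuron(current)
    print("spike times:", spike_times)
```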