A methodology and ecosystem for many-core programming

Computers are going through a radical redesign, leading to novel architectures with large numbers of small cores. Examples of such many-cores are Graphics Processing Units (GPUs) and the Intel Xeon Phi, which are used by about 65% of the top 50 fastest supercomputers.

Many-cores can deliver spectacular performance, but their programming model is fundamentally different from that of traditional CPUs.

It currently takes application programmers an unacceptable amount of time to obtain sufficient performance on these devices. The key problem is the lack of a methodology for developing efficient many-core kernels with reasonable effort.

We will therefore develop a programming methodology and compiler ecosystem that guide application developers in writing efficient scientific programs for many-cores, starting from a methodology and compiler that we have developed recently. We will apply this methodology to two highly diverse applications for which performance is currently critical: Bioinformatics and Natural Language Processing (NLP). We will extend our compiler ecosystem to address the applications' requirements in three directions: kernel fusion, distributed execution, and generation of human-readable target code.
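To give a flavour of the first of these directions, kernel fusion merges consecutive compute kernels into one, so that intermediate results stay in registers instead of making a round trip through device memory. The sketch below is not the project's compiler output; it is a minimal Python/NumPy illustration of the idea, with hypothetical function names, using a scale-after-saxpy pipeline as the example workload.

```python
import numpy as np

def saxpy_then_scale_unfused(a, x, y, s):
    # Two separate "kernels": kernel 1 writes a full intermediate
    # array to memory, kernel 2 reads it back in a second pass.
    tmp = a * x + y   # kernel 1
    return s * tmp    # kernel 2

def saxpy_then_scale_fused(a, x, y, s):
    # Fused "kernel": one loop body combines both operations, so each
    # element is loaded once and no intermediate array is materialised.
    out = np.empty_like(x)
    for i in range(x.size):
        out[i] = s * (a * x[i] + y[i])
    return out
```

On a real many-core device the fused version halves the memory traffic, which is typically the bottleneck for such bandwidth-bound kernels; a fusing compiler performs this transformation automatically across kernel boundaries.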

The project should provide application developers and eScientists with a sound methodology and the relevant understanding to enable practical use of these game-changing many-cores, boosting the performance of current and future programs.
