A key to any strategy in an uncertain world

Accelerating technological change, massive shifts in customer usage and behavior, and disruptive business models, combined with the degradation of ecosystems, create ever more uncertainty, for every actor in the economy and at a global scale.

Investment in research and innovation is a key element in defining a strategy for any company facing these challenges.

Ever more computing power, and an explosion of big-data technologies.

How can small companies develop new algorithms?

With its experience in the management of collaborative research projects, in particular for start-up companies, LogexArmor advises its partners during the early stages of a research project.
Its technical expertise in high-performance computing, and to a lesser extent in massive data-analysis technologies, helps small companies with the design and development of new algorithms and technologies with a potentially disruptive impact on the market.

LogexArmor expertise and contributions

Innovation and research projects are co-financed by major institutions

These institutions finance research and innovation through collaborative research projects associating small and medium-sized companies, large companies, public and private laboratories, research centers, schools, and universities. Participating in these projects provides direct access to large-company research teams and to the top-level academics in the field.

Managing collaborative research projects in a professional and experienced way ensures good integration with the research community and a stronger impact on the company's development, together with a sustainable innovation and financing framework over multiple years.

Parallel programming and code optimization
The gateway for high performance computing

Optimizing and parallelizing compute-intensive software calls for a strong methodology.
It must combine a clear validation protocol, a serious analysis of the code's properties, and a properly defined hardware target.
  • The validation protocol is usually the most difficult part to obtain, yet it is often necessary before any serious parallelization. It depends heavily on the field and on the expertise of the program's owner.
    With legacy codes using floating-point arithmetic, and without the original programmers at hand, establishing it can be a real challenge. Numerical stability is among the main problems.

    Parallelizing a code does not only mean optimizing it for a given machine. It also implies changing the computation order, sometimes radically. A clear, well-mastered validation protocol unlocks more parallelization opportunities, and a better selection of, and adjustment to, the hardware's capabilities.
  • The application's properties are best obtained through extensive use of the profiling and code-analysis tools available for the platform. For C/C++ or Fortran codes on a Linux system, key profiling tools are PAPI, Likwid, and Valgrind. Further code analysis and insight can be obtained on x86 architectures with the Intel VTune toolset.
    After parallelization, the parallel execution can be analyzed in greater depth with Periscope, Vampir, Scalasca, or TAU.
  • The parallelization itself can then draw on a large variety of paradigms, depending on the application and the targeted architecture.
    OpenMP is the easiest one. It is used to parallelize loops and tasks on a shared-memory, multi-core machine. With its simple memory model, it is usually the first parallelization step, and sometimes the last!

    The Message Passing Interface, or MPI, is the major parallelization method in high-performance computing. It targets a cluster of machines, seen as a grid of nodes linked by a dedicated interconnect. Quite invasive, it requires a careful partitioning of the computation and explicit management of data transfers.

    The OpenCL and OpenACC languages open up the power of hardware accelerators: boards or units specialized in massively parallel computation, of which GPUs are the best example. OpenCL extends the C language with the appropriate paradigms (computation in blocks, streams) to exploit this power. OpenACC simplifies accelerator programming and makes it more portable, usually at the cost of some performance. These approaches generally improve the energy efficiency of a computing system, but require hardware-specific optimization.

This overview is superficial and covers only a selected set of models and tools. A lot of research and standardization effort is ongoing at both the academic and industrial levels.

LogexArmor provides courses and support throughout the parallelization and code-optimization process. Its methodology places a strong emphasis on code validation and on the use of performance-analysis tools. Combined with long-standing expertise in hardware architectures, it delivers better productivity and better control of the optimization effort.

LogexArmor SASU


SASU with a share capital of €4000
RCS Rennes 809 683 683
APE (6202A): Consulting in computer systems and software
Registered office: 1 Rue Victor Janton, 35000 Rennes
Contact: contact at

Collaborative research and high performance software development