Dejan Petkow, Dr.-Ing.

Responsible for Plasma Modelling and Simulation
NEAT Project Manager
Gradel sàrl
6 Z.A.E. Triangle Vert, L-5691 Ellange, Luxembourg
E: d.petkow@gradel.lu
T: +352 39 00 44 202


-] What application will you create (or enhance) as part of the project (what does it do)

We will create software for the simulation of non-continuum flows at low and high energies. Such solvers have existed for many years for continuum flows (CFD, Computational Fluid Dynamics) and are used for R&D purposes in a predictive sense, i.e. to perform many parameter variations on a given technical problem in order to find optimum design parameters.

-] Who will use the application (will it be used by end users in the community, or people inside a company using it as a tool, or for advancing science..)

In the next 5 years, we (the company) will use it in-house a) to advance our plasma technologies and b) to provide simulation services to customers with related products where purely experimental R&D is too costly. After 5 years of software development and application we intend to go to market, i.e. the end users would then be researchers and engineers from R&D entities (including companies).

-] What are the application's computation needs that will benefit from parallel computation

  • 1. PIC solver: A plasma can be fully ionized, with several hundred million particles in the domain. All these particles interact with each other via non-collisional long-range interactions. Depending on the required accuracy, the full set of Maxwell's equations can be solved; as a minimum, the Poisson equation must be solved (which is the one we start with). The gridless PIC approach leads to ~N log N scaling, where N is the particle number (see the sketch after this list).
  • 2. DSMC/MC-FP solver: Parallelization of short-range (i.e. collisional) processes. Having tens or hundreds of millions of particles considered for collisional interaction in each time step adds a further high computational load: 100 million particles means that up to 50 million collisions may have to be calculated per time step. The scaling is ~N.
  • 3. Visualisation: Single precision should be enough, which leads to the conclusion that the numerics engine and the graphics engine require different architectures (DP/CPU vs. SP/GPU).
  • 4. Speech recognition: Parallelization requirements are unknown so far, but user friendliness requires high-quality recognition and hence efficient parallelization.
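To make the scaling argument concrete, here is a minimal, purely illustrative sketch of an electrostatic push step in C++ (the names Vec3, Particle, field_at, and step are invented for this example, not taken from our actual code). The naive field summation shown is O(N^2) per step; replacing it with a gridless tree/multipole summation is what yields the ~N log N scaling quoted above, while the push loop itself stays O(N):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Vec3 { double x, y, z; };

    struct Particle {
        Vec3 r, v;    // position and velocity
        double q, m;  // charge and mass
    };

    // Naive O(N^2) direct summation of the electrostatic field at position r.
    // A gridless tree/multipole code approximates this sum in ~log N per
    // target point, giving ~N log N over all particles per time step.
    Vec3 field_at(const Vec3& r, const std::vector<Particle>& ps) {
        const double k = 8.9875517923e9;  // Coulomb constant 1/(4*pi*eps0)
        Vec3 E{0.0, 0.0, 0.0};
        for (const Particle& p : ps) {
            const double dx = r.x - p.r.x, dy = r.y - p.r.y, dz = r.z - p.r.z;
            const double d2 = dx * dx + dy * dy + dz * dz;
            if (d2 == 0.0) continue;  // skip self-interaction
            const double w = k * p.q / (d2 * std::sqrt(d2));
            E.x += w * dx; E.y += w * dy; E.z += w * dz;
        }
        return E;
    }

    // One kick-drift step: evaluate all fields first, then push in O(N).
    void step(std::vector<Particle>& ps, double dt) {
        std::vector<Vec3> E(ps.size());
        for (std::size_t i = 0; i < ps.size(); ++i)
            E[i] = field_at(ps[i].r, ps);
        for (std::size_t i = 0; i < ps.size(); ++i) {
            Particle& p = ps[i];
            const double a = p.q / p.m;
            p.v.x += a * E[i].x * dt; p.v.y += a * E[i].y * dt; p.v.z += a * E[i].z * dt;
            p.r.x += p.v.x * dt;      p.r.y += p.v.y * dt;      p.r.z += p.v.z * dt;
        }
    }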

-] What has blocked you from using parallel computation up to this point

Two things: a) the code itself did not exist until now (it will gain 2D capability and other features in the next two years), and b) a lack of parallelization expertise.

-] How will your application provide higher benefit as a result of the parallel computation (what currently can't be done that will be enabled, or what aspect will be improved. For example, will weeks of waiting for simulation results drop to hours?)

Yes, it will! Another benefit comes from the ability to perform complex simulations coupled to an optimizer, which will allow full-scale optimization in reasonable times. Even without coupling to an optimizer, the fact that the software will have convergence criteria implemented will help users work efficiently with the product because, in the PIC-DSMC world, convergence criteria do not really exist and are a topic of research. As long as no convergence criterion is implemented, the user has to guess(!) when the simulation has finished. In fact, we will conduct the related research in-house and with academic partners in order to provide this functionality in time.
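To illustrate the kind of criterion we have in mind, here is a hypothetical sketch in C++ (the class and its parameters are invented for this example; the actual criteria are precisely the subject of the research mentioned above). It declares a run stationary once a macroscopic moment, e.g. the mean kinetic energy, stops drifting over a sampling window:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <deque>

    // Hypothetical stationarity monitor: feed one scalar sample per time
    // step; it reports "converged" once the relative spread of the samples
    // within a sliding window falls below a tolerance.
    class StationarityMonitor {
    public:
        StationarityMonitor(std::size_t window, double rel_tol)
            : window_(window), rel_tol_(rel_tol) {}

        bool update(double sample) {
            samples_.push_back(sample);
            if (samples_.size() > window_) samples_.pop_front();
            if (samples_.size() < window_) return false;  // window not full yet
            double lo = samples_.front(), hi = samples_.front();
            for (double s : samples_) {
                lo = std::min(lo, s);
                hi = std::max(hi, s);
            }
            const double mid = 0.5 * (lo + hi);
            return mid != 0.0 && (hi - lo) / std::fabs(mid) < rel_tol_;
        }

    private:
        std::size_t window_;
        double rel_tol_;
        std::deque<double> samples_;
    };

A real criterion will have to be more subtle (the statistical noise in PIC-DSMC moments does not vanish), which is exactly why this remains a research topic.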

-] Will a researcher be able to interactively search, rather than doing a scatter shot of simulations and hoping one of them was the right one?

If the simulation is coupled to an optimizer, the user will shorten the time needed to find the optimum solution. Moreover, we are teaming up with experts who have experience in coupling visualisation technologies with psychology, in order to identify application-specific visualisation strategies (not just high-resolution pictures of particle positions and properties, but "flow features of relevance"). Apart from these visualisation strategies, three central requirements for performing this kind of simulation have to be implemented: a convergence criterion, a statistical approach to the collisional treatment, and active particle management to cope with rare but important events. All of these are scheduled for development and implementation, and the development of these algorithms will be accompanied by publications in peer-reviewed journals.

-] Will a product be producible with less material or less design effort?

Yes, if the design task is formulated as an optimization problem. Of course, simulation costs won't be negligible; however, the savings in experimental R&D costs will be even more significant, so that, on balance, overall R&D costs will be reduced.

-] Will the graphics be richer, or render faster, or use less battery?

The numerics engine and the graphics engine will follow different parallelization strategies (DP for numerics, SP for graphics). Also, after 5 years we intend to start adding speech recognition to the overall system, so that in about 10 years we will have a system of complex units (numerics engine, graphics engine, speech engine), each demanding a different parallelization strategy. Being independent of the computational architecture at that point is absolutely crucial for commercial success. In fact, simulation results will increasingly be discussed in groups with heterogeneous expertise, so single-point control of the data visualisation (a mouse) will become a bottleneck in the work process; the only reasonable way out we see is speech recognition. In other words: we do not ask ourselves whether the visualisation will be faster or better, but how the visualisation interface can be designed such that multiple users are able to discuss the results at the same time (while potentially at different locations).

-] Who will receive this benefit and how (for example, will the application help cure cancer for millions of EU citizens by enabling doctors to use personalized genetics?)

The answer depends on the application:

  • In the field of low-pressure plasma based surface treatment applications, the end user will benefit from cheaper and better products with a wider range of functionalities. Society will benefit from the reduction in resources required during the process.
  • In the field of medical plasma applications, a huge number of citizens will benefit from new plasma-based wound-healing technologies. Although these plasma processes happen at atmospheric pressure, recent research demonstrates that the electron energy distributions are highly non-Maxwellian, so that for accurate results the standard continuum approaches should not be used, but rather kinetic methods like PIC, DSMC, and FP. As medical research evolves, further benefits and medical applications can be expected in the coming years.

-] What market(s) will be affected by this benefit, and will this benefit be passed on, to yet further markets

This question involves a non-negligible element of guessing the future. I don't think I can answer it.

-] Separately, for logistics of the project, we would like to start by coding in C/C++, but are open to integrating into other languages, such as Python, Java, or even Javascript. Could you say a bit about your development process:

-] What language(s) do you plan to use for the application

C/C++ for the numerics engine; the languages for the graphics engine and the speech recognition are still to be discussed; HTML5 for the web interface (if agreed as part of the strategy).

-] Is the application.. desktop based, Cloud (SAAS) based, browser based, or mobile

Probably browser based, with the possibility of using internal (i.e. non-public) networks for the whole process. However, students will have free and truly platform-independent access to learn and practice the underlying physics and models. Depending on the customer's requirements, a secure data solution will be provided for the geometry input and for the simulation output data.

-] A little bit about the architecture (do you have a server with database, or a large data set that is churned through such as Big Data style, what parts of the computation are performed on the end-user device versus in a server, and so on)

We have our own server on which the simulations will be executed. Simulation control will happen via a desktop browser. Visualisation data will be generated locally (on the desktop), so fast network connectivity to the server is recommended.


The code we develop is supposed to solve technical problems in the plasma physics domain. The type of code is called "fully kinetic": the underlying set of equations solved is the full Boltzmann equation. For this, we develop and couple three different solvers: PIC (Particle-In-Cell), DSMC (Direct Simulation Monte Carlo), and FP (Fokker-Planck). Each solver handles plasma physics processes on a certain temporal and spatial scale, hence the coupling of specialized solvers. By coupling them, a realistic collisional plasma can be simulated.
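For reference, the Boltzmann equation in question reads, per species s and in standard notation:

    \frac{\partial f_s}{\partial t}
      + \mathbf{v} \cdot \nabla_{\mathbf{x}} f_s
      + \frac{q_s}{m_s} \left( \mathbf{E} + \mathbf{v} \times \mathbf{B} \right) \cdot \nabla_{\mathbf{v}} f_s
      = \left( \frac{\partial f_s}{\partial t} \right)_{\mathrm{coll}}

where f_s(x, v, t) is the velocity distribution function. Roughly speaking, PIC covers the left-hand side (collisionless transport in the self-consistent fields), while DSMC and FP model the collision term on the right-hand side, for short-range binary collisions and small-angle Coulomb collisions respectively.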

The code development is primarily motivated by R&D of other technologies which we develop as products here at Gradel: a nuclear fusion based neutron source (product level) and a plasma thruster for space applications (prototype level). Further, we participate in various research projects, which allow us to partially fund the software development and apply it to all kinds of relevant plasma problems (relevant with respect to the underlying models). Currently, we have developed a 1D PIC code, which is going to be extended towards a 2D PIC-DSMC-FP code within the next two years.

Apart from supporting the R&D activities for the in-house plasma technologies mentioned above, we are considering developing the software further until it reaches a level of maturity that would allow us to sell it as a product.

Our team currently consists of:
  • a theoretical physicist,
  • a full-time senior software developer, who will join in May,
  • a nuclear and software engineer,
  • and me.
Additionally, we are working with students.

Typical application domains are space technology research (electric propulsion, atmospheric re-entry of e.g. capsules, satellite charging and contamination) and plasma-based surface treatment processes under vacuum conditions (various coating and de-coating processes).

Our overall strategy in terms of software development and application is:
  • 1. Apply it to in-house technologies.
  • 2. Apply it in institutional research projects.
  • 3. Apply it to industrial R&D projects.
  • 4. Sell it as a product.

Oh yes, the code is written in C/C++. Parallelization is also supposed to be added within the next two years, on the basis of multi-threading (shared memory); a minimal sketch of the idea follows below.
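Here is a minimal sketch (our own illustration, not the actual design) of how that shared-memory multi-threading could look for the O(N) particle push: each particle update is independent, so a simple OpenMP loop splits the array across cores (compile with e.g. -fopenmp):

    #include <vector>

    struct Particle { double x, v, q, m; };

    // Push all particles one time step, given precomputed fields E[i].
    // No shared writes occur, so no locks are needed.
    void push_all(std::vector<Particle>& ps, const std::vector<double>& E,
                  double dt) {
        #pragma omp parallel for
        for (long i = 0; i < static_cast<long>(ps.size()); ++i) {
            Particle& p = ps[i];
            p.v += (p.q / p.m) * E[i] * dt;  // accelerate in the local field
            p.x += p.v * dt;                 // move
        }
    }

The collisional (DSMC/FP) part will need more care, since collision partners have to be paired across threads, which is part of the parallelization expertise we currently lack.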

Please let me know if further information is required.

I am happy to say that the story for the EU, the way you drafted it, represents exactly the reality. In 2013 we performed a market analysis regarding available software solutions for the simulation of rarefied plasma flows in the industrial surface treatment context. Since the knowledge and resources required to develop such a code are extremely high, no proper solution exists, despite the fact that, in the EU alone, R&D companies (both small and big ones) spend at least €1 billion per year on advancing their low-pressure plasma processes. They either use in-house codes (which are typically insufficiently verified and validated), or some barely applicable open-source codes, or nothing, i.e. they progress by expensive experimental parameter variation.

This is the market that we would address in, say, 5-10 years: software as a service in 5 years, software as a product in 10 years. Of course I have been thinking about the problem of the customer's available computational architecture and how we would address the variety. But with EuroDSL, although I still haven't really understood how it works, I see a chance that at least this problem might be solved. Our envisaged software-related business model is about reducing the customer's R&D costs by applying highly specialized software that is able to deal with both a) the particularities of the low pressures in those sophisticated surface treatment processes and b) the variety of available parallel architectures for solving the technical problems in reasonable times.

Some low-pressure surface treatment processes of interest are:
  • ALD (Atomic Layer Deposition)
  • PE-ALD (Plasma-Enhanced ALD)
  • PE-CVD (Plasma-Enhanced Chemical Vapor Deposition)
  • PVD (Physical Vapor Deposition)
  • Sputtering

By the way, the most exotic feature of our software approach is that it works mesh-free. There may be a mesh for visualisation purposes, or for adding external field information to the local values, but we do not solve any equations on a mesh.
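To make "mesh-free" concrete: in the gridless approach, the field is evaluated directly at the particle positions rather than on a grid. For the electrostatic case this is, in illustrative notation,

    \mathbf{E}(\mathbf{x}_i) = \frac{1}{4 \pi \varepsilon_0}
        \sum_{j \neq i} q_j \, \frac{\mathbf{x}_i - \mathbf{x}_j}{\lvert \mathbf{x}_i - \mathbf{x}_j \rvert^{3}}

with the sum accelerated by a tree/multipole method to reach the ~N log N scaling mentioned earlier, instead of depositing charge on a grid and solving the Poisson equation there.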