What’s New in FieldView 21
FieldView 21 includes several new features and improvements, including a first for a CFD post-processor:
the FieldView Auto Partitioner.
- Auto Partitioner [01:18;06]
- Challenge: FieldView Parallel Load Balancing [03:12;15]
- Solution: The Auto Partitioner [06:20;02]
- Test Case #1: Single-grid PLOT3D [08:14;09]
- Test Case #2: Multi-grid PLOT3D [10:21;10]
- How to Turn It On [12:58;00]
- Support for Ghost Cells [14:49;03]
- Q&A on the Auto Partitioner [16:51;16]
- Customer Portal on MyTecplot [17:50;00]
- New License Manager (RLM) [20:22;21]
- Linear Duplication: Translate [28:33;12]
- Modern Development Environment and Methods [30:34;06]
- Q&A [32:54;29]
Q&A From the Webinar
When you auto partition, do you preserve gradients across the new partitions? [33:42]
Yes. If the gradients were exported from the solver, you won't need the ghost cells because you will have continuity. To clarify, the ghost cells are needed if you do the gradient computation inside FieldView. That's when you'll want to use them.
The ghost cells introduce very little overhead in terms of read time and memory, but they do have the drawback of adding faces to your surfaces. That's why I encourage people to only use them if they see gaps or discontinuities being introduced in their gradient functions computed in FieldView, which may not always happen. If you have a very complex turbulent flow, chances are you are not going to see the difference.
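The continuity issue described above can be illustrated with a minimal 1D sketch (this is not FieldView's implementation, just the underlying numerical idea): a gradient computed per partition falls back to a one-sided difference at the internal cut, while a single ghost point restores the same central difference the unpartitioned grid would use.

```python
import numpy as np

# 1D illustration: a gradient computed per partition needs a ghost point
# at the internal boundary to match the gradient of the whole grid.
x = np.linspace(0.0, 1.0, 11)
f = x**2
global_grad = np.gradient(f, x)      # central differences, one-sided at the ends

cut = 6                              # split into f[0:6] and f[6:11]

# Without ghost cells: each partition uses a one-sided difference at the
# cut, so values near the cut disagree with the global gradient.
no_ghost = np.concatenate([np.gradient(f[:cut], x[:cut]),
                           np.gradient(f[cut:], x[cut:])])

# With one ghost point from the neighboring partition, the difference at
# the cut sees the same neighbors as the global computation.
gl = np.gradient(f[:cut + 1], x[:cut + 1])[:-1]   # ghost point from the right block
gr = np.gradient(f[cut - 1:], x[cut - 1:])[1:]    # ghost point from the left block
with_ghost = np.concatenate([gl, gr])

print(np.allclose(with_ghost, global_grad))   # True: ghost cells restore continuity
print(np.allclose(no_ghost, global_grad))     # False: discontinuity at the cut
```

This is why ghost cells matter only for gradients computed inside FieldView: solver-exported gradients were computed on the full grid before partitioning, so they are already continuous.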
How are FVBND files supported in the new auto partitioning? [35:12]
That’s a great question, and they are fully supported. I didn’t go into that level of detail in the webinar. To clarify for everyone, FVBND files are a way to define your boundaries for the Plot3D format.
You specify them in a text file that is either created by the user or the solver. The boundary surfaces are simply IJK surfaces. As we read the case, we rework the FVBND file to adjust the surfaces. There is full support for FVBND files.
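The reworking step mentioned above can be sketched generically (this uses made-up IJK extent tuples, not the actual FVBND syntax): when a grid is split, each boundary surface defined on the original grid has to be clipped to the partitions it touches and its indices shifted to be partition-local.

```python
# Sketch of the remapping idea (generic IJK extents, not FVBND syntax):
# a surface spanning the original K range is clipped to each partition's
# K range and re-indexed relative to that partition's start.
def remap_surface(extent, k_ranges):
    """extent = (i1, i2, j1, j2, k1, k2) on the original grid (inclusive);
    k_ranges = [(k_start, k_end), ...] per partition (end exclusive).
    Returns a list of (partition, local_extent) pairs."""
    i1, i2, j1, j2, k1, k2 = extent
    pieces = []
    for p, (ks, ke) in enumerate(k_ranges):
        lo, hi = max(k1, ks), min(k2, ke - 1)
        if lo > hi:
            continue                  # surface doesn't touch this partition
        pieces.append((p, (i1, i2, j1, j2, lo - ks, hi - ks)))
    return pieces

# A K-spanning side wall on a 100-plane grid split into two partitions:
print(remap_surface((0, 0, 0, 49, 0, 99), [(0, 50), (50, 100)]))
```

Each piece ends up with partition-local indices, which is why the boundary surfaces still come out whole after the split.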
Is auto partitioning repeatable? Can you split the same grid and results again, for a new total number of grids or nodes per grid? [36:08]
So that’s a very interesting question, too. It is going to be totally repeatable if you read the same case on the same number of processes.
Now, if you move the data set to another system where you have more cores or more threads and run with more processes, then you’re going to create more divisions. In that case, it will not be repeatable. But if you stick to the same case and same system, yes, it will repeat itself.
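The repeatability described here follows from the partitioning being a pure function of the grid dimensions and the process count. FieldView's actual algorithm isn't published, but a minimal sketch shows the principle:

```python
# Sketch (not FieldView's actual algorithm): split a structured grid's
# K index range evenly across worker processes. The result depends only
# on the grid size and the process count, so the same case read with the
# same number of processes always splits the same way -- and a machine
# with more processes produces different partitions.
def partition_k(nk, nprocs):
    """Return (k_start, k_end) index ranges, one per process (end exclusive)."""
    base, extra = divmod(nk, nprocs)
    ranges, k = [], 0
    for p in range(nprocs):
        size = base + (1 if p < extra else 0)
        ranges.append((k, k + size))
        k += size
    return ranges

print(partition_k(100, 4))   # same inputs -> same partitions, every run
print(partition_k(100, 8))   # more processes -> different partitions
```

No randomness, no timing dependence: identical inputs give identical partitions, which is exactly why reading the same case with the same process count repeats itself.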
The auto partitioner setting gets saved in our restarts, very much in the FieldView way. We try to keep the restarts as fuzzy as we can and read them back in a fuzzy mode. The restart will remember that your case was loaded with the auto partitioner on. If you use a restart to reload the case on the same system, you will not see a difference.
Now, if you do it on a different system, you may end up with a different partition. We'll try to reapply the restart, but that could have some impact on things like computational surfaces, since you may have a different number of grids with different IJK ranges. It will have no impact on coordinate surfaces, streamlines, isosurfaces, and so on, but that's something to be aware of.
Note from a member of the FieldView team [43:11]
You can fix the number of cores used by the auto partitioner, even when you’re reading it on a different system by using the server config or .srv file.
That’s a good point. And it goes back to the question about repeatability of the auto partitioner. My answer was misleading because I said, “If you move to another system, you could run on more processes.”
We have a mode called Local Licensed Parallel that will use the maximum number of processes allowed by your license and by the number of cores available on your system. In that configuration, if you move from a 24-core system to a 32-core system, FieldView will use 32 cores. BUT, if you define a server config file called "local parallel 24," for instance, you can use it to read with 24 processes on a system with 24 cores or 32 cores. You can easily control that with the server configuration file. In that case, moving from system to system, the partitioning will be repeatable.
Any plan for FUNS or OpenFOAM support for the auto partitioner? [39:32]
Ah, a very, very interesting question as well. People who are familiar with the Plot3D or OVERFLOW formats will understand why we started there.
The way these formats are organized makes it much easier to partition on the fly based on the IJK structure of the file. Doing it for unstructured data will be a lot more challenging, especially if we want to do it on the fly.
When you do it on the fly, you really want to limit your overhead so that the benefits exceed the overhead you introduce. So that's something I do want to investigate for FUNS and OpenFOAM, depending on the uptake we get for this capability with the auto partitioner for structured data. The requirement will be that we find an approach fast enough to make it worthwhile.
One potential solution would be to do it in two steps: partition the data separately from the read and save it to a different file. That's something we are looking into as well, but it depends on whether users would be willing to give up a bit of disk space for this extra performance.
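The two-step idea can be sketched as follows (a hypothetical file layout for illustration, not an existing FieldView feature): step one partitions the data once and persists one file per process; step two, in every later session, has each process read only its own pre-partitioned file, trading disk space for the repeated on-the-fly cost.

```python
import json
import pathlib
import tempfile

# Hypothetical two-step partitioning: partition once, read many times.
def write_partitions(cells, nprocs, outdir):
    """Step 1: split a cell list round-robin and persist one file per process."""
    outdir = pathlib.Path(outdir)
    for p in range(nprocs):
        piece = cells[p::nprocs]
        (outdir / f"part_{p}.json").write_text(json.dumps(piece))

def read_partition(p, outdir):
    """Step 2: each process reads only its own pre-partitioned file."""
    return json.loads((pathlib.Path(outdir) / f"part_{p}.json").read_text())

with tempfile.TemporaryDirectory() as d:
    write_partitions(list(range(10)), 4, d)
    print(read_partition(1, d))   # -> [1, 5, 9]
```

The one-time partitioning cost and the extra disk space are paid up front; every subsequent read skips the partitioning step entirely, which is the trade-off described above.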
This is a discussion we would like to have with you if you are interested. If you’d like to see support for OpenFOAM or FUNS, please contact us. We’d love to talk to you.
When you’re using unsteady OVERFLOW and Plot3D with changing grids, does the auto partitioner work as the grid changes? [41:35]
The Auto Partitioner will look at every time step individually. For an unsteady case with a changing grid, FieldView will have to redo the work at each time step – reading the case and reading the results.
If you have the same grid, we can skip the grid read and go to the results right away, but not if your grid is changing. FieldView will process every time step individually and move on to the next time step. So that means there is no incompatibility, and the auto partitioner will work.
Does the auto partitioner work with OVERFLOW background brick grids? [42:28]
Currently the Auto Partitioner ignores the brick files. Brick files are extremely fast to read, so they don’t require repartitioning.
They will be processed separately, which means they are compatible with the auto partitioner, but the auto partitioner doesn't get applied to them because they have no need for it.
Will the auto partitioner always change or override the original mesh?
No. There is no override. It's done on the fly in FieldView memory, so the file remains intact; it's untouched.
How do we license the auto partitioner? Does a person need to get a separate password or key?
To be totally clear, it is controlled by the same password that controls the number of cores that you can run. So if your license allows 32 cores or more, then the capability will be enabled.