johnny comelately wrote: ↑02 Apr 2022, 13:13
Do you think there will be a better alternative to the meshing required for existing simulations?
Better in terms of less computing power needed etc.
Or is there something being developed already?
I don't think that meshing actually needs that much compute power to begin with... it's the solving which is the killer.
For example, the ANSYS mesher is a serial mesher (i.e. it runs on a single core) and I've generated meshes approaching 200 million cells with it before... it takes a while, for sure (annoyingly so at times); but there are also ways to mesh across multiple cores (e.g. snappyHexMesh in OpenFOAM). However, when you try to split your mesh over a ton of cores (and also depending on what it is you're simulating), you can run into issues at the interfaces of your partitioning. In general, I've tended to stick to 4 or maybe 8 cores max for meshing, depending on size. Alternatively, in ANSYS Mesher, if I was, say, meshing some MRFs for the wheels, each of those could be treated as a separate "part", meaning they could each be meshed by a single core, in parallel with one another (so long as you adequately control the mesh cells on either side of the MRF interface, of course).
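To give a rough idea of what the parallel route looks like in OpenFOAM, here's a minimal sketch (it assumes you already have a working blockMeshDict, snappyHexMeshDict and decomposeParDict in the case; the 8 cores is just an example number):

Code:
# build the coarse background mesh on a single core
blockMesh

# split the case across processors according to decomposeParDict
decomposePar

# run snappyHexMesh in parallel over those processor directories
mpirun -np 8 snappyHexMesh -parallel -overwrite

# stitch the processor meshes back into a single mesh if you need one
reconstructParMesh -constant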
For example, let's assume you're trying to model a multiphase flow of water sloshing around in a bathtub. You have water, air, and an interface between the two phases. If you were to split your mesh (and the solving too, btw) into a regular grid of cubes, it's quite possible you'd end up with a partition boundary lying right on the surface of the water; that means a lot of back and forth between cores about what they're meant to do with that water (introducing errors in the process). Instead, you'd want a decomposition method that effectively takes a knife and makes vertical cuts through the entire domain, top to bottom. That way, every partition boundary is a vertical plane, perpendicular to the water surface, and none of them sits along the free surface itself.
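In OpenFOAM terms, that's roughly what the "simple" directional decomposition in decomposeParDict gives you: only cut in x and y, never in z (taking z as vertical), so no processor boundary can land on the free surface. A sketch only, the subdomain counts are arbitrary:

Code:
// system/decomposeParDict (sketch; counts are arbitrary)
numberOfSubdomains  8;

method              simple;

simpleCoeffs
{
    // split the domain into 4 slabs along x and 2 along y, but keep it
    // whole in z, so every processor boundary is a vertical plane
    n       (4 2 1);
    delta   0.001;
}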
I think there definitely could be ways to improve meshing times in general though; for example, I know a few F1 teams use ANSA for the surface meshing, and then pass that into Fluent Mesher or snappyHexMesh for the volume meshing. But whilst I suspect it's a bit of both, most of that is probably about chasing higher mesh quality rather than the fastest possible mesh generation time.