Zynerji wrote: ↑25 Jun 2021, 14:53
It is point cloud, so no mesh.
The "bounce behavior" can be turbulent (curved). The horsepower to calculate was the only issue.
He/we never explored much past that conversation, so I'm sure there are more devils in the details. It just seemed reasonable to speculate that someone has done this by now with the huge computing power available in multi-GPU desktops and supercomputers.
It's not *just* compute - it's convergence, or the ability to converge. The whole engineering art, and the bounds on the usefulness of most of these approaches, is the differentiability of the transform and the implied smoothness or shape of the energy/parameter space to be explored.
(at least as you describe it, which is almost a complete target -> design reverse optimization problem)
So describing these things as click-and-go makes assumptions about the shape of the optimization landscape, not *only* the compute cost to explore it. The best these tools will be is a minor optimization for the last x% - refining the draft given by a designer - because in any other case the convergence will be too sketchy for a generalized tool.
There are first principles in play here that, in general, can't be ignored or brute-forced around.
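To make the landscape point concrete, here's a minimal Python sketch (the objective, learning rate, and starting points are all just illustrative picks of mine, nothing CFD-specific): plain gradient descent on a rugged, Rastrigin-style 1-D function converges to whichever local basin it happens to start in.

```python
# Toy sketch: gradient descent on a rugged objective lands in
# different local minima depending on initialization.
import numpy as np

def f(x):
    # Rastrigin-style objective: quadratic bowl plus oscillation.
    return x**2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x))

def grad_f(x):
    # Analytic derivative of f.
    return 2.0 * x + 20.0 * np.pi * np.sin(2.0 * np.pi * x)

def gradient_descent(x0, lr=0.001, steps=2000):
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

for x0 in (0.4, 1.6, 3.2):
    xf = gradient_descent(x0)
    print(f"start {x0:+.1f} -> x = {xf:+.3f}, f(x) = {f(xf):.3f}")
# Each start converges to a different basin (f near 0, 4, and 9);
# only one of those is the global minimum.
```

The transform here is perfectly differentiable, and descent still only refines the basin it starts in - which is exactly why "refine the designer's draft" is the realistic use case.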
A centering example I give beginners is something like this: ML of the 2000s (generative, evolutionary, etc.) surprisingly often performed no better than just randomly exploring the parameter space, in the same way that many of the complex optimizers or DL solutions today are often not much better than exhaustive linear regression. Compute made gradient descent cheaper, that's it, basically. It means you can be wrong about the gradient more often, but that's not changing the fundamentals.
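To illustrate (a toy comparison, not a benchmark of any real tool - the budget and search interval are arbitrary choices of mine): give pure random search and local gradient descent the same evaluation budget on the same rugged objective, and random search often comes out ahead, because it samples many basins while descent only refines one.

```python
# Toy comparison: random search vs. local gradient descent at an
# equal evaluation budget on the same rugged objective as above.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return x**2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x))

def grad_f(x):
    return 2.0 * x + 20.0 * np.pi * np.sin(2.0 * np.pi * x)

BUDGET = 2000  # evaluations allowed for each method

# Local gradient descent from one random start in [-5, 5].
x = rng.uniform(-5.0, 5.0)
for _ in range(BUDGET):
    x -= 0.001 * grad_f(x)
gd_best = f(x)

# Pure random search: sample uniformly, keep the best value seen.
samples = rng.uniform(-5.0, 5.0, size=BUDGET)
rs_best = f(samples).min()

print(f"gradient descent best: {gd_best:.3f}")
print(f"random search best:    {rs_best:.3f}")
# Random search visits many basins; descent polishes a single one.
```

The point isn't that random search is good - it's that on a landscape like this, the fancy local method buys you roughly nothing, which matches the 2000s experience above.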
There is way more subtlety here, and I'm not going to claim to know the properties of the CFD optimization landscape. My intuition is that, irrespective of its shape, because the cost to explore it will be higher than in my field (visual DL), the confidence with which you can say a tool just works - 'converges somewhat globally optimally' - is much lower.