CFD - PC Build Guide

PlatinumZealot
Joined: 12 Jun 2008, 03:45

Re: Guide to Building a CFD Machine


For my side hustle... I only have a 7700K and 24 GB of DDR4, which is actually quite OK for my needs. When I do CFD it is usually very small projects.

You can see how I used CFD to figure out the Coanda exhaust in that thread.

Next build will be my third full build. Definitely going for bigger monitors and top-end graphics cards.
🖐️✌️☝️👀👌✍️🐎🏆🙏

Racing Green in 2028

SiLo
Joined: 25 Jul 2010, 19:09

Re: CFD - PC Build Guide


Only just found this post. Now I just need to understand how to use CFD software so I can run my own simulations...
Felipe Baby!

Fluido
Joined: 25 Mar 2022, 17:17

Re: CFD - PC Build Guide


2× Xeon Gold 6134 (8 cores, 3.2 GHz base, 3.7 GHz turbo, 24.75 MB cache)
or
2× Xeon Gold 6142 (16 cores, 2.6 GHz base, 3.7 GHz turbo, 22 MB cache)

Which is better for CFD?

johnny comelately
Joined: 10 Apr 2015, 00:55
Location: Australia

Re: CFD - PC Build Guide


Do you think there will be a better alternative to the meshing required for existing simulations?
Better in the sense of needing less computing power, etc.
Or is there something already being developed?

Fluido
Joined: 25 Mar 2022, 17:17

Re: CFD - PC Build Guide


johnny comelately wrote:
02 Apr 2022, 13:13
Do you think there will be a better alternative to the meshing required for existing simulations?
Better in the sense of needing less computing power, etc.
Or is there something already being developed?
Is this question for me?
If so, I don't understand it.

johnny comelately
Joined: 10 Apr 2015, 00:55
Location: Australia

Re: CFD - PC Build Guide


My question about an alternative to meshing was meant for Vyssion.

Vyssion
Moderator / Writer
Joined: 10 Jun 2012, 14:40

Re: CFD - PC Build Guide


johnny comelately wrote:
02 Apr 2022, 13:13
Do you think there will be a better alternative to the meshing required for existing simulations?
Better in the sense of needing less computing power, etc.
Or is there something already being developed?
I don't think that meshing actually needs that much compute power to begin with... it's the solving which is the killer.

For example, ANSYS Mesher is a serial mesher (i.e. it runs on a single core) and I've generated meshes approaching 200 million cells with it before... it takes a while, for sure (annoyingly so, at times); but there are ways to mesh using multiple cores too (e.g. snappyHexMesh in OpenFOAM). However, when you try to split your mesh up over a ton of cores (and also depending on what it is you're simulating), you can run into issues at the interfaces of your partitioning. In general, I've tended to stick to 4 or maybe 8 cores max for meshing, depending on size. Alternatively, in ANSYS Mesher, if I was, say, meshing some MRFs for the wheels, each of those could be considered a separate "part", meaning they could each be meshed individually by a single core, in parallel to one another (so long as you adequately control the mesh cells on either side of the MRF interface, of course).
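
As a rough sketch, the usual multi-core snappyHexMesh workflow (assuming the case already has a blockMeshDict and a decomposeParDict set up) looks something like this:

    blockMesh                                          # background hex mesh (serial)
    decomposePar                                       # split the case into subdomains
    mpirun -np 8 snappyHexMesh -parallel -overwrite    # mesh on 8 cores
    reconstructParMesh -constant                       # stitch the partitions back together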

For example, let's assume you're trying to model the multiphase flow of water sloshing around in a bathtub. You have water, air, and an interface between the two phases. If you were to split your meshing (and solving too, btw) along a normal cube grid, then it is possible that you might have a partition interface right on the surface of the water, meaning a lot of back and forth between cores on what they are meant to do with that water (introducing errors in the process). You'd instead want to employ a decomposition method which, effectively, takes a knife and makes vertical cuts through the entire domain from top to bottom. That way, your partition boundaries only run perpendicular to the surface of the water, never along it.
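
To make that concrete, a minimal sketch of what those "vertical knife cuts" might look like in OpenFOAM's system/decomposeParDict (the subdomain counts here are just illustrative):

    numberOfSubdomains 8;
    method          simple;
    simpleCoeffs
    {
        n       (4 2 1);    // split 4x in x and 2x in y, but never across z,
                            // so every partition boundary is a vertical plane
        delta   0.001;
    }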

I think that there definitely could be ways to improve meshing times in general, though; for example, I know a few F1 teams use ANSA for surface meshing, and then pass that into Fluent Meshing or snappyHexMesh for the volume meshing. But whilst I suspect it's a bit of both, most of that is probably to do with chasing higher mesh quality rather than the fastest possible mesh generation time.
"And here you will stay, Gandalf the Grey, and rest from journeys. For I am Saruman the Wise, Saruman the Ring-maker, Saruman of Many Colours!"

#aerosaruman

"No Bubble, no BoP, no Avenging Crusader.... HERE COMES THE INCARNATION"!!"

graham.reeds
Joined: 30 Jul 2015, 09:16

Re: CFD - PC Build Guide


Cache is king. In my line of work (not CFD, but still computationally expensive) you want as few cores as possible with as much cache as possible.

No Hyper-Threading, as that splits the cache between threads: if you have 32 KB per core, each thread would effectively get 16 KB.

Likewise, fewer cores generally means more cache per core. A higher clock speed is preferable to more, slower cores (4×4 GHz > 8×2 GHz).

We use 8-core Arm chips with an Nvidia TPU.

Do any prosumer CFD products support TPUs?

johnny comelately
Joined: 10 Apr 2015, 00:55
Location: Australia

Re: CFD - PC Build Guide


Vyssion wrote:
04 Apr 2022, 17:47
I don't think that meshing actually needs that much compute power to begin with... it's the solving which is the killer. [...]
OK, thank you for that.
Just sticking with meshing for the moment (because if that were simplified, the solving would be easier too), what I imagined was the machine taking a more intuitive approach to recognising curved and irregular surfaces of solids.
From what I understand, meshing currently follows the STL type of methodology.
Consider what a CAM machine does: it moves from point to point, with the feeds producing a curve between points as a product of the necessary movement.
If a machine used this type of interpretation it would still require points of reference.
The alternative could be a different system for recognising irregular shapes, the way humans do with sight and touch, rather than mathematical point plotting.
Fluid behaviour is another matter.

johnny comelately
Joined: 10 Apr 2015, 00:55
Location: Australia

Re: CFD - PC Build Guide


Is it correct to assume that all current CFD and similar work is based on point-based mathematical calculations?
How do "they" calculate the fluid behaviour? Is it by applying known (i.e. proven by experiments) fluid equations that represent the behaviour?
And being a calculation, it involves some enormous mathematics.
By the way, the beginning of this lecture (a few years old now) gives an interesting insight into the computing power needed:


While looking for a less onerous, costly, and limited method for calculating these enormous algorithms, I came across the principles of Bézier curves (named for the French engineer Pierre Bézier, who discovered them independently and used them to design automobile bodies at Renault), which unfortunately are still points-based calculation.
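
To illustrate that last point, a minimal Python sketch (the control points here are made up): a Bézier curve is still defined entirely by a handful of points, with the smooth curve blended between them via de Casteljau's algorithm.

    # Cubic Bezier via de Casteljau: repeatedly interpolate between control points.
    # The four control points below are arbitrary, just for illustration.

    def lerp(a, b, t):
        """Linear interpolation between 2D points a and b."""
        return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

    def bezier(points, t):
        """Evaluate a Bezier curve of any degree at parameter t in [0, 1]."""
        while len(points) > 1:
            points = [lerp(p, q, t) for p, q in zip(points, points[1:])]
        return points[0]

    control = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
    curve = [bezier(control, i / 20) for i in range(21)]  # 21 sampled points
    print(curve[10])  # the point at the middle of the curve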


So the best way I can think of to describe the idea is to equate it with how an artist's mind creates curves, even though that is in 2D.
Are there any existing principles that would produce less costly simulation solutions to problems such as CFD?
For instance, when thermal simulations (non-points-based) are done, I presume they use formulas derived from experiments, which then becomes straight mathematics; but modelling CFD still requires creating the shapes via a points-based method.
And I'm not talking about validation from data received.
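
To illustrate what "points based" means here, a minimal Python sketch of a 1D heat-conduction simulation (all parameter values are made up): the rod is reduced to discrete points, and a proven physical law is applied at each one.

    # 1D heat equation u_t = alpha * u_xx, explicit finite differences.
    # The domain is discretised into points -- exactly the "points based"
    # approach discussed above. All parameter values are illustrative.

    alpha = 0.01                      # thermal diffusivity
    nx, nt = 50, 500                  # number of grid points / time steps
    dx, dt = 1.0 / (nx - 1), 0.001    # grid spacing and time step

    u = [0.0] * nx
    u[nx // 2] = 100.0                # a hot spot in the middle of the rod

    for _ in range(nt):
        new = u[:]
        for i in range(1, nx - 1):
            # Fourier's law discretised at each interior point
            new[i] = u[i] + alpha * dt / dx**2 * (u[i-1] - 2*u[i] + u[i+1])
        u = new

    print(max(u))                     # peak temperature after diffusion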