Zynerji wrote: ↑29 May 2023, 04:28
dialtone wrote: ↑29 May 2023, 02:37
I'm a bit confused by your question.
GPT-4 provides no privacy to teams; anything you put into it can be used for training, so other teams asking similar questions will end up getting your learnings. Obviously no team wants that.
Setting that aside:
* GPT is a language model, it can't help with CFD calculations
* GPT4 isn't an open model
* If you strike a deal with OpenAI, a single node would hardly be enough to run it; as I said, running it costs millions of dollars in hardware.
* GPT4 is slow as hell even with expensive hardware
* If you use it for CFD, I would reasonably assume that the FIA is going to count it as CFD TFlops that you're using
* If you have hardware usable for CFD, training on it will likely be counted in the pool of TFlops for training that all teams are limited by
The CFD limitations were put in place because teams were already setting up clusters with 8000+ nodes to run their simulations, and not everyone could afford that. I fail to see how allowing teams to run LLMs on top would somehow be affordable for all.
Local nodes run in-house. This is obvious, and compromises zero data. It can run on a single graphics card with 48 GB of memory. If the teams already own the hardware, yet have to idle it at an imposed limit, they would be stupid to NOT USE IT.
I'm currently using it in a comparable manner (32-core Threadripper with four 4070s). It's fast, accurate, and has literally removed the salaries of 4 redundant people. Which means that it has not only already saved more than it cost to set up, but it's also set up to purchase upgrades for itself when it finds good deals online.
The issue with this is that it's great on the surface, but that isn't enterprise.
F1 teams will not use hardware in any sense like this, especially not consumer-grade hardware (although £ for £ it's actually faster, it just isn't fit for purpose).
48 GB of RAM isn't even enough memory to power a head node for data processing, let alone fit any useful worker nodes for the level of data we are talking about here.
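To put concrete numbers on the 48 GB debate above: whether a single card can host an LLM at all comes down to how much memory the model weights occupy, which is roughly parameter count times bytes per parameter. A back-of-envelope sketch (illustrative model sizes only, not figures from either post; real usage also needs KV-cache and activation memory on top):

```python
def weight_memory_gb(n_params_billion: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the model weights, in GB.

    n_params_billion: model size in billions of parameters
    bytes_per_param: 2.0 for fp16, 0.5 for 4-bit quantized weights
    """
    return n_params_billion * 1e9 * bytes_per_param / 1e9


# Hypothetical 70B-parameter model as an example:
fp16_gb = weight_memory_gb(70, 2.0)  # 16-bit weights: 140 GB, far over 48 GB
int4_gb = weight_memory_gb(70, 0.5)  # 4-bit quantized: 35 GB, fits on one card

print(f"70B fp16: {fp16_gb:.0f} GB, 70B 4-bit: {int4_gb:.0f} GB")
```

So both sides have a point: an unquantized large model blows well past 48 GB, while an aggressively quantized one can squeeze onto a single consumer card, with the caveats about context-length overhead noted above.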
It is entirely off topic, AI processing and data-ops are two different things, but I appreciate AI needs data to consume.
As mentioned before, GPUs are no longer favoured in data ops, ML ops or AI due to their limitations at an enterprise level.
I would flip the original question on its head and ask this:
Is CFD accurate enough to even bother spending significant budget on?