F1 computational restrictions.

dialtone
Joined: 25 Feb 2019, 01:31
I'm a bit confused by your question.

GPT-4 provides no privacy to teams: anything you put into it can be used by GPT-4, so other teams asking similar questions will end up getting your learnings. Obviously no team wants that.

Setting that aside:
* GPT is a language model; it can't help with CFD calculations
* GPT-4 isn't an open model
* If you strike a deal with OpenAI, a single node would hardly be enough to run it; as I said, running it costs millions of dollars in hardware
* GPT-4 is slow as hell even on expensive hardware
* If you use it for CFD, I would reasonably assume the FIA is going to count that as CFD TFlops you're using
* If you have hardware usable for CFD training, it will likely be counted in the pool of TFlops for training that they are all limited by

The CFD limitations were put in place because teams were already setting up clusters with 8000+ nodes to run their simulations and not everyone could afford it. I fail to see how allowing LLMs to run would be any more affordable for all.
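To put rough numbers on the TFlops-accounting point, here is a back-of-envelope sketch. Every figure in it (GPU throughput, node size, the weekly budget) is an illustrative assumption, not an actual FIA allocation or real GPU spec:

```python
# Back-of-envelope comparison of LLM hardware throughput against a
# CFD-style compute budget. All numbers are illustrative assumptions,
# not the actual FIA allocations or real GPU specifications.

GPU_TFLOPS = 40.0        # assumed sustained throughput of one accelerator
GPUS = 8                 # assumed size of a single LLM inference node
HOURS_PER_WEEK = 24 * 7  # node left running around the clock

# Assumed weekly CFD budget for a team, in teraflop-hours.
CFD_BUDGET_TFLOP_HOURS = 30_000.0

def weekly_tflop_hours(tflops_per_gpu: float, n_gpus: int, hours: float) -> float:
    """Teraflop-hours consumed by a node running continuously."""
    return tflops_per_gpu * n_gpus * hours

used = weekly_tflop_hours(GPU_TFLOPS, GPUS, HOURS_PER_WEEK)
print(f"LLM node: {used:,.0f} TFLOP-h/week "
      f"({used / CFD_BUDGET_TFLOP_HOURS:.0%} of the assumed CFD budget)")
```

Under these assumed numbers a single always-on inference node would blow well past the weekly budget on its own, which is the crux of the argument: if the FIA books LLM compute against the same pool, there is nothing left for CFD.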

Zynerji
Joined: 27 Jan 2016, 16:14

dialtone wrote: ↑
29 May 2023, 02:37
I'm a bit confused by your question. [...]
Do you even read before you post your nonsense? Or just copy-paste from Reddit?


Local nodes run in-house. This is obvious, and it compromises zero data. It can run on a single 48 GB graphics card. If the teams already own the hardware, yet have to idle it at an imposed limit, they would be stupid to NOT USE IT.
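For rough scale on that 48 GB figure, a sketch of the weight-only memory an LLM needs at various sizes and quantization widths (the parameter counts and bit widths are illustrative assumptions; real deployments also need room for activations and the KV cache):

```python
# Approximate VRAM needed just for the weights of a quantized LLM.
# Parameter counts and quantization widths below are illustrative
# assumptions, not the specs of any particular model.

def weights_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

for params, bits in [(13, 16), (70, 16), (70, 4)]:
    print(f"{params}B model @ {bits}-bit: ~{weights_gb(params, bits):.0f} GB")
```

Under these assumptions, a ~70B-parameter model fits on one 48 GB card only once quantized to around 4-bit; at 16-bit, even the weights alone would not fit.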

I'm currently using it in a comparable manner (32-core Threadripper with four 4070s). It's fast, accurate, and has literally removed the salaries of 4 redundant people. Which means it has not only already saved more than it cost to set up, it's also set to purchase upgrades for itself when it finds good deals online.

I'm contracted by a company that owns 6 franchises to synergize their best practices across their locations and improve sales. Being able to just add the store data to the working folder instantly sorts out the tasking. It even sends its own emails to employees with their daily focus areas. What I'm also finding is that several other franchises are for sale, so when I load their data I can instantly see what impact our methods would have on those locations. 🤯

AutoGPT+plugins literally erase all of your arguments here.

Your box is tiny on this one. I hope you escape soon. 🙄

dialtone
Joined: 25 Feb 2019, 01:31

Zynerji wrote:
Local nodes run in-house. This is obvious, and compromises zero data. [...] AutoGPT+plugins literally erase all of your arguments here.
This is a reply from someone who feels threatened by the conversation we're having.

Unsurprising coming from you. Happy continuation.

Zynerji
Joined: 27 Jan 2016, 16:14

Bro. It's cool. I don't know the super-fine details, but I gave my IT guy a $20k budget and 4 weeks to set it up. He spent $12k and had it in the office in 10 days. It's already moved enough mountains that I can now see how it's going to put me personally out of business (as a Business Improvement Contractor). This AI is now my boss. I do as it directs me because it misses nothing.

This thread was specifically to ask whether F1 should get ahead of an AI takeover by making rules now.

I never came here to argue about the minutiae of the steps involved, or to respond to people clinging to the "it can't happen because of XYZ" beliefs that they hold. 🙄 I had a bunch of my own a few weeks ago. 😒

We can, however, talk about the areas it will take over in F1, and the humanitarian crisis that would actually be for a "team sport". I'm actually terrified of where that goes... 😪

Greg Locock
Joined: 30 Jun 2012, 00:48

@Zyn, no, I lost interest. At the moment it's good for writing standard scripts quickly, and I'll use it for that. To be fair (because I knew the answer) it probably wrote a good, readable script more quickly than I would have.

mzivtins
Joined: 29 Feb 2012, 12:41

Zynerji wrote: ↑
29 May 2023, 04:28
Local nodes run in-house. This is obvious, and compromises zero data. It can run on a single 48gbRAM graphics card. If the teams already own the hardware, yet have to idle it at an imposed limit, they would be stupid to NOT USE IT.

I'm currently using it in a comparable manner (32 core Thread ripper with 4 4070s). It's fast, accurate, and has literally removed the salaries of 4 redundant people. Which means that it has not only already saved more than it cost to set up, but it's also set to do purchasing of upgrade for itself when it finds good deals online.
The issue with this is that it's great on the surface, but that isn't enterprise.

F1 teams will not use hardware in any sense like this, especially not consumer-grade hardware (although £ for £ it's actually faster, it just isn't fit for purpose).

48 GB of RAM isn't even enough memory to power a head node for data processing, let alone any useful worker nodes for the level of data we are talking about here.

It is entirely off topic; AI processing and data ops are two different things, but I appreciate AI needs data to consume.

As mentioned before, GPU is not favoured in data ops, ML ops or AI anymore due to its limitations at an enterprise level.

I would flip the original question on its head and ask this:

Is CFD accurate enough to even be worth spending significant budget on?

Zynerji
Joined: 27 Jan 2016, 16:14

Honestly, I'm less concerned about AI generative design, and more about everything else.

maygun
Joined: 20 Mar 2023, 14:31

I wonder if real F1 engineers feel the way I do, as someone working in AI, when they read the aero comments in other topics the way I am reading the AI comments in this one.

When you have a formula and know how the variables affect the outcome, using an "AI" model like ChatGPT is beyond stupidity.

On the other hand, as an outsider, I feel there is a large playing field for using modern machine learning methods to design F1 cars, if they are not already doing it.

Unfortunately, without historical data it is impossible to build a showcase for these problems, and only F1 teams have the data to see whether these methods can work or not. With the budget cap in place, I cannot see any team trying such an adventure.

From my previous experience, if you don't hire good talent to develop in-house AI models, it will always fail, and good AI talent in the UK is now making £100k+. Creating such a team and giving it some computational budget would cost at least £2-5M per year.

Billzilla
Joined: 24 May 2011, 01:28

dialtone wrote: ↑
29 May 2023, 02:37
* GPT is a language model, it can't help with CFD calculations
I know; I asked it that a few weeks ago. And the same for FEA... for now, at least.

Apologies if it's been answered before, but how is this maximum computational limit policed? It would seem incredibly easy to get around, with off-site computers analysing the iterations.

zeph
Joined: 07 Aug 2010, 11:54
Location: Los Angeles

Zynerji wrote: ↑
29 May 2023, 04:28

I'm currently using it in a comparable manner (32 core Thread ripper with 4 4070s). It's fast, accurate, and has literally removed the salaries of 4 redundant people. Which means that it has not only already saved more than it cost to set up, but it's also set to do purchasing of upgrade for itself when it finds good deals online.
Sorta OT, but that reminds me: my buddy's former employer installed some ML software on his work computer. For a few years it learned exactly what my buddy did every day, and at the end he was let go because they had a digital clone of him that could do his job. His job was fairly basic administrative stuff, but still.

Zynerji
Joined: 27 Jan 2016, 16:14

zeph wrote: ↑
15 Jun 2023, 22:59
Sorta OT, but that reminds me: my buddy's former employer installed some ML software on his work computer. For a few years it learned exactly what my buddy did every day, and at the end he was let go because they had a digital clone of him that could do his job. His job was fairly basic administrative stuff, but still.
The 4 ladies that were replaced were there for insurance billing. What took them a week to accomplish (about 30 accepted submissions, each involving very tricky billing codes and legal-loophole abuse), the new boss completed in under 60 seconds.

I was stunned by the instant production of perfect billing submissions and thoroughly gutted that 4 ladies lost their 10+ year careers. 😢 Anything paperwork-wise that requires extreme perfection but is prone to human error will be replaced very quickly. We have a 0% rejection rate now. If instituted 5 years ago, this company would have made $155M more in profit for the same labor spend.

Purchasing and inventory logistics is in process as I write this. That's another 41 people... 😒

It's going to be just a small office with a General Manager and an IT contractor very, very soon 😪

zeph
Joined: 07 Aug 2010, 11:54
Location: Los Angeles

Zynerji wrote: ↑
16 Jun 2023, 05:32

If instituted 5 years ago, this company would have made $155M more in profit for the same labor spend.
Yup, Almighty Profit. Who needs people.

Zynerji
Joined: 27 Jan 2016, 16:14

zeph wrote: ↑
16 Jun 2023, 08:46
Yup, Almighty Profit. Who needs people.
It's hard for me, but the only reason for a business to exist is to make profit. See the current Target and Budweiser stock debacles for reference.

And PS: if I found a human who could produce perfect work, hired them, and fired those same 4 ladies, would it be any different?

I'm a team guy. This goes against my nature. But I cannot deny the improvements.

littlebigcat
Joined: 06 May 2017, 19:47

Crocodile tears

zeph
Joined: 07 Aug 2010, 11:54
Location: Los Angeles

Zynerji wrote: ↑
16 Jun 2023, 14:44
It's hard for me, but the only reason for a business to exist is to make profit. See the current Target and Budweiser stock debacle for reference.

And PS: If I found a human that could produce perfect work, hired them and fired those same 4 ladies, is it any different?

I'm a team guy. It's against my nature for this. But I cannot deny the improvements.
Sorry, didn't mean to criticize/accuse. We all have to try to make the best of the life we've been given. I certainly don't blame you for doing your job.

I'm waiting to train ChatGPT-4 to do my admin/taxes for me; I'm not against ML/AI in principle. I see the potential and advantages on so many levels.

On a philosophical level, when AI does everything better than humans, what do we do exactly?