nzjrs wrote: ↑ 20 Jan 2022, 10:51
"Haha, you tell me exactly what you want done and I'll tell you if it can be done in real time. I won't even charge you for the advice."

I think in the age of 3 nm ASICs, the human eye becomes the weak link. A literal multi-TFLOP image processor with Tensor cores can do (near) real-time image enhancement, capture, and encoding, and it can fit inside a small enclosure.
I think you will be surprised
(Very few convolutional NN / deep-learning image processing operations can be spatially parallelized, and round-robin parallelization across frames just pushes the latency to the point where most people, myself included, no longer call it real-time. There are trade-offs here, but calling that "patently false" is itself laughably patently false and shows a misunderstanding of how current AI-assisted imaging is implemented and the trade-offs involved.)
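To make the round-robin point concrete, here is a toy model (my own illustrative numbers, not measurements from any real pipeline): dispatching frames across N workers multiplies sustainable throughput by N, but each individual frame still waits the full per-frame inference time, so the latency you see on screen does not improve.

```python
# Toy model of round-robin frame parallelization.
# Assumes a fixed per-frame inference time; real pipelines add
# transfer, batching, and scheduling overheads on top of this.

def round_robin_stats(infer_ms: float, n_workers: int):
    """Return (max sustainable fps, per-frame latency in ms)."""
    # Each worker handles every n-th frame, so sustained throughput
    # scales with worker count...
    max_fps = n_workers * 1000.0 / infer_ms
    # ...but any given frame still takes the full inference time,
    # no matter how many workers run in parallel.
    latency_ms = infer_ms
    return max_fps, latency_ms

# One GPU at 50 ms/frame cannot keep up with a 60 fps feed:
print(round_robin_stats(infer_ms=50.0, n_workers=1))  # (20.0, 50.0)
# Four GPUs sustain 80 fps, yet every frame is still 50 ms late --
# throughput improved, latency did not.
print(round_robin_stats(infer_ms=50.0, n_workers=4))  # (80.0, 50.0)
```

That latency floor is exactly why "just add more silicon" raises frame rate but does not, by itself, make the enhancement real-time in the glass-to-glass sense.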
I'm drawing the line at broadcast, presentation, and archival footage; I'm not trying to suggest point-cloud laser scanning and modelling in real time. The tech was available for small 4K onboard cameras in 2015. Today it is far more powerful, and it is currently driving autonomous vehicles.
If nothing else, a few strategically placed 8K/240 fps cameras should be on board for flexure measurements during the race. Those could simply be stored for later review, but would still add some "replay" value if made available during the race.
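A quick back-of-envelope suggests store-and-review is the realistic path for such feeds. The raw-format and compression-ratio assumptions below are mine for illustration, not vendor figures:

```python
# Back-of-envelope for one 8K/240 fps camera. The 8-bit 4:2:0 raw
# format and 100:1 codec ratio are illustrative assumptions.

width, height, fps = 7680, 4320, 240
bytes_per_pixel = 1.5  # 8-bit YUV 4:2:0

raw_gbps = width * height * bytes_per_pixel * fps / 1e9
print(f"raw: {raw_gbps:.1f} GB/s")  # ~11.9 GB/s uncompressed

compression = 100   # assumed intra-heavy codec ratio
race_hours = 2
stored_tb = raw_gbps / compression * 3600 * race_hours / 1e3
print(f"stored per race: {stored_tb:.2f} TB")  # ~0.86 TB
```

Roughly 12 GB/s raw per camera is far beyond any live downlink, but under a terabyte of compressed footage per race per camera is entirely storable for post-session analysis.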