Blog

FLOPS and Why They Matter for Video Professionals



On Monday Apple gave us a preview of their new Mac Pro in typical Apple fashion: beautiful design, unique engineering, and a very simple look. They also announced some pretty impressive specs, including one that usually only comes up when you're shopping for a new supercomputer: 7 teraflops. But what does that number mean, and why is it important for a video production workstation?

What is a FLOP anyway?

This is a pretty technical conversation, but I'm going to attempt to explain it in as simple terms as I can. FLOPS stands for Floating Point Operations Per Second and is a measure of a computer's ability to do math; specifically, its ability to do complicated math using numbers with lots of decimal places. That's what the "Point" in "Floating Point" refers to. Why is this important? The real world doesn't work in simple integers (numbers without decimal places). Scientific calculations – especially ones that try to model what's going on in the real world – require a computer that can work with numbers with countless decimal places. The more of these calculations a computer can do per second, the faster it can produce precise results. The most common example people reach for here is Pi, since it's one of the only "real" numbers most people know about.
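To make "operations per second" concrete, here's a crude sketch in Python that counts floating-point operations while timing them. This is purely illustrative: pure Python carries enormous interpreter overhead, so the number it prints is nowhere near what the hardware can actually do – it just shows what's being counted when someone says "FLOPS."

```python
import time

# Crude, illustrative FLOPS estimate: time a loop of floating-point
# multiply-adds. Interpreter overhead dominates, so this lands far
# below the hardware's true peak -- it's a sketch, not a benchmark.
x = 1.0000001
total = 0.0
ops = 0
start = time.perf_counter()
for _ in range(1_000_000):
    total += x * x  # one multiply + one add = 2 floating-point operations
    ops += 2
elapsed = time.perf_counter() - start
print(f"~{ops / elapsed / 1e6:.1f} MFLOPS (interpreter overhead included)")
```

A GPU reaching 7 teraflops is doing on the order of 7 trillion of those multiply/add-style operations every second, spread across thousands of parallel cores.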

General-purpose CPUs (like a Core i7 or Xeon) are not great performers when it comes to FLOPS. This is by design, since the main CPU in a computer has a lot of very simple tasks to do. Things like reading data from a hard drive and loading it into memory involve very little math compared to the tasks that a GPU handles. A GPU, on the other hand, has to take a program and calculate how the pixels look on screen. In a video game, if you move a character forward on the screen, the GPU does a bunch of quick math to decide how to display that new information.

This sounds simple, but remember that all this has to happen in real time at smooth frame rates (30FPS+). In modern computers, even low-end displays have full HD resolution, which means 1920×1080 pixels on the screen at all times. This means the GPU is performing these complex math operations for 2 million+ pixels 30+ times per second. That’s over 62 million pixels being calculated every single second. Sometimes those calculations are very simple (the user interface as an example), but the more depth there is and the more realistic the resulting image, the more math there is to calculate.
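The pixel arithmetic above works out like this:

```python
# Back-of-the-envelope pixel throughput from the figures above.
width, height = 1920, 1080            # full HD resolution
fps = 30                              # minimum "smooth" frame rate
pixels_per_frame = width * height
pixels_per_second = pixels_per_frame * fps
print(f"{pixels_per_frame:,} pixels per frame")     # 2,073,600
print(f"{pixels_per_second:,} pixels per second")   # 62,208,000
```

And that's just one operation per pixel – every extra lighting pass or effect multiplies that count again.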

How FLOPS Relate to Video Production

Historically, video playback didn't require a GPU. Typically you just needed to make sure your disks could feed the data fast enough and the video would display on the screen. There's no real complex math going on there. As we started getting higher-definition video, our disks couldn't keep up with the amount of data we were feeding them, and keeping all that uncompressed video on our hard drives was simply out of the question. Enter compression. It works great for keeping video quality high while keeping data rates and file sizes low, but now your computer has to perform floating-point operations on that compressed data to display it properly. The CPU can mostly keep up with this, but remember that your CPU still has all kinds of other things to do to keep the computer running, and since it's not great at FLOPS, anything else it has to do will make video playback suffer.
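To see why uncompressed video was "out of the question," work out the raw data rate. The bytes-per-pixel and delivery bitrate figures below are illustrative assumptions (8-bit RGB, a typical consumer delivery bitrate), not specs from any particular camera or codec:

```python
# Why uncompressed video overwhelms disks: raw 1080p data rate.
width, height, fps = 1920, 1080, 30
bytes_per_pixel = 3                     # 8-bit RGB -- an assumed format
raw_rate = width * height * bytes_per_pixel * fps   # bytes per second
print(f"Uncompressed 1080p30: ~{raw_rate / 1e6:.0f} MB/s")  # ~187 MB/s

# A typical compressed delivery stream might target ~25 Mbit/s
# (an assumed figure), i.e. roughly 3 MB/s -- about 60x smaller.
compressed_rate = 25e6 / 8              # bytes per second
print(f"Compression ratio: ~{raw_rate / compressed_rate:.0f}x")
```

That gap is the work the decoder has to make up in math every time you hit play.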

This is where hardware-accelerated video playback comes in. Offloading video playback to the GPU frees your CPU to handle the more menial tasks while the GPU does what it does best: the math that figures out how to display pixels on the screen. Simply decompressing video and playing it back is actually a pretty easy task for the GPU. Even my three-year-old NVIDIA GTX 470 has the power to play back multiple HD streams in real time, and it's only capable of about 1 teraflop.

Modern video software is doing a lot more than just playing back video, though. You may know that certain effects in programs like Vegas and Premiere are listed as hardware accelerated. Color correction, grading, transitions, and 3D camera motion all send complex instruction sets to the GPU to transform the pixels of the original image. Some of these calculations are more complex than others, and most people will stack multiple effects on top of one another, creating ever more complex instruction sets that need to be calculated in real time. These calculations aren't anywhere near as complex as those in your favorite 3D rendering software or the newest computer games, but 4K video means you're processing four times as many pixels as most video games.
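To give a feel for how the math stacks up, here's a toy per-pixel color grade – a simple lift/gamma/gain curve, which is my own simplified sketch, not how any particular NLE implements grading – followed by a rough cost estimate for running even that trivial effect on 4K footage. The per-channel operation count is an assumption for illustration.

```python
def grade(value, lift=0.02, gamma=0.9, gain=1.1):
    """Toy lift/gamma/gain grade on one normalized channel (0.0-1.0)."""
    v = value * (1.0 - lift) + lift   # lift: raise the blacks
    v = v ** gamma                    # gamma: bend the mid-tones
    return min(v * gain, 1.0)         # gain: scale the whites, then clip

# Rough cost for one such grade on 4K UHD at 30 fps
# (operation counts are assumptions, for scale only):
pixels = 3840 * 2160                  # ~8.3 million pixels per frame
ops_per_channel = 5                   # multiply, add, pow, multiply, min
channels = 3                          # R, G, B
ops_per_second = pixels * channels * ops_per_channel * 30
print(f"~{ops_per_second / 1e9:.1f} billion ops/sec for one simple grade")
```

One simple grade already demands billions of operations per second, and every additional stacked effect multiplies that again – which is exactly where a multi-teraflop GPU earns its keep.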

The more teraflops a computer has, then, the more you can play with stacked effects and masks without fear of slowing things down. That means doing live color grading on-set and less frustration in post, which leads to a better product. Personally, I've been on a tight enough time crunch on some projects that I decided against using certain effects, so the more powerful the GPU, the less often I'd have to make those kinds of decisions.

The Excitement of the Mac Pro Supercomputer

Being capable of 7 teraflops should let the Mac Pro play back smooth 4K video with complex color correction and grading without breaking a sweat. The fact that they've put that much power into something that looks like a bathroom trash can is nothing short of amazing. It excites me a little that we're finally getting out of the habit of talking about GHz when it comes to computers. Teraflops and petaflops are typically only discussed in the realm of supercomputers, so this new Mac Pro will be ushering in a whole new way of talking about the little tower under your desk. In an age where most consumers are buying ultrabooks with low-voltage CPUs and playing Candy Crush on their tablets, this is a great device to "bring sexy back" to the desktop workstation and make everybody in the video production industry very happy – regardless of whether they buy this machine or not.

  • I don’t see much point in editing full-def 4K video. Why not use lower resolution proxies during editing?

    • tommybyrd

Generally because that's an additional step. Proxies are not a new idea. Once we had hardware capable of working with full-resolution SD/HD in real time, people started doing it that way. Right now it probably doesn't matter much, since we don't have 4K displays and broadcasters aren't outputting in 4K, but as technology advances, people will want to work with 4K video at its native resolution.

      • >since we don’t have 4K displays
        Yeah, that’s the point.

But I disagree that it's an additional step. Usually programs like Vegas generate them automatically to achieve smooth playback.

        • tommybyrd

It's still an additional step the software takes that's not necessary when you can play back 4K smoothly. But yeah, you're right that it's not a big deal at this point. There are also benefits to having more powerful GPUs beyond just playing back 4K video, though. More real-time hardware-accelerated effects and faster render times are important, but also, since After Effects will be including Cinema4D, it's going to be really nice to work with a real 3D engine on desktops as powerful as these.