I waffled for a few days between those choices - I'm trying to keep the amount of stuff I own to a minimum, and I'd hoped the laptop would be fixed in time. But that's unlikely with COVID, so I went the desktop route, as the new AMD chips really intrigued me.
Now, a bit of background: I do VFX for film, and recently quit my job to pursue freelance opportunities, because reasons. When I still had a job at a big VFX production house, I was used to a multi-CPU Xeon machine with at least 128GB of RAM and a Quadro for my display. There is no way to justify one of those at this point in time. But over the last few years, AMD's Threadripper CPUs have been popping up all over my feed, and I was like, yeah, let's go check out those chips.
The pricing of the current Threadrippers simply blew my mind. Yeah, no. CAD$1.9k to CAD$5.4k for a CPU? I've seen many VFX freelancers with them, but until I can get to where they are, I'm going to stick with something I can justify. I went with the next step down, the top-of-the-line Ryzen 9. Let me get the build list up before I start on the choices I made, and where I went "wrong".
- AMD 3950x
- Gigabyte Aorus Elite X570 Wifi
- 2x32GB Kingston HyperX RAM, 3600MHz DDR4
- EVGA 750 G2 PSU
- Gigabyte 2060 Super Gaming OC
- Fractal Design Meshify C
- Noctua NH-D15 CPU cooler
- 2x WD Black 500GB NVMe
- Asus PA248QV
First up: I am NOT a system builder, and I am not in touch with the latest tech happenings! The last PC I built was a Pentium III in the early 2000s. Ever since I started this VFX life, I have had no need for a powerful machine, since that's what work would provide.
The CPU was pretty easy to select: Threadripper was out, at least the newer 39xx series. There was no way to justify the cost of the CPU, and the mainboards were pretty expensive as well. I _might_ have dropped the cash on a Threadripper system if my rendering solution were purely CPU (e.g., Mantra, PRMan, Arnold), but based on what I've been doing at home, I enjoy the workflow of using Redshift as my primary renderer for its speed of feedback. I'm hoping Arnold GPU will make some big strides, as of all the renderers I've dabbled with, I found Arnold the easiest to get a nice image out of, and its shaders are the most intuitive to me.
So, Ryzen 9 then. I waffled between the 3950X and 3900X; it was unlikely I'd be using all the cores all the time, and the 3900X has better single-core speed. That matters especially when rendering: Redshift does not seem to use more than one CPU core when rendering on a single GPU. But given that I'll probably be working on fluids and volumetrics a fair bit, and those are relatively well multithreaded, I decided to place my bets on the 3950X with its additional cores.
For the motherboard, I was pretty limited by what I could get here in Vancouver (time constraints), so I basically filtered for X570 boards that a) support 128GB of RAM, b) have at least two PCIe slots, and c) have onboard WiFi, as I ain't using no Cat 5 cable if I can help it!
The Aorus Elite X570 Wifi seemed to fit the bill, and it didn't seem to be a bling-fest of LEDs, so I went with that. More on that in the GPU section.
RAM: I didn't really have much choice. For 2x32GB DIMMs, my options were Kingston, Crucial or G.Skill. I had no idea who G.Skill was, so it was a tossup between Kingston and Crucial. The Kingston blurb on the online shops mentioned "Ready for AMD Ryzen", so I went with those.
The 750W PSU seemed like a good choice. I budgeted 200W for the 3950X (power usage figures jump all over the place on review sites; I took the upper limit) and another 200W for the 2060 Super. That leaves lots of overhead for peripherals and future expansion.
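To make the headroom explicit, here's the back-of-envelope math as a quick sketch; the wattages are my upper-bound guesses from review sites, not measurements:

```python
# Rough PSU headroom check; wattages are upper-bound guesses, not measurements.
cpu_w = 200      # Ryzen 9 3950X, upper end of the figures on review sites
gpu_w = 200      # RTX 2060 Super, likewise the upper end
other_w = 100    # drives, fans, RAM, motherboard: a generous allowance
psu_w = 750

load_w = cpu_w + gpu_w + other_w
print(f"Estimated load: {load_w}W of {psu_w}W ({load_w / psu_w:.0%})")
# ~500W of 750W, about 67%: comfortable, with room for a second GPU later.
```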
Now, the GPU might surprise you. If I'm running Redshift, shouldn't I be running something faster? Indeed! However, with the 3000-series NVIDIA cards coming in the next quarter, I thought I'd rather wait it out, and render using an online service if I needed a fast turnaround.
This is where it kinda breaks down. I was hoping to start with the 2060, then tack on a 3000-series card when they arrive. Unfortunately, I read the blurb on Gigabyte's site incorrectly and thought both slots were x16 PCIe. Wrong! Only one PCIe slot has the 16 lanes; the second slot is only a four-lane slot, though it has an x16 connector. Blargh. So that kind of ruins my plans for running two cards.
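For a sense of what those lane counts mean, here's the rough arithmetic (theoretical ceilings for PCIe 4.0, not real-world transfer rates):

```python
# Rough PCIe 4.0 bandwidth per direction: ~1.97 GB/s per lane.
# These are theoretical ceilings; PCIe 3.0 is about half of this.
PER_LANE_GBS = 1.97
for lanes in (16, 4):
    print(f"x{lanes}: ~{lanes * PER_LANE_GBS:.0f} GB/s")
# x16: ~32 GB/s; x4: ~8 GB/s, a quarter of the bandwidth on slot two.
```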
What I really should have looked for was a motherboard that supports NVIDIA SLI; those are certified to run two GPUs at x8/x8, no problem. Oh well, live and learn.
Finally, storage. I have not figured out a good solution for this yet. My current plan is to have one NVMe drive as my primary boot drive with all applications on it, and the second NVMe drive as a "work drive" for the current project. The read/write speeds would really help with simulation caches.
500GB is fine, perhaps even overkill, for the application drive. 500GB might be a joke for the work drive, though. I've occasionally had caches in the terabyte range when dealing with large, detailed simulations, especially when storing multiple revisions. Given that I doubt I'll land one of those projects in the near future, I figure a 500GB drive will be enough for my current project; I can dump the data onto a 2.5in drive for backup when the project is done.
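To see how fast that fills up, a quick sketch with illustrative numbers (the per-frame size is a guess for a mid-resolution smoke sim):

```python
# Back-of-envelope sim cache sizing; all numbers are illustrative guesses.
frame_mb = 350   # one compressed VDB frame of a mid-res smoke sim, say
frames = 240     # a 10-second shot at 24fps
versions = 4     # sim revisions kept on disk

total_gb = frame_mb * frames * versions / 1024
print(f"~{total_gb:.0f}GB of cache")  # ~328GB: most of a 500GB drive, gone
```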
If I do need a large working drive for simulations, though, I'll probably get an SSHD, perhaps two in a RAID 1 configuration for redundancy.
As a side note, my thoughts on storage above sit on top of how I plan to organize data on disk. A feature film is broken down into sequences, and each sequence is broken down into a series of shots. While I'm not going to be working on a whole sequence on my own, I still need a way to store assets so they are easy to recover and work on, not haphazardly scattered across folders over multiple disks. That's in the pipeline to develop soon; a rough sketch of the layout I have in mind is below.
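A minimal sketch of that show/sequence/shot layout, with hypothetical folder names; the real pipeline will need versioning and naming conventions on top of this:

```python
# Hypothetical show/sequence/shot directory layout; a starting point,
# not a finished pipeline.
from pathlib import Path

SUBDIRS = ["geo", "sim_cache", "render", "comp"]  # per-shot working areas

def make_shot(root: str, show: str, seq: str, shot: str) -> Path:
    """Create (if needed) and return the working directory for one shot."""
    shot_dir = Path(root) / show / seq / shot
    for sub in SUBDIRS:
        (shot_dir / sub).mkdir(parents=True, exist_ok=True)
    return shot_dir

# e.g. D:/work/myshow/SEQ010/SH0020/{geo,sim_cache,render,comp}
make_shot("D:/work", "myshow", "SEQ010", "SH0020")
```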
So, at the time of this writing the machine has booted up fine with Windows 10 installed (it's a miracle!). I've got the latest drivers, and installed Houdini and Redshift, as well as Resolve and Fusion Studio. OCIO has been installed, and I've moved all my HDRs and assets from the laptop's work drive to the NVMe work drive.
Thoughts on the build:
Pretty smooth. The Fractal Design Meshify C is amazing to work with: pretty lightweight, and the quality is top-notch. The manual is pretty good too. The Noctua D15 cooler also went onto the motherboard easily. All in all, it was a very easy PC to build.
I've done limited tests on it so far, but after I got the drivers installed, I turned on the RAM's XMP profile at 3600MHz; so far so good. I did some benchmarking using Cinebench R20, and the results came in at 92xx, which was 200 less than the comparison machine. I wasn't too keen to overclock, but I remembered reading something about PBO, Precision Boost Overdrive, some kind of automatic overclocking thing. I just set that to auto and hey, presto! R20 hit 94xx, no drama. I ran a few minutes of Prime95 and it seems good.
The temperatures with all cores running in R20 were in the 61-62°C range, but when I turned on PBO they jumped to the 70+ range. That still seems like a good temperature to be at, so I'll leave it there for now. Plus, during simulations the cores are not pegged permanently at 100%; with volume simulations, for example, certain stages are single-threaded only, so there will be periods of calm amid the 100% CPU utilization spikes.
In terms of noise, I'm pleased by how quiet this system is. My desktop replacement is a banshee screamer, its fans engaging pretty loudly for any task more demanding than reading email. Not this PC. With all cores running at 100% during an extended R20 session, it's audible only if I really try to listen for it. If I have any audio playing, it's basically covered up. Heck, even the fan across the room from me is louder.
The GPU, too, hit 70-ish degrees under a Redshift render, and its fans are barely audible.
Finally, the monitor: I found the Asus PA248QV. 24 inches across is a good size, and I chose it because it supposedly supports the Rec.709 colour space. Now, I'm no colourist; I only know enough to get textures into the right colour space for rendering, and when to apply view LUTs. What impressed me was that there was a Calman calibration certificate in the box. I was like... whuuuuut. Omg, how is this monitor only 200-odd bucks!?
I've only used it for a few hours, but it's pretty solid. No glare, and I'm using a Rec.709 profile. I really appreciate the proper 1920x1200 working resolution; Houdini benefits from the extra vertical space. I still need to plug in one of those portable display monitors I was using with my laptops.
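Since colour management came up: here's a minimal OCIO sketch for converting a scene-linear value to a Rec.709 view, assuming PyOpenColorIO 2.x is installed and $OCIO points at a config; the colour-space names below are placeholders that depend entirely on the config in use:

```python
# Minimal OCIO sketch: scene-linear to Rec.709. Assumes PyOpenColorIO 2.x
# and $OCIO pointing at a config; the colour space names are placeholders,
# use whatever your config actually defines.
import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromEnv()                  # reads $OCIO
proc = config.getProcessor("scene_linear", "Rec709")  # src -> dst
cpu = proc.getDefaultCPUProcessor()

print(cpu.applyRGB([0.18, 0.18, 0.18]))  # middle grey in, display-referred out
```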
I'll need to do some comparison sims between this and my desktop replacement, when it gets back. But based on the benchmarks I've seen, this PC will likely wipe the floor with the 9900K for multithreaded simulations. That said, the 9900K's single-threaded performance at 5GHz+ is nothing to sneeze at; lots of Houdini's nodes are still single-threaded, so for specific workloads the i9 could prove the better performer.