Tom's Hardware of Finland

ALSO: https://www.evga.com/products/product.aspx?pn=131-SX-E295-KR

EVGA seems to be clearing out their mATX X299 board for 150 USD (note: I am baselessly assuming you are in North America)

Unless you need that level of performance with digital instruments, Threadripper looks like the value winner, at 1/4 the money. And that’s Threadripper 1.0.

Threadripper 2.0 offers better clocks, better cache, and better communication between cores (so IPC should be up a little). And if you want it, up to 32 cores. Still, for less money.

The new Threadripper reviews are out and it looks like the only improvement from the 1950X to the 2950X (both are 16c/32t parts) is 400-500 MHz of extra headroom. Any power usage savings are eaten by the extra MHz.
There seems to be virtually no clock-for-clock IPC improvement. I expected a little bit.
But that will make a used 1950X a really good value!

The new 32c/64t 2990WX is kind of its own beast. Latency is higher in full 32-core mode, but it’s a good value for budget workstation needs, where that many threads trump latency in appropriate workloads and can even save power by doing work so much faster. Even though the chip draws more power in general, less time processing can mean less power used.
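
The "finishing faster can use less energy" point is just energy = power × time; a quick sketch with made-up numbers (the wattages and runtimes here are illustrative, not measured):

```python
# Energy = power x time: a chip that draws more watts but finishes
# sooner can still use less total energy. Numbers are illustrative.

def energy_wh(power_watts: float, hours: float) -> float:
    """Total energy in watt-hours for a job."""
    return power_watts * hours

# Hypothetical render job on a 16-core part vs a 32-core part.
energy_16c = energy_wh(power_watts=180, hours=4.0)  # 720 Wh
energy_32c = energy_wh(power_watts=250, hours=2.2)  # 550 Wh

assert energy_32c < energy_16c  # faster finish beats higher draw here
```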

You can also disable half the chip for 16 cores, and you get latency improvements which put it a little ahead of the 2950X in overall performance. I don’t fully understand why, but clearly the 2950X must have slightly less bandwidth and/or higher latency.

I’m still very, very, very skeptical that there is a consumer use case for that many x86 CPU cores. Most workflows that parallelize that effectively should be run on the GPU, and meanwhile x86 value and single-threaded performance both plummet so dramatically after ~8 cores that I honestly just don’t get it.
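
The diminishing-returns intuition here is basically Amdahl’s law; a quick sketch (the 10% serial fraction is an assumption, real workloads vary):

```python
# Amdahl's law: speedup = 1 / (serial + parallel / n).
# Even a 10% serial fraction flattens the curve fast past 8 cores.

def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for n in (1, 2, 4, 8, 16, 32):
    print(n, round(amdahl_speedup(0.10, n), 2))
# 8 cores already gets ~4.7x; 16 cores ~6.4x; 32 cores tops out below 8x.
```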

I’ll be the first one to admit that I’ve more or less completely lost interest in x86 from the point of view of “exciting new developments in personal computing” because ARM and CUDA have been so much more interesting lately but I still don’t understand who is driving these product cycles. unless you’re running your own VPS datacentre (which is not at all cost competitive or sensible these days), why would you want this? all these benchmarks are synthetic nonsense and the $2000 chip only outperforms the $300 one on like half of them

it feels like a very strange niche of “prestige” products created exclusively for the sake of people who want to argue who has the fastest CPU, but I can’t imagine AMD is spending all the R&D effort for the sake of anandtech comment threads unless these are just re-binned server chips that the marketing department is having fun with

I’m glad they’re back to having the “cheap desktops + game consoles” market locked down because it’s something they’re good at and it’s just enough to keep pressure on Intel but like … there is not a consumer use case for 500W of x86 on one die and I think you’re doing something wrong if there is

Yeah, existing x86 programs are written in a way that depends on instruction-level parallelism. There’s not much point in x86 cores that throw that out. Also the frontend overhead of x86 is negligible with large cores, but probably starts to waste an excessive proportion of the core if you make it smaller overall.

this is literally the point of Zen

the entire stack is repurposed server cores (which is to say they’re designed for servers first and everything trickles down) arranged for the market the SKU is targeting

Threadripper is even more egregious since it’s their server line (Epyc) on a different socket and with fewer PCIe lanes

that makes sense, yeah (revenue-wise too). I know everyone wants to throw AMD a bone, it’s just like … it should be very clear at this point that x86’s only remaining advantages over other architectures are a) big single-threaded performance and b) compatibility with existing desktop/server environments. trying to sell an x86 chip without the former advantage to an extreme high-end market segment that won’t be workflow-sliced the way a server would is just goofy.

I agree that right now there are few reasons to have such a CPU. But Blender, Cinebench, and those two ray-tracing programs show really good scaling. People doing that kind of work now have a lot of power to play with.

And the fact that these products exist may push certain brands to adapt to them. I’m thinking of Adobe with its varied suite. Even HandBrake might get off their butt.

I also saw a Linux vs. Windows benchmark where Linux killed it. So, if MS does some work, we could see some benefit across the board.
[7-Zip benchmark chart: 2990WX]

Older test with the 8700K

if HandBrake isn’t using ffmpeg as its backend, then it has no reason not to do so; and if it is, then GPU-based video encoding is already implemented, and is at least 10x faster with current hardware than x86 will ever be. Likewise for CUDA in Creative Cloud, and likewise for ray tracing and Blender and Cinebench and bzip.
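
For what it’s worth, switching ffmpeg to hardware encoding is essentially a codec swap. A sketch of the two command lines (file names are placeholders, and `h264_nvenc` availability depends on your ffmpeg build and GPU — this only constructs the commands, it doesn’t run them):

```python
# Build ffmpeg command lines for CPU (libx264) vs NVIDIA GPU (h264_nvenc)
# H.264 encoding. Paths are placeholders.

def encode_cmd(src: str, dst: str, use_gpu: bool) -> list:
    codec = "h264_nvenc" if use_gpu else "libx264"
    return ["ffmpeg", "-i", src, "-c:v", codec, "-c:a", "copy", dst]

print(" ".join(encode_cmd("in.mkv", "out.mp4", use_gpu=True)))
```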

So I got sick of the periodic lockups on my NAS (lights are on but nobody’s home, and it needs a hard reset to get back into a normal state) + buzzing fan, so I cracked it open. Turns out the thermal paste had failed and it was hitting the motherboard auto-cutoff thermal limit… repasting and undervolting got it to idle at under 55°C

The real wtf is I have no memory of selecting any of the parts in it: I bought an AMD chip? And ITX? It was in shipping/storage for 6 months and I immediately blew up the PSU when I forgot to set it to the right voltage.

55°C seems high for idle, especially after an undervolt. Is it an ITX case with a low-profile cooler?

https://pcpartpicker.com/list/Xy3stg

The PSU rests on and occludes 70% of the stock CPU cooler. It stopped randomly turning off every 3 weeks so I’m prepared to never think about or touch it again.

I’m curious what your load temps are. Max temp for Intel is 100°C and their extreme NUC computers run pretty close to that, so I guess it would be fine. But… man… organize some cables if you have any which need it!

Load temps are around 60°C when Plex is busy. Everything is tucked away to manage airflow; it doesn’t make much difference and I can’t be bothered to troubleshoot anything until the whole thing fails catastrophically

Oh well, 60°C is fine for a CPU. I wonder if it’s the mobo chipset overheating.

dear all GPU developers:

from now on, I want the power of a chip represented in giga rays

I wonder if the PS5 GPU will ship with any capacity for mixed-precision shaders (possibly) and/or raytracing (probably not) so there’s some incentive to include those in game engines.

Nvidia should be able to get FP32 from what’s apparently going to be 15 W/TFLOP down to 10 W/TFLOP once they shrink this architecture onto 7 nm in a couple of years, which is already impressive as hell. But they really want to push lower-precision compute, and for now they’re gimping that feature on the consumer products. Still, they’re rapidly approaching a point where there will be a massive performance increase from doing some rendering with FP16 (which is already at like 2 W/TFLOP), which should be viable in many cases.
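
Plugging in the rough numbers quoted above (15 W/TFLOP FP32 now, ~10 after a 7 nm shrink, ~2 W/TFLOP FP16), the power-budget math looks like:

```python
# Power needed to hit a compute target at different W/TFLOP efficiencies.
# Figures are the rough ones quoted above, not measured specs.

def watts_for(tflops: float, watts_per_tflop: float) -> float:
    return tflops * watts_per_tflop

target = 20  # TFLOPS, an arbitrary target for comparison
print(watts_for(target, 15))  # 300 W at current FP32 efficiency
print(watts_for(target, 10))  # 200 W after a hypothetical 7 nm shrink
print(watts_for(target, 2))   # 40 W if the work can run in FP16
```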

Packed math and FP16 are features of Vega so whatever is in PS5 should probably have fairly mature implementations.

The Switch apparently has some FP16 capability. But it’s not yet clear if whatever the Switch has is even usable for a game engine.