Tom's Hardware of Finland

How about a moto razr that’s a normal-sized 18:9 smartphone unfolded and half that size folded :thinking:

4 Likes

yeah, it’s not clear why that wouldn’t make way more sense, given how many people want smaller phones, and how little support Android apps have for tablet form factors

1 Like

The real tragedy is that it has Samsung’s software on it

https://www.anandtech.com/show/13973/nvidia-gtx-1660-ti-review-feat-evga-xc-gaming

This seems like a decent value but I don’t know if it’s better than used Pascal yet.

it actually makes Turing look like a slightly better clock-for-clock improvement than I’d gotten the impression of from the higher-end parts, too, which is good

still not sure why you’d want to buy anything before 7nm at this point though

honestly, between AMD freaking out and dropping whatever Vega 56 cards are still left in the retail channel to $270, and the RTX 2060 offering a significant performance uplift for an extra week or two of saving up, the 1660 Ti (mostly all of the overclocked/actually-good-cooler editions) seems like an aggressive waste of time and money

1 Like

Are 7nm Ryzen and some kind of Vega II card going to be viable come Bungie Day?

7nm Vega, as evidenced by the Radeon VII, is already a bust, Wait For Navi, etc.

7nm Ryzen, on the other hand, is going to have a hojillion cores running at 5.0 jiggawatts turbo clock and you can drop it into a 50 buck motherboard from 2017 and it’ll run perfectly fine

3 Likes

(I am not responsible for anyone putting a 16c/32t Ryzen into a cheap B350 board and having the VRMs explode)

1 Like

yeah, 7nm Ryzen should actually be pretty damn competitive in desktops. I still don’t think anyone will be competing with Intel for notebooks, so those will continue to languish until Intel’s 10nm becomes decent, but I’m looking forward to it

meanwhile AMD manages to defy every GPU process shrink gain by using the same architecture with almost no changes since 2012

AMD GPUs are where Intel CPUs are at

1 Like

Vega is in a hilarious spot where small CU counts are actually fairly strong and reasonable on power, but the 56 and 64 did not scale well in the slightest w/r/t performance or power efficiency

of course the problem with desktop/console CPUs is that they are incredibly unexciting in compute terms compared to GPUs: people are waiting with bated breath for a … maybe doubling of performance, and certainly not thread-wise, after a decade, for a lot more money, and most of those applications are vastly more efficient on GPUs wherever the relevant code paths are supported (and the software should be considered flat-out bad where they aren’t)

the idea of having to upgrade just my CPU as the likely limiting factor for next generation console ports is deeply boring

Did not scale well?
Who says that! I beg to differ, I mean -

… What, now you wanna tell me that an Intel/Nvidia CPU/GPU combo doesn’t net you 20/98% load at 245W?
Preposterous talk, cannot be real!

#postDefinitelyNotSponsoredByAMD™©®

I’m wondering lately if the dominant focus of CPU design in the next decade will be Spectre mitigation. (Yes, the Spectre security “bug”, really an architectural original sin, is that hard to fix.) Between that and Moore’s Law having slowed down, CPU performance has probably plateaued.

it’s only in the past year or two that we have ARM (Apple ARM, anyway) basically running even with x86 clock-for-clock, and at much lower overall wattages. when we talk about CPU performance “plateauing” – which is absolutely, unprecedentedly true if you’re looking at consumer x86 CPUs since 2011 – I guess I’d ask whether you’re actually staking out a claim on how far single-threaded performance can go in any architecture. if Intel had a working 7nm process, they could probably make a consumer 12-core, 5GHz chip relatively easily, but even that wouldn’t necessarily be more than 20-25% faster per thread than the 2011 32nm state of the art (especially if you’re taking overclocking and Spectre patches into account), and what we think of as GPUs are going to crush what we think of as CPUs from now until forever on any workload that can be infinitely parallelized, as long as the code paths exist for it.

1 Like
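For what it’s worth, the “code paths” being talked about here look roughly like this – a throwaway CUDA SAXPY sketch (purely illustrative, not from any real app in this thread) of an embarrassingly parallel workload where every element is independent, so the GPU can throw thousands of threads at it while a CPU is stuck with cores × SIMD width:

```cpp
// saxpy.cu – hypothetical, purely illustrative; build with: nvcc saxpy.cu
#include <cstdio>
#include <cuda_runtime.h>

// One thread per element: y[i] = a * x[i] + y[i]. Every element is
// independent, which is exactly the "infinitely parallelizable" case.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;  // ~1M elements
    float *x = nullptr, *y = nullptr;

    // Managed memory keeps the sketch short; a real code path would manage
    // host/device copies (or just call a library) instead.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    std::printf("y[0] = %f\n", y[0]);  // expect 5.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

the per-element math is identical to what a CPU loop would do; the only difference is how many of those iterations are in flight at once, which is the whole argument above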

Yeah, I’m saying that Spectre shows that single-threaded performance not only cannot improve further, but has arguably already gotten too fast. Truly solving Spectre might require turning off most of the tricks CPU makers have been using to improve single-threaded performance, and resurrecting the Cell CPU experiment of demanding that all software implement explicit concurrency and cache management.
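To make the “turning off the tricks” part concrete, here’s a hypothetical sketch (every name in it is made up for illustration) of the Spectre v1 bounds-check-bypass pattern and the branch-free index-masking workaround, conceptually the same trick as the Linux kernel’s array_index_nospec():

```cpp
// Hypothetical sketch – all names invented for illustration.
#include <stddef.h>
#include <stdint.h>

#define ARRAY_SIZE 16
static uint8_t array[ARRAY_SIZE];
static const char secret[] = "adjacent memory we must not leak";
static uint8_t probe[256 * 4096];   // cache side channel in the classic PoC

// Vulnerable: with an attacker-controlled idx, the CPU can speculate past the
// bounds check, load out of bounds (e.g. into `secret`), and leave a cache
// footprint in `probe` that survives after the misspeculation is squashed.
uint8_t victim_unsafe(size_t idx) {
    if (idx < ARRAY_SIZE)
        return probe[array[idx] * 4096];
    return 0;
}

// Mitigated: clamp the index with branch-free arithmetic so even a speculated
// load stays inside the array. (Real implementations, e.g. the kernel's
// array_index_mask_nospec(), use inline asm so the compiler can't fold the
// comparison away.)
uint8_t victim_masked(size_t idx) {
    if (idx < ARRAY_SIZE) {
        // all-ones when idx is in bounds, all-zeros on a mispredicted path
        size_t mask = (size_t)0 - (size_t)(idx < ARRAY_SIZE);
        idx &= mask;
        return probe[array[idx] * 4096];
    }
    return 0;
}
```

the masking itself is cheap; the expensive part is that fixes like this, plus the heavier fence/retpoline/flush-style mitigations, now have to be threaded through software by hand instead of the hardware just being trusted to sort it out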

well then, here’s to even more GPU code paths!

2 Likes

I do feel for everyone trying to gin up excitement for upgrades on the basis of like, more PCIe lanes, because that’s not historically the most exciting thing in the world

it also has the side effect of pushing Linux back toward second-class citizenship, as more and more software is flat-out bad without the optional hardware-acceleration toggles that never get wired up for stability reasons

Also, there’s a lot of room between “single-threaded” and “embarrassingly parallel”. Nvidia, as they add capabilities, is gradually moving down the parallelism continuum (actually Nvidia GPUs are already much less parallel than Google’s TPUs, for example), and CPUs are moving up (by adding cores).

So I’m not predicting exactly “even more GPU code paths” but a more dynamic mix, and a step back from the “general-purpose, never break compatibility, economy of scale” paradigm that Intel established in the 70s.

yeah, I think the interesting question right now is whether CUDA can maintain or grow its (relative) general purpose dominance