are you saying you want an AMD GPU generating heat at all times
I guess if only Nvidia can make a GPU with decent thermals then I would rather they work with Nvidia
really I just want a mobile haswell successor in a Macbook whose keyboard isn’t an insult to god
guess I’m waiting until 2019 at this point, though I wonder if they’ve already taped that out prior to hardware-level KPTI fixes
honestly I’m surprised Intel can even still get people to pay attention to their 14nm LPDDR3 products
that said god help me if my Z77 ITX mobo ever fails
oh hey, GPU prices are getting dumb again
this might actually help out a bunch of people if they end up releasing a platform to go along with it
I’m just not convinced that constantly redefining what constitutes a mid-end part and releasing a bunch of lateral architecture upgrades is a satisfying solution to regular-ass undersupply
undersupply isn’t the main issue, crypto is
also the mid-range has been consistently defined as high-1080p/high refresh rate and low/medium-1440p since Pascal, it’s not the market’s fault that AMD can’t deliver
Volta waiting room
well, crypto is creating the supply bottleneck. same thing.
I’m also not convinced that Volta exists as a product? Maxwell was a massive architectural improvement, Pascal was a massive process shrink (with minor HEVC/VR improvements), I’m not really expecting Nvidia to be able to do either of those again before 2020. they’re just going to be refining that same process a little bit, and I’m not sure how much that’ll help prices.
agree about AMD though, Nvidia is basically in the position Intel was a decade ago right now
the market can bear another 20-30% jump in performance, provided Nvidia can turn the Titan V into a gaming ready product
of course, since Vega is a non-starter, there’s not really a reason to, so
I wonder if the likely switch upgrade will use a Titan V-comparable process or just ship the X2 since they’re already half a generation behind
Having switchable graphics between the Intel graphics and the AMD chip is cool. You can save power, and Intel QuickSync has some great uses; that runs off the Intel graphics portion.
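To illustrate the QuickSync point, here is a minimal sketch (assuming an ffmpeg build with the h264_qsv encoder, and placeholder input/output file names) of pushing an H.264 encode onto the Intel iGPU so the AMD chip can stay asleep:

```python
# Minimal sketch: offload an H.264 encode to Intel QuickSync via ffmpeg.
# Assumes an ffmpeg build with the h264_qsv encoder available; the file
# names are placeholders. The discrete GPU never has to wake up for this.
import subprocess

def quicksync_encode(src: str, dst: str, bitrate: str = "5M") -> None:
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,            # source file (placeholder name)
            "-c:v", "h264_qsv",   # QuickSync H.264 encoder on the iGPU
            "-b:v", bitrate,      # target video bitrate
            dst,
        ],
        check=True,
    )

if __name__ == "__main__":
    quicksync_encode("input.mp4", "output.mp4")
```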
As far as AMD vs Nvidia: while it would of course be wonderful if AMD could compete with Nvidia on all levels, they don’t need to be competitive at the ultra high end. At the mid-high end, Vega is a fine product. We could quibble about power usage and whatnot, but every single card is being sold. People want them. They are good cards. And AMD is doing good stuff with Vulkan and whatnot.
But then we get down to the mid-range, and the 470/480/570/580 are still the best cards in their segments, even if only because AMD doesn’t do dumb shit with VRAM like Nvidia does.
I expect AMD will become more competitive from here. I think the overall success and enthusiasm for Ryzen has probably re-invigorated the company’s spirit as well as their finances (and they might be about to sell a ton of Epyc and Threadripper products, if Intel-based servers turn out to be meaningfully crippled by the forthcoming patches). They are still in two of the main consoles, and I don’t expect Nvidia to beat them out for Sony or Microsoft next gen, either (and I think the only reason Nvidia got the Switch is because AMD simply does not have a product to put in it). Their graphics division has a new lead, and Intel recently announced they are gonna compete with discrete graphics cards, too.
thermals are actually really important for GPUs. there aren’t that many other meaningful differentiators. AMD can’t be a generation behind indefinitely. Vega is actually not a fine product if it pretends to be a mid-end chip while needing ultra-high-end thermals; that prevents them from making money on it, shipping it in the PS5, or doing anything other than selling a cut-rate, loud desktop chip that basically won’t work in anything but full ATX builds. the 480 and derivatives are not really good at anything but being relatively cheap post-Pascal. saving power with switchable graphics shouldn’t really be a consideration in <50W packages; you should just have the one GPU.
one way to look at it is that right now AMD has the benefit of the Pascal shrink but not the Maxwell improvements that preceded it; they got their 14nm product out ASAP but it’s at most 60% as good as Nvidia’s, and GPUs functionally have no limitations these days that aren’t thermal. if you can’t put up ~1TFLOP/25W, that is much worse than the competition. I’m going to be frustrated if the next laptop I get with approximately this thermal ceiling can’t get there.
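to put rough numbers on the perf-per-watt framing, here’s a back-of-the-envelope sketch; the shader count and clock below are made-up illustrative figures, not anyone’s spec sheet:

```python
# Back-of-the-envelope perf/W: peak FP32 = 2 ops (one FMA) x shader count x clock.
# The shader count, clock, and power envelope below are illustrative assumptions,
# not spec-sheet values.

def peak_tflops(shaders: int, clock_ghz: float) -> float:
    """Theoretical FP32 throughput in TFLOPS (one FMA = 2 ops per shader per cycle)."""
    return 2 * shaders * clock_ghz / 1000.0

BAR_TFLOPS_PER_WATT = 1.0 / 25.0  # the "~1 TFLOP per 25 W" floor mentioned above

# Hypothetical 25 W mobile part: 1024 shaders at 1.1 GHz.
tf = peak_tflops(1024, 1.1)
print(f"{tf:.2f} TFLOPS, {tf / 25:.3f} TFLOPS/W, clears bar: {tf / 25 >= BAR_TFLOPS_PER_WATT}")
```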
And laptops are explicitly worst-case for AMD. I wonder if Nvidia is just weirdly aggressive in these negotiations because they keep getting passed over in potentially large package deals like consoles and this…thing.
well, AMD’s entire product line for years seemed to be leading up to the PS4 in hindsight, in the same way the Tegra platform was a perfect fit for the switch at last, but I agree that this chip is a particularly weird case because at least as announced it’s not really leveraging any of AMD’s strengths. Nvidia probably has some expectation of higher margins but honestly most of Intel’s desktop lineup since 2014 has been a hobby horse so
These are the specs for AMD’s first Ryzen CPU/Vega GPU based laptop APUs, which are in actual products, right now.
I suspect AMD’s issues with power efficiency at the upper end are a fab/silicon issue. And by that I mean they seem to have trouble getting silicon which will clock high without needing a lot of power, which probably ultimately comes down to R&D money. I have no doubt they can get a good power balance from a lower-end chip or a mobile chip. GCN is so mature, it would be stupid if they couldn’t. And I mean, they are doing it right there, in that product chart. AMD’s issues seem to be mainly at the high end of discrete graphics cards.
The GPU architecture for Vega is indeed still based on GCN. Albeit matured, it’s like 5 years old now. And that’s the reason they need so much power for the 580 (Polaris) and for Vega. There is no new performance; it’s the same old architecture, clocked higher. Clock for clock, it’s only 8% better, at best, than the R9 Fury X.
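To be clear about what "clock for clock" means, here is a quick normalization sketch; the benchmark scores and clocks below are placeholder assumptions, not measured numbers:

```python
# Clock-for-clock comparison: normalize a benchmark score by core clock, then compare.
# The scores and clocks below are placeholder assumptions, not measurements.

def per_clock(score: float, clock_mhz: float) -> float:
    """Benchmark score per MHz of core clock."""
    return score / clock_mhz

# Hypothetical: old card scores 100 at 1050 MHz, new card scores 155 at 1500 MHz.
old = per_clock(100.0, 1050.0)
new = per_clock(155.0, 1500.0)
gain = (new / old - 1) * 100
print(f"clock-for-clock gain: {gain:.1f}%")  # ~8.5% with these placeholder numbers
```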
I suspect the reason GCN has been used for so long is A. R&D money issues, but also B. mismanagement by AMD’s previous graphics lead, whom they just replaced about two months ago. I have heard insider reports that morale is at an all-time high after the successes of Ryzen.
GCN still works, though. It’s not quite the same scenario as the CPU side, where their architecture simply could not compete and feeding it tons of power still couldn’t help it. GCN is still a relevant architecture. AMD just needs better silicon out of the fab, at the least.
Vega competes at the $500 level. Polaris competes at the $180 - $250 level. And AMD’s investments into DX12 and Vulkan see their cards with nice gains under those APIs. Imagine what a proper new architecture could do.
Thermals are only gonna be an issue if someone tries to dump a Vega into an inadequate airflow scenario. All reports I’ve seen show that the heatsinks shipping on Vegas are adequate and the average temps are comparable to Nvidia. Where AMD is missing out is ITX customers who use the smallest cases possible. AMD’s cards are simply too large, before you even consider heat. You can’t even get a small form factor 570/580, while Nvidia has like three relevant performance levels of cards at 6-7 inches. I’m not sure a 570 NEEDS to be 11 inches. I wonder if AMD’s previous management screwed that up.
However, many people doing high-end ITX are buying cutting-edge cases from small builders anyway. These cases are designed to accept the largest cards and still be smaller than a lot of the mass market options. They do that by putting the GPU on the underside of the mobo, in its own isolated space, connected back to the PCI-E port with a riser cable. So the GPU is not sharing air space or physical space with the rest of the computer, and the case can be exactly the correct dimensions to accommodate a large card. Needing a smaller card is mainly a limitation of cases designed for traditionally mounting a GPU directly to the PCI-E port, and even those limitations have been mitigated a lot by SFX power supplies.
and in those 100W parts Intel is making with AMD: Intel says the best one should outpace a GTX 1060 mobile part.
whew, that’d be impressive for that space (or rather lack of).
It’s great
I ain’t paying a thousand bucks for a NUC
not to dunk on you too hard but I think you’re overestimating the likelihood of “bad silicon” (they use the same fabs as everyone else; it’s an architectural disadvantage plain and simple), underestimating the extent to which thermals are everything in modern GPU architectures (you get as much power out of it as the form factor allows, period; ventilation is a solved problem and Nvidia could make a 375w GPU that was 50% more powerful than their 250w GPU if their 250w GPU wasn’t already selling for $1000), and attributing too much to company morale. I’m not betting against AMD here I just feel like we’re arguing the laws of physics.
yeah the NUCs have seemed like terrible products ever since they started trying to position them as anything other than low-end – basically a microcosm of everything disappointing about Intel’s consumer offerings over the past few years. it’s just difficult to imagine there’s actually a market for these things; they’re like prestige products if anyone actually wanted to make prestige products for a segment of … mid-end PC gamers who aren’t invested in being able to upgrade their own hardware???