here it is
this stupid chip
why is it a 100w desktop part? why in the world would they do this and not make a 25w notebook chip?
I’m gonna get some big workplace accident sandwich boards and change them to say “it’s been ___ days since intel has failed to adopt LPDDR4”
(it’s going to end up being 3+ years)
hey what’s the good/least bad cloud syncing/storage thing now, pref. cross platform at least as far as ios/osx/win
seems to be a bizarre space where icloud is by far the cheapest
iCloud is terrible about selective sync and is as vaguely unpleasant as all Apple software is on Windows, but it's unusually well-priced (mostly in that it has a paid tier below the $10/mo/1TB option, which most other services don't), and in terms of trustworthiness it's probably on par with SpiderOak, definitely ahead of Google/Dropbox/Microsoft
I’m not sure what the point was of this whole venture with AMD.
based on some quick googling around, the performance of this thing is gonna be somewhere around Radeon HD 7790 or GeForce GTX 750-ish.
Thing is, Intel already has integrated graphics about that good in the CPUs whose names end in "C" (starting with Broadwell C), and that stuff has been around for a couple of years. So A: why not use it more? and/or B: surely by now they could have an even better one?
AND…Intel announced they are getting into making discrete GPUs.
why is Intel even bothering with this? I can see it for AMD. One way or another, it's more money for them. Even though it sits in weird conflict with the upcoming Ryzen APUs, there are always gonna be market segments that will only buy Intel. But I don't see the reason for this move on Intel's part. They have a potent graphics part ready to go, with a solid track record, and the eDRAM makes the CPU side perform better too, because it acts as an additional, large cache. Broadwell C actually outperforms Skylake a little bit in cache-heavy scenarios (at least when the graphics side of the Broadwell C chip is unutilized).
I think it’s about the “by now they could have a better one” part – they could’ve, and they don’t. the few years when Intel’s iGPUs were relatively impressive (2012-2014) were largely about pre-Maxwell performance-per-watt, saying nothing of Pascal, and they were mostly impressive for how OK-ish they were at last-gen stuff and in laptops, not current high-end titles. they haven’t really gone anywhere since then; the Kaby chips mostly ship with mid-tier GPU configs that are about equal to the high-end Haswell and Broadwell parts, except there is no high-end successor, presumably because Intel doesn’t want to make one (because the R&D is tougher or whatever). they should be able to ship a ~1 TFLOP GPU in a ~25w package by now on current process technology, but AMD has done a lot more of the work to get there since 2014 than Intel has.
I mean, we don’t actually know until benchmarks, but my main point is that this doesn’t look like it’s gonna be meaningfully better than the graphics in Broadwell C and Skylake C.
*Iris Pro 6200 in Broadwell C. not sure if the name changed in Skylake.
**ok they have Iris Pro Graphics 580 in the i7-6770HQ, which is in the Intel NUC Skull Canyon product:
https://www.newegg.com/Product/Product.aspx?Item=N82E16856102166&cm_re=intel_nuc--56-102-166--Product
sometimes I just want to wade out into the water and let that riptide take me up to the skylake
those were like half a teraflop, and the 580 (which was more like 2/3) was only in 45w chips. that’s way behind where Nvidia is!
Iris Pro 580 is about on par with a GTX 750 in real-world performance, provided you don’t saturate its texturing ability, since it doesn’t have a large dedicated VRAM pool like a discrete 750 does. The 750 does still lead in some scenarios (a gap that could probably be closed somewhat with better driver support from Intel). The point is that Intel’s stuff was no slouch, really, and that was in a low-wattage package. Imagine an Iris Pro 580 clocked up in a higher-wattage setup.
And I’m just wondering what the point is here. Even if it’s “they don’t have anything newer”, this AMD part doesn’t seem to be meaningfully better than Iris Pro 580. It seems like Intel could clock up Iris, add more eDRAM, and be sitting pretty well. But real-world performance of the final product may prove to be better.
again it’s like half the efficiency of Pascal at this point, and AMD has come closer to matching that at low clocks than Intel has. as a practical matter, 50% more performance for 50% less wattage than a 580 is the difference between being able to run PS4 ports on a 13" notebook and not.
Meltdown/Spectre: probably backdoors or definitely backdoors?
not that likely imo, they’re too weird to be deliberate. both highly academic vulnerabilities.
if you’re gonna code a backdoor you don’t do it like this
honestly they’re so difficult to exploit consistently I have my doubts about the performance penalty being permanently merged upstream but
it’s likely that someone at Intel knew about the flaw at some point but I don’t think it’s that unlikely that they’ve gone unexploited for 20 years
I don’t think they’re backdoors, the mechanism of action seems like a fairly natural consequence of speculative execution
yeah like in real terms this is definitely less of an issue than, say, desktop OSes allowing any closed-source software to just straight up guess the paths to and read other files on your machine that your user has access to
I kind of wish it hadn’t been reported but I know at that point I’m siding with the silent paternalism of the Intel engineer who decided it was an acceptable tradeoff however long ago
Timing attacks used to be considered like an interesting theoretical security issue but totally impractical in reality.
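not to get all textbook about it, but this is roughly the primitive people mean by “timing attack” here: you tell whether a line is in the cache by how fast a load is, and speculative execution is what puts the secret-dependent line into the cache. a toy sketch of just the measurement part (illustrative only, not from any of the real PoCs, names are mine; x86-64 with gcc/clang):
```c
/* toy cache-timing demo: the side channel Meltdown/Spectre use to get data
   out. measures how much faster a cached read is than one that was just
   flushed -- the same timing gap a speculative access leaves behind.
   not an exploit, just the measurement primitive. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

static uint8_t probe[4096];

/* time a single read of *addr in TSC cycles */
static uint64_t time_read(volatile uint8_t *addr) {
    unsigned aux;
    _mm_mfence();
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                      /* the load we're timing */
    uint64_t end = __rdtscp(&aux);
    _mm_mfence();
    return end - start;
}

int main(void) {
    volatile uint8_t *p = probe;

    (void)*p;                         /* warm it: now in cache */
    uint64_t hot = time_read(p);

    _mm_clflush(probe);               /* evict the line */
    _mm_mfence();
    uint64_t cold = time_read(p);

    /* typically cold is a few hundred cycles and hot a few dozen; an
       attacker uses that gap to tell which cache line speculative code
       touched, and therefore what the secret byte was. */
    printf("cached: %llu cycles, flushed: %llu cycles\n",
           (unsigned long long)hot, (unsigned long long)cold);
    return 0;
}
```
the real PoCs just do this over a whole array of probe lines and pick the fast one, which is the flush+reload part. the “academic” bit is getting the speculative access to land reliably, not the timing.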
I’d be surprised if the NSA didn’t know about this. Three different security teams independently discovered these issues around the same time last year. That’s just because there’s been interest lately in CPU attacks, and they weren’t so hard to find when people started seriously looking.
That PS4 crashdump exploit is fantastic.
the first chip, with the Vega GL, seems whatever, but the second one, the GH, seems like the good stuff, as it’s putting out roughly GTX 1050 performance
just put out a desktop version. don’t keep it to mobile OEMs and NUCs.
these are … goofy. they still have the Intel GPU for switchable graphics???
