Unforeseen consequence of new keyboard - laptop keys aren’t ortholinear so now I’m making errors when I’m taking notes. This is how it starts. I’m gonna be pulling this thing out of a rolling briefcase before I know it. (Now imagining some kind of 3D-printed iPad mount that sits in portrait mode between the key wells…)
I hear all that! My day-to-day includes long encodes etc.; the point of running Prime95 was to heat up the CPU as quickly as possible to check for early faults.
I dunno where else to put this somewhat interesting note:
Dark Souls 2: Scholar of the First Sin eventually crashes on my laptop when I have hyperthreading turned off (so it’s just a dual core / 2 threads). You can play just fine for 30 minutes to an hour, but it will lock up at some point and crash.
With hyperthreading turned on (2 cores, 4 threads), you can play all night.
P.S. framerates are basically the same. Based on some benchmarking I did, 2 cores/2 threads might average 1 frame less.
*note, I’m not looking to troubleshoot this. Just file it under “Huh”.
that really sounds weird, huh.
Do you have some temp readings for both scenarios? It’d be interesting to see whether hyper-threading puts more stress on the CPU/causes higher temps, or if that doesn’t even make a difference.
It’s sort of a known rule that hyperthreading does put more stress on the CPU and does create more heat.
However, with Dark Souls 2, my laptop’s CPU is already under high utilization and it’s already at the thermal dissipation limits of the cooling solution. So the temp difference isn’t really noticeable. I haven’t actually benched the temps, but I have them constantly showing in the corner of my screen. Hyperthreading OFF doesn’t seem to run much cooler, if at all, in this case.
I’m guessing Dark Souls 2: Scholar of The First Sin requires more than 2 threads but doesn’t actually check your system. So you can run it, and with only 2 threads available it eventually faults and crashes. But, just a guess.
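If that guess is right, the missing guard would be tiny. A hypothetical sketch of the check a port could do at startup (`MIN_THREADS` and the message are made up for illustration, not taken from the actual game):

```python
import os

MIN_THREADS = 3  # hypothetical requirement, per the guess above

# Logical CPUs the OS reports (2 with HT off, 4 with HT on here).
available = os.cpu_count()

if available is not None and available < MIN_THREADS:
    # A port that bothered to check would warn or refuse to launch here,
    # instead of running for an hour and then locking up.
    print(f"warning: only {available} threads, {MIN_THREADS} required")
```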
Hmmm, I try to approach this from the point of view of an embedded developer - if you are writing code that is or can be multi-threaded, it shouldn’t make any difference whether it’s just two cores or two cores that hyper-thread.
If it does, though, that could mean either that they fucked up with regard to e.g. guards/semaphores or memory usage, which would cause a segfault/the application crashing, or that the HW/CPU has an issue, e.g. cache handling or pipeline issues related to assets, that causes the crashing you experience.
The latter would mean that in the long run I’d expect crashes during hour-long sessions too, tbh… so all in all, sounds a bit weird; clearly only fitting for a weird/mysterious Souls game!
my guess is there’s some communication between processes/threads that is very fast when they’re co-located on the same physical core and racy as hell when they’re on separate ones
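That kind of race can be sketched in a few lines of Python (a toy lost-update example, nothing to do with the game’s actual code): four threads each read a shared counter before any of them writes back, so three of the increments vanish; the mutex the embedded folks would reach for makes the result deterministic.

```python
import threading

counter = 0
barrier = threading.Barrier(4)

def unsafe_add():
    # Read-modify-write with no lock: the classic lost update.
    global counter
    v = counter
    barrier.wait()   # force every thread to read before anyone writes
    counter = v + 1

threads = [threading.Thread(target=unsafe_add) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 1, not 4: three increments were lost

# Same workload with a mutex around the update: no lost writes.
safe_counter = 0
lock = threading.Lock()

def safe_add():
    global safe_counter
    with lock:
        safe_counter += 1

threads = [threading.Thread(target=safe_add) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(safe_counter)  # 4
```

The barrier just widens the race window so the bug fires every run; in real code the window is a few nanoseconds, which is exactly why it might only show up when threads land on separate cores.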
I dunno. Far Cry 4 requires 3 threads, so there is precedent in the industry for requiring more than 2. And Dark Souls is console-first for development, and the current consoles have very weak CPUs with a lot of threads. I don’t know what I’m talking about on a technical level, but I feel like there is some basis here for a PC port to not like a dual core CPU.
It could very well have something to do with the HD4000. Even though it tends to work in most games, most games don’t list it as viable/supported. However, I don’t have a way to connect a discrete GPU to this system to test that. But as you said, it could be some sort of pipeline issue which is subverted with more threads.
Gotta agree with Toptube that the HD4000 was some magical shit. There hasn’t been a mainstream CPU/iGPU/APU mobile solution with PS4/One level graphics yet, but when there is… it’s gonna be rad AF.
Considering that Dark Souls 1/2 worked on the PS3/360, and assuming that there is at least some shared middleware, there are proven examples of this team supporting significantly different multicore designs (thinking of CELL here specifically), so I would expect them to be on top of their game on the HW-related aspects, tbh.
Of course, having to compete with Windows for resources is a different beast, even for proven experts coming from the embedded/console domain. And tbh, that’s where I would start trying to pinpoint the issue*, if I were tasked with debugging it…
just checked whether Win10 supports locking applications to a core, and it indeed still seems to:
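For completeness, a sketch of doing the same pinning programmatically. `os.sched_setaffinity` is Linux-only, so this is just the cross-reference; on Windows the equivalents would be Task Manager’s “Set affinity”, `start /affinity`, or the `SetProcessAffinityMask` API:

```python
import os

# Pin the current process (pid 0 = self) to a single logical CPU.
# Linux-only API, so guard it; elsewhere we just note what the set
# would be. On Windows you'd use Task Manager's "Set affinity" or
# `start /affinity 1 game.exe` instead.
if hasattr(os, "sched_getaffinity"):
    first = min(os.sched_getaffinity(0))   # first CPU we're allowed on
    os.sched_setaffinity(0, {first})
    pinned = os.sched_getaffinity(0)
else:
    first = 0
    pinned = {first}

print(pinned)
```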
*: because I wouldn’t even want to consider trying to figure out whether the CPU has some design issues or microcode glitches, because that would - or rather: SHOULD - have been discovered way before and be as well known as the infamous Pentium bug, so… rather unlikely.
be still my heart, ice lake model numbers are actually starting to come out
Yeah, this is Win7, BTW. My desktop is Win10, but I never bothered to upgrade my laptop. I might soon, because I have an SSD which should probably take up residence in this laptop.
I’ve been doing a lot of bouncing around the internet trying to figure out ways to improve the temps and noise on my itx build and MAN what is the deal with this corner of the internet:
- Best-case scenario is that all of the information I want is either buried in some reddit thread from 2 years ago OR
- I have to watch some 10-minute video with an insipid white guy for the 15 seconds of data I need AND
- all of the videos have the worst titles and images that always combine some overly dramatic useless headline (“I built the ULTIMATE SFF with two 240mm AIO??”) with a picture of some guy looking incredulously at some PC component like this:
Who agreed to this method of communication, I want names
I’m actually getting excited for Computex this year, it really seems like Ryzen 3000 will be the first unequivocal successor to Sandy/Ivy on desktop and Ice Lake to Ivy/Haswell on laptops
finally getting mainstream CPUs that are baseline at least 3x as good as 7 years ago
the answer is pretty much down to what case you have
gonna drop Optimum Tech’s channel - he’s focused on ITX builds and has tons of videos reviewing different coolers and setups with most of the sandwich (read: expensive) cases
the SFFPC reddit’s beginners part guide
you’ll notice the lower profile Noctua coolers get rec’d as much as possible, within clearance limits
I pulled the trigger and bought an Ncase M1 this year after running ITX builds out of a Nano S for years, and that channel has a lot of information specifically on that case.
From what I’ve read, and just confirmed again with Optimum Tech, the general consensus seems to be to put two static pressure fans as intake under the GPU (or buy an Accelero III, I guess). But it doesn’t make sense to me to have 2 fans blowing up into a GPU that has two fans blowing down - then again, I’m not an expert on airflow physics by any means.
Ah well, I guess I’ll grab a few from Microcenter and abuse their extremely generous return policy if they don’t help.
I think if I were going to do a new build tomorrow I’d have to go with the Dan A4, it seems like the best mix of very small / can handle a 300w GPU / not excruciating to work in or unbelievably expensive
just on a lark I checked and
yeah I’d pay that given that desktop builds last like a decade now