ehhhh
haha oops!
someone who knows what a null pointer exception is and lacks a weary contempt for computers - this person must be under 25
this thing seriously hurts my soul
toying with the ai to see if i can create a videogame video essayist with heretofore unseen levels of cringe:
after this i asked it to revise it by adding some mild left-wing humor and almost died as a result
i recommend destroying this technology
Longer version here:
But someone in the comments points out that this seems fake, or at least not what it purports to be, because the shadows for the houses appear before they mention houses.
Edit: Same with the trees in the very beginning.
ok come on
the more extreme the claim, the more likely it’s bullshit
give me funding give me funding give me funding give me funding give me funding give me funding give me funding give me funding give me funding give me funding give me funding give me funding give me funding
realistically what they have created is a means of turning arbitrary data into plausible-seeming text
the 30-page paper is a little too dense for me to decide for myself how cherry-picked the examples given in the vice article (lol) are. i'm at the intersection of being pretty aware of how research grifting works (quit a job over it) and easily impressed by domain-specific parlor tricks
I don’t think it’s arbitrary – there are similar projects attempting to map visual activity to images, and they’re generating similarly ok-ish results. It’s a straightforward concept that’s been chased for decades: correlate neural activity to sensory stimulation. The real gain is getting more sophisticated pattern matching through AI. The misleading flashy part is the invented output fidelity of the image or text model. A popular news writeup like this one is just the type to be suckered by that second half.
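To make the "straightforward concept" concrete, here's a toy sketch of the honest half of these results – all data and shapes invented, nothing from the actual paper, and it assumes numpy/scikit-learn with a CLIP-style embedding standing in for the stimulus features. You regress features against scans, then score how well held-out scans decode:

```python
# Toy sketch: learn a linear map between (fake) brain scans and (fake)
# stimulus features, then check how well held-out scans decode. All
# shapes and data here are invented for illustration; real studies use
# fMRI voxel responses and learned image/text embeddings.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_scans, n_voxels, n_features = 200, 2000, 512
stimulus_features = rng.normal(size=(n_scans, n_features))  # CLIP-ish stand-in
true_map = rng.normal(size=(n_features, n_voxels))
brain_scans = stimulus_features @ true_map + rng.normal(scale=10.0, size=(n_scans, n_voxels))

# Fit the decoding direction: voxels -> stimulus features.
decoder = Ridge(alpha=1.0).fit(brain_scans[:150], stimulus_features[:150])
decoded = decoder.predict(brain_scans[150:])

# Cosine similarity between decoded and true features on held-out scans.
# This number is the actual finding; rendering the decoded vector into a
# pretty picture or a fluent sentence is the generative model's doing.
cos = (decoded * stimulus_features[150:]).sum(axis=1) / (
    np.linalg.norm(decoded, axis=1) * np.linalg.norm(stimulus_features[150:], axis=1)
)
print(f"mean cosine similarity on held-out scans: {cos.mean():.2f}")
```

The similarity score is the part the scans earn; everything downstream of the decoded vector is borrowed fluency.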
A paper from last year looked at image matching from prompts, combining visual and language scans.
As you can see from the variance of the output, there’s not a lot of data being pulled, but I really don’t think it’s arbitrary
They model the relationship as image prompt → brain scan (visual and language) → image model prompt – implying a third mapping, to the ‘shared culture assumed by models’, which is a real confounding factor to think about.
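For what it's worth, here's a toy of that three-stage framing – every function, dimension, and the caption bank below is made up, standing in for much heavier models – just to show where the confound enters: the decoder can only hand back something from the caption/image model's learned distribution, so the output looks cultured and coherent no matter how weak the scan signal is.

```python
# Toy pipeline: stimulus -> noisy brain scan -> decoded prompt.
# Everything here is invented for illustration; real systems use fMRI
# data and models trained on millions of image-text pairs -- which is
# exactly the shared-culture prior.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical caption bank standing in for the image model's training culture.
caption_bank = ["a house on a hill", "a dog on a beach", "a city at night"]
caption_embeddings = rng.normal(size=(len(caption_bank), 64))

def brain_scan(stimulus_embedding):
    """Subject views the stimulus; we record a noisy scan of it."""
    return stimulus_embedding + rng.normal(scale=2.0, size=stimulus_embedding.shape)

def decode_to_prompt(scan):
    """Map the scan to the nearest caption the models already know."""
    sims = caption_embeddings @ scan
    return caption_bank[int(np.argmax(sims))]

stimulus = caption_embeddings[0]  # subject is shown "a house on a hill"
print(decode_to_prompt(brain_scan(stimulus)))
# At this noise level it usually recovers the right caption; crank the
# noise up and it still returns *some* fluent caption with full
# confidence -- the invented-fidelity problem in miniature.
```

The decoder never says "I don't know"; it always lands on a plausible entry from the bank, which is why the pretty outputs oversell what's actually in the scan.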