Robot: You have 20 seconds to comply ("AI")

ehhhh

haha oops!

me: OK, I will never read/watch any artwork after 2022

someone who knows what a null pointer exception is and lacks a weary contempt for computers - this person must be under 25

this thing seriously hurts my soul

Just some fun with ChatGPT

toying with the ai to see if i can create a videogame video essayist with heretofore unseen levels of cringe:

after this i asked it to revise it by adding some mild left-wing humor and almost died as a result

i recommend destroying this technology

Longer version here:

But someone in the comments points out that this seems fake, or at least not what it purports to be, because the shadows for the houses appear before they mention houses.

Edit: Same with the trees in the very beginning.

ok come on

the more extreme the claim, the more likely it’s bullshit

give me funding give me funding give me funding give me funding give me funding give me funding give me funding give me funding give me funding give me funding give me funding give me funding give me funding

realistically what they have created is a means of turning arbitrary data into plausible-seeming text

the 30-page paper is a little too dense for me to decide for myself how cherry-picked the examples given in the Vice article (lol) are. i'm at the intersection of being pretty aware of how research grifting works (quit a job over it) and easily impressed by domain-specific parlor tricks

I don’t think it’s arbitrary – there are similar projects attempting to map visual activity to images and they’re generating similarly ok-ish results. It’s a straightforward concept that’s been chased for decades: correlate neural activity to sensory stimulation. The real gain is getting more sophisticated pattern matching through AI. The unhelpful flash is the invented output fidelity of the image or text model. A popular news writeup like this one is just the type to be suckered by that second half.

A paper from last year looked at image matching from prompts, combining visual and language scans.

As you can see from the variance of the output, there’s not a lot of data actually being pulled, but I really don’t think it’s arbitrary.

They model the relationship as image prompt → brain scan (visual and language) → image model prompt, implying there’s a third mapping, to the ‘shared culture assumed by models’, which is a real confounding factor to think about.
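
For anyone curious what that middle arrow usually amounts to: here’s a minimal sketch on synthetic data, assuming the common setup in these papers (a ridge regression from voxel activity to a CLIP-style stimulus embedding, evaluated by retrieval). The shapes, noise levels, and names below are all illustrative assumptions, not anything from this particular paper.

```python
# Illustrative sketch of the decoding step, on synthetic data (assumed
# setup, not this paper's actual pipeline). A ridge regression maps fMRI
# voxel activity to a stimulus embedding; "decoding" here is just picking
# the candidate whose true embedding is closest to the prediction.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_voxels, embed_dim = 400, 5000, 512

# Pretend each stimulus has a ground-truth embedding (e.g. from a
# CLIP-like model) and voxel activity is a noisy linear function of it.
true_embeddings = rng.standard_normal((n_trials, embed_dim))
mixing = rng.standard_normal((embed_dim, n_voxels)) / np.sqrt(embed_dim)
voxels = true_embeddings @ mixing + 2.0 * rng.standard_normal((n_trials, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(
    voxels, true_embeddings, test_size=0.2, random_state=0
)

# The "sophisticated pattern matching" is, at its core, a regularized
# linear map from brain activity into the embedding space.
decoder = Ridge(alpha=1e4).fit(X_train, y_train)
pred = decoder.predict(X_test)

# Evaluate by retrieval: is the predicted embedding closer to the true
# stimulus than to every other held-out candidate?
def normalize(a):
    return a / np.linalg.norm(a, axis=1, keepdims=True)

sims = normalize(pred) @ normalize(y_test).T  # cosine similarities
top1 = (sims.argmax(axis=1) == np.arange(len(y_test))).mean()
print(f"top-1 retrieval accuracy: {top1:.2%} (chance: {1 / len(y_test):.2%})")
```

Retrieval accuracy above chance is the honest result in this kind of setup; the photorealistic images in the writeups come from handing a predicted embedding like this to an image model, which invents most of the apparent fidelity.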
