Welcome to the Library of Babel. Having dug too close to hell, our punishment this time is to speak gibberish even to ourselves.
OK, this is the one that makes me think this is bullshit. Despite the nightmare movement of the plastic chair, somehow blue-shirt person’s free-hanging hair remains glitch-free for the entire video. I think this is based on a very small dataset built specifically for these prompts, and it’s merging a few mostly unedited video clips together and then doing some adaptive tweening, like Microsoft Movie Maker’s animated-GIF effect applied to the still-image focus of the video.
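(by “adaptive tweening” i just mean the dumbest possible version: crossfading pixel values between the last frame of one clip and the first frame of the next. a minimal sketch, assuming frames are numpy arrays; the clip names and frame counts are made up:)

```python
# crossfade "tweening" between two clips: linearly blend the last frame of
# clip A into the first frame of clip B. purely illustrative.
import numpy as np

def crossfade(frame_a: np.ndarray, frame_b: np.ndarray, n_tween: int):
    """Yield n_tween interpolated frames between frame_a and frame_b."""
    for i in range(1, n_tween + 1):
        t = i / (n_tween + 1)  # blend weight ramps from ~0 to ~1
        blended = (1 - t) * frame_a.astype(float) + t * frame_b.astype(float)
        yield blended.astype(frame_a.dtype)

# e.g. stitch two "mostly unedited" clips with a half-second tween at 24 fps:
# stitched = list(clip_a) + list(crossfade(clip_a[-1], clip_b[0], 12)) + list(clip_b)
```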
it would explain why everyone in that video looks like they’re taking part in a hoax that went wrong when the chair started floating.
This is likely what they did, based on how much another video that looked remarkably coherent has in common with a Shutterstock video:
https://twitter.com/bcmerchant/status/1758537510618304669
It’s the equivalent of a bullshot in video game journalism terminology: technically true, in that it is footage generated by an algorithm, but only in the slimmest of ways, none of which would hold up to any serious demand on the service.
wow this rules, this is the sort of thing i liked about computers probabilistically generating writing back before it got useful, boring, evil, etc
i got excited and queried chatgpt a few times just now and it was its usual boring self though, sad
i miss markov chains, no markov chain text generator ever gave content moderators ptsd
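(for the uninitiated: a whole word-level markov chain generator fits in a few lines. the toy corpus in this sketch is a placeholder, feed it anything:)

```python
# tiny word-level markov chain text generator, the nostalgic toy in question
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each `order`-word prefix to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def babble(chain, length=30):
    """Random-walk the chain to produce pleasantly meaningless text."""
    key = random.choice(list(chain))
    out = list(key)
    for _ in range(length):
        followers = chain.get(key)
        if not followers:                     # dead end: hop to a random prefix
            key = random.choice(list(chain))
            followers = chain[key]
        out.append(random.choice(followers))
        key = tuple(out[-len(key):])
    return " ".join(out)

corpus = "the chair floats and the chair melts and the shirt becomes a neck"
print(babble(build_chain(corpus)))
```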
Schizophrenia and CPTSD-induced psychosis both produce similar word patterns
If you accept low enough AI quality, these can be the exact same jobs people already perform, except the company no longer even has to have them sign a work-for-hire agreement to deny that they legally created anything instead of the company; and since that isn’t a right you can negotiate over anymore, the floor for wages gets dropped even further.
Something that gets thrown on a NERV monitor while an angel pounds the shit out of Shinji
yeah there’s something very moreish about this
I imagine this is what reading Finnegans Wake is like.
my theory is that they’re experimenting with throttling the processing power that chatgpt uses; probably doing a bunch of a/b testing to see how much they can get away with too. for a minute there it was chopping responses short, like if you asked for 100 things it’d give you 20 and say “you can figure out the rest buddy.”
it’s the typical startup playbook: provide a great service at a loss, then raise prices and make the service worse/cheaper to run. of course there is no way to provide this service at a reasonable level for anything BUT a loss, especially because the labor abuses available to them have mostly already happened, i.e. there are no drivers/contractors left to start shortchanging like Uber and the other tech ghouls they’ve modeled themselves after.
so they give less money to the computers that act as the underpaid laborers and the level of service decreases towards unusability because you can’t fudge this stuff as much without humans to abuse.
yeah this is to finnegans wake as sawdust is to cornmeal
these generative ai video horrors have me wondering why a better approach hasn’t been to use something like generative 3D modeling, drawing from a bank of skeletons/objects/things with recognizably human shapes etc., which then gets ‘upscaled’ and ‘detailed’ by the models with a finishing coat of paint, rather than generating terrifying nonsensical frame after terrifying nonsensical frame
not that i want that either, or have any idea how that would work techwise outside of the most basic abstraction, but it seems to my basic ass like a better way to create images and videos where people don’t have extra appendages or people’s necks don’t become their shirts or whatever
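(for still images at least, something in this spirit already exists: ControlNet conditions an image model on a rendered pose skeleton, so the limb count gets decided before the model paints anything. a minimal sketch, assuming the hugging face diffusers library; the checkpoint names are real, the pose image path is a placeholder:)

```python
# render a stick-figure pose from your rig bank, then let the image model add
# the "finishing coat of paint" on top of that fixed anatomy
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

pose = Image.open("pose_skeleton.png")  # placeholder: an OpenPose-style skeleton render

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# the skeleton pins down where the limbs go; the model only fills in surfaces
frame = pipe(
    "person in a blue shirt sitting in a plastic chair",
    image=pose,
    num_inference_steps=20,
).images[0]
frame.save("detailed_frame.png")
```

doing that consistently across frames is the part nobody has nailed yet, which is presumably why the video stuff still melts.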
I almost suspended you for this
I meant what it’s like for ME, an IDIOT, to try and read it!
(gonna be a poser about it from now on though)
no pap, it’s pure polentry
pure polenta