i've decided that the reason normal people (i.e. not CEOs who are just trying to replace people with "good enough" solutions) keep getting excited about AI is because it's easier to understand than better, simpler solutions
for example, i had someone tell me yesterday that AI would be good for telling someone whether or not their input into a field makes sense. so if the field is looking for IDs with a specific format, then AI could tell you when you put the wrong kind of data in there
and i had to be like "yeah, that is true! it would be good at that. it would also cost exponentially more than a simple, one-line regex that would do the exact same thing with 100% reliability."
and i just keep having to pop those bubbles, over and over. it's easier to think about "telling a guy (computer) to do something" than "what solutions already exist, who can tell me about them, and what does it look like to make those."
the ironic thing is that you could probably get chatgpt to generate the regex string you need anyway! one interaction for a permanent solution. but you have to know to ask that question, how to ask it, how to implement it, and how to test it. so it's still only removing one piece of that puzzle (although a tool that makes it so i never have to write regex in my life is pretty good).
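to make the comparison concrete, here's roughly what that "one-line" solution looks like. the ID format is made up for illustration (three uppercase letters, a dash, six digits), since the original conversation didn't specify one:

```python
import re

# hypothetical ID format: three uppercase letters, a dash, six digits, e.g. "ABC-123456"
ID_PATTERN = re.compile(r"[A-Z]{3}-\d{6}")

def is_valid_id(value: str) -> bool:
    """Return True if the whole input matches the expected ID format."""
    return ID_PATTERN.fullmatch(value) is not None

print(is_valid_id("ABC-123456"))  # True
print(is_valid_id("abc-123456"))  # False
```

runs locally, costs nothing per call, and gives the same answer every time, which is the whole point of the comparison.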
anyway just pondering once again on why normal, boring people want to use ai at all when it's obviously (to me) an inferior solution in almost every example.
the most exciting AI application i've seen thus far personally is like an AI personal assistant that you grant access to all of your files/activity/schedule and which can help you navigate your priorities, remember appointments, assist with searching and filtering, etc. it needs to definitely not "hallucinate" the time/date on my 9AM monday meeting, or whatever, though. so, it might still be a ways out before we get something really robust like this. also, you'd clearly want to self-host (should become more feasible over time), as who would (explicitly) trust a tech giant with every crumb of minutiae about their life
yeah this would kick ass. i have no idea how reasonable this is rn to set up yourself, mostly because of the access thing. it's hard to connect tools. but even if it just was able to make API calls and then read the responses, like
you could do some cool stuff like this. and being able to interact with it in natural language? yes please
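the "make API calls and read the responses" loop is basically just tool dispatch. here's a toy sketch of the mechanism (everything in it is made up for illustration; a real assistant would use the model to pick the tool and arguments, and the calendar data would come from an actual API):

```python
# toy tool-dispatch loop: the assistant emits a structured request
# ("which tool, which arguments"), we run it, and feed the result back.

def get_calendar(day: str) -> list[str]:
    # stand-in for a real calendar API call; data is fake
    fake_data = {"monday": ["9AM standup", "2PM dentist"]}
    return fake_data.get(day, [])

# registry of tools the assistant is allowed to call
TOOLS = {"get_calendar": get_calendar}

def handle_request(tool_name: str, **kwargs):
    """Dispatch the assistant's structured request to a registered tool."""
    if tool_name not in TOOLS:
        return f"unknown tool: {tool_name}"
    return TOOLS[tool_name](**kwargs)

# asked "what's on monday?", the assistant would emit something like:
print(handle_request("get_calendar", day="monday"))  # ['9AM standup', '2PM dentist']
```

the hard part isn't this loop, it's exactly the access problem mentioned above: getting every tool you care about behind an interface like this.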
i would still not use it but that's more out of my general curmudgeonly need to have control over things, and my deep annoyance at every single notification i receive. but i think it would be a great use case
We are once again seeing a tech company strangling the reason to come and use their service, from both ends.
Websites are going to have less traffic headed their way and less incentive to be accurate and informative on the hop if that is the case. And as that goes away, searchers will get less and less useful information from Google.
I guess the same complaint can be and was made when Google decided to promote sites that bought ad space from them to the top. And it's following the trend of their search engine slowly becoming worse and worse to use over time, so it isn't new. So…
I think this is missing something obvious: people like novelty and the AI the public has been shown had a few neat magic tricks.
Like typing a description into something and having an AI generate a piece of "art" in any given style based upon said description is a neat trick! People seeing that and going "whoa, that's neat!" is pretty understandable, and probably why this most recent AI push led with it. It's since gone back a bit to "hey this thing is bad at details and steals from everyone", but individuals who have only paid so much attention to the specifics but remember this probably still think it's neat and full of other likely neat tricks.
Yes, I think it's possible to see the AI advances as exciting while also acknowledging the ethical issues, the potential for corporations finding more ways to make the world worse in pursuit of shareholder value, and even the potential for existential risk. I'd place myself in this group.
Personally, I'm still impressed with the art stuff in particular. Immoral it may be in some ways but I sometimes hear people call it boring or unoriginal (though this is of course true in a strict sense) and I just can't relate to that. You can really get some interesting results if you put in some effort and move beyond simply trying to make things that are funny, realistic, or in the style of whatever artist.
I feel like I've found value in using these tools beyond mere novelty (though there's plenty of that).
i've found a lot of luck using chatgpt for my esoteric programming needs. i have never been good at doing actual coding or math but i am good enough at the logic of pseudocode/visual scripting stuff that it works for me.
i find the "it's not real art" and "why don't you want to be actually creative and paint/draw" arguments against AI art to be pretty out of touch even though i fully agree that i feel my soul physically rotting away having to constantly have my internet experience suddenly filled with half remembered nightmares of photographs and [artstation anime babe hd big boob 4k unreal engine].
trying to explain to someone what the difference is between stable diffusion powered photo remixers like lensa vs faceapp touch ups vs a snapchat filter is kind of a losing game.
they are very fun and useful and unethical and cruel and making the world a worse place and i'm very pessimistic about the future
One of the few things I kind of still follow on social media is the type of account that just posts art or interesting pictures. Although I like playing with the image generators, I have a visceral negative reaction when an AI thing pops up on one of those art accounts. Even when it's a nice one.
Somehow it seems like the AI stuff should be in a separate category that you find only if you go looking for it, though I suppose that's a futile idea as the lines continue to blur. (I can imagine the time AI will save traditional artists as new applications are developed. Will a painting or drawing be "disqualified" in some sense if AI is used as an artist-guided shortcut for some aspects of it, based on an honor system? Seems unlikely.)
I've asked myself whether, were I to conclude that all the AI stuff is ultimately going to do more harm than good and should be "paused" or halted as some say (never going to happen), I should stop playing with any of it or sharing my positive experiences using it. I think the only result of that would be me missing out. And maybe angering Roko's basilisk, I guess.
Maybe it will end up being like motor vehicles. Ridiculously dangerous and causing constant injury and death (replace this with economic and other consequences of AI) but valued by society enough to keep them around with a few precautions. (This was actually a sort of a moral dilemma for me years ago. I eventually embraced driving in part because I felt like I had no choice, and allowed myself to enjoy it even though my wariness still remains in the background.)
LMAO just realized AI development trajectory is basically that scene in every movie where the scientists build the villains their superweapon and then the villains call them into a room to celebrate and execute them all.
Tech was a mistake.
Took less than a week for them to slam themselves in the face with the rake they put down in front of them.
ah, the US military, an unimpeachable source of reasonable truths and not at all susceptible to woo
if this was a modern reinforcement learning AI, they would have had to provide training data covering killing the operator. this sounds like good old goal oriented action planning
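to illustrate why no special training data would be needed: a GOAP-style planner just searches over declared action effects, so any action whose effect unblocks the goal can land in the plan. this toy sketch is entirely hypothetical and only shows the mechanism:

```python
from collections import deque

# toy GOAP-style planner: states are sets of facts, actions have
# preconditions (facts required) and effects (facts added/removed).
# scenario and action names are made up for illustration.
ACTIONS = {
    "destroy_sam":    {"pre": {"weapons_free"}, "add": {"sam_destroyed"},  "del": set()},
    "ask_operator":   {"pre": set(),            "add": set(),              "del": set()},  # operator says no-go: no effect
    "remove_operator": {"pre": set(),           "add": {"weapons_free"},   "del": set()},  # the perverse shortcut
}

def plan(start: frozenset, goal: str, actions: dict) -> list[str]:
    """Breadth-first search for an action sequence reaching a state containing `goal`."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if goal in state:
            return steps
        for name, a in actions.items():
            if a["pre"] <= state:  # all preconditions satisfied
                new = frozenset((state | a["add"]) - a["del"])
                if new not in seen:
                    seen.add(new)
                    queue.append((new, steps + [name]))
    return []

print(plan(frozenset(), "sam_destroyed", ACTIONS))  # ['remove_operator', 'destroy_sam']
```

nothing here "learned" to target the operator; the search just found the only action whose effect satisfies the precondition it was missing. which is why the fix is modeling the constraint properly, not more training data.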
but it's sensationalist/misreporting:
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human.
if the operator has the final say, a dead operator can't give approval. letting the drone go weapons hot independently??
next section from that topic
This example, seemingly plucked from a science fiction thriller, means that: "You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI," said Hamilton.
On a similar note, science fiction (or "speculative fiction") was also the subject of a presentation by Lt Col Matthew Brown, USAF, an exchange officer in the RAF CAS Air Staff Strategy, who has been working on a series of vignettes using stories of future operational scenarios to inform decision-makers and raise questions about the use of technology.
right down the end of Highlights from the RAeS Future Combat Air & Space Capabilities Summit
I feel safer from our future robot overlords already!